Title
Constrained episodic reinforcement learning in concave-convex and knapsack settings
Abstract
We propose an algorithm for tabular episodic reinforcement learning (RL) with constraints. We provide a modular analysis with strong theoretical guarantees for two general settings. First is the concave-convex setting: maximization of a concave reward function subject to constraints that expected values of some vector quantities (such as the use of unsafe actions) lie in a convex set. Second is the knapsack setting: maximization of reward subject to the constraint that the total consumption of any of the specified resources does not exceed specified levels during the whole learning process. Previous work in constrained RL is limited to linear expectation constraints (a special case of the concave-convex setting), focuses on the feasibility question, or is restricted to single-episode settings. Our experiments demonstrate that the proposed algorithm significantly outperforms these approaches on constrained episodic benchmarks.
1 Introduction
Standard reinforcement learning (RL) approaches seek to maximize a scalar reward (Sutton and Barto, 1998, 2018; Schulman et al., 2015; Mnih et al., 2015), but in many settings this is insufficient, because the desired properties of the agent's behavior are better described using constraints. For example, an autonomous vehicle should not only get to the destination, but should also respect safety, fuel efficiency, and human comfort constraints along the way (Le et al., 2019); a robot should not only fulfill its task, but should also control its wear and tear, for example, by limiting the torque exerted on its motors (Tessler et al., 2019). Moreover, in many settings, we wish to satisfy such constraints already during training and not only during deployment. For example, a power grid, an autonomous vehicle, or real robotic hardware should avoid costly failures, where the hardware is damaged or humans are harmed, already during training (Leike et al., 2017; Ray et al., 2020). Constraints are also key in additional sequential decision-making applications, such as dynamic pricing with limited supply (e.g., Besbes and Zeevi, 2009; Babaioff et al., 2015), scheduling of resources on a computer cluster (Mao et al., 2016), and imitation learning, where the goal is to stay close to an expert behavior (Syed and Schapire, 2007; Ziebart et al., 2008; Sun et al., 2019).
In this paper we study constrained episodic reinforcement learning, which encompasses all of these applications. An important characteristic of our approach, distinguishing it from previous work (e.g., Altman, 1999; Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019; Ray et al., 2020), is our focus on efficient exploration, leading to reduced sample complexity. Notably, the modularity of
our approach enables extensions to more complex settings such as (i) maximizing concave objectives under convex constraints, and (ii) reinforcement learning under hard constraints, where the learner has to stop when some constraint is violated (e.g., a car runs out of gas). For these extensions, which we refer to as concave-convex setting and knapsack setting, we provide the first regret guarantees in the episodic setting (see related work below for a detailed comparison). Moreover, our guarantees are anytime, meaning that the constraint violations are bounded at any point during learning, even if the learning process is interrupted. This is important for those applications where the system continues to learn after it is deployed.
Our approach relies on the principle of optimism under uncertainty to efficiently explore. Our learning algorithms optimize their actions with respect to a model based on the empirical statistics, while optimistically overestimating rewards and underestimating the resource consumption (i.e., overestimating the distance from the constraint). This idea was previously introduced in multiarmed bandits (Agrawal and Devanur, 2014); extending it to episodic reinforcement learning poses additional challenges since the policy space is exponential in the episode horizon. Circumventing these challenges, we provide a modular way to analyze this approach in the basic setting where both rewards and constraints are linear (Section 3) and then transfer this result to the more complicated concave-convex and knapsack settings (Sections 4 and 5). We empirically compare our approach with the only previous works that can handle convex constraints and show that our algorithmic innovations lead to significant empirical improvements (Section 6).
Related work. Sample-efficient exploration in constrained episodic reinforcement learning has only recently started to receive attention. Most previous works on episodic reinforcement learning focus on unconstrained settings (Jaksch et al., 2010; Azar et al., 2017; Dann et al., 2017). A notable exception is the work of Cheung (2019) and Tarbouriech and Lazaric (2019). Both of these works consider vectorial feedback and aggregate reward functions, and provide theoretical guarantees for the reinforcement learning setting with a single episode, but require a strong reachability or communication assumption, which is not needed in the episodic setting studied here. Also, compared to Cheung (2019), our results for the knapsack setting allow for a significantly smaller budget, as we illustrate in Section 5. Moreover, our approach is based on a tighter bonus, which leads to a superior empirical performance (see Section 6). Recently, there have also been several concurrent and independent works on sample-efficient exploration for reinforcement learning with constraints (Singh et al., 2020; Efroni et al., 2020; Qiu et al., 2020; Ding et al., 2020; Zheng and Ratliff, 2020). Unlike our work, all of these approaches focus on linear reward objective and linear constraints and do not handle the concave-convex and knapsack settings that we consider.
Constrained reinforcement learning has also been studied in settings that do not focus on sample-efficient exploration (Achiam et al., 2017; Tessler et al., 2019; Miryoosefi et al., 2019). Among these, only Miryoosefi et al. (2019) handle convex constraints, albeit without a reward objective (they solve the feasibility problem). Since these works do not focus on sample-efficient exploration, their performance drastically deteriorates when the task requires exploration (as we show in Section 6).
Sample-efficient exploration under constraints has been studied in multi-armed bandits, starting with a line of work on dynamic pricing with limited supply (Besbes and Zeevi, 2009, 2011; Babaioff et al., 2015; Wang et al., 2014). A general setting for bandits with global knapsack constraints (bandits with knapsacks) was defined and solved by Badanidiyuru et al. (2018) (see also Ch. 10 of Slivkins, 2019). Within this literature, the closest to ours is the work of Agrawal and Devanur (2014), who study bandits with concave objectives and convex constraints. Our work is directly inspired by theirs and lifts their techniques to the more general episodic reinforcement learning setting.
2 Model and preliminaries
In episodic reinforcement learning, a learner repeatedly interacts with an environment across K episodes. The environment includes the state space S, the action space A, the episode horizon H, and the initial state s0.¹ To capture constrained settings, the environment includes a set D of d resources, where each i ∈ D has a capacity constraint ξ(i) ∈ R₊. The above are fixed and known to the learner.
¹A fixed and known initial state is without loss of generality. In general, there is a fixed but unknown distribution ρ from which the initial state is drawn before each episode. We modify the MDP by adding a new state s0 as the initial state, such that the next state is sampled from ρ for any action. Then ρ is “included” within the transition probabilities. The extra state s0 does not contribute any reward and does not consume any resources.
Constrained Markov decision process. We work with MDPs that have resource consumption in addition to rewards. Formally, a constrained MDP (CMDP) is a triple M = (p, r, c) that describes transition probabilities p : S × A → ∆(S), rewards r : S × A → [0, 1], and resource consumption c : S × A → [0, 1]^d. For convenience, we write c(s, a, i) = c_i(s, a). We allow stochastic rewards and consumptions, in which case r and c refer to the conditional expectations, conditioned on s and a (our definitions and algorithms are based on this conditional expectation rather than the full conditional distribution).
We use the above definition to describe two kinds of CMDPs. The true CMDP M⋆ = (p⋆, r⋆, c⋆) is fixed but unknown to the learner. Selecting action a at state s results in rewards and consumptions drawn from (possibly correlated) distributions with means r⋆(s, a) and c⋆(s, a) and supports in [0, 1] and [0, 1]^d, respectively. Next states are generated from the transition probabilities p⋆(s, a). The second kind of CMDP arises in our algorithm, which is model-based and at episode k uses a CMDP M^(k).
Episodic reinforcement learning protocol. At episode k ∈ [K], the learner commits to a policy π_k = (π_{k,h})_{h=1}^H, where π_{k,h} : S → ∆(A) specifies how to select actions at step h for every state. The learner starts from state s_{k,1} = s0. At step h = 1, . . . , H, she selects an action a_{k,h} ∼ π_{k,h}(s_{k,h}). The learner earns reward r_{k,h} and suffers consumption c_{k,h}, both drawn from the true CMDP M⋆ on the state-action pair (s_{k,h}, a_{k,h}) as described above, and transitions to state s_{k,h+1} ∼ p⋆(s_{k,h}, a_{k,h}).
Objectives. In the basic setting (Section 3), the learner wishes to maximize reward while respecting the consumption constraints in expectation by competing favorably against the following benchmark:
$$\max_\pi \; \mathbb{E}^{\pi,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big] \quad \text{s.t.} \quad \forall i \in \mathcal{D}: \; \mathbb{E}^{\pi,p^\star}\Big[\sum_{h=1}^H c^\star(s_h,a_h,i)\Big] \le \xi(i), \tag{1}$$
where $\mathbb{E}^{\pi,p}$ denotes the expectation over the run of policy π according to transitions p, and s_h, a_h are the induced random state-action pairs. We denote by π⋆ the policy that maximizes this objective.
For the basic setting, we track two performance measures: the reward regret, which compares the learner's total reward to the benchmark, and the consumption regret, which bounds the excess in resource consumption:
$$\text{REWREG}(k) := \mathbb{E}^{\pi^\star,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big] - \frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big], \tag{2}$$

$$\text{CONSREG}(k) := \max_{i\in\mathcal{D}} \Big(\frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t,p^\star}\Big[\sum_{h=1}^H c^\star(s_h,a_h,i)\Big] - \xi(i)\Big). \tag{3}$$
Our guarantees are anytime, i.e., they hold at any episode k and not only after the last episode.
We also consider two extensions. In Section 4, we consider a concave reward objective and convex consumption constraints. In Section 5, we require consumption constraints to be satisfied with high probability under a cumulative budget across all K episodes, rather than in expectation in a single episode.
Tabular MDPs. We assume that the state space S and the action space A are finite (the tabular setting). We construct standard empirical estimates separately for each state-action pair (s, a), using the learner's observations up to and not including a given episode k. Eqs. (4)–(7) define sample counts, empirical transition probabilities, empirical rewards, and empirical resource consumption:²

$$N_k(s,a) = \max\Big\{1,\; \sum_{t\in[k-1],\, h\in[H]} \mathbb{1}\{s_{t,h}=s,\, a_{t,h}=a\}\Big\}, \tag{4}$$

$$\hat p_k(s'|s,a) = \frac{1}{N_k(s,a)} \sum_{t\in[k-1],\, h\in[H]} \mathbb{1}\{s_{t,h}=s,\, a_{t,h}=a,\, s_{t,h+1}=s'\}, \tag{5}$$

$$\hat r_k(s,a) = \frac{1}{N_k(s,a)} \sum_{t\in[k-1],\, h\in[H]} r_{t,h}\cdot \mathbb{1}\{s_{t,h}=s,\, a_{t,h}=a\}, \tag{6}$$

$$\hat c_k(s,a,i) = \frac{1}{N_k(s,a)} \sum_{t\in[k-1],\, h\in[H]} c_{t,h,i}\cdot \mathbb{1}\{s_{t,h}=s,\, a_{t,h}=a\} \qquad \forall i\in\mathcal{D}. \tag{7}$$
²The max operator in Eq. (4) is to avoid dividing by 0.
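To make Eqs. (4)–(7) concrete, the following is a minimal sketch of these estimators; the trajectory storage format and array shapes are our own assumptions, not part of the paper.

```python
# Sketch of the empirical estimates (4)-(7); assumes each past episode is a
# list of (s, a, r, c, s_next) tuples with integer states/actions and c a
# length-d numpy vector. Shapes and storage format are illustrative.
import numpy as np

def empirical_model(episodes, S, A, d):
    N = np.zeros((S, A))
    p_hat = np.zeros((S, A, S))
    r_hat = np.zeros((S, A))
    c_hat = np.zeros((S, A, d))
    for episode in episodes:                  # episodes 1, ..., k-1
        for (s, a, r, c, s_next) in episode:  # steps h = 1, ..., H
            N[s, a] += 1
            p_hat[s, a, s_next] += 1
            r_hat[s, a] += r
            c_hat[s, a] += c
    N = np.maximum(N, 1)                      # Eq. (4): avoid dividing by 0
    return N, p_hat / N[..., None], r_hat / N, c_hat / N[..., None]
```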
Preliminaries for theoretical analysis. The Q-function is a standard object in RL that tracks the learner's expected performance if she starts from state s ∈ S at step h, selects action a ∈ A, and then follows policy π under a model with transitions p for the remainder of the episode. We parameterize it by the objective function m : S × A → [0, 1], which can be either the reward, i.e., m(s, a) = r(s, a), or the consumption of some resource i ∈ D, i.e., m(s, a) = c(s, a, i). (For the unconstrained setting, the objective is the reward.) The performance of the policy at a particular step h is evaluated by the value function V, which corresponds to the expected Q-function of the selected action (where the expectation is taken over the possibly randomized action selection of π). The Q and value functions can both be recursively defined by dynamic programming:
$$Q_m^{\pi,p}(s,a,h) = m(s,a) + \sum_{s'\in\mathcal{S}} p(s'|s,a)\, V_m^{\pi,p}(s', h+1),$$

$$V_m^{\pi,p}(s,h) = \mathbb{E}_{a\sim\pi(\cdot|s)}\big[Q_m^{\pi,p}(s,a,h)\big] \quad\text{and}\quad V_m^{\pi,p}(s, H+1) = 0.$$
By slight abuse of notation, for m ∈ {r} ∪ {c_i}_{i∈D}, we denote by m⋆ ∈ {r⋆} ∪ {c⋆_i}_{i∈D} the corresponding objectives with respect to the rewards and consumptions of the true CMDP M⋆. For objectives m⋆ and transitions p⋆, the above are the Bellman equations of the system (Bellman, 1957).
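As an illustration of the Bellman recursion above, the following sketch computes Q and V by backward induction for a tabular model; the array conventions (pi of shape (H, S, A)) are our own assumptions.

```python
# Backward-induction sketch of the Bellman recursion above; m is the (S, A)
# objective (reward or one resource), p is (S, A, S), pi is (H, S, A).
import numpy as np

def q_and_v(m, p, pi, H):
    S, A = m.shape
    Q = np.zeros((H + 2, S, A))        # indices 1..H used; row H+1 stays zero
    V = np.zeros((H + 2, S))           # V(s, H+1) = 0 by definition
    for h in range(H, 0, -1):
        Q[h] = m + p @ V[h + 1]        # m(s,a) + sum_s' p(s'|s,a) V(s',h+1)
        V[h] = np.sum(pi[h - 1] * Q[h], axis=1)   # expectation over pi(.|s)
    return Q, V
```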
Estimating the Q-function based on the model parameters p and m rather than the ground truth parameters p⋆ and m⋆ introduces errors. These errors are localized across stages by the notion of the Bellman error, which contrasts the performance of policy π starting from stage h under the model parameters with a benchmark that behaves according to the model parameters starting from the next stage h + 1 but uses the true parameters of the system at stage h. More formally, for objective m:
$$\text{BELL}_m^{\pi,p}(s,a,h) = Q_m^{\pi,p}(s,a,h) - \Big(m^\star(s,a) + \sum_{s'\in\mathcal{S}} p^\star(s'|s,a)\, V_m^{\pi,p}(s', h+1)\Big). \tag{8}$$

Note that when the CMDP is M⋆ (m = m⋆, p = p⋆), there is no mismatch and $\text{BELL}_{m^\star}^{\pi,p^\star} = 0$.
3 Warm-up algorithm and analysis in the basic setting
In this section, we introduce a simple algorithm that allows us to simultaneously bound the reward and consumption regrets for the basic setting introduced in the previous section. Even in this basic setting, we provide the first sample-efficient guarantees in constrained episodic reinforcement learning.³ The modular analysis of the guarantees also allows us to subsequently extend (in Sections 4 and 5) the algorithm and guarantees to the more general concave-convex and knapsack settings.
Our algorithm. At episode k, we construct an estimated CMDP M^(k) = (p^(k), r^(k), c^(k)) based on the observations collected so far. The estimates are bonus-enhanced (formalized below) to encourage more targeted exploration. Our algorithm CONRL selects a policy π_k by solving the following constrained optimization problem, which we refer to as BASICCONPLANNER(p^(k), r^(k), c^(k)):
$$\max_\pi \; \mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H r^{(k)}(s_h,a_h)\Big] \quad \text{s.t.} \quad \forall i \in \mathcal{D}: \; \mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H c^{(k)}(s_h,a_h,i)\Big] \le \xi(i).$$
The above optimization problem is similar to the objective (1) but uses the estimated model instead of the (unknown to the learner) true model. We also note that this optimization problem can be optimally solved as it is a linear program on the occupation measures (Puterman, 2014), i.e., setting as variables the probability of each state-action pair and imposing flow conservation constraints with respect to the transitions. This program is described in Appendix A.1.
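For concreteness, here is a hedged sketch of that linear program over occupancy measures, using scipy's LP solver; the variable layout and function names are our own illustration, not the paper's implementation.

```python
# Hedged sketch of BASICCONPLANNER as an LP over occupancy measures rho(h,s,a);
# inputs: p (S,A,S) transitions, r (S,A) rewards, c (d,S,A) consumptions,
# xi (d,) budgets. The policy is recovered as pi_h(a|s) proportional to rho.
import numpy as np
from scipy.optimize import linprog

def basic_con_planner(p, r, c, xi, H, s0=0):
    S, A = r.shape
    d = c.shape[0]
    n = H * S * A                              # one variable per (h, s, a)
    idx = lambda h, s, a: (h * S + s) * A + a

    obj = np.tile(-r.ravel(), H)               # linprog minimizes, so negate

    A_eq, b_eq = [], []
    for s in range(S):                         # step 1: all mass on s0
        row = np.zeros(n)
        row[[idx(0, s, a) for a in range(A)]] = 1.0
        A_eq.append(row); b_eq.append(1.0 if s == s0 else 0.0)
    for h in range(H - 1):                     # flow conservation
        for s2 in range(S):
            row = np.zeros(n)
            row[[idx(h + 1, s2, a) for a in range(A)]] = 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(h, s, a)] -= p[s, a, s2]
            A_eq.append(row); b_eq.append(0.0)

    A_ub = np.zeros((d, n))                    # consumption budget constraints
    for i in range(d):
        A_ub[i] = np.tile(c[i].ravel(), H)

    res = linprog(obj, A_ub=A_ub, b_ub=xi, A_eq=np.array(A_eq),
                  b_eq=np.array(b_eq), bounds=(0, 1))
    # res.status == 0 on success; infeasibility would need handling in practice
    return res.x.reshape(H, S, A)
```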
Bonus-enhanced model. A standard approach to implement the principle of optimism under uncertainty is to introduce, at each episode k, a bonus term b̂_k(s, a) that favors under-explored actions. Specifically, we add this bonus to the empirical rewards (6) and subtract it from the consumptions (7): $r^{(k)}(s,a) = \hat r_k(s,a) + \hat b_k(s,a)$ and $c^{(k)}(s,a,i) = \hat c_k(s,a,i) - \hat b_k(s,a)$ for each resource i.
³We refer the reader to the related work (in Section 1) for discussion on concurrent and independent papers. Unlike our results, these papers do not extend to either concave-convex or knapsack settings.
Following the unconstrained analogues (Azar et al., 2017; Dann et al., 2017), we define the bonus as:
$$\hat b_k(s,a) = H\,\sqrt{\frac{2\ln\!\big(8SAH(d+1)k^2/\delta\big)}{N_k(s,a)}}, \tag{9}$$

where δ > 0 is the desired failure probability of the algorithm and N_k(s, a) is the number of times the (s, a) pair has been visited, cf. (4), with S = |S| and A = |A|. Thus, under-explored actions have a larger bonus, and therefore appear more appealing to the planner. For the estimated transition probabilities, we just use the empirical averages (5): $p^{(k)}(s'|s,a) = \hat p_k(s'|s,a)$.
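A direct transcription of the bonus (9), assuming N is the visit-count array from Eq. (4); the function name is ours.

```python
# Eq. (9): optimism bonus; N is the (S, A) count array from Eq. (4).
import numpy as np

def bonus(N, H, d, k, delta):
    S, A = N.shape
    return H * np.sqrt(2 * np.log(8 * S * A * H * (d + 1) * k**2 / delta) / N)

# bonus-enhanced model for episode k (see the paragraph above):
#   r_k = r_hat + b                 # (S, A)
#   c_k = c_hat - b[..., None]      # subtract the bonus for every resource i
```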
Valid bonus and Bellman-error decomposition. For a bonus-enhanced model to achieve effective exploration, the resulting bonuses need to be valid, i.e., they should ensure that the estimated rewards overestimate the true rewards and the estimated consumptions underestimate the true consumptions.

Definition 3.1. A bonus b_k : S × A → R is valid if, for all s ∈ S, a ∈ A, h ∈ [H], m ∈ {r} ∪ {c_i}_{i∈D}:

$$\Big|\big(\hat m_k(s,a) - m^\star(s,a)\big) + \sum_{s'\in\mathcal{S}}\big(\hat p_k(s'|s,a) - p^\star(s'|s,a)\big)\, V_{m^\star}^{\pi^\star,p^\star}(s', h+1)\Big| \le b_k(s,a).$$

By classical concentration bounds (Appendix B.1), the bonus b̂_k of Eq. (9) satisfies this condition:
Lemma 3.2. With probability 1 − δ, the bonus b̂_k(s, a) is valid for all episodes k simultaneously.

Our algorithm solves the BASICCONPLANNER optimization problem based on a bonus-enhanced model. When the bonuses are valid, we can upper bound the per-episode regret by the expected sum of Bellman errors across steps. This is the first part in classical unconstrained analyses, and the following proposition extends this decomposition to constrained episodic reinforcement learning. The proof uses the so-called simulation lemma (Kearns and Singh, 2002) and is provided in Appendix B.3.
Proposition 3.3. If b̂k(s, a) is valid for all episodes k simultaneously then the per-episode reward and consumption regrets can be upper bounded by the expected sum of Bellman errors (8):
$$\mathbb{E}^{\pi^\star,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big] - \mathbb{E}^{\pi_k,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big] \;\le\; \mathbb{E}^{\pi_k}\Big[\sum_{h=1}^H \Big|\text{BELL}_{r^{(k)}}^{\pi_k,p^{(k)}}(s_h,a_h,h)\Big|\Big] \tag{10}$$

$$\forall i\in\mathcal{D}:\quad \mathbb{E}^{\pi_k,p^\star}\Big[\sum_{h=1}^H c^\star(s_h,a_h,i)\Big] - \xi(i) \;\le\; \mathbb{E}^{\pi_k}\Big[\sum_{h=1}^H \Big|\text{BELL}_{c_i^{(k)}}^{\pi_k,p^{(k)}}(s_h,a_h,h)\Big|\Big]. \tag{11}$$

Final guarantee. One difficulty with directly bounding the Bellman error is that the value function is not independent of the draws forming r^(k)(s, a), c^(k)(s, a), and p^(k)(s′|s, a). Hence we cannot apply Hoeffding's inequality directly. While Azar et al. (2017) propose a trick to get an O(√S) bound on the Bellman error in unconstrained settings, the trick relies on a crucial property of Bellman optimality: for an unconstrained MDP, its optimal policy π⋆ satisfies $V^{\pi^\star}_{r^\star}(s,h) \ge V^{\pi}_{r^\star}(s,h)$ for all s, h, π (i.e., π⋆ is optimal at every state). However, when constraints exist, the optimal policy does not satisfy the Bellman optimality property. Indeed, we can only guarantee optimality with respect to the initial state distribution, i.e., $V^{\pi^\star}_{r^\star}(s_0,1) \ge V^{\pi}_{r^\star}(s_0,1)$ for any π, but not everywhere else. This illustrates a fundamental difference between constrained MDPs and unconstrained MDPs. Thus we cannot directly apply the trick from Azar et al. (2017). Instead we follow an alternative approach of bounding the value function via an ε-net over the possible values. This analysis leads to a guarantee that is weaker by a factor of √S than the unconstrained results. The proof is provided in Appendix B.6.
Theorem 3.4. There exists an absolute constant c ∈ R₊ such that, with probability at least 1 − 3δ, the reward and consumption regrets are both upper bounded by:

$$\frac{c}{\sqrt{k}}\cdot S\sqrt{AH^3}\cdot \sqrt{\ln(k)\,\ln\!\big(SAH(d+1)k/\delta\big)} \;+\; \frac{c}{k}\cdot S^{3/2}AH^2\,\sqrt{\ln\!\big(2SAH(d+1)k/\delta\big)}.$$
Comparison to single-episode results. In the single-episode setting, Cheung (2019) achieves a √S dependency under the further assumption that the transitions are sparse, i.e., ‖p⋆(s, a)‖₀ ≪ S for all (s, a). We do not make such sparsity assumptions on the MDP, and we note that the regret bound of Cheung (2019) scales linearly in S when ‖p⋆(s, a)‖₀ = Θ(S). Also, the single-episode setting requires a strong reachability assumption, which is not present in the episodic setting.
Remark 3.5. The aforementioned regret bound can be turned into a PAC bound of $\tilde O\big(\frac{S^2AH^3}{\epsilon^2}\big)$ by taking the uniform mixture of policies π₁, π₂, . . . , π_k.
4 Concave-convex setting
We now extend the algorithm and guarantees derived for the basic setting to the case where the objective is a concave function of the accumulated reward and the constraints are expressed as a convex function of the cumulative consumptions. Our approach is modular, seamlessly building on the basic setting.
Setting and objective. Formally, there is a concave reward-objective function f : R → R and a convex consumption-objective function g : R^d → R; the only assumption is that these functions are L-Lipschitz for some constant L, i.e., |f(x) − f(y)| ≤ L|x − y| for any x, y ∈ R, and |g(x) − g(y)| ≤ L‖x − y‖₁ for any x, y ∈ R^d. Analogous to (1), the learner wishes to compete against the following benchmark, which can be viewed as a reinforcement learning variant of the benchmark used by Agrawal and Devanur (2014) in multi-armed bandits:
$$\max_\pi \; f\Big(\mathbb{E}^{\pi,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big]\Big) \quad \text{s.t.} \quad g\Big(\mathbb{E}^{\pi,p^\star}\Big[\sum_{h=1}^H c^\star(s_h,a_h)\Big]\Big) \le 0. \tag{12}$$
The reward and consumption regrets are therefore adapted to:
$$\text{CONVEXREWREG}(k) := f\Big(\mathbb{E}^{\pi^\star,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big]\Big) - f\Big(\frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t,p^\star}\Big[\sum_{h=1}^H r^\star(s_h,a_h)\Big]\Big),$$

$$\text{CONVEXCONSREG}(k) := g\Big(\frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t,p^\star}\Big[\sum_{h=1}^H c^\star(s_h,a_h)\Big]\Big).$$
Our algorithm. As in the basic setting, we wish to create a bonus-enhanced model and optimize over it. To model the transition probabilities, we use the empirical estimates p^(k) = p̂_k of Eq. (5) as before. However, since the reward and consumption objectives are no longer monotone in the accumulated rewards and consumptions respectively, it does not make sense to simply add or subtract b̂_k (defined in Eq. 9) as we did before. Instead we compute the policy π_k of episode k together with the model by solving the following optimization problem, which we call CONVEXCONPLANNER:
$$\max_\pi \; \max_{r^{(k)}\in[\hat r_k \pm \hat b_k]} f\Big(\mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H r^{(k)}(s_h,a_h)\Big]\Big) \quad \text{s.t.} \quad \min_{c^{(k)}\in[\hat c_k \pm \hat b_k\cdot \mathbf{1}]} g\Big(\mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H c^{(k)}(s_h,a_h)\Big]\Big) \le 0.$$
The above problem is convex in the occupation measures,⁴ i.e., the probability ρ(s, a, h) that the learner is at state-action-step (s, a, h) — c.f. Appendix A.2 for further discussion.
$$\max_\rho \; \max_{r\in[\hat r_k \pm \hat b_k]} f\Big(\sum_{s,a,h}\rho(s,a,h)\, r(s,a)\Big) \quad \text{s.t.} \quad \min_{c\in[\hat c_k \pm \hat b_k\cdot\mathbf{1}]} g\Big(\sum_{s,a,h}\rho(s,a,h)\, c(s,a)\Big) \le 0$$

$$\forall s', h: \quad \sum_a \rho(s',a,h+1) = \sum_{s,a}\rho(s,a,h)\,\hat p_k(s'|s,a)$$

$$\forall s,a,h: \quad 0\le\rho(s,a,h)\le 1 \quad\text{and}\quad \sum_{s,a}\rho(s,a,h) = 1.$$
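The following is a hedged cvxpy sketch of this program under a simplification we introduce for brevity: when f is non-decreasing and g is non-decreasing in each consumption coordinate, the inner maximization and minimization over the reward and consumption boxes are attained at the corners r̂_k + b̂_k and ĉ_k − b̂_k, so they can be fixed upfront. The function names and this restriction are our assumptions, not the paper's implementation.

```python
# Hedged cvxpy sketch of CONVEXCONPLANNER for monotone f, g; r_up = r_hat + b,
# c_lo = c_hat - b per resource, p_hat is (S,A,S), f concave, g convex.
import cvxpy as cp
import numpy as np

def convex_con_planner(p_hat, r_up, c_lo, H, s0, f, g):
    S, A = r_up.shape
    rho = [cp.Variable((S, A), nonneg=True) for _ in range(H)]
    cons = [cp.sum(rho[0]) == 1]                     # step 1: mass on s0 only
    cons += [rho[0][s, :] == 0 for s in range(S) if s != s0]
    for h in range(H - 1):                           # flow conservation
        for s2 in range(S):
            cons.append(cp.sum(rho[h + 1][s2, :])
                        == cp.sum(cp.multiply(rho[h], p_hat[:, :, s2])))
    total_r = sum(cp.sum(cp.multiply(rho[h], r_up)) for h in range(H))
    total_c = cp.hstack([sum(cp.sum(cp.multiply(rho[h], c_lo[i]))
                             for h in range(H))
                         for i in range(c_lo.shape[0])])
    prob = cp.Problem(cp.Maximize(f(total_r)), cons + [g(total_c) <= 0])
    prob.solve()
    return [x.value for x in rho]
```

In the basic setting (linear f and g) this reduces to the LP sketched in Section 3, matching the paper's footnote.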
Guarantee for concave-convex setting. To extend the guarantee of the basic setting to the concave-convex setting, we face an additional challenge: it is not immediately clear that the optimal policy π⋆ is feasible for the CONVEXCONPLANNER program, because CONVEXCONPLANNER is defined with respect to the empirical transition probabilities p^(k).⁵ Moreover, when H > 1, it is not straightforward to show that the objective in the used model is always greater than the one in the true model, as the used model transitions p^(k)(s, a) can lead to different states than the ones encountered in the true model.⁶ We deal with both of these issues by introducing a novel application of the mean-value theorem to show that π⋆ is indeed a feasible solution of that program and to create a regret decomposition similar to Proposition 3.3 (see Proposition C.1 and more discussion in Appendix C.1); this allows us to plug in the results developed for the basic setting. The full proof is provided in Appendix C.

⁴Under mild assumptions, this program can be solved in polynomial time, similar to its bandit analogue in Lemma 4.3 of Agrawal and Devanur (2014). We note that in the basic setting, it reduces to just a linear program.

⁵Note that in the multi-armed bandit concave-convex setting (Agrawal and Devanur, 2014), proving feasibility of the best arm is straightforward as there are no transitions.
Theorem 4.1. Let L be the Lipschitz constant of f and g, and let REWREG and CONSREG be the reward and consumption regrets for the basic setting (Theorem 3.4) with failure probability δ. With probability 1 − δ, our algorithm in the concave-convex setting has reward and consumption regret upper bounded by L · REWREG and Ld · CONSREG, respectively.
The linear dependence on d in the consumption regret above comes from the fact that we assume g is Lipschitz with respect to the ℓ₁ norm.
5 Knapsack setting
Our last technical section extends the algorithm and guarantee of the basic setting to scenarios where the constraints are hard, in accordance with most of the literature on bandits with knapsacks. The goal here is to achieve an aggregate reward regret that is sublinear in the time horizon (in our case, the number of episodes K), while respecting budget constraints for budgets as small as possible. We derive guarantees in terms of the reward regret, as defined previously, and then argue that our guarantee extends to the seemingly stronger benchmark of the best dynamic policy.
Setting and objective. Each resource i ∈ D has an aggregate budget B_i that the learner should not exceed over the K episodes. Unlike the basic setting, where we track the consumption regret, here we view this as a hard constraint. As in most works on bandits with knapsacks, the algorithm is allowed to use a “null action” for an episode, i.e., an action that yields zero reward and consumption when selected at the beginning of an episode. The learner wishes to maximize her aggregate reward while respecting these hard constraints. We reduce this problem to a specific variant of the basic problem (1) with ξ(i) = B_i/K. We modify the solution to (1) to take the null action if any constraint is violated and call the resulting benchmark π⋆. Note that π⋆ satisfies the constraints in expectation. At the end of this section, we explain how our algorithm also competes against a benchmark that is required to respect the constraints deterministically (i.e., with probability one across all episodes).
Our algorithm. In the basic setting of Section 3, we showed a reward regret guarantee and a consumption regret guarantee, proving that the average constraint violation is O(1/√K). Now we seek a stronger guarantee: the learned policy needs to satisfy the budget constraints with high probability. Our algorithm optimizes a mathematical program KNAPSACKCONPLANNER (13) that strengthens the consumption constraints:
$$\max_\pi \; \mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H r^{(k)}(s_h,a_h)\Big] \quad \text{s.t.} \quad \forall i\in\mathcal{D}: \; \mathbb{E}^{\pi,p^{(k)}}\Big[\sum_{h=1}^H c^{(k)}(s_h,a_h,i)\Big] \le \frac{(1-\epsilon)B_i}{K}. \tag{13}$$
In the above, p^(k), r^(k), c^(k) are exactly as in the basic setting, and ε > 0 is instantiated in the theorem below. Note that the program (13) is feasible thanks to the existence of the null action: the following mixture policy induces a feasible solution. With probability 1 − ε, we play the optimal policy π⋆ for the entire episode; with probability ε, we play the null action for the entire episode. Note that the above program can again be cast as a linear program in the occupancy measure space — c.f. Appendix A.3 for further discussion.
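Concretely, the tightening amounts to a one-line change on top of the basic planner; this sketch reuses the basic_con_planner LP sketch from Section 3 and assumes B is the vector of budgets.

```python
# KNAPSACKCONPLANNER (13) via the LP sketch above: tighten the per-episode
# budgets by (1 - eps), with eps chosen as in Theorem 5.1. Names are ours.
import numpy as np

def knapsack_con_planner(p_k, r_k, c_k, B, K, H, agg_reg):
    eps = agg_reg / B.min()              # eps = AGGREG(delta) / min_i B_i
    xi = (1 - eps) * B / K               # tightened per-episode budgets
    return basic_con_planner(p_k, r_k, c_k, xi, H)
```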
Guarantee for knapsack setting. The guarantee of the basic setting on this tighter mathematical program seamlessly transfers to a reward guarantee that does not violate the hard constraints.
Theorem 5.1. Assume that min_i B_i ≤ KH, i.e., the constraints are non-vacuous. Let AGGREG(δ) be a bound on the aggregate (across episodes) reward or consumption regret for the soft-constraint setting (Theorem 3.4) with failure probability δ. Let ε = AGGREG(δ)/min_i B_i. If min_i B_i > AGGREG(δ), then, with probability 1 − δ, the reward regret in the hard-constraint setting is at most 2H·AGGREG(δ)/min_i B_i and the constraints are not violated.
⁶Again, this is not an issue in multi-armed bandits.
The above theorem implies that the aggregate reward regret is sublinear in K as long as min_i B_i ≫ H·AGGREG(δ). The analysis in the above main theorem (provided in Appendix D) is modular in the sense that it leverages CONRL's performance to solve (13) in a black-box manner. A smaller AGGREG(δ) from the basic soft-constraint setting immediately translates to a smaller reward regret and a smaller budget regime (i.e., min_i B_i can be smaller). In particular, using the AGGREG(δ) bound of Theorem 3.4, the reward regret is sublinear as long as min_i B_i = Ω(√K).
In contrast, the previous work of Cheung (2019) can only deal with a larger budget regime, i.e., min_i B_i = Ω(K^{2/3}). Although the guarantees are not directly comparable, as the latter is for the single-episode setting, which requires further reachability assumptions, the budget we can handle is significantly smaller, and in the next section we show that our algorithm has superior empirical performance in episodic settings even when such assumptions are granted.
Dynamic policy benchmark. The common benchmark used in bandits with knapsacks is not the best stationary policy π⋆ that respects the constraints in expectation, but rather the best dynamic policy (i.e., a policy that makes decisions based on the history) that never violates the hard constraints. In Appendix D, we show that the optimal dynamic policy (formally defined there) has reward no larger than that of policy π⋆ (informally, this is because π⋆ only has to respect the constraints in expectation while the dynamic policy has to satisfy them deterministically), and therefore the guarantee of Theorem 5.1 also applies against the optimal dynamic policy.
6 Empirical comparison to other concave-convex approaches
In this section, we evaluate the performance of CONRL against previous approaches.⁷ Although our CONPLANNER (see Appendix A) can be solved exactly using linear programming (Altman, 1999), in our experiments it suffices to use a Lagrangian heuristic, denoted LAGRCONPLANNER (see Appendix E.1). This Lagrangian heuristic only needs a planner for the unconstrained RL task. We consider two unconstrained RL algorithms as planners: value iteration and a model-based Advantage Actor-Critic (A2C) (Mnih et al., 2016) (based on fictitious samples drawn from the model provided as an input). The resulting variants of LAGRCONPLANNER are denoted CONRL-VALUE ITERATION and CONRL-A2C. We run our experiments on two grid-world environments: Mars rover (Tessler et al., 2019) and Box (Leike et al., 2017).⁸

⁷Code is available at https://github.com/miryoosefi/ConRL
Mars rover. The agent must move from the initial position to the goal without crashing into rocks. If the agent reaches the goal or crashes into a rock it will stay in that cell for the remainder of the episode. Reward is 1 when the agent reaches the goal and 1/H afterwards. Consumption is 1 when the agent crashes into a rock and 1/H afterwards. The episode horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action.
Box. The agent must move a box from the initial position to the goal while avoiding corners (cells adjacent to at least two walls). If the agent reaches the goal it stays in that cell for the remainder of the episode. Reward is 1 when agent reaches the goal for the first time and 1/H afterwards; consumption is 1/H whenever the box is in a corner. Horizon H is 30 and the agent’s action is perturbed with probability 0.1 to a random action.
We compare CONRL to previous constrained approaches (derived for either episodic or single-episode settings) in Figure 1. We keep track of three metrics: episode-level reward and consumption (the first two rows) and cumulative consumption (the third row). Episode-level metrics are based on the most recent episode in the first two columns, i.e., we plot $\mathbb{E}^{\pi_k}\big[\sum_{h=1}^H r^\star_h\big]$ and $\mathbb{E}^{\pi_k}\big[\sum_{h=1}^H c^\star_h\big]$. In the third column, we plot the average across the episodes so far, i.e., $\frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t}\big[\sum_{h=1}^H r^\star_h\big]$ and $\frac{1}{k}\sum_{t=1}^k \mathbb{E}^{\pi_t}\big[\sum_{h=1}^H c^\star_h\big]$, and we use a log scale for the x-axis. The cumulative consumption is $\sum_{t=1}^k\sum_{h=1}^H c_{t,h}$ in all columns. See Appendix E for further details about the experiments.
Episodic setting. We first compare our algorithms to two episodic RL approaches: APPROPO (Miryoosefi et al., 2019) and RCPO (Tessler et al., 2019). We note that none of the previous approaches in this setting address sample-efficient exploration. In addition, most of them are limited to linear constraints, with the exception of APPROPO (Miryoosefi et al., 2019), which can handle general convex constraints.⁹ Both APPROPO and RCPO (used as a baseline by Miryoosefi et al., 2019) maintain and update a weight vector λ, used to derive a reward for an unconstrained RL algorithm, which we instantiate as A2C. APPROPO focuses on the feasibility problem, so it requires specifying a lower bound on the reward, which we set to 0.3 for Mars rover and 0.1 for Box. In the first two columns of Figure 1, we see that both versions of CONRL are able to solve the constrained RL task with a much smaller number of trajectories (see the top two rows), and their overall consumption levels are substantially lower (the final row) than those of the previous approaches.
Single-episode setting. Closest to our work is TFW-UCRL2 (Cheung, 2019), which is based on UCRL (Jaksch et al., 2010). However, that approach focuses on the single-episode setting and requires a strong reachability assumption. By connecting the terminal states of our MDP to the initial state, we reduce our episodic setting to the single-episode setting, in which we can compare CONRL against TFW-UCRL2. Results for Mars rover are depicted in the last column of Figure 1.¹⁰ Again, both versions of CONRL find the solution with a much smaller number of trajectories (note the log scale on the x-axis) and their overall consumption levels are much lower than those of TFW-UCRL2. This suggests that TFW-UCRL2 might be impractical in (at least some) episodic settings.
7 Conclusions
In this paper we study two types of constraints in the framework of constrained tabular episodic reinforcement learning: concave rewards with convex constraints, and knapsack constraints. Our algorithms achieve near-optimal regret in both settings, and experimentally we show that our approach outperforms prior works on constrained reinforcement learning.
Regarding future work, it would be interesting to extend our framework to continuous state and action spaces. Potential directions include extensions to Lipschitz MDPs (Song and Sun, 2019) and MDPs with linear parameterization (Jin et al., 2019) where optimism-based exploration algorithms exist under the classic reinforcement learning setting without constraints.
⁸We are not aware of any benchmarks for convex/knapsack constraints. For transparency, we compare against prior works handling concave-convex or knapsack settings on established benchmarks for the linear case.
⁹In addition to that, trust region methods like CPO (Achiam et al., 2017) address a more restrictive setting and require constraint satisfaction at each iteration; for this reason, they are not included in the experiments.
¹⁰Due to a larger state space, it was computationally infeasible to run TFW-UCRL2 in the Box environment.
Broader Impact

Our work focuses on the theoretical foundations of reinforcement learning by addressing the important challenge of constrained optimization in reinforcement learning. We strongly believe that understanding the theoretical underpinnings of the main machine learning paradigms is essential and can guide principled and effective deployment of such methods.
Beyond its theoretical contribution, our work may help the design of reinforcement learning algorithms that go beyond classical digital applications of RL (board games and video games) and extend to settings with complex and often competing objectives. We believe that constraints constitute a fundamental limitation in extending RL beyond the digital world, as they exist in a wide variety of sequential decision-making applications (robotics, medical treatment, education, advertising). Our work provides a paradigm to design algorithms with efficient exploration despite the presence of constraints.
That said, one needs to ensure that an algorithm offers acceptable quality in applications. Any exploration method that does not rely on off-policy samples will inevitably violate constraints sometimes in order to learn. In some applications, this is totally acceptable: a car running out of fuel in rare circumstances is not detrimental, an advertiser exhausting their budget some month is even less significant, a student's dissatisfaction with an online test is unpleasant but probably acceptable. On the other hand, if the constraint violation involves critical issues like drug recommendations for severe diseases or decisions by self-driving cars that can cause physical harm to passengers, then the algorithm needs to be carefully reviewed. It may be necessary to “prime” the algorithm with some data collected in advance (however costly it may be). One may need to make a judgement call on whether the ethical or societal standards are consistent with deploying an algorithm in a particular setting.
To summarize, our work is theoretical in nature and makes significant progress on a problem at the heart of RL. It has the potential to guide deployment of constrained RL methods in many important applications and tackle a fundamental bottleneck in deploying RL beyond the digital world. However, an application needs to be carefully reviewed before deployment.
Acknowledgments and Disclosure of Funding

The authors would like to thank Rob Schapire for useful discussions that helped in the initial stages of this work. Part of the work was done when WS was at Microsoft Research NYC.

1. What is the main contribution of the paper in the field of reinforcement learning?
2. What are the strengths of the proposed algorithm, particularly in terms of its theoretical guarantees?
3. What are the weaknesses of the paper regarding its experimental setup and limitations in demonstrating the algorithm's effectiveness in resource-constrained scenarios?
4. How does the reviewer assess the clarity and validity of the paper's assumptions and objectives?
5. Are there any suggestions or recommendations for improving the paper's content or experimental design?

Summary and Contributions
The paper proposes an algorithm to learn policies in environments with concave rewards and convex constraints, in the tabular and episodic setting. The authors provide theoretical guarantees on the performance w.r.t. reward and consumption regrets. They also demonstrate the algorithm on two environments, Box and Mars rover, with comparisons against constrained policy optimization baselines. I've read the rebuttal and am going to stick to my rating.
Strengths
The problem statement is clearly defined. The assumptions are clearly laid out. The objectives of bounding reward and consumption regrets seem fair. I haven't verified the theoretical analysis of the claims, so can't comment on that.
Weaknesses
The experimental setup could improve:
1. The environments chosen don't adequately demonstrate the resource-constrained setup that they wish to deploy the algorithm in. Specifically, the knapsack setting is interesting but a challenging environment is lacking. A real-world example is that of budgets earmarked for campaigns or energy resources in games.
2. The tabular setting is helpful for theoretical analysis but it would be helpful to mention how the algorithm and analysis would translate to the function approximation case.
Title
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Abstract
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods, demonstrating that no method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
1 Introduction
As machine learning models become increasingly complex and are increasingly deployed in highstakes settings (e.g., medicine [1], law [2], and finance [3]), there is a growing emphasis on understanding how models make predictions so that decision-makers (e.g., doctors, judges, and loan officers) can assess the extent to which they can trust model predictions. To this end, several post hoc explanation methods have been developed, including LIME [4], C-LIME [5], SHAP [6], Occlusion [7], Vanilla Gradients [8], Gradient x Input [9], SmoothGrad [10], and Integrated Gradients [11]. However, different methods have different goals. Such differences lead to both conceptual and practical challenges to understanding and using explanation methods, thwarting progress in the field.
From a conceptual standpoint, the misalignment of goals among methods leads to an inconsistent view of explanations. What is an explanation? This is unclear as different methods have different notions of explanation. Depending on the method, explanations may be local function approximations (LIME and C-LIME), Shapley values (SHAP), raw gradients (Vanilla Gradients), raw gradients
scaled by the input (Gradient x Input), de-noised gradients (SmoothGrad), or a straight-line path integral of gradients (Integrated Gradients). Furthermore, the lack of a common mathematical framework for studying these diverse methods prevents a systematic understanding of these methods and their properties. To address these challenges, this paper unifies diverse explanation methods under a common framework, showing that diverse methods share a common motivation of local function approximation, and uses the framework to investigate and evaluate properties of these methods.
From a practical standpoint, the misalignment of goals among methods leads to the disagreement problem [12], the phenomenon that different methods provide disagreeing explanations for the same model prediction. Not only do different methods often generate disagreeing explanations in practice, but practitioners do not have a principled approach to select among explanations, resorting to ad hoc heuristics such as personal preference [12]. These findings prompt one to ask why explanation methods disagree and how to select among them in a principled manner. This paper addresses these questions, providing both an explanation for the disagreement problem and a principled approach to select among methods.
Thus, to address these conceptual and practical challenges, we study post hoc explanation methods from a function approximation perspective. We formalize a mathematical framework that unifies and characterizes diverse methods and that provides a principled approach to select among methods. Our work makes the following contributions:
1. We show that eight diverse, popular explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients) all perform local function approximation of the black-box model, differing only in the neighbourhoods and loss functions used to perform the approximation.
2. We introduce a no free lunch theorem for explanation methods which demonstrates that no single explanation method can perform local function approximation faithfully across all neighbourhoods, which in turn calls for a principled approach to select among methods.
3. To select among methods, we set forth a guiding principle based on function approximation, deeming a method to be effective if its explanation recovers the black-box model when the two are in the same model class (i.e., if the explanation perfectly approximates the black-box model when possible).
4. We empirically validate the theoretical results above using various real-world datasets, model classes, and prediction tasks.
2 Related Work
Post hoc explanation methods. Post hoc explanation methods can be classified based on model access (black-box model vs. access to model internals), explanation scope (global vs. local), search technique (perturbation-based vs. gradient-based), and basic unit of explanation (feature importance vs. rule-based). This paper focuses on local post hoc explanation methods based on feature importance. It analyzes four perturbation-based methods (LIME, C-LIME, KernelSHAP, and Occlusion) and four gradient-based methods (Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients).
Connections among post hoc explanation methods. Prior works have taken initial steps towards characterizing post hoc explanation methods and the connections among them. Agarwal et al. [5] proved that C-LIME and SmoothGrad converge to the same explanation in expectation. Lundberg and Lee [6] proposed a framework based on Shapley values to unify binary perturbation-based explanations. Covert et al. [13] found that many perturbation-based methods share the property of estimating feature importance based on the change in model behavior upon feature removal. In addition, Ancona et al. [14] analyzed four gradient-based explanation methods and the conditions under which they produce similar explanations. However, these analyses are based on mechanistic properties of methods (e.g., Shapley values or feature removal), are limited in scope (connecting only two methods, only perturbation-based methods, or only gradient-based methods), and do not inform when one method is preferable to another. In contrast, this paper formalizes a mathematical framework based on the concept of local function approximation, unifies eight diverse methods (spanning perturbation-based and gradient-based methods), and guides the use of these methods in practice.
Properties of post hoc explanation methods. Prior works have examined various properties of post hoc explanation methods, including faithfulness to the black-box model [15–17], robustness to adversarial attack [18–20, 15, 21], and fairness across subgroups [22]. This paper focuses on explanation faithfulness. Related works [15–17] assessed explanations generated by gradient-based methods, finding that they are not always faithful to the underlying model. Different from these works, this paper provides a framework for generating faithful explanations in the first place, theoretically characterizes the faithfulness of existing methods in different input domains, and provides a principled approach to select among methods and develop new ones based on explanation faithfulness.
3 Explanation as Local Function Approximation
In this section, we formalize the local function approximation framework and show its connection to existing explanation methods. We start by defining the notation used in the paper.
Notation. Let f : X → Y be the black-box function we seek to explain in a post hoc manner, with input domain X (e.g., X = R^d or {0, 1}^d) and output domain Y (e.g., Y = R or [0, 1]). Let G = {g : X → Y} be the class of interpretable models used to generate a local explanation for f by selecting a suitable interpretable model g ∈ G. We characterize locality around a point x0 ∈ X using a noise random variable ξ which is sampled from a distribution Z. Let x_ξ = x0 ⊕ ξ be a perturbation of x0 generated by combining x0 and ξ using a binary operator ⊕ (e.g., addition, multiplication). Lastly, let ℓ(f, g, x0, ξ) ∈ R₊ be the loss function (e.g., squared error, cross-entropy) measuring the distance between f and g over the noise random variable ξ around x0.
We now define the local function approximation framework.
Definition 1. Local function approximation (LFA) of a black-box model f on a neighbourhood distribution Z around x0 by an interpretable model class G and a loss function ℓ is given by

$$g^* = \arg\min_{g\in\mathcal{G}} \; \mathbb{E}_{\xi\sim\mathcal{Z}}\; \ell(f, g, x_0, \xi) \tag{1}$$

where a valid loss ℓ is such that $\mathbb{E}_{\xi\sim\mathcal{Z}}\,\ell(f,g,x_0,\xi) = 0 \iff f(x_\xi) = g(x_\xi)\ \forall\,\xi\sim\mathcal{Z}$.
The LFA framework is a formalization of the function approximation perspective first introduced by LIME [4] to motivate local explanations. Note that this conceptual framework is distinct from the algorithm introduced by LIME. We elaborate on this distinction below.
(1) The LFA framework requires that f and g share the same input domain X and output domain Y, a fundamental prerequisite for function approximation. This implies, for example, that using an interpretable model g with binary inputs (X = {0, 1}^d) to approximate a black-box model f with continuous inputs (X = R^d), as proposed in LIME, is not true function approximation.

(2) By imposing a condition on the loss function, the LFA framework ensures model recovery under specific conditions: g* recovers f (i.e., g* = f) through LFA when f itself is of the interpretable model class G (i.e., f ∈ G) and perturbations span the input domain of f (i.e., domain(x) = X). This is a key distinction between the LFA framework and LIME (which has no such requirement) and guides the characterization of explanation methods in Section 4.

(3) Efficiently minimizing Equation 1 requires following the standard machine learning methodology of splitting the perturbation data into train / validation / test sets and tuning hyper-parameters on the validation set to ensure generalization. To our knowledge, implementations of LIME do not adopt this procedure, making it possible to overfit to a small number of perturbations.
The LFA framework is generic enough to accommodate a variety of explanation methods. In fact, we show that specific instances of this framework converge to existing methods, as summarized in Table 1. At a high level, existing methods use a linear model g to locally approximate the black-box model f in different input domains (binary or continuous) over different local neighbourhoods specified by the noise random variable ξ (where ξ is binary or continuous, drawn from a specified distribution, and combined additively or multiplicatively with the point x0) using different loss functions (squared-error or gradient-matching loss). We discuss the details of these connections in the following sections.
3.1 LFA with Continuous Noise: Gradient-Based Explanation Methods
To connect gradient-based explanation methods to the LFA framework, we leverage the gradient-matching loss function ℓ_gm. We define ℓ_gm and show that it is a valid loss function for LFA:

$$\ell_{gm}(f, g, x_0, \xi) = \big\|\nabla_\xi f(x_0\oplus\xi) - \nabla_\xi g(x_0\oplus\xi)\big\|_2^2 \tag{2}$$

This loss function has been previously used in the contexts of generative modeling (where it is dubbed score matching) [23] and model distillation [16]. However, to our knowledge, its use in interpretability is novel.

Proposition 1. The gradient-matching loss function ℓ_gm is a valid loss function for LFA up to a constant, i.e., $\mathbb{E}_{\xi\sim\mathcal{Z}}\,\ell_{gm}(f,g,x_0,\xi) = 0 \iff f(x_\xi) = g(x_\xi) + C\ \forall\,\xi\sim\mathcal{Z}$, where C ∈ R.

Proof. If f(x_ξ) = g(x_ξ), then ∇_ξ f(x_ξ) = ∇_ξ g(x_ξ) and it follows from the definition of ℓ_gm that ℓ_gm = 0. Integrating ∇_ξ f(x_ξ) = ∇_ξ g(x_ξ) gives f(x_ξ) = g(x_ξ) + C.
Proposition 1 implies that, when using the linear model class G parameterized by g(x) = w^⊤x + b to approximate f, g* recovers w but not b. This can be fixed by setting b = f(0).

Theorem 1. LFA with gradient-matching loss is equivalent to (1) SmoothGrad for additive continuous Gaussian noise, which converges to Vanilla Gradients in the limit of a small standard deviation for the Gaussian distribution; and (2) Integrated Gradients for multiplicative continuous Uniform noise, which converges to Gradient x Input in the limit of a small support for the Uniform distribution.
Proof Sketch. For SmoothGrad and Integrated Gradients, the idea is that these methods are exactly the first-order stationary points of the gradient-matching loss function under their respective noise distributions. In other words, the weights of the interpretable model g that minimize the loss function are exactly the explanation returned by each method. For Vanilla Gradients and Gradient x Input, the result is derived by taking the specified limits and using the Dirac delta function to calculate the limit. In the limit, the weights of the interpretable model g converge to the explanation of each method. The full proof is in Appendix A.1.
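As a quick numerical illustration of the SmoothGrad case (our own construction, not the paper's code): for a linear g, the gradient-matching loss over Gaussian perturbations is minimized in closed form by the average perturbed gradient, which is exactly SmoothGrad.

```python
# LFA with additive Gaussian noise and gradient-matching loss: for linear g,
# the minimizer is w* = E_xi[grad f(x0 + xi)], i.e., SmoothGrad (Theorem 1).
import numpy as np

def smoothgrad_via_lfa(grad_f, x0, sigma=0.1, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, sigma, size=(n_samples, x0.size))
    return np.mean([grad_f(x0 + z) for z in xi], axis=0)

# toy black box f(x) = sin(x1) + x2^2 with analytic gradient (an assumption)
grad_f = lambda x: np.array([np.cos(x[0]), 2.0 * x[1]])
print(smoothgrad_via_lfa(grad_f, np.array([0.5, -1.0])))
```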
Along with gradient-based methods, C-LIME (a perturbation-based method) is an instance of the LFA framework by definition, using the squared-error loss function. The analysis in this section characterizes methods that use continuous noise. It does not extend to binary or discrete noise methods because gradients and continuous random variables do not apply in these domains. In the next section, we discuss binary noise methods.
3.2 LFA with Binary Noise: LIME, KernelSHAP and Occlusion maps
Theorem 2. LFA with multiplicative binary noise and squared-error loss is equivalent to (1) LIME for noise sampled from an unnormalized exponential kernel over binary vectors; (2) KernelSHAP
for noise sampled from an unnormalized Shapley kernel; and (3) Occlusion for noise in the form of one-hot vectors.
Proof Sketch. For LIME and KernelSHAP, the equivalence is mostly by definition: these methods have components that correspond to the interpretable model g and the loss function ` of the LFA framework and we need only to determine the local neighbourhood Z . We define the local neighbourhood Z using each method’s weighting kernel. In this setup, the LFA framework yields the respective explanation methods in expectation via importance sampling. For Occlusion, the equivalence involves enumerating all perturbations, specifying an appropriate loss function, and computing the resulting stationary points of the loss function. The full proof is in Appendix A.1.
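The following is a hedged sketch of LFA with multiplicative binary noise: sample binary masks, weight them by a kernel over binary vectors, and fit a weighted linear model to the black box's outputs. The kernel shown is a LIME-style exponential kernel; its constants and all names are our assumptions.

```python
# LFA with multiplicative binary noise: weighted least squares of f(x0 * xi)
# on the masks xi, with neighbourhood weights from a kernel over {0,1}^d.
import numpy as np

def lfa_binary(f, x0, kernel, n_samples=2000, seed=0):
    d = x0.size
    rng = np.random.default_rng(seed)
    Z = rng.integers(0, 2, size=(n_samples, d))       # binary noise xi
    y = np.array([f(x0 * z) for z in Z])              # x_xi = x0 (*) xi
    w = np.array([kernel(z) for z in Z])              # neighbourhood weights
    X = np.c_[Z, np.ones(n_samples)]                  # linear g plus intercept
    sw = np.sqrt(w)                                   # weighted least squares
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                  # feature attributions

# LIME-style exponential kernel over binary vectors (constants assumed)
d = 5
kernel = lambda z: np.exp(-((d - z.sum()) ** 2) / 25.0)
f = lambda x: 3.0 * x[0] - 2.0 * x[3]                 # toy black box
print(lfa_binary(f, np.ones(d), kernel))
```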
3.3 Which Methods Do Not Perform LFA?
Some popular explanation methods are not instances of the LFA framework due to their properties. These methods include guided backpropagation [24], DeconvNet [25], Grad-CAM [26], GradCAM++ [27], FullGrad [28], and DeepLIFT [9]. Further details are in Appendix A.2.
4 When Do Explanations Perform Model Recovery?
Having described the LFA framework and its connections to existing explanation methods, we now leverage this framework to analyze the performance of methods under different conditions. We introduce a no free lunch theorem for explanation methods, inspired by classical no free lunch theorems in learning theory and optimization. Then, we assess the ability of existing methods to perform model recovery based on which we provide recommendations for choosing among methods.
4.1 No Free Lunch Theorem for Explanation Methods
An important implication of the function approximation perspective is that no explanation can be optimal across all neighbourhoods, because each explanation is designed to perform LFA in a specific neighbourhood. This is especially true for explanations of non-linear models. We formalize this intuition in the following theorem.

Theorem 3 (No Free Lunch for Explanation Methods). Consider explaining a black-box model f around point x0 using an interpretable model g from model class G and a valid loss function ℓ, where the distance between f and G is given by $d(f,\mathcal{G}) = \min_{g\in\mathcal{G}} \max_{x\in\mathcal{X}} \ell(f, g, 0, x)$. Then, for any explanation g* over a neighbourhood distribution ξ₁ ∼ Z₁ such that $\max_{\xi_1} \ell(f, g^*, x_0, \xi_1) \le \epsilon$, there always exists another neighbourhood ξ₂ ∼ Z₂ such that $\max_{\xi_2} \ell(f, g^*, x_0, \xi_2) \ge d(f, \mathcal{G})$.
Proof Sketch. The idea is that, given an explanation obtained by using g to approximate f over a specific local neighbourhood Z, it is always possible to find a local neighbourhood over which this explanation does not perform well (i.e., does not perform faithful LFA). Thus, no single explanation method can perform well over all local neighbourhoods. The proof entails constructing an “adversarial” input for an explanation g* such that g* has a large loss for this input, and then creating a neighbourhood that contains this adversarial input, which will provably have a large loss. The magnitude of this loss is d(f, G), the distance between f and the model class G, inspired by the Hausdorff distance. The proof is generic and makes no assumptions regarding the forms of ℓ, G, or Z₁. The full proof is in Appendix A.3.

Thus, an explanation on a finite Z₁ necessarily cannot approximate function behaviour at all other points, especially when G is less expressive than f, which is indicated by a large value of d(f, G). Thus, in the general case, one cannot perform model recovery, as G is less expressive than f. An important implication of Theorem 3 is that seeking to find the “best” explanation without specifying a corresponding neighbourhood is futile, as no universal “best” explanation exists. Furthermore, once the neighbourhood is specified, the best explanation is exactly the one given by the corresponding instance of the LFA framework.
In the next section, we consider the special case when d(f, G) = 0 (i.e., when f ∈ G), where Theorem 3 does not apply because the same explanation can be optimal for multiple neighbourhoods and model recovery is thus possible.
4.2 Characterizing Explanation Methods via Model Recovery
Next, we formally state the model recovery condition for explanation methods. Then, we use this condition as a guiding principle to choose among methods.

Definition 2 (Model Recovery: Guiding Principle). Given an instance of the LFA framework with a black-box model f such that f ∈ G and a specific noise type (e.g., Gaussian, Uniform), an explanation method performs model recovery if there exists some noise distribution Z such that LFA returns g* = f.
In other words, when the black-box model f itself is of the interpretable model class G, there must exist some setting of the noise distribution (within the noise type specified in the instance of the LFA framework) that is able to recover the black-box model. Thus, in this special case, we require local function approximation to lead to global model recovery over all inputs. This criterion can be thought of as a “sanity check” for explanation methods to ensure that they remain faithful to the black-box model.
Next, we analyze the impact of the choice of perturbation neighbourhood Z, the binary operator used to combine input and noise, and the interpretable model class G on an explanation method's ability to satisfy the model recovery guiding principle in different input domains X. Note that while we can choose Z, the binary operator, and G, we cannot choose X, the input domain.

Which explanation should I choose for continuous X? We now analyze the model recovery properties of existing explanation methods when the input domain is continuous. We consider methods based on additive continuous noise (SmoothGrad, Vanilla Gradients, and C-LIME), multiplicative continuous noise (Integrated Gradients and Gradient x Input), and multiplicative binary noise (LIME, KernelSHAP, and Occlusion). For these methods, we make the following remark regarding model recovery for the class of linear models.

Remark 1. For X = R^d and linear models f and g where f(x) = w_f^⊤ x and g(x) = w_g^⊤ x, additive continuous noise methods recover f (i.e., w_g = w_f), while multiplicative continuous and multiplicative binary noise methods do not, and instead recover w_g = w_f ⊙ x0.
This remark can be verified by directly evaluating the explanations (weights) of linear models, where the gradient exactly corresponds to the weights.
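As a numerical check of Remark 1 (our own sketch, not the paper's code; the finite-difference gradient estimator and all names are illustrative choices), the snippet below approximates SmoothGrad by averaging gradients over additive Gaussian noise and compares it with Gradient x Input on a linear model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
wf = rng.normal(size=d)
f = lambda x: wf @ x                      # linear black box f(x) = wf^T x
x0 = rng.normal(size=d)

def grad(f, x, eps=1e-5):
    """Central-difference estimate of the gradient of f at x."""
    e = np.eye(len(x)) * eps
    return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * eps) for i in range(len(x))])

# Additive continuous noise (SmoothGrad): mean gradient over Gaussian perturbations.
smoothgrad = np.mean([grad(f, x0 + rng.normal(0, 0.5, d)) for _ in range(200)], axis=0)

# Multiplicative-noise scaling (Gradient x Input): gradient scaled by the input.
grad_x_input = grad(f, x0) * x0

print(np.allclose(smoothgrad, wf, atol=1e-4))         # True: recovers wf
print(np.allclose(grad_x_input, wf * x0, atol=1e-4))  # True: recovers wf ⊙ x0, not wf
```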
Note that the inability of multiplicative continuous noise methods to recover the black-box model is not due to the multiplicative nature of the noise, but due to the parameterization of the loss function. Specifically, these methods (implicitly) use the loss function ℓ(f, g, x0, ξ) = ‖∇_ξ f(x_ξ) − ∇_ξ g(ξ)‖²₂. Slightly changing the loss function to ℓ(f, g, x0, ξ) = ‖∇_ξ f(x_ξ) − ∇_ξ g(x_ξ)‖²₂, i.e., replacing g(ξ) with g(x_ξ), would enable g* to recover f. This change would turn Integrated Gradients into ∫₀¹ ∇_{αx} f(αx) dα (omitting the input-multiplication term) and Gradient x Input into Vanilla Gradients.
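The effect of this loss-function fix can be checked numerically. Below is a small sketch (our own, with illustrative names; a midpoint Riemann sum stands in for the path integral, and a zero baseline is assumed) contrasting standard Integrated Gradients with the modified variant that omits the input-multiplication term:

```python
import numpy as np

def integrated_gradients(grad_f, x0, steps=100):
    """Standard IG (zero baseline): path-averaged gradient, scaled by the input."""
    alphas = (np.arange(steps) + 0.5) / steps          # midpoint rule on [0, 1]
    avg_grad = np.mean([grad_f(a * x0) for a in alphas], axis=0)
    return avg_grad * x0                               # gradient ⊙ input scaling

def modified_integrated_gradients(grad_f, x0, steps=100):
    """IG under the corrected loss: the same path average, input scaling omitted."""
    alphas = (np.arange(steps) + 0.5) / steps
    return np.mean([grad_f(a * x0) for a in alphas], axis=0)

wf = np.array([2.0, -1.0, 0.5])
x0 = np.array([1.0, 3.0, -2.0])
grad_f = lambda x: wf                                  # gradient of linear f(x) = wf^T x
print(integrated_gradients(grad_f, x0))                # wf ⊙ x0 — does not recover wf
print(modified_integrated_gradients(grad_f, x0))       # wf — recovers f
```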
A similar argument can be made for binary noise methods, which parameterize the loss function as ℓ(f, g, x0, ξ) = ‖f(x_ξ) − g(ξ)‖². By changing the loss function to ℓ(f, g, x0, ξ) = ‖f(x_ξ) − g(x_ξ)‖², binary noise methods can recover f for the case described in Remark 1. However, binary noise methods for continuous domains are unreliable, as there are cases where, despite the modification to ℓ, model recovery is not guaranteed. The following is an example of this scenario.

Remark 2. For X = R^d, periodic functions f and g where f(x) = Σ_{i=1}^d sin(w_{f_i} x_i) and g(x) = Σ_{i=1}^d sin(w_{g_i} x_i), and an integer n, binary noise methods do not perform model recovery for |w_{f_i}| = nπ/x_{0_i}.
This is because, under the conditions specified, sin(w_{f_i} x_{0_i}) = sin(±nπ) = 0 and sin(w_{f_i} · 0) = sin(0) = 0, i.e., every term of f outputs zero on every multiplicative binary perturbation of x0, thereby preventing model recovery. In this case, the discrete nature of the noise makes model recovery impossible. In general, discrete noise is inadequate for the recovery of models with large frequency components.
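This failure mode can be reproduced in a few lines. The sketch below (our own construction; the dimension, n, and x0 are arbitrary illustrative choices) shows that every multiplicative binary perturbation of x0 sends each sine argument to a multiple of π, so f is identically zero on the entire perturbation neighbourhood and carries no signal about w_f:

```python
import numpy as np
from itertools import product

n = 3
x0 = np.array([1.0, 2.0])
wf = n * np.pi / x0                      # frequencies satisfying wf_i = n*pi / x0_i
f = lambda x: np.sum(np.sin(wf * x))

# x0 ⊙ ξ has entries in {0, x0_i}, so each sine argument is 0 or n*pi.
for xi in product([0, 1], repeat=len(x0)):
    print(xi, f(x0 * np.array(xi)))      # every output is (numerically) zero
```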
Which explanation should I choose for binary X? In the binary domain, continuous noise methods are invalid, restricting the choice of methods to binary noise methods. For the reasons discussed above, methods with perturbation neighbourhoods characterized by multiplicative binary perturbations (e.g., LIME, KernelSHAP, and Occlusion) only enable g* to recover f in the binary domain. Note that the sinusoidal example in Remark 2 does not apply in this regime due to the continuous nature of its domain.
Which explanation should I choose for discrete X? In the discrete domain, continuous noise methods are also invalid. In addition, binary noise methods (e.g., LIME, KernelSHAP, and Occlusion) cannot be used either, because model recovery is not guaranteed in the sinusoidal case (Remark 2), following logic similar to that presented for the continuous domain. Note that none of the existing methods in Table 1 performs general discrete perturbations, suggesting that these methods are not suitable for the discrete domain. Thus, in the discrete domain, a user can apply the LFA framework to define a new explanation method, specifying an appropriate discrete noise type. In the next section, we discuss more broadly how one can use the LFA framework to create novel explanation methods.
4.3 Designing Novel Explanations with LFA
The LFA framework not only unifies existing explanation methods but also guides the creation of new ones. To explain a given black-box model prediction using the LFA framework, a user must specify (1) the interpretable model class G, (2) the neighbourhood distribution Z, (3) the loss function ℓ, and (4) the binary operator used to combine the input and the noise. Fixing these four components fully specifies an instance of the LFA framework, yielding an explanation method tailored to a given context.
To illustrate this, consider a scenario in which a user seeks to create a sparse variant of SmoothGrad that yields non-zero gradients for only a small number of features ("SparseSmoothGrad"). Designing SparseSmoothGrad only requires adding a regularization term to the loss function used in the SmoothGrad instance of the LFA framework (e.g., ℓ = ℓ_SmoothGrad + λ‖∇_ξ g(x_ξ)‖₀), at which point sparse solvers may be employed to solve the problem. Note that, unlike SmoothGrad, SparseSmoothGrad does not have a closed-form solution, but that is not an issue for the LFA framework. More generally, by allowing customization of (1), (2), (3), and (4), the LFA framework creates new explanation methods through "variations on a theme".
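As a concrete sketch of this recipe (our own, not the paper's implementation): if g is linear, ∇_ξ g(x_ξ) is just g's weight vector w, and relaxing the ‖·‖₀ penalty to ‖·‖₁ makes the regularized gradient-matching objective solvable by soft-thresholding the mean perturbed gradient. The L1 relaxation and all names below are our assumptions.

```python
import numpy as np

def sparse_smoothgrad(f, x0, sigma=0.5, n_samples=500, lam=0.1, rng=None):
    """SmoothGrad with an L1 relaxation of the L0 sparsity penalty.

    For linear g(x) = w @ x, minimizing E||grad f(x0 + xi) - w||^2 + lam * ||w||_1
    over w reduces to soft-thresholding the mean perturbed gradient at lam / 2.
    """
    rng = rng or np.random.default_rng(0)
    d = len(x0)

    def grad(x, eps=1e-5):
        e = np.eye(d) * eps
        return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * eps) for i in range(d)])

    g_bar = np.mean([grad(x0 + rng.normal(0, sigma, d)) for _ in range(n_samples)], axis=0)
    return np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam / 2, 0.0)  # soft threshold

# A weakly relevant second feature is zeroed out, yielding a sparse explanation.
f = lambda x: 3.0 * x[0] + 0.01 * x[1] ** 2
print(sparse_smoothgrad(f, np.array([1.0, 1.0])))   # roughly [2.95, 0.0]
```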
We summarize Section §4 as a table in Appendix A.4 and distill its practical implications into the following recommendation for choosing among explanation methods.
Recommendation for choosing among explanation methods. In general, choose methods that satisfy the guiding principle of model recovery in the input domain in question. For continuous data, use additive continuous noise methods (e.g., SmoothGrad, Vanilla Gradients, C-LIME) or modified multiplicative continuous noise methods (e.g., Integrated Gradients, Gradient x Input) as described in Section §4.2. For binary data, use binary noise methods (e.g., LIME, KernelSHAP, Occlusion). For discrete data, since no existing method uses discrete noise, design a novel explanation method using the LFA framework with a discrete noise neighbourhood. Within each input domain, choosing among appropriate methods boils down to determining the perturbation neighbourhood most suitable for the given context.
5 Empirical Evaluation
In this section, we present an empirical evaluation of the LFA framework. We first describe the experimental setup and then discuss three experiments and their findings.
5.1 Datasets, Models, and Metrics
Datasets. We experiment with two real-world datasets for two prediction tasks. The first dataset is the life expectancy dataset from the World Health Organization (WHO) [29]. It consists of countries’ demographic, economic, and health factors from 2000 to 2015, with 2,938 observations for 20 continuous features. We use this dataset to perform regression, predicting life expectancy. The other dataset is the home equity line of credit (HELOC) dataset from FICO [30]. It consists of information on HELOC applications, with 9,871 observations for 24 continuous features. We use this dataset to perform classification, predicting whether an applicant made payments without being 90 days overdue. Additional dataset details are described in Appendix A.5.
Models. For each dataset, we train four models: a simple model (linear regression for the WHO dataset and logistic regression for the HELOC dataset) that can satisfy the conditions of the guiding principle, and three more complex models (neural networks of varying complexity) that are more reflective of real-world applications. Model architectures and performance are described in Appendix A.5.
Metrics. To measure the similarity between two vectors (e.g., between two sets of explanations, or between an explanation and the true model weights), we use L1 distance and cosine distance. L1 distance ranges over [0, ∞) and is 0 when the two vectors are identical. Cosine distance ranges over [0, 2] and is 0 when the angle between the two vectors is 0° (or 360°). For both metrics, the lower the value, the more similar the two given vectors are.
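For completeness, a minimal sketch of these metrics follows (the paper does not specify whether its L1 distance is summed, averaged, or otherwise normalized; the version below uses the mean absolute difference, which is our assumption):

```python
import numpy as np

def l1_distance(u, v):
    """Mean absolute difference; 0 iff the two vectors coincide."""
    return float(np.mean(np.abs(u - v)))

def cosine_distance(u, v):
    """1 - cosine similarity; 0 when the angle between u and v is 0 (or 360 degrees)."""
    return float(1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```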
5.2 Experiments
Here, we describe the setup of the experiments, present results, and discuss their implications.
Experiment 1: Existing explanation methods are instances of the LFA framework. First, we compare existing methods with corresponding instances of the LFA framework to assess whether they generate the same explanations. To this end, we use seven methods to explain the predictions of black-box models for 100 randomly-selected test set points. For each method, explanations are computed using either the existing method (implemented by Meta’s Captum library [31]) or the corresponding instance of the LFA framework (Table 1). The similarity of a given pair of explanations is measured using L1 distance and cosine distance.
The L1 distance values for a neural network with three hidden layers trained on the WHO dataset are shown in Figure 1. In Figure 1a, the lowest L1 distance values appear on the diagonal of the heatmap, indicating that explanations generated by existing methods and corresponding instances of the LFA framework are very similar. Figures 1b and 1c show that explanations generated by instances of the LFA framework corresponding to SmoothGrad and Integrated Gradients converge to those of Vanilla Gradients and Gradient x Input, respectively. Together, these results demonstrate that, consistent with the theoretical results derived in Section §3, existing methods are instances of the LFA framework. In addition, the clustering of the methods in Figure 1a indicates that, consistent with the theoretical analysis in Section §4, for continuous data, SmoothGrad and Vanilla Gradients generate similar explanations, while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input generate similar explanations. We observe similar results across various datasets, models, and metrics (Appendix A.6.1).
Experiment 2: Some methods recover the underlying model while others do not (guiding principle). Next, we empirically assess which existing methods satisfy the guiding principle, i.e., which methods recover the black-box model f when f is of the interpretable model class G. We specify a setting in which f and g are of the same model class, generate explanations using each method, and assess whether g recovers f for each explanation. For the WHO dataset, we set f and g to be linear regression models and generate explanations for 100 randomly-selected test set points. Then, for each point, we compare g's weights with f's gradients alone or with f's gradients multiplied by the input, because, based on Section §4, some methods generate explanations on the scale of gradients while others generate them on the scale of gradient-times-input. Note that, for linear regression, f's gradients are f's weights.
Results are shown in Figure 2. Consistent with Section §4, for continuous data, SmoothGrad and Vanilla Gradients recover the black-box model, thereby satisfying the guiding principle, while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input do not. We observe similar results for the HELOC dataset using logistic regression models for f and g (Appendix A.6.2).
Experiment 3: No single method performs best across all neighbourhoods (no free lunch theorem). Lastly, we perform a set of experiments to illustrate the no free lunch theorem in Section §4. We generate explanations for black-box model predictions for 100 randomly-selected test set points and evaluate the explanations using perturbation tests based on top-k or bottom-k features. For perturbation tests based on top-k features, the setup is as follows. For a given data point, k, and explanation, we identify the top-k features and either replace them with zero (binary perturbation) or add Gaussian noise to them (continuous perturbation). Then, we calculate the absolute difference in model prediction before and after perturbation. For each point, we generate one binary perturbation (since such perturbations are deterministic) and 100 continuous perturbations (since such perturbations are random), computing the average absolute difference in model prediction for the latter. In this setup, methods that better identify important features yield larger changes in model prediction. For perturbation tests based on bottom-k features, we follow the same procedure but perturb the bottom-k features instead; here, methods that better identify unimportant features yield smaller changes in model prediction.
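A minimal sketch of this protocol is given below (our own re-implementation under the stated setup; the function and parameter names are illustrative). Calling it with mode='top' or mode='bottom' and noise='binary' or noise='continuous' reproduces the four test variants:

```python
import numpy as np

def perturbation_test(f, x0, attribution, k=3, mode="top", noise="binary",
                      sigma=0.5, n_samples=100, rng=None):
    """Absolute change in f's prediction after perturbing the k most
    (mode='top') or least (mode='bottom') important features."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(-np.abs(attribution))            # most important first
    idx = order[:k] if mode == "top" else order[-k:]

    if noise == "binary":                               # deterministic: zero out features
        x = x0.copy()
        x[idx] = 0.0
        return abs(f(x) - f(x0))

    diffs = []                                          # continuous: average over draws
    for _ in range(n_samples):
        x = x0.copy()
        x[idx] += rng.normal(0, sigma, size=k)
        diffs.append(abs(f(x) - f(x0)))
    return float(np.mean(diffs))
```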
Results of perturbation tests based on bottom-k features, performed on explanations for a neural network with three hidden layers trained on the WHO dataset, are displayed in Figure 3. Consistent with the no free lunch theorem in Section §4, LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input perform best on binary perturbation neighbourhoods (Figure 3a), while SmoothGrad and Vanilla Gradients perform best on continuous perturbation neighbourhoods (Figure 3b). We observe consistent results across perturbation test types (top-k and bottom-k), datasets, and models (Appendix A.6.3). These results have important implications: one should carefully consider the perturbation neighbourhood not only when selecting a method to generate explanations but also when selecting a method to evaluate explanations. In fact, the type of perturbations used to evaluate explanations directly determines explanation method performance.
6 Conclusions and Future Work
In this work, we formalize the local function approximation (LFA) framework and demonstrate that eight popular explanation methods can be characterized as instances of this framework with different local neighbourhoods and loss functions. We also introduce the no free lunch theorem for explanation methods, showing that no single method can perform optimally across all neighbourhoods, and provide a guiding principle for choosing among methods.
The function approximation perspective captures the essence of an explanation – a simplification of the real world (i.e., a black-box model) that is nonetheless accurate enough to be useful (i.e., predict outcomes of a set of perturbations). When the real world is “simple”, an explanation should completely capture its behaviour, a hallmark expressed precisely by the guiding principle. When the requirements of two explanations are distinct (i.e., they are trained to predict different sets of perturbations), then the explanations are each accurate in their own domain and may disagree, a phenomenon captured by the no free lunch theorem.
Our work makes fundamental contributions. We unify popular explanation methods, bringing diverse methods into a common framework. Unification brings conceptual coherence and clarity: diverse explanation methods, even those seemingly unrelated to function approximation, perform LFA but differ in the way they perform it. Unification also enables theoretical simplicity: to study diverse explanation methods, instead of analyzing each method individually, one can simply analyze the LFA framework and apply the findings to each method. An example of this is the no free lunch theorem which holds true for all instances of the LFA framework. Furthermore, our work provides practical guidance by presenting a principled approach to select among methods and design new ones.
Our work also addresses key open questions in the field. In response to criticism about the lack of consensus in the field regarding the overarching goals of post hoc explainability [32], our work points to function approximation as a principled goal. It also provides an explanation for the disagreement problem [12], i.e., why different methods generate different explanations for the same model prediction. According to the LFA framework, this disagreement occurs because different methods approximate the black-box model over different neighbourhoods using different loss functions.
Future research includes the following directions. First, we analyzed eight popular post hoc explanation methods; this analysis could be extended to other methods. Second, our work focuses on the faithfulness rather than the interpretability of explanations. The latter is encapsulated in the "interpretable" model class G, which includes all the information about human preferences with regard to interpretability. However, it is unclear what constitutes an interpretable explanation, and elucidating this requires not only conceptual understanding but also human-computer interaction research, such as user studies. These are important directions for future research.
Acknowledgements
The authors would like to thank the anonymous reviewers for their helpful feedback and the following funding agencies for supporting this work. This work is supported in part by NSF awards #IIS-2008461 and #IIS-2040989, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and D³ Institute at Harvard. H.L. would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. T.H. is supported in part by an NSF GRFP fellowship. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. | 1. What is the main contribution of the paper regarding local explanation techniques?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or suggestions for improving the paper's content or experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential biases in the paper's approach or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
In this paper, the authors propose a generic framework which encapsulates different local explanation techniques as special cases of their LFA (local function approximation) framework. They introduce a no free lunch theorem within the perspective of local explanations, claiming that no single explanation method can perform local function approximation well across all neighborhoods. Key experimental results include a comparison between Captum's explanations and the explanations from the proposed approach, measuring the proximity between the two. In addition, perturbation-based tests have been done to reinforce the notion of the no free lunch theorem.
Strengths And Weaknesses
Strengths
Paper is easy to follow and the authors have supported their claims through benchmarked results. Overall coverage of local explainers within their framework is also reasonably exhaustive.
Weakness
While the no free lunch theorem here might appear a really novel contribution, I don't find it particularly appealing, as papers in the past, e.g., Slack et al. (Fooling LIME and SHAP, AIES 2020) and several others, have demonstrated that neighborhood samplers are always prone to issues such as adversarial attacks, bias, etc. I believe the no free lunch theorem is just a fancier way of presenting something already known in the community.
I would have liked to see some more results on how LFA is more robust to issues such as overfitting, etc. (due to the train/dev/test split and hyper-parameter tuning advantages, etc.). It's a simple experiment but could enhance the value of the paper.
Post Rebuttal
I have changed my score after reading the author response to my queries.
Questions
In line with the review above, I would expect the authors to clarify what insight this no free lunch theorem provides to the XAI community that was not known before.
Limitations
Yes |
NIPS
The authors would like to thank the anonymous reviewers for their helpful feedback and the following funding agencies for supporting this work. This work is supported in part by NSF awards #IIS-2008461 and #IIS-2040989, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and D^3 Institute at Harvard. H.L. would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. T.H. is supported in part by an NSF GRFP fellowship. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. | 1. What is the focus and contribution of the paper regarding local feature importance explanations?
2. What are the strengths and weaknesses of the proposed Local Function Approximation (LFA) framework?
3. How does the reviewer assess the usefulness and potential applications of the LFA framework in practical scenarios?
4. What are some questions the reviewer has regarding the LFA framework's ability to guide practitioners in selecting appropriate explanation methods for specific use cases like debugging or identifying bias in models?
5. How does the reviewer think the LFA framework could help evaluate and compare different explanation methods unified within the framework? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This work presents a framework called local function approximation that can be used to unify local feature importance explanations (based on gradients and perturbations).
Strengths And Weaknesses
The overall usefulness of the LFA framework is questionable.
Based on the current work, a framework that unifies perturbation methods and gradient-based methods is "good to have" in terms of understanding all the methods together and perhaps designing new methods in the future. The idea of characterising explanation methods via model recovery is also good. However, it is still not clear whether the framework can help practitioners decide which explanation method to use for which dataset.
Some of the ideas discussed in relation to perturbation methods are partially known in the AI explainability community: for instance, that one could fit a model other than ridge regression for LIME, perform a train/test split on the perturbed dataset generated by LIME, or use an alternate loss function.
Questions
a) Can you mention a few practical uses of the LFA framework for data scientists or practitioners?
b) Is it possible to overcome any limitation of the explanation methods proposed in the literature through the LFA framework?
c) If my use case for explainability is debugging a model or identifying bias, can you elaborate on how the LFA framework could help me decide which explanation method I should use for these use cases?
d) Could you please elaborate on whether, and how, the LFA framework could help me evaluate the different explanation methods that it unifies?
Limitations
See above. |
NIPS | Title
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Abstract
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradients × Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods, demonstrating that no method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
1 Introduction
As machine learning models become increasingly complex and are increasingly deployed in high-stakes settings (e.g., medicine [1], law [2], and finance [3]), there is a growing emphasis on understanding how models make predictions so that decision-makers (e.g., doctors, judges, and loan officers) can assess the extent to which they can trust model predictions. To this end, several post hoc explanation methods have been developed, including LIME [4], C-LIME [5], SHAP [6], Occlusion [7], Vanilla Gradients [8], Gradient x Input [9], SmoothGrad [10], and Integrated Gradients [11]. However, different methods have different goals. Such differences lead to both conceptual and practical challenges to understanding and using explanation methods, thwarting progress in the field.
From a conceptual standpoint, the misalignment of goals among methods leads to an inconsistent view of explanations. What is an explanation? This is unclear as different methods have different notions of explanation. Depending on the method, explanations may be local function approximations (LIME and C-LIME), Shapley values (SHAP), raw gradients (Vanilla Gradients), raw gradients
scaled by the input (Gradient x Input), de-noised gradients (SmoothGrad), or a straight-line path integral of gradients (Integrated Gradients). Furthermore, the lack of a common mathematical framework for studying these diverse methods prevents a systematic understanding of these methods and their properties. To address these challenges, this paper unifies diverse explanation methods under a common framework, showing that diverse methods share a common motivation of local function approximation, and uses the framework to investigate and evaluate properties of these methods.
From a practical standpoint, the misalignment of goals among methods leads to the disagreement problem [12], the phenomenon that different methods provide disagreeing explanations for the same model prediction. Not only do different methods often generate disagreeing explanations in practice, but practitioners do not have a principled approach to select among explanations, resorting to ad hoc heuristics such as personal preference [12]. These findings prompt one to ask why explanation methods disagree and how to select among them in a principled manner. This paper addresses these questions, providing both an explanation for the disagreement problem and a principled approach to select among methods.
Thus, to address these conceptual and practical challenges, we study post hoc explanation methods from a function approximation perspective. We formalize a mathematical framework that unifies and characterizes diverse methods and that provides a principled approach to select among methods. Our work makes the following contributions:
1. We show that eight diverse, popular explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients) all perform local function approximation of the black-box model, differing only in the neighbourhoods and loss functions used to perform the approximation.
2. We introduce a no free lunch theorem for explanation methods which demonstrates that no single explanation method can perform local function approximation faithfully across all neighbourhoods, which in turn calls for a principled approach to select among methods.
3. To select among methods, we set forth a guiding principle based on function approximation, deeming a method to be effective if its explanation recovers the black-box model when the two are in the same model class (i.e., if the explanation perfectly approximates the black-box model when possible).
4. We empirically validate the theoretical results above using various real-world datasets, model classes, and prediction tasks.
2 Related Work
Post hoc explanation methods. Post hoc explanation methods can be classified based on model access (black-box model vs. access to model internals), explanation scope (global vs. local), search technique (perturbation-based vs. gradient-based), and basic unit of explanation (feature importance vs. rule-based). This paper focuses on local post hoc explanation methods based on feature importance. It analyzes four perturbation-based methods (LIME, C-LIME, KernelSHAP, and Occlusion) and four gradient-based methods (Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients).
Connections among post hoc explanation methods. Prior works have taken initial steps towards characterizing post hoc explanation methods and the connections among them. Agarwal et al. [5] proved that C-LIME and SmoothGrad converge to the same explanation in expectation. Lundberg and Lee [6] proposed a framework based on Shapley values to unify binary perturbation-based explanations. Covert et al. [13] found that many perturbation-based methods share the property of estimating feature importance based on the change in model behavior upon feature removal. In addition, Ancona et al. [14] analyzed four gradient-based explanation methods and the conditions under which they produce similar explanations. However, these analyses are based on mechanistic properties of methods (e.g., Shapley values or feature removal), are limited in scope (connecting only two methods, only perturbation-based methods, or only gradient-based methods), and do not inform when one method is preferable to another. In contrast, this paper formalizes a mathematical framework based on the concept of local function approximation, unifies eight diverse methods (spanning perturbation-based and gradient-based methods), and guides the use of these methods in practice.
Properties of post hoc explanation methods. Prior works have examined various properties of post hoc explanation methods, including faithfulness to the black-box model [15–17], robustness to adversarial attack [18–20, 15, 21], and fairness across subgroups [22]. This paper focuses on explanation faithfulness. Related works [15–17] assessed explanations generated by gradient-based methods, finding that they are not always faithful to the underlying model. Different from these works, this paper provides a framework for generating faithful explanations in the first place, theoretically characterizes the faithfulness of existing methods in different input domains, and provides a principled approach to select among methods and develop new ones based on explanation faithfulness.
3 Explanation as Local Function Approximation
In this section, we formalize the local function approximation framework and show its connection to existing explanation methods. We start by defining the notation used in the paper.
Notation. Let $f : \mathcal{X} \to \mathcal{Y}$ be the black-box function we seek to explain in a post hoc manner, with input domain $\mathcal{X}$ (e.g., $\mathcal{X} = \mathbb{R}^d$ or $\{0, 1\}^d$) and output domain $\mathcal{Y}$ (e.g., $\mathcal{Y} = \mathbb{R}$ or $[0, 1]$). Let $\mathcal{G} = \{g : \mathcal{X} \to \mathcal{Y}\}$ be the class of interpretable models used to generate a local explanation for $f$ by selecting a suitable interpretable model $g \in \mathcal{G}$. We characterize locality around a point $x_0 \in \mathcal{X}$ using a noise random variable $\xi$ which is sampled from distribution $\mathcal{Z}$. Let $x_\xi = x_0 \oplus \xi$ be a perturbation of $x_0$ generated by combining $x_0$ and $\xi$ using a binary operator $\oplus$ (e.g., addition, multiplication). Lastly, let $\ell(f, g, x_0, \xi) \in \mathbb{R}_+$ be the loss function (e.g., squared error, cross-entropy) measuring the distance between $f$ and $g$ over the noise random variable $\xi$ around $x_0$.
We now define the local function approximation framework.
Definition 1. Local function approximation (LFA) of a black-box model $f$ on a neighbourhood distribution $\mathcal{Z}$ around $x_0$ by an interpretable model class $\mathcal{G}$ and a loss function $\ell$ is given by
$$g^* = \operatorname*{argmin}_{g \in \mathcal{G}} \; \mathbb{E}_{\xi \sim \mathcal{Z}} \, \ell(f, g, x_0, \xi) \qquad (1)$$
where a valid loss $\ell$ is such that $\mathbb{E}_{\xi \sim \mathcal{Z}} \, \ell(f, g, x_0, \xi) = 0 \iff f(x_\xi) = g(x_\xi) \;\; \forall \xi \sim \mathcal{Z}$.
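To ground Definition 1, here is a minimal sketch (ours, not from any released implementation; all function and variable names are illustrative) that instantiates LFA with a linear interpretable model, additive Gaussian noise, and squared-error loss, fitting $g^*$ by least squares on sampled perturbations:

import numpy as np

def lfa_explain(f, x0, n_samples=1000, sigma=0.1, seed=0):
    # Definition 1 with G = linear models, Z = N(0, sigma^2 I), squared-error loss.
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, sigma, size=(n_samples, x0.shape[0]))  # noise xi ~ Z
    x_pert = x0 + xi                                            # x_xi = x0 (+) xi
    y = np.array([f(x) for x in x_pert])                        # black-box outputs
    A = np.hstack([x_pert, np.ones((n_samples, 1))])            # design for g(x) = w^T x + b
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)                # argmin of the empirical loss
    return coef[:-1], coef[-1]                                  # explanation g* = (w, b)

f = lambda x: np.sin(x[0]) + x[1] ** 2                          # a toy black box
w, b = lfa_explain(f, np.array([0.5, -1.0]))
print(w)                                                        # close to the local gradient [cos(0.5), -2.0]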
The LFA framework is a formalization of the function approximation perspective first introduced by LIME [4] to motivate local explanations. Note that this conceptual framework is distinct from the algorithm introduced by LIME. We elaborate on this distinction below.
(1) The LFA framework requires that $f$ and $g$ share the same input domain $\mathcal{X}$ and output domain $\mathcal{Y}$, a fundamental prerequisite for function approximation. This implies, for example, that using an interpretable model $g$ with binary inputs ($\mathcal{X} = \{0, 1\}^d$) to approximate a black-box model $f$ with continuous inputs ($\mathcal{X} = \mathbb{R}^d$), as proposed in LIME, is not true function approximation.
(2) By imposing a condition on the loss function, the LFA framework ensures model recovery under specific conditions: $g^*$ recovers $f$ (i.e., $g^* = f$) through LFA when $f$ itself is of the interpretable model class $\mathcal{G}$ (i.e., $f \in \mathcal{G}$) and perturbations span the input domain of $f$ (i.e., $\mathrm{domain}(x) = \mathcal{X}$). This is a key distinction between the LFA framework and LIME (which has no such requirement) and guides the characterization of explanation methods in Section §4.
(3) Efficiently minimizing Equation 1 requires following standard machine learning methodology of splitting the perturbation data into train / validation / test sets and tuning hyper-parameters on the validation set to ensure generalization. To our knowledge, implementations of LIME do not adopt this procedure, making it possible to overfit to a small number of perturbations.
The LFA framework is generic enough to accommodate a variety of explanation methods. In fact, we show that specific instances of this framework converge to existing methods, as summarized in Table 1. At a high level, existing methods use a linear model $g$ to locally approximate the black-box model $f$ in different input domains (binary or continuous) over different local neighbourhoods specified by the noise random variable $\xi$ (where $\xi$ is binary or continuous, drawn from a specified distribution, and combined additively or multiplicatively with the point $x_0$) using different loss functions (squared-error or gradient-matching loss). We discuss the details of these connections in the following sections.
3.1 LFA with Continuous Noise: Gradient-Based Explanation Methods
To connect gradient-based explanation methods to the LFA framework, we leverage the gradient-matching loss function $\ell_{gm}$. We define $\ell_{gm}$ and show that it is a valid loss function for LFA.
$$\ell_{gm}(f, g, x_0, \xi) = \| \nabla_\xi f(x_0 \oplus \xi) - \nabla_\xi g(x_0 \oplus \xi) \|_2^2 \qquad (2)$$
This loss function has been previously used in the contexts of generative modeling (where it is dubbed score-matching) [23] and model distillation [16]. However, to our knowledge, its use in interpretability is novel.
Proposition 1. The gradient-matching loss function $\ell_{gm}$ is a valid loss function for LFA up to a constant, i.e., $\mathbb{E}_{\xi \sim \mathcal{Z}} \, \ell_{gm}(f, g, x_0, \xi) = 0 \iff f(x_\xi) = g(x_\xi) + C \;\; \forall \xi \sim \mathcal{Z}$, where $C \in \mathbb{R}$.
Proof. If $f(x_\xi) = g(x_\xi)$, then $\nabla_\xi f(x_\xi) = \nabla_\xi g(x_\xi)$ and it follows from the definition of $\ell_{gm}$ that $\ell_{gm} = 0$. Conversely, if $\mathbb{E}_{\xi \sim \mathcal{Z}} \, \ell_{gm} = 0$, then $\nabla_\xi f(x_\xi) = \nabla_\xi g(x_\xi)$; integrating gives $f(x_\xi) = g(x_\xi) + C$.
Proposition 1 implies that, when using the linear model class $\mathcal{G}$ parameterized by $g(x) = w^\top x + b$ to approximate $f$, $g^*$ recovers $w$ but not $b$. This can be fixed by setting $b = f(0)$.
Theorem 1. LFA with gradient-matching loss is equivalent to (1) SmoothGrad for additive continuous Gaussian noise, which converges to Vanilla Gradients in the limit of a small standard deviation for the Gaussian distribution; and (2) Integrated Gradients for multiplicative continuous Uniform noise, which converges to Gradient x Input in the limit of a small support for the Uniform distribution.
Proof Sketch. For SmoothGrad and Integrated Gradients, the idea is that these methods are exactly the first-order stationary points of the gradient-matching loss function under their respective noise distributions. In other words, the weights of the interpretable model $g$ that minimize the loss function are the explanation returned by each method. For Vanilla Gradients and Gradient x Input, the result is derived by taking the specified limits and using the Dirac delta function to calculate the limit. In the limit, the weights of the interpretable model $g$ converge to the explanation of each method. The full proof is in Appendix A.1.
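As a concrete check of the first claim of Theorem 1, the sketch below (our illustration; the closed-form grad_f is an assumption for the toy example) shows that for a linear $g$ the gradient-matching loss is minimized by the average gradient over the Gaussian neighbourhood, i.e., exactly SmoothGrad, which collapses to Vanilla Gradients as the standard deviation shrinks:

import numpy as np

def smoothgrad_via_lfa(grad_f, x0, n_samples=5000, sigma=0.3, seed=0):
    # For linear g(x) = w^T x we have grad_xi g = w, so minimizing
    # E || grad_xi f(x0 + xi) - w ||_2^2 over w yields w* = E[grad f(x0 + xi)],
    # which is precisely the SmoothGrad explanation.
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, sigma, size=(n_samples, x0.shape[0]))
    return np.mean([grad_f(x0 + e) for e in xi], axis=0)

grad_f = lambda x: np.array([np.cos(x[0]), 2.0 * x[1]])  # gradient of sin(x1) + x2^2
x0 = np.array([0.5, -1.0])
print(smoothgrad_via_lfa(grad_f, x0))                    # SmoothGrad explanation
print(smoothgrad_via_lfa(grad_f, x0, sigma=1e-4))        # ~ Vanilla Gradients [cos(0.5), -2.0]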
Along with gradient-based methods, C-LIME (a perturbation-based method) is an instance of the LFA framework by definition, using the squared-error loss function. The analysis in this section characterizes methods that use continuous noise. It does not extend to binary or discrete noise methods because gradients and continuous random variables do not apply in these domains. In the next section, we discuss binary noise methods.
3.2 LFA with Binary Noise: LIME, KernelSHAP and Occlusion maps
Theorem 2. LFA with multiplicative binary noise and squared-error loss is equivalent to (1) LIME for noise sampled from an unnormalized exponential kernel over binary vectors; (2) KernelSHAP
for noise sampled from an unnormalized Shapley kernel; and (3) Occlusion for noise in the form of one-hot vectors.
Proof Sketch. For LIME and KernelSHAP, the equivalence is mostly by definition: these methods have components that correspond to the interpretable model g and the loss function ` of the LFA framework and we need only to determine the local neighbourhood Z . We define the local neighbourhood Z using each method’s weighting kernel. In this setup, the LFA framework yields the respective explanation methods in expectation via importance sampling. For Occlusion, the equivalence involves enumerating all perturbations, specifying an appropriate loss function, and computing the resulting stationary points of the loss function. The full proof is in Appendix A.1.
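The following sketch (ours) mirrors the LIME instance of Theorem 2, combining multiplicative binary noise, a squared-error loss, and weights from an exponential kernel; the kernel width and sampling scheme are illustrative choices, not the defaults of any particular library:

import numpy as np

def binary_lfa(f, x0, n_samples=2000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    z = rng.integers(0, 2, size=(n_samples, d))       # binary noise xi in {0, 1}^d
    x_pert = x0 * z                                   # multiplicative perturbation
    y = np.array([f(x) for x in x_pert])
    dist = 1.0 - z.mean(axis=1)                       # distance from the all-ones vector
    wts = np.exp(-dist ** 2 / kernel_width ** 2)      # exponential kernel over binary vectors
    A = np.hstack([z, np.ones((n_samples, 1))])       # linear g on the binary pattern
    sw = np.sqrt(wts)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)  # weighted least squares
    return coef[:-1]                                  # per-feature importances

f = lambda x: 3 * x[0] - 2 * x[1] + x[2]
print(binary_lfa(f, np.ones(3)))                      # approximately [3, -2, 1]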
3.3 Which Methods Do Not Perform LFA?
Some popular explanation methods are not instances of the LFA framework due to their properties. These methods include guided backpropagation [24], DeconvNet [25], Grad-CAM [26], GradCAM++ [27], FullGrad [28], and DeepLIFT [9]. Further details are in Appendix A.2.
4 When Do Explanations Perform Model Recovery?
Having described the LFA framework and its connections to existing explanation methods, we now leverage this framework to analyze the performance of methods under different conditions. We introduce a no free lunch theorem for explanation methods, inspired by classical no free lunch theorems in learning theory and optimization. Then, we assess the ability of existing methods to perform model recovery and, based on this assessment, provide recommendations for choosing among methods.
4.1 No Free Lunch Theorem for Explanation Methods
An important implication of the function approximation perspective is that no explanation can be optimal across all neighbourhoods because each explanation is designed to perform LFA in a specific neighbourhood. This is especially true for explanations of non-linear models. We formalize this intuition into the following theorem.
Theorem 3 (No Free Lunch for Explanation Methods). Consider explaining a black-box model $f$ around point $x_0$ using an interpretable model $g$ from model class $\mathcal{G}$ and a valid loss function $\ell$, where the distance between $f$ and $\mathcal{G}$ is given by $d(f, \mathcal{G}) = \min_{g \in \mathcal{G}} \max_{x \in \mathcal{X}} \ell(f, g, 0, x)$. Then, for any explanation $g^*$ over a neighbourhood distribution $\xi_1 \sim \mathcal{Z}_1$ such that $\max_{\xi_1} \ell(f, g^*, x_0, \xi_1) \le \epsilon$, there always exists another neighbourhood $\xi_2 \sim \mathcal{Z}_2$ such that $\max_{\xi_2} \ell(f, g^*, x_0, \xi_2) \ge d(f, \mathcal{G})$.
Proof Sketch. The idea is that, given an explanation obtained by using $g$ to approximate $f$ over a specific local neighbourhood $\mathcal{Z}$, it is always possible to find a local neighbourhood over which this explanation does not perform well (i.e., does not perform faithful LFA). Thus, no single explanation method can perform well over all local neighbourhoods. The proof entails constructing an "adversarial" input for an explanation $g^*$ such that $g^*$ has a large loss for this input and then creating a neighbourhood that contains this adversarial input, which will provably have a large loss. The magnitude of this loss is $d(f, \mathcal{G})$, the distance between $f$ and the model class $\mathcal{G}$, inspired by the Hausdorff distance. The proof is generic and makes no assumptions regarding the forms of $\ell$, $\mathcal{G}$, or $\mathcal{Z}_1$. The full proof is in Appendix A.3.
Thus, an explanation on a finite $\mathcal{Z}_1$ necessarily cannot approximate function behaviour at all other points, especially when $\mathcal{G}$ is less expressive than $f$, which is indicated by a large value of $d(f, \mathcal{G})$; in the general case, one cannot perform model recovery because $\mathcal{G}$ is less expressive than $f$. An important implication of Theorem 3 is that seeking to find the "best" explanation without specifying a corresponding neighbourhood is futile, as no universal "best" explanation exists. Furthermore, once the neighbourhood is specified, the best explanation is exactly the one given by the corresponding instance of the LFA framework.
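A small numeric illustration of Theorem 3 (ours): a linear explanation fit on a narrow neighbourhood of a quadratic black box is faithful there, yet incurs a large worst-case loss on a wider neighbourhood, since no linear $g$ can track $x^2$ globally:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                       # black box not in the linear class G
x0 = 1.0

def fit_linear(sigma):                     # LFA with squared-error loss on N(0, sigma^2)
    x = x0 + rng.normal(0.0, sigma, 5000)
    return np.polyfit(x, f(x), 1)          # returns (w, b)

def worst_case_loss(w, b, sigma):          # max loss over a sampled neighbourhood
    x = x0 + rng.normal(0.0, sigma, 5000)
    return np.max((f(x) - (w * x + b)) ** 2)

w, b = fit_linear(0.1)                     # explanation fit on the narrow neighbourhood Z1
print(worst_case_loss(w, b, 0.1))          # small: faithful on Z1
print(worst_case_loss(w, b, 3.0))          # large: unfaithful on a wider Z2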
In the next section, we consider the special case when $d(f, \mathcal{G}) = 0$ (i.e., when $f \in \mathcal{G}$), where Theorem 3 does not apply because the same explanation can be optimal for multiple neighbourhoods and model recovery is thus possible.
4.2 Characterizing Explanation Methods via Model Recovery
Next, we formally state the model recovery condition for explanation methods. Then, we use this condition as a guiding principle to choose among methods.
Definition 2 (Model Recovery: Guiding Principle). Given an instance of the LFA framework with a black-box model $f$ such that $f \in \mathcal{G}$ and a specific noise type (e.g., Gaussian, Uniform), an explanation method performs model recovery if there exists some noise distribution $\mathcal{Z}$ such that LFA returns $g^* = f$.
In other words, when the black-box model f itself is of the interpretable model class G, there must exist some setting of the noise distribution (within the noise type specified in the instance of the LFA framework) that is able to recover the black-box model. Thus, in this special case, we require local function approximation to lead to global model recovery over all inputs. This criterion can be thought of as a “sanity check” for explanation methods to ensure that they remain faithful to the black-box model.
Next, we analyze the impact of the choice of perturbation neighbourhood $\mathcal{Z}$, the binary operator $\oplus$, and the interpretable model class $\mathcal{G}$ on an explanation method's ability to satisfy the model recovery guiding principle in different input domains $\mathcal{X}$. Note that while we can choose $\mathcal{Z}$, $\oplus$, and $\mathcal{G}$, we cannot choose $\mathcal{X}$, the input domain.
Which explanation should I choose for continuous $\mathcal{X}$? We now analyze the model recovery properties of existing explanation methods when the input domain is continuous. We consider methods based on additive continuous noise (SmoothGrad, Vanilla Gradients, and C-LIME), multiplicative continuous noise (Integrated Gradients and Gradient x Input), and multiplicative binary noise (LIME, KernelSHAP, and Occlusion). For these methods, we make the following remark regarding model recovery for the class of linear models.
Remark 1. For $\mathcal{X} = \mathbb{R}^d$ and linear models $f$ and $g$ where $f(x) = w_f^\top x$ and $g(x) = w_g^\top x$, additive continuous noise methods recover $f$ (i.e., $w_g = w_f$) while multiplicative continuous and multiplicative binary noise methods do not and instead recover $w_g = w_f \odot x$.
This remark can be verified by directly evaluating the explanations (weights) of linear models, where the gradient exactly corresponds to the weights.
Note that the inability of multiplicative continuous noise methods to recover the black-box model is not due to the multiplicative nature of the noise, but due to the parameterization of the loss function. Specifically, these methods (implicitly) use the loss function $\ell(f, g, x_0, \xi) = \|\nabla_\xi f(x_\xi) - \nabla_\xi g(\xi)\|_2^2$. Slightly changing the loss function to $\ell(f, g, x_0, \xi) = \|\nabla_\xi f(x_\xi) - \nabla_\xi g(x_\xi)\|_2^2$, i.e., replacing $g(\xi)$ with $g(x_\xi)$, would enable $g^*$ to recover $f$. This would change Integrated Gradients to $\int_{\alpha=0}^{1} \nabla_{\alpha x} f(\alpha x)\, d\alpha$ (omitting the input multiplication term) and Gradient x Input to Vanilla Gradients.
A similar argument can be made for binary noise methods, which parameterize the loss function as $\ell(f, g, x_0, \xi) = \|f(x_\xi) - g(\xi)\|^2$. By changing the loss function to $\ell(f, g, x_0, \xi) = \|f(x_\xi) - g(x_\xi)\|^2$, binary noise methods can recover $f$ for the case described in Remark 1. However, binary noise methods for continuous domains are unreliable, as there are cases where, despite the modification to $\ell$, model recovery is not guaranteed. The following is an example of this scenario.
Remark 2. For $\mathcal{X} = \mathbb{R}^d$, periodic functions $f$ and $g$ where $f(x) = \sum_{i=1}^{d} \sin(w_{f_i} x_i)$ and $g(x) = \sum_{i=1}^{d} \sin(w_{g_i} x_i)$, and an integer $n$, binary noise methods do not perform model recovery for $|w_{f_i}| = \frac{n\pi}{x_{0_i}}$.
This is because, for the conditions specified, $\sin(w_{f_i} x_{0_i}) = \sin(\pm n\pi) = \sin(0) = 0$, i.e., $\sin(w_{f_i} x_{0_i})$ outputs zero for all binary perturbations, thereby preventing model recovery. In this case, the discrete nature of the noise makes model recovery impossible. In general, discrete noise is inadequate for the recovery of models with large frequency components.
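A quick numeric check of Remark 2 (our illustration): with $|w_{f_i}| = n\pi / x_{0_i}$, every multiplicative binary perturbation of $x_0$ lands on a zero of the sinusoid, so binary noise methods observe a constant function and cannot recover $w_f$:

import numpy as np
from itertools import product

x0 = np.array([0.5, 2.0])
w_f = np.pi * np.array([2, 3]) / x0            # |w_fi| = n * pi / x0i with n = 2, 3
f = lambda x: np.sin(w_f * x).sum()

for z in product([0, 1], repeat=2):            # all multiplicative binary perturbations
    print(z, round(f(x0 * np.array(z)), 12))   # every output is 0: f looks constant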
Which explanation should I choose for binary $\mathcal{X}$? In the binary domain, continuous noise methods are invalid, restricting the choice of methods to binary noise methods. For reasons discussed above, methods with perturbation neighbourhoods characterized by multiplicative binary perturbations (e.g., LIME, KernelSHAP, and Occlusion) only enable $g^*$ to recover $f$ in the binary domain. Note that
the sinusoidal example in Remark 2 does not apply in this regime due to the continuous nature of its domain.
Which explanation should I choose for discrete $\mathcal{X}$? In the discrete domain, continuous noise methods are also invalid. In addition, binary noise methods (e.g., LIME, KernelSHAP, and Occlusion) cannot be used either, because model recovery is not guaranteed in the sinusoidal case (Remark 2), following similar logic to that presented for continuous noise. Note that none of the existing methods in Table 1 perform general discrete perturbations, suggesting that these methods are not suitable for the discrete domain. Thus, in the discrete domain, a user can apply the LFA framework to define a new explanation method, specifying an appropriate discrete noise type. In the next section, we discuss more broadly how one can use the LFA framework to create novel explanation methods.
4.3 Designing Novel Explanations with LFA
The LFA framework not only unifies existing explanation methods but also guides the creation of new ones. To explain a given black-box model prediction using the LFA framework, a user must specify the (1) interpretable model class $\mathcal{G}$, (2) neighbourhood distribution $\mathcal{Z}$, (3) loss function $\ell$, and (4) binary operator $\oplus$ to combine the input and the noise. Specifying these four components completely specifies an instance of the LFA framework, thereby generating an explanation method tailored to a given context.
To illustrate this, consider a scenario in which a user seeks to create a sparse variant of SmoothGrad that yields non-zero gradients for only a small number of features ("SparseSmoothGrad"). Designing SparseSmoothGrad only requires the addition of a regularization term to the loss function used in the SmoothGrad instance of the LFA framework (e.g., $\ell = \ell_{\mathrm{SmoothGrad}} + \|\nabla_\xi g(x_\xi)\|_0$), at which point sparse solvers may be employed to solve the problem. Note that, unlike SmoothGrad, SparseSmoothGrad does not have a closed-form solution, but that is not an issue for the LFA framework. More generally, by allowing customization of (1), (2), (3), and (4), the LFA framework creates new explanation methods through "variations on a theme".
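A sketch of the SparseSmoothGrad idea follows (ours). Because the $\ell_0$ penalty is non-convex, this sketch substitutes the standard $\ell_1$ relaxation, which for a linear $g$ reduces to soft-thresholding the SmoothGrad average; treating $\ell_1$ as a stand-in for $\ell_0$ is our assumption, not the paper's prescription:

import numpy as np

def sparse_smoothgrad(grad_f, x0, lam=0.1, n_samples=5000, sigma=0.3, seed=0):
    # argmin_w E || grad f(x0 + xi) - w ||_2^2 + lam * ||w||_1 decouples per
    # coordinate and is solved by soft-thresholding the mean gradient.
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, sigma, size=(n_samples, x0.shape[0]))
    g_bar = np.mean([grad_f(x0 + e) for e in xi], axis=0)        # plain SmoothGrad
    return np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam / 2.0, 0.0)

grad_f = lambda x: np.array([np.cos(x[0]), 2.0 * x[1], 0.01])    # third feature nearly irrelevant
print(sparse_smoothgrad(grad_f, np.array([0.5, -1.0, 3.0])))     # third weight zeroed out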
We summarize Section §4 as a table in Appendix A.4 and discuss the practical implications of Section §4 by providing the following recommendation for choosing among explanation methods.
Recommendation for choosing among explanation methods. In general, choose methods that satisfy the guiding principle of model recovery in the input domain in question. For continuous data, use additive continuous noise methods (e.g., SmoothGrad, Vanilla Gradients, C-LIME) or modified multiplicative continuous noise methods (e.g., Integrated Gradients, Gradient x Input) as described in Section §4.2. For binary data, use binary noise methods (e.g., LIME, KernelSHAP, Occlusion). Given that methods using discrete noise do not exist, for discrete data one should design novel explanation methods using the LFA framework with discrete noise neighbourhoods. Within each input domain, choosing among appropriate methods boils down to determining the perturbation neighbourhood most suitable in the given context.
5 Empirical Evaluation
In this section, we present an empirical evaluation of the LFA framework. We first describe the experimental setup and then discuss three experiments and their findings.
5.1 Datasets, Models, and Metrics
Datasets. We experiment with two real-world datasets for two prediction tasks. The first dataset is the life expectancy dataset from the World Health Organization (WHO) [29]. It consists of countries’ demographic, economic, and health factors from 2000 to 2015, with 2,938 observations for 20 continuous features. We use this dataset to perform regression, predicting life expectancy. The other dataset is the home equity line of credit (HELOC) dataset from FICO [30]. It consists of information on HELOC applications, with 9,871 observations for 24 continuous features. We use this dataset to perform classification, predicting whether an applicant made payments without being 90 days overdue. Additional dataset details are described in Appendix A.5.
Models. For each dataset, we train four models: a simple model (linear regression for the WHO dataset and logistic regression for the HELOC dataset) that can satisfy the conditions of the guiding principle and three more complex models (neural networks of varying complexity) that are more reflective of real-world applications. Model architectures and performance are described in Appendix A.5.
Metrics. To measure the similarity between two vectors (e.g., between two sets of explanations or between an explanation and the true model weights), we use L1 distance and cosine distance. L1 distance ranges over $[0, \infty)$ and is 0 when two vectors are the same. Cosine distance ranges over $[0, 2]$ and is 0 when the angle between two vectors is 0° (or 360°). For both metrics, the lower the value, the more similar two given vectors are.
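For concreteness, a minimal sketch (ours) of the two metrics as used here:

import numpy as np

def l1_distance(u, v):
    return np.abs(u - v).sum()                  # in [0, inf); 0 iff the vectors are equal

def cosine_distance(u, v):
    return 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))  # in [0, 2]

u = np.array([1.0, 2.0, 3.0])
print(l1_distance(u, u), cosine_distance(u, u))  # 0.0 0.0 for identical vectors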
5.2 Experiments
Here, we describe the setup of the experiments, present results, and discuss their implications.
Experiment 1: Existing explanation methods are instances of the LFA framework. First, we compare existing methods with corresponding instances of the LFA framework to assess whether they generate the same explanations. To this end, we use seven methods to explain the predictions of black-box models for 100 randomly-selected test set points. For each method, explanations are computed using either the existing method (implemented by Meta’s Captum library [31]) or the corresponding instance of the LFA framework (Table 1). The similarity of a given pair of explanations is measured using L1 distance and cosine distance.
The L1 distance values for a neural network with three hidden layers trained on the WHO dataset are shown in Figure 1. In Figure 1a, lowest L1 distance values appear in the diagonal of the heatmap, indicating that explanations generated by existing methods and corresponding instances of the LFA framework are very similar. Figures 1b and 1c show that explanations generated by instances of the LFA framework corresponding to SmoothGrad and Integrated Gradients converge to those of Vanilla Gradients and Gradient x Input, respectively. Together, these results demonstrate that, consistent with the theoretical results derived in Section §3, existing methods are instances of the LFA framework. In addition, the clustering of the methods in Figure 1a indicates that, consistent with the theoretical analysis in Section §4, for continuous data, SmoothGrad and Vanilla Gradients generate similar explanations while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input generate similar explanations. We observe similar results across various datasets, models, and metrics (Appendix A.6.1).
Experiment 2: Some methods recover the underlying model while others do not (guiding principle). Next, we empirically assess which existing methods satisfy the guiding principle, i.e., which methods recover the black-box model f when f is of the interpretable model class G. We specify a setting in which f and g are of the same model class, generate explanations using each method, and assess whether g recovers f for each explanation. For the WHO dataset, we set f
and $g$ to be linear regression models and generate explanations for 100 randomly-selected test set points. Then, for each point, we compare $g$'s weights with $f$'s gradients alone or with $f$'s gradients multiplied by the input because, based on Section §4, some methods generate explanations on the scale of gradients while others on the scale of gradient-times-input. Note that, for linear regression, $f$'s gradients are $f$'s weights.
Results are shown in Figure 2. Consistent with Section §4, for continuous data, SmoothGrad and Vanilla Gradients recover the black-box model, thereby satisfying the guiding principle, while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input do not. We observe similar results for the HELOC dataset using logistic regression models for f and g (Appendix A.6.2).
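The sketch below (ours, with synthetic data standing in for the WHO features; all names illustrative) mirrors this recovery check: when the black box is itself linear, a SmoothGrad-style explanation returns the model's weights exactly, whereas multiplying gradients by the input does not:

import numpy as np

rng = np.random.default_rng(0)
w_f = np.array([2.0, -1.0, 0.5])                    # weights of a linear black box
f = lambda x: w_f @ x

def num_grad(f, x, h=1e-5):                         # central finite-difference gradient
    e = np.eye(x.shape[0]) * h
    return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * h) for i in range(x.shape[0])])

x0 = rng.normal(size=3)
xi = rng.normal(0.0, 0.3, size=(200, 3))
sg = np.mean([num_grad(f, x0 + e) for e in xi], axis=0)  # SmoothGrad-style explanation
print(np.abs(sg - w_f).sum())                       # ~0: the explanation recovers w_f
print(np.abs(sg * x0 - w_f).sum())                  # gradient-times-input deviates from w_f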
Experiment 3: No single method performs best across all neighbourhoods (no free lunch theorem). Lastly, we perform a set of experiments to illustrate the no free lunch theorem in Section §4. We generate explanations for black-box model predictions for 100 randomly-selected test set points and evaluate the explanations using perturbation tests based on top-k or bottom-k features. For perturbation tests based on top-k features, the setup is as follows. For a given data point, k, and explanation, we identify the top-k features and either replace them with zero (binary perturbation) or add Gaussian noise to them (continuous perturbation). Then, we calculate the absolute difference in model prediction before and after perturbation. For each point, we generate one binary perturbation (since such perturbations are deterministic) and 100 continuous perturbations (since such perturbations are random), computing the average absolute difference in model prediction for the latter. In this setup, methods that better identify important features yield larger changes in model prediction. For perturbation tests based on bottom-k features, we follow the same procedure
but perturb the bottom-k features instead. In this setup, methods that better identify unimportant features yield smaller changes in model prediction.
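A sketch of the bottom-k perturbation test follows (ours; the toy model and importance scores are illustrative):

import numpy as np

def bottom_k_test(model, x, importances, k, n_draws=100, sigma=0.5, seed=0):
    # Perturb the k least-important features; a smaller change in the model
    # prediction indicates better identification of unimportant features.
    rng = np.random.default_rng(seed)
    bottom = np.argsort(np.abs(importances))[:k]
    base = model(x)
    xb = x.copy()
    xb[bottom] = 0.0                                # binary perturbation (deterministic)
    binary_change = abs(model(xb) - base)
    changes = []
    for _ in range(n_draws):                        # continuous perturbations (averaged)
        xc = x.copy()
        xc[bottom] += rng.normal(0.0, sigma, size=k)
        changes.append(abs(model(xc) - base))
    return binary_change, float(np.mean(changes))

model = lambda x: 3 * x[0] - 2 * x[1] + 0.01 * x[2]
print(bottom_k_test(model, np.ones(3), np.array([3.0, -2.0, 0.01]), k=1))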
Results of perturbation tests based on bottom-k features performed on explanations for a neural network with three hidden layers trained on the WHO dataset are displayed in Figure 3. Consistent with the no free lunch theorem in Section §4, LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input perform best on binary perturbation neighbourhoods (Figure 3a) while SmoothGrad and Vanilla Gradients perform best on continuous perturbation neighbourhoods (Figure 3b). We observe consistent results across perturbation test types (top-k and bottom-k), datasets, and models (Appendix A.6.3). These results have important implications: one should carefully consider the perturbation neighbourhood not only when selecting a method to generate explanations but also when selecting a method to evaluate explanations. In fact, the type of perturbations used to evaluate explanations directly determines explanation method performance.
6 Conclusions and Future Work
In this work, we formalize the local function approximation (LFA) framework and demonstrate that eight popular explanation methods can be characterized as instances of this framework with different local neighbourhoods and loss functions. We also introduce the no free lunch theorem for explanation methods, showing that no single method can perform optimally across all neighbourhoods, and provide a guiding principle for choosing among methods.
The function approximation perspective captures the essence of an explanation – a simplification of the real world (i.e., a black-box model) that is nonetheless accurate enough to be useful (i.e., predict outcomes of a set of perturbations). When the real world is “simple”, an explanation should completely capture its behaviour, a hallmark expressed precisely by the guiding principle. When the requirements of two explanations are distinct (i.e., they are trained to predict different sets of perturbations), then the explanations are each accurate in their own domain and may disagree, a phenomenon captured by the no free lunch theorem.
Our work makes fundamental contributions. We unify popular explanation methods, bringing diverse methods into a common framework. Unification brings conceptual coherence and clarity: diverse explanation methods, even those seemingly unrelated to function approximation, perform LFA but differ in the way they perform it. Unification also enables theoretical simplicity: to study diverse explanation methods, instead of analyzing each method individually, one can simply analyze the LFA framework and apply the findings to each method. An example of this is the no free lunch theorem which holds true for all instances of the LFA framework. Furthermore, our work provides practical guidance by presenting a principled approach to select among methods and design new ones.
Our work also addresses key open questions in the field. In response to criticism about the lack of consensus in the field regarding the overarching goals of post hoc explainability [32], our work points to function approximation as a principled goal. It also provides an explanation for the disagreement problem [12], i.e., why different methods generate different explanations for the same model prediction. According to the LFA framework, this disagreement occurs because different methods approximate the black-box model over different neighbourhoods using different loss functions.
Future research includes the following directions. First, we analyzed eight popular post hoc explanation methods and this analysis could be extended to other methods. Second, our work focuses on the faithfulness rather than interpretability of explanations. The latter is encapsulated in the “interpretable” model class G, which includes all the information about human preferences with regards to interpretability. However, it is unclear what constitutes an interpretable explanation and elucidating this takes not only conceptual understanding but also human-computer interaction research such as user studies. These are important directions for future research.
Acknowledgements
The authors would like to thank the anonymous reviewers for their helpful feedback and the following funding agencies for supporting this work. This work is supported in part by NSF awards #IIS-2008461 and #IIS-2040989, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and D^3 Institute at Harvard. H.L. would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. T.H. is supported in part by an NSF GRFP fellowship. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. | 1. What is the main contribution of the paper in the field of machine learning explanations?
2. What are the strengths of the proposed framework, particularly in terms of its theoretical and computational results?
3. Are there any weaknesses or limitations in the paper's approach to local function approximation explanations? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors present a local function approximation (LFA) framework for generating explanations of black-box ML models. They present theoretical results that place several existing explanation methods from the literature within the LFA framework, and they provide a variety of theoretical and computational results demonstrating the value of this framework.
Strengths And Weaknesses
The authors very clearly present their conceptual framework (LFA). Their theoretical results are helpful (and intuitive, though this isn't bad). Their theoretical and computational results clearly demonstrate the utility of LFA.
In the context of explaining ML models, their contributions are a bit narrow: this paper focuses on explanations based on local function approximations, and they use fidelity as a measure of explanation "goodness". But the authors are extremely thorough in exploring this region of ML explanations. Overall, I don't have anything bad to say about this paper.
Questions
None
Limitations
The authors sufficiently addressed the limitations of their work. |
NIPS | Title
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Abstract
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradients ⇥ Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods, demonstrating that no method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
1 Introduction
As machine learning models become increasingly complex and are increasingly deployed in highstakes settings (e.g., medicine [1], law [2], and finance [3]), there is a growing emphasis on understanding how models make predictions so that decision-makers (e.g., doctors, judges, and loan officers) can assess the extent to which they can trust model predictions. To this end, several post hoc explanation methods have been developed, including LIME [4], C-LIME [5], SHAP [6], Occlusion [7], Vanilla Gradients [8], Gradient x Input [9], SmoothGrad [10], and Integrated Gradients [11]. However, different methods have different goals. Such differences lead to both conceptual and practical challenges to understanding and using explanation methods, thwarting progress in the field.
From a conceptual standpoint, the misalignment of goals among methods leads to an inconsistent view of explanations. What is an explanation? This is unclear as different methods have different notions of explanation. Depending on the method, explanations may be local function approximations (LIME and C-LIME), Shapley values (SHAP), raw gradients (Vanilla Gradients), raw gradients
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
scaled by the input (Gradient x Input), de-noised gradients (SmoothGrad), or a straight-line path integral of gradients (Integrated Gradients). Furthermore, the lack of a common mathematical framework for studying these diverse methods prevents a systematic understanding of these methods and their properties. To address these challenges, this paper unifies diverse explanation methods under a common framework, showing that diverse methods share a common motivation of local function approximation, and uses the framework to investigate and evaluate properties of these methods.
From a practical standpoint, the misalignment of goals among methods leads to the disagreement problem [12], the phenomenon that different methods provide disagreeing explanations for the same model prediction. Not only do different methods often generate disagreeing explanations in practice, but practitioners do not have a principled approach to select among explanations, resorting to ad hoc heuristics such as personal preference [12]. These findings prompt one to ask why explanation methods disagree and how to select among them in a principled manner. This paper addresses these questions, providing both an explanation for the disagreement problem and a principled approach to select among methods.
Thus, to address these conceptual and practical challenges, we study post hoc explanation methods from a function approximation perspective. We formalize a mathematical framework that unifies and characterizes diverse methods and that provides a principled approach to select among methods. Our work makes the following contributions:
1. We show that eight diverse, popular explanation methods (LIME, C-LIME, KernelSHAP, Occlusion, Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients) all perform local function approximation of the black-box model, differing only in the neighbourhoods and loss functions used to perform the approximation.
2. We introduce a no free lunch theorem for explanation methods which demonstrates that no single explanation method can perform local function approximation faithfully across all neighbourhoods, which in turn calls for a principled approach to select among methods.
3. To select among methods, we set forth a guiding principle based on function approximation, deeming a method to be effective if its explanation recovers the black-box model when the two are in the same model class (i.e., if the explanation perfectly approximates the black-box model when possible).
4. We empirically validate the theoretical results above using various real-world datasets, model classes, and prediction tasks.
2 Related Work
Post hoc explanation methods. Post hoc explanation methods can be classified based on model access (black-box model vs. access to model internals), explanation scope (global vs. local), search technique (perturbation-based vs. gradient-based), and basic unit of explanation (feature importance vs. rule-based). This paper focuses on local post hoc explanation methods based on feature importance. It analyzes four perturbation-based methods (LIME, C-LIME, KernelSHAP, and Occlusion) and four gradient-based methods (Vanilla Gradients, Gradient x Input, SmoothGrad, and Integrated Gradients).
Connections among post hoc explanation methods. Prior works have taken initial steps towards characterizing post hoc explanation methods and the connections among them. Agarwal et al. [5] proved that C-LIME and SmoothGrad converge to the same explanation in expectation. Lundberg and Lee [6] proposed a framework based on Shapley values to unify binary perturbation-based explanations. Covert et al. [13] found that many perturbation-based methods share the property of estimating feature importance based on the change in model behavior upon feature removal. In addition, Ancona et al. [14] analyzed four gradient-based explanation methods and the conditions under which they produce similar explanations. However, these analyses are based on mechanistic properties of methods (e.g., Shapley values or feature removal), are limited in scope (connecting only two methods, only perturbation-based methods, or only gradient-based methods), and do not inform when one method is preferable to another. In contrast, this paper formalizes a mathematical framework based on the concept of local function approximation, unifies eight diverse methods (spanning perturbation-based and gradient-based methods), and guides the use of these methods in practice.
Properties of post hoc explanation methods. Prior works have examined various properties of post hoc explanation methods, including faithfulness to the black-box model [15–17], robustness to adversarial attack [18–20, 15, 21], and fairness across subgroups [22]. This paper focuses on explanation faithfulness. Related works [15–17] assessed explanations generated by gradient-based methods, finding that they are not always faithful to the underlying model. Different from these works, this paper provides a framework for generating faithful explanations in the first place, theoretically characterizes the faithfulness of existing methods in different input domains, and provides a principled approach to select among methods and develop new ones based on explanation faithfulness.
3 Explanation as Local Function Approximation
In this section, we formalize the local function approximation framework and show its connection to existing explanation methods. We start by defining the notation used in the paper.
Notation. Let f : X ! Y be the black-box function we seek to explain in a post hoc manner, with input domain X (e.g., X = Rd or {0, 1}d) and output domain Y (e.g., Y = R or [0, 1]). Let G = {g : X ! Y} be the class of interpretable models used to generate a local explanation for f by selecting a suitable interpretable model g 2 G. We characterize locality around a point x0 2 X using a noise random variable ⇠ which is sampled from distribution Z . Let x⇠ = x0 ⇠ be a perturbation of x0 generated by combining x0 and ⇠ using a binary operator (e.g., addition, multiplication). Lastly, let `(f, g,x0, ⇠) 2 R+ be the loss function (e.g., squared error, cross-entropy) measuring the distance between f and g over the noise random variable ⇠ around x0.
We now define the local function approximation framework.
Definition 1. Local function approximation (LFA) of a black-box model f on a neighbourhood distribution Z around x0 by an interpretable model class G and a loss function ` is given by
g⇤ = argmin g2G E ⇠⇠Z `(f, g,x0, ⇠) (1)
where a valid loss ` is such that E⇠⇠Z `(f, g,x0, ⇠) = 0 () f(x⇠) = g(x⇠) 8⇠ ⇠ Z
The LFA framework is a formalization of the function approximation perspective first introduced by LIME [4] to motivate local explanations. Note that this conceptual framework is distinct from the algorithm introduced by LIME. We elaborate on this distinction below.
(1) The LFA framework requires that f and g share the same input domain X and output domain Y , a fundamental prerequisite for function approximation. This implies, for example, that using an interpretable model g with binary inputs (X = {0, 1}d) to approximate a black-box model f with continuous inputs (X = Rd), as proposed in LIME, is not true function approximation. (2) By imposing a condition on the loss function, the LFA framework ensures model recovery under specific conditions: g⇤ recovers f (i.e., g⇤ = f ) through LFA when f itself is of the interpretable model class G (i.e., f 2 G) and perturbations span the input domain of f (i.e., domain(x) = X ). This is a key distinction between the LFA framework and LIME (which has no such requirement) and guides the characterization of explanation methods in Section §4.
(3) Efficiently minimizing Equation 1 requires following standard machine learning methodology of splitting the perturbation data into train / validation / test sets and tuning hyper-parameters on the validation set to ensure generalization. To our knowledge, implementations of LIME do not adopt this procedure, making it possible to overfit to a small number of perturbations.
The LFA framework is generic enough to accommodate a variety of explanation methods. In fact, we show that specific instances of this framework converge to existing methods, as summarized in Table 1. At a high level, existing methods use a linear model g to locally approximate the black-box model f in different input domains (binary or continuous) over different local neighbourhoods specified by noise random variable ⇠ (where ⇠ is binary or continuous, drawn from a specified distribution, and combined additively or multiplicatively with point x0) using different loss functions (squared-error or gradient-matching loss). We discuss the details of these connections in the following sections.
3.1 LFA with Continuous Noise: Gradient-Based Explanation Methods
To connect gradient-based explanation methods to the LFA framework, we leverage the gradientmatching loss function `gm. We define `gm and show that it is a valid loss function for LFA.
`gm(f, g,x0, ⇠) = kr⇠f(x0 ⇠) r⇠g(x0 ⇠)k22 (2)
This loss function has been previously used in the contexts of generative modeling (where it is dubbed score-matching) [23] and model distillation [16]. However, to our knowledge, its use in interpretability is novel. Proposition 1. The gradient-matching loss function `gm is a valid loss function for LFA up to a constant, i.e., E⇠⇠Z `gm(f, g,x0, ⇠) = 0 () f(x⇠) = g(x⇠) + C 8⇠ ⇠ Z , where C 2 R.
Proof. If f(x⇠) = g(x⇠), then r⇠f(x⇠) = r⇠g(x⇠) and it follows from the definition of `gm that `gm = 0. Integrating r⇠f(x⇠) = r⇠g(x⇠) gives f(x⇠) = g(x⇠) + C.
Proposition 1 implies that, when using the linear model class G parameterized by g(x) = w>x+ b to approximate f , g⇤ recovers w but not b. This can be fixed by setting b = f(0). Theorem 1. LFA with gradient-matching loss is equivalent to (1) SmoothGrad for additive continuous Gaussian noise, which converges to Vanilla Gradients in the limit of a small standard deviation for the Gaussian distribution; and (2) Integrated Gradients for multiplicative continuous Uniform noise, which converges to Gradient x Input in the limit of a small support for the Uniform distribution.
Proof Sketch. For SmoothGrad and Integrated Gradients, the idea is that these methods are exactly the first-order stationary points of the gradient-matching loss function under their respective noise distributions. In other words, the weights of the interpretable model g that minimize the loss function is the explanation returned by each method. For Vanilla Gradients and Gradient x Input, the result is derived by taking the specified limits and using the Dirac delta function to calculate the limit. In the limit, the weights of the interpretable model g converge to the explanation of each method. The full proof is in Appendix A.1.
Along with gradient-based methods, C-LIME (a perturbation-based method) is an instance of the LFA framework by definition, using the squared-error loss function. The analysis in this section characterizes methods that use continuous noise. It does not extend to binary or discrete noise methods because gradients and continuous random variables do not apply in these domains. In the next section, we discuss binary noise methods.
3.2 LFA with Binary Noise: LIME, KernelSHAP and Occlusion maps
Theorem 2. LFA with multiplicative binary noise and squared-error loss is equivalent to (1) LIME for noise sampled from an unnormalized exponential kernel over binary vectors; (2) KernelSHAP
for noise sampled from an unnormalized Shapley kernel; and (3) Occlusion for noise in the form of one-hot vectors.
Proof Sketch. For LIME and KernelSHAP, the equivalence is mostly by definition: these methods have components that correspond to the interpretable model g and the loss function ` of the LFA framework and we need only to determine the local neighbourhood Z . We define the local neighbourhood Z using each method’s weighting kernel. In this setup, the LFA framework yields the respective explanation methods in expectation via importance sampling. For Occlusion, the equivalence involves enumerating all perturbations, specifying an appropriate loss function, and computing the resulting stationary points of the loss function. The full proof is in Appendix A.1.
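A minimal sketch of LFA with multiplicative binary noise and squared-error loss, in the spirit of LIME; the exponential kernel over the number of masked features and all parameter names are our illustrative assumptions, not the exact LIME kernel.

```python
import numpy as np

def binary_lfa(f, x0, n_samples=2000, kernel_width=5.0, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    Z = rng.integers(0, 2, size=(n_samples, d)).astype(float)  # binary masks xi
    y = np.array([f(x0 * z) for z in Z])                       # f(x0 (x) xi), multiplicative
    w = np.exp(-((d - Z.sum(axis=1)) / kernel_width) ** 2)     # neighbourhood weights
    sw = np.sqrt(w)
    # weighted least squares for the linear surrogate g(xi) = w_g . xi
    w_g, *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
    return w_g
```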
3.3 Which Methods Do Not Perform LFA?
Some popular explanation methods are not instances of the LFA framework due to their properties. These methods include guided backpropagation [24], DeconvNet [25], Grad-CAM [26], GradCAM++ [27], FullGrad [28], and DeepLIFT [9]. Further details are in Appendix A.2.
4 When Do Explanations Perform Model Recovery?
Having described the LFA framework and its connections to existing explanation methods, we now leverage this framework to analyze the performance of methods under different conditions. We introduce a no free lunch theorem for explanation methods, inspired by classical no free lunch theorems in learning theory and optimization. Then, we assess the ability of existing methods to perform model recovery, based on which we provide recommendations for choosing among methods.
4.1 No Free Lunch Theorem for Explanation Methods
An important implication of the function approximation perspective is that no explanation can be optimal across all neighbourhoods, because each explanation is designed to perform LFA in a specific neighbourhood. This is especially true for explanations of non-linear models. We formalize this intuition into the following theorem.
Theorem 3 (No Free Lunch for Explanation Methods). Consider explaining a black-box model f around point x0 using an interpretable model g from model class G and a valid loss function ℓ, where the distance between f and G is given by d(f, G) = min_{g∈G} max_{x∈X} ℓ(f, g, 0, x). Then, for any explanation g* over a neighbourhood distribution ξ1 ∼ Z1 such that max_{ξ1} ℓ(f, g*, x0, ξ1) ≤ ε, there always exists another neighbourhood ξ2 ∼ Z2 such that max_{ξ2} ℓ(f, g*, x0, ξ2) ≥ d(f, G).
Proof Sketch. The idea is that, given an explanation obtained by using g to approximate f over a specific local neighbourhood Z, it is always possible to find a local neighbourhood over which this explanation does not perform well (i.e., does not perform faithful LFA). Thus, no single explanation method can perform well over all local neighbourhoods. The proof entails constructing an “adversarial” input for an explanation g* such that g* has a large loss for this input, and then creating a neighbourhood that contains this adversarial input, which will provably have a large loss. The magnitude of this loss is d(f, G), the distance between f and the model class G, inspired by the Hausdorff distance. The proof is generic and makes no assumptions regarding the forms of ℓ, G, or Z1. The full proof is in Appendix A.3. Thus, an explanation on a finite Z1 necessarily cannot approximate function behaviour at all other points, especially when G is less expressive than f, which is indicated by a large value of d(f, G). Hence, in the general case, one cannot perform model recovery when G is less expressive than f. An important implication of Theorem 3 is that seeking to find the “best” explanation without specifying a corresponding neighbourhood is futile, as no universal “best” explanation exists. Furthermore, once the neighbourhood is specified, the best explanation is exactly the one given by the corresponding instance of the LFA framework.
In the next section, we consider the special case when d(f, G) = 0 (i.e., when f ∈ G), where Theorem 3 does not apply because the same explanation can be optimal for multiple neighbourhoods and model recovery is thus possible.
4.2 Characterizing Explanation Methods via Model Recovery
Next, we formally state the model recovery condition for explanation methods. Then, we use this condition as a guiding principle to choose among methods.
Definition 2 (Model Recovery: Guiding Principle). Given an instance of the LFA framework with a black-box model f such that f ∈ G and a specific noise type (e.g., Gaussian, Uniform), an explanation method performs model recovery if there exists some noise distribution Z such that LFA returns g* = f.
In other words, when the black-box model f itself is of the interpretable model class G, there must exist some setting of the noise distribution (within the noise type specified in the instance of the LFA framework) that is able to recover the black-box model. Thus, in this special case, we require local function approximation to lead to global model recovery over all inputs. This criterion can be thought of as a “sanity check” for explanation methods to ensure that they remain faithful to the black-box model.
Next, we analyze the impact of the choice of perturbation neighbourhood Z, the binary operator ⊕, and the interpretable model class G on an explanation method’s ability to satisfy the model recovery guiding principle in different input domains X. Note that while we can choose Z, ⊕, and G, we cannot choose X, the input domain.
Which explanation should I choose for continuous X? We now analyze the model recovery properties of existing explanation methods when the input domain is continuous. We consider methods based on additive continuous noise (SmoothGrad, Vanilla Gradients, and C-LIME), multiplicative continuous noise (Integrated Gradients and Gradient x Input), and multiplicative binary noise (LIME, KernelSHAP, and Occlusion). For these methods, we make the following remark regarding model recovery for the class of linear models.
Remark 1. For X = ℝ^d and linear models f and g where f(x) = w_f⊤x and g(x) = w_g⊤x, additive continuous noise methods recover f (i.e., w_g = w_f), while multiplicative continuous and multiplicative binary noise methods do not, and instead recover w_g = w_f ⊙ x0.
This remark can be verified by directly evaluating the explanations (weights) of linear models, where the gradient exactly corresponds to the weights.
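A small numerical check of Remark 1, reusing the smoothgrad and integrated_gradients sketches given after Theorem 1; the dimension and seed are illustrative.

```python
import torch

torch.manual_seed(0)
d = 5
w_f, x0 = torch.randn(d), torch.randn(d)
f = lambda x: x @ w_f            # linear black-box f(x) = w_f . x

print(smoothgrad(f, x0))              # ≈ w_f        (additive continuous noise)
print(integrated_gradients(f, x0))    # ≈ w_f * x0   (multiplicative noise)
```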
Note that the inability of multiplicative continuous noise methods to recover the black-box model is not due to the multiplicative nature of the noise, but due to the parameterization of the loss function. Specifically, these methods (implicitly) use the loss function ℓ(f, g, x0, ξ) = ‖∇_ξ f(x_ξ) − ∇_ξ g(ξ)‖₂². Slightly changing the loss function to ℓ(f, g, x0, ξ) = ‖∇_ξ f(x_ξ) − ∇_ξ g(x_ξ)‖₂², i.e., replacing g(ξ) with g(x_ξ), would enable g* to recover f. This would change Integrated Gradients to ∫_{α=0}^{1} ∇_{αx} f(αx) dα (omitting the input multiplication term) and Gradient x Input to Vanilla Gradients.
A similar argument can be made for binary noise methods, which parameterize the loss function as ℓ(f, g, x0, ξ) = ‖f(x_ξ) − g(ξ)‖². By changing the loss function to ℓ(f, g, x0, ξ) = ‖f(x_ξ) − g(x_ξ)‖², binary noise methods can recover f for the case described in Remark 1. However, binary noise methods for continuous domains are unreliable, as there are cases where, despite the modification to ℓ, model recovery is not guaranteed. The following is an example of this scenario.
Remark 2. For X = ℝ^d, periodic functions f and g where f(x) = Σ_{i=1}^d sin(w_{f_i} x_i) and g(x) = Σ_{i=1}^d sin(w_{g_i} x_i), and an integer n, binary noise methods do not perform model recovery for |w_{f_i}| = nπ/x_{0_i}.
This is because, for the conditions specified, sin(w_{f_i} x_{0_i}) = sin(±nπ) = 0 and sin(w_{f_i} · 0) = sin(0) = 0, i.e., sin(w_{f_i} x_{ξ_i}) outputs zero for all binary perturbations, thereby preventing model recovery. In this case, the discrete nature of the noise makes model recovery impossible. In general, discrete noise is inadequate for the recovery of models with large frequency components.
Which explanation should I choose for binary X? In the binary domain, continuous noise methods are invalid, restricting the choice of methods to binary noise methods. For reasons discussed above, methods with perturbation neighbourhoods characterized by multiplicative binary perturbations (e.g., LIME, KernelSHAP, and Occlusion) only enable g* to recover f in the binary domain. Note that
the sinusoidal example in Remark 2 does not apply in this regime due to the continuous nature of its domain.
Which explanation should I choose for discrete X? In the discrete domain, continuous noise methods are also invalid. In addition, binary noise methods (e.g., LIME, KernelSHAP and Occlusion) cannot be used either, because model recovery is not guaranteed in the sinusoidal case (Remark 2), following logic similar to that presented for continuous noise. Note that none of the existing methods in Table 1 perform general discrete perturbations, suggesting that these methods are not suitable for the discrete domain. Thus, in the discrete domain, a user can apply the LFA framework to define a new explanation method, specifying an appropriate discrete noise type. In the next section, we discuss more broadly how one can use the LFA framework to create novel explanation methods.
4.3 Designing Novel Explanations with LFA
The LFA framework not only unifies existing explanation methods but also guides the creation of new ones. To explain a given black-box model prediction using the LFA framework, a user must specify the (1) interpretable model class G, (2) neighbourhood distribution Z, (3) loss function ℓ, and (4) binary operator ⊕ to combine the input and the noise. Specifying these four components completely specifies an instance of the LFA framework, thereby generating an explanation method tailored to a given context.
To illustrate this, consider a scenario in which a user seeks to create a sparse variant of SmoothGrad that yields non-zero gradients for only a small number of features (“SparseSmoothGrad”). Designing SparseSmoothGrad only requires the addition of a regularization term to the loss function used in the SmoothGrad instance of the LFA framework (e.g., ℓ = ℓ_SmoothGrad + ‖∇_ξ g(x_ξ)‖₀), at which point sparse solvers may be employed to solve the problem. Note that, unlike SmoothGrad, SparseSmoothGrad does not have a closed-form solution, but that is not an issue for the LFA framework. More generally, by allowing customization of (1), (2), (3), and (4), the LFA framework creates new explanation methods through “variations on a theme”.
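A minimal SparseSmoothGrad sketch; since the ℓ0 penalty above is non-convex, we relax it to ℓ1 here so that plain gradient descent applies; this relaxation and all names are our assumptions, not part of the framework definition.

```python
import torch

def sparse_smoothgrad(f, x0, sigma=0.5, lam=0.1, steps=2000, lr=1e-2):
    # w holds the weights of the linear surrogate g; for linear g,
    # grad_xi g = w, so l_gm reduces to ||grad f - w||^2
    w = torch.zeros(x0.numel(), requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        xi = (sigma * torch.randn_like(x0)).requires_grad_(True)
        grad_f = torch.autograd.grad(f(x0 + xi).sum(), xi)[0]
        loss = ((grad_f - w) ** 2).sum() + lam * w.abs().sum()  # l1 sparsity penalty
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()
```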
We summarize Section §4 as a table in Appendix A.4 and discuss its practical implications by providing the following recommendation for choosing among explanation methods.
Recommendation for choosing among explanation methods. In general, choose methods that satisfy the guiding principle of model recovery in the input domain in question. For continuous data, use additive continuous noise methods (e.g., SmoothGrad, Vanilla Gradients, C-LIME) or modified multiplicative continuous noise methods (e.g., Integrated Gradients, Gradient x Input) as described in Section §4.2. For binary data, use binary noise methods (e.g., LIME, KernelSHAP, Occlusion). Since no existing methods use discrete noise, for discrete data, design novel explanation methods using the LFA framework with discrete noise neighbourhoods. Within each input domain, choosing among appropriate methods boils down to determining the perturbation neighbourhood most suitable in the given context.
5 Empirical Evaluation
In this section, we present an empirical evaluation of the LFA framework. We first describe the experimental setup and then discuss three experiments and their findings.
5.1 Datasets, Models, and Metrics
Datasets. We experiment with two real-world datasets for two prediction tasks. The first dataset is the life expectancy dataset from the World Health Organization (WHO) [29]. It consists of countries’ demographic, economic, and health factors from 2000 to 2015, with 2,938 observations for 20 continuous features. We use this dataset to perform regression, predicting life expectancy. The other dataset is the home equity line of credit (HELOC) dataset from FICO [30]. It consists of information on HELOC applications, with 9,871 observations for 24 continuous features. We use this dataset to perform classification, predicting whether an applicant made payments without being 90 days overdue. Additional dataset details are described in Appendix A.5.
Models. For each dataset, we train four models: a simple model (linear regression for the WHO dataset and logistic regression for the HELOC dataset) that can satisfy the conditions of the guiding principle, and three more complex models (neural networks of varying complexity) that are more reflective of real-world applications. Model architectures and performance are described in Appendix A.5.
Metrics. To measure the similarity between two vectors (e.g., between two sets of explanations or between an explanation and the true model weights), we use L1 distance and cosine distance. L1 distance ranges over [0, ∞) and is 0 when the two vectors are the same. Cosine distance ranges over [0, 2] and is 0 when the angle between the two vectors is 0° (or 360°). For both metrics, the lower the value, the more similar the two given vectors are.
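For concreteness, a minimal sketch of the two metrics on 1-D numpy explanation vectors:

```python
import numpy as np

def l1_distance(a, b):
    return np.abs(a - b).sum()

def cosine_distance(a, b):
    # 1 - cos(angle between a and b); 0 when the vectors are aligned
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```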
5.2 Experiments
Here, we describe the setup of the experiments, present results, and discuss their implications.
Experiment 1: Existing explanation methods are instances of the LFA framework. First, we compare existing methods with corresponding instances of the LFA framework to assess whether they generate the same explanations. To this end, we use seven methods to explain the predictions of black-box models for 100 randomly-selected test set points. For each method, explanations are computed using either the existing method (implemented by Meta’s Captum library [31]) or the corresponding instance of the LFA framework (Table 1). The similarity of a given pair of explanations is measured using L1 distance and cosine distance.
The L1 distance values for a neural network with three hidden layers trained on the WHO dataset are shown in Figure 1. In Figure 1a, the lowest L1 distance values appear on the diagonal of the heatmap, indicating that explanations generated by existing methods and corresponding instances of the LFA framework are very similar. Figures 1b and 1c show that explanations generated by instances of the LFA framework corresponding to SmoothGrad and Integrated Gradients converge to those of Vanilla Gradients and Gradient x Input, respectively. Together, these results demonstrate that, consistent with the theoretical results derived in Section §3, existing methods are instances of the LFA framework. In addition, the clustering of the methods in Figure 1a indicates that, consistent with the theoretical analysis in Section §4, for continuous data, SmoothGrad and Vanilla Gradients generate similar explanations while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input generate similar explanations. We observe similar results across various datasets, models, and metrics (Appendix A.6.1).
Experiment 2: Some methods recover the underlying model while others do not (guiding principle). Next, we empirically assess which existing methods satisfy the guiding principle, i.e., which methods recover the black-box model f when f is of the interpretable model class G. We specify a setting in which f and g are of the same model class, generate explanations using each method, and assess whether g recovers f for each explanation. For the WHO dataset, we set f
and g to be linear regression models and generate explanations for 100 randomly-selected test set points. Then, for each point, we compare g’s weights with f ’s gradients alone or with f ’s gradients multiplied by the input because, based on Section §4, some methods generate explanations on the scale of gradients while others on the scale of gradient-times-input. Note that, for linear regression, f ’s gradients are f ’s weights.
Results are shown in Figure 2. Consistent with Section §4, for continuous data, SmoothGrad and Vanilla Gradients recover the black-box model, thereby satisfying the guiding principle, while LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input do not. We observe similar results for the HELOC dataset using logistic regression models for f and g (Appendix A.6.2).
Experiment 3: No single method performs best across all neighbourhoods (no free lunch theorem). Lastly, we perform a set of experiments to illustrate the no free lunch theorem in Section §4. We generate explanations for black-box model predictions for 100 randomly-selected test set points and evaluate the explanations using perturbation tests based on top-k or bottom-k features. For perturbation tests based on top-k features, the setup is as follows. For a given data point, k, and explanation, we identify the top-k features and either replace them with zero (binary perturbation) or add Gaussian noise to them (continuous perturbation). Then, we calculate the absolute difference in model prediction before and after perturbation. For each point, we generate one binary perturbation (since such perturbations are deterministic) and 100 continuous perturbations (since such perturbations are random), computing the average absolute difference in model prediction for the latter. In this setup, methods that better identify important features yield larger changes in model prediction. For perturbation tests based on bottom-k features, we follow the same procedure
but perturb the bottom-k features instead. In this setup, methods that better identify unimportant features yield smaller changes in model prediction.
Results of perturbation tests based on bottom-k features performed on explanations for a neural network with three hidden layers trained on the WHO dataset are displayed in Figure 3. Consistent with the no free lunch theorem in Section §4, LIME, KernelSHAP, Occlusion, Integrated Gradients, and Gradient x Input perform best on binary perturbation neighbourhoods (Figure 3a) while SmoothGrad and Vanilla Gradients perform best on continuous perturbation neighborhoods (Figure 3b). We observe consistent results across perturbation test types (top-k and bottom-k), datasets, and models (Appendix A.6.3). These results have important implications: one should carefully consider the perturbation neighborhood not only when selecting a method to generate explanations but also when selecting a method to evaluate explanations. In fact, the type of perturbations used to evaluate explanations directly determines explanation method performance.
6 Conclusions and Future Work
In this work, we formalize the local function approximation (LFA) framework and demonstrate that eight popular explanation methods can be characterized as instances of this framework with different local neighbourhoods and loss functions. We also introduce the no free lunch theorem for explanation methods, showing that no single method can perform optimally across all neighbourhoods, and provide a guiding principle for choosing among methods.
The function approximation perspective captures the essence of an explanation – a simplification of the real world (i.e., a black-box model) that is nonetheless accurate enough to be useful (i.e., predict outcomes of a set of perturbations). When the real world is “simple”, an explanation should completely capture its behaviour, a hallmark expressed precisely by the guiding principle. When the requirements of two explanations are distinct (i.e., they are trained to predict different sets of perturbations), then the explanations are each accurate in their own domain and may disagree, a phenomenon captured by the no free lunch theorem.
Our work makes fundamental contributions. We unify popular explanation methods, bringing diverse methods into a common framework. Unification brings conceptual coherence and clarity: diverse explanation methods, even those seemingly unrelated to function approximation, perform LFA but differ in the way they perform it. Unification also enables theoretical simplicity: to study diverse explanation methods, instead of analyzing each method individually, one can simply analyze the LFA framework and apply the findings to each method. An example of this is the no free lunch theorem which holds true for all instances of the LFA framework. Furthermore, our work provides practical guidance by presenting a principled approach to select among methods and design new ones.
Our work also addresses key open questions in the field. In response to criticism about the lack of consensus in the field regarding the overarching goals of post hoc explainability [32], our work points to function approximation as a principled goal. It also provides an explanation for the disagreement problem [12], i.e., why different methods generate different explanations for the same model prediction. According to the LFA framework, this disagreement occurs because different methods approximate the black-box model over different neighbourhoods using different loss functions.
Future research includes the following directions. First, we analyzed eight popular post hoc explanation methods and this analysis could be extended to other methods. Second, our work focuses on the faithfulness rather than interpretability of explanations. The latter is encapsulated in the “interpretable” model class G, which includes all the information about human preferences with regards to interpretability. However, it is unclear what constitutes an interpretable explanation and elucidating this takes not only conceptual understanding but also human-computer interaction research such as user studies. These are important directions for future research.
Acknowledgements
The authors would like to thank the anonymous reviewers for their helpful feedback and the following funding agencies for supporting this work. This work is supported in part by NSF awards #IIS-2008461 and #IIS-2040989, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and D^3 Institute at Harvard. H.L. would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. T.H. is supported in part by an NSF GRFP fellowship. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. | 1. What is the focus and contribution of the paper regarding local function approximation?
2. What are the strengths of the proposed approach, particularly in its ability to unify explanation techniques?
3. What are the weaknesses of the paper, especially regarding its density and reliance on compact forms of writing?
4. Do you have any concerns or questions regarding the paper's claims and findings, especially in comparison to other works like LIME?
5. How do the authors address the limitations of their work, particularly in terms of the subsets of explanation techniques and application scenarios? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors propose the local function approximation (LFA) framework and prove that numerous local explanations are instances of this framework. In their proposed no-free-lunch theorem of explanations, the authors show that no single explanation can outperform the rest of the explanations in all neighbourhoods around the local instances. The claims are shown across numerous datasets and models.
Overall, I vote for the acceptance of the paper. I do believe that the proposed approach is sound and interesting. However, I suspect that some of the assumptions in the paper need more careful refinement.
I am open to changing my score in case the authors can provide sound answers to the questions I have written in the questions section.
Strengths And Weaknesses
Strength:
The problem of unifying explanation techniques is an essential and important topic of study
The authors have built upon numerous other related works and include formalism to strengthen their arguments.
The evaluation includes numerous explanation techniques and datasets
Limitations:
The paper is dense and was very hard for me to read. I kindly suggest that the authors try to help readers navigate all the theory in their work by including more explanations and not relying on compact forms of writing such as theorems (especially in Section 3)
Questions
I agree with the authors that the loss formulation of LFA resembles LIME (Equation 1). But we also know that LIME does much more. LIME does feature selection before training the surrogate model. Can the authors explain whether the claims and findings still apply?
Can the authors provide a visualization similar to Figure 3 to show the perturbation tests for cases where important features were removed? I think it is important to understand the model recovery in those cases as well.
Limitations
Line 364-370: The authors provide some limitations of their work about including a subset of all explanation techniques. As a reader, I was keen to see a more detailed discussion on 1) limitations of their proposed framework, e.g., where the LFA formulation cannot capture explanation techniques, and 2) limitations of the assumptions in the LFA framework for some application scenarios.
NIPS | Title
S3GC: Scalable Self-Supervised Graph Clustering
Abstract
We study the problem of clustering graphs with additional side-information of node features. The problem is extensively studied, and several existing methods exploit Graph Neural Networks to learn node representations [29]. However, most of the existing methods focus on generic representations instead of their clusterability, or do not scale to large-scale graph datasets. In this work, we propose S3GC, which uses contrastive learning along with Graph Neural Networks and node features to learn clusterable features. We empirically demonstrate that S3GC is able to learn the correct cluster structure even when graph information or node features are individually not informative enough to learn correct clusters. Finally, using extensive evaluation on a variety of benchmarks, we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy – with as much as 5% gain in NMI – while being scalable to graphs of size 100M.
1 Introduction
Graphs are commonplace data structures to store information about entities/users, and have been investigated for decades [5, 15, 54, 31, 8, 57]. In modern ML systems, the entities/nodes are often equipped with vector embeddings from different sources. For example, authors are nodes in a citation graph and can be equipped with embeddings of the title/content of the authored papers [16, 41] as relevant side information. Owing to the utility of graphs in large-scale systems, tremendous progress has been made in the domain of supervised learning from graphs and node features, with Graph Neural Networks (GNNs) headlining the state-of-the-art methods [28, 19, 52]. However, typical real-world ML workflows start with unsupervised data analysis to better understand the data and design supervised methods accordingly. In fact, many times clustering is a key tool to ensure scalability to web-scale data [26]. Furthermore, even independent of supervised learning, clustering the graph data with node features is critical for a variety of real-world applications like recommendation, routing, triaging [6, 2, 32], etc.
Effective graph clustering methods should be scalable, especially with respect to the number of nodes, which can be in millions even for a moderate-scale system [57]. Furthermore, in the presence of side-information, the system should be able to use both views of the data – node features and graph information – effectively. For example, the method should be more accurate than single-view methods that either consider only the graph information [27] or only the node feature
*Work done while the author was an intern at Google Research. †Now at University of Illinois, Urbana-Champaign.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
information [33, 43, 7]. This problem of graph clustering with side information has been extensively studied in the literature [61]; see Section 2 for a review of the existing and recent methods. Most methods map the problem to that of learning vector embeddings and then apply standard k-means [33] style clustering techniques. However, such methods – like Node2vec [18] – don’t explicitly optimize for clusterability, so the resulting embeddings might not be suitable for effective clustering. Furthermore, several existing methods tend to be highly reliant on the graph information and thus tend to perform poorly when graph information is noisy/incomplete. Finally, several existing methods such as GraphCL [58] propose expensive augmentation and training modules, and thus do not scale to realistic web-scale datasets.
We propose S3GC, which uses a one-layer GNN encoder to combine both the graph and node-feature information, along with graph-only and node-feature-only encodings. S3GC applies contrastive learning to ensure that the embedding of a node is close to “nearby" nodes – obtained by a random walk – while being far away from all other nodes. That is, S3GC explicitly addresses the three challenges mentioned above: a) S3GC is based on contrastive learning, which is known to promote linear separability and hence clustering [20]; b) S3GC carefully combines information from both the graph view and the feature view, and thus performs well when one of the views is highly noisy/incomplete; c) S3GC uses a lightweight encoder and a simple random-walk-based sampler/augmentation, and can be scaled to hundreds of millions of nodes on a single virtual machine (VM).
For example, consider a dataset where the adjacency matrix of the graph is sampled from a stochastic block model with 10 clusters; let the probability of an edge between nodes from the same cluster be p and from different clusters be q. Furthermore, features of each node are also sampled from a mixture of 10 Gaussians, where c is the distance between any two cluster centers and σ is the standard deviation of each Gaussian. Now, consider a setting where p > q but p, q are close, so the information from the graph structure is weak. Similarly, c < σ but they are close. Figure 1 plots two-dimensional t-SNE projections [51] of the embeddings learned by the state-of-the-art Node2vec [18] and DGI [53] methods, along with S3GC. Note that while Node2vec’s objective function is optimized well, the embeddings do not appear to be separable. DGI’s embeddings are better separated, but there is still a significant overlap. In contrast, S3GC is able to produce well-separated embeddings due to the contrastive learning objective along with explicit utilization of both data views.
We conduct extensive empirical evaluation of S3GC and compare it to a variety of baselines and standard state-of-the-art benchmarks, particularly: Spectral Clustering [43], k-means [33], METIS [27], Node2vec [18], DGI [53], GRACE [62], MVGRL [21] and BGRL [48]. Overall, we observe that our method consistently outperforms Node2vec and DGI – SOTA scalable methods – on all seven datasets, achieving as much as 5% higher NMI than both methods. For two small-scale datasets, our method is competitive with the MVGRL method, but MVGRL does not scale to even moderate-sized datasets with about 2.5M nodes and 61M edges, while our method scales to datasets with 111M nodes and 1.6B edges.
2 Related Work
Below, we discuss works related to various aspects of graph clustering and self-supervised learning, and place our contribution in the context of these related works.
Graph OR features-only clustering: Graph clustering is a well-studied problem, and several techniques address the problem, including Spectral Clustering (SC) [43], Graclus [12], METIS [27], Node2vec [18], and DeepWalk [40]. In particular, Node2vec [18] is a probabilistic framework that is an extension of DeepWalk, and maps nodes to low-dimensional feature spaces such that the likelihood of preserving the local and global neighborhood of the nodes is maximized. In the setting of node-features-only data, k-means clustering is one of the classical methods, in addition to several others like agglomerative clustering [44], density-based clustering [59], and deep clustering [7].
As demonstrated in Figure 1 and Table 1, S3GC attempts to exploit both the views, and if both views are meaningful then it can be significantly more accurate than single-view methods.
Self-Supervised Learning: Self-supervised learning methods have demonstrated that they can learn linearly separable features/representations in the absence of any labeled information. The typical approach is to define instance-wise “augmentations" and then pose the problem as that of learning contrastive representations that map instance augmentations close to the instance embedding, while pushing them far apart from all other instance embeddings. Popular examples include MoCo [22], MoCo v2 [11], SimCLR [9], and BYOL [17]. Such methods require augmentations, and as such do not apply directly to the graph + node-features clustering problem. S3GC uses simple random-walk-based augmentations to enable contrastive-learning-based techniques.
Graph Clustering with Node Features: To exploit both the graph and feature information, several existing works use an autoencoder approach. That is, they encode nodes using Graph Neural Networks (GNN) [28], with the goal that the inner product of encodings can reconstruct the graph structure; GAE and VGAE [29] use this technique. GALA [38], ARGA and ARVGA [37] extend the idea by using Laplacian sharpening and generative adversarial learning. Structural Deep Clustering Network (SDCN) [4] jointly learns an Auto-Encoder (AE) along with a Graph Auto-Encoder (GAE) for better node representations, while Deep Fusion Clustering Network (DFCN) [50] merges the representations learned by AE and GAE for consensus representation learning. Since AE-type approaches attempt to solve a much harder problem, their accuracy in practice lags significantly behind the state-of-the-art; for example, see Table 3 in [21], which shows that such techniques can be 5-8% less accurate. MinCutPool [42] and DMoN [49] extend spectral clustering with graph encoders, but the resulting problem is somewhat unstable and leads to relatively poor partitions; see Table 3.
Graph Contrastive Learning: Recently, several papers have explored contrastive graph representation learning based approaches and have demonstrated state-of-the-art performance. Deep Graph Infomax (DGI) [53] is based on the MINE [24] method, and is one of the most scalable methods with nearly SOTA performance. It uses edge permutations to learn augmentations and embeddings. Infograph [47] extends the DGI idea to learn unsupervised representations for graphs as well. GraphCL [58] designs a framework with four types of graph augmentations for learning unsupervised representations of graph data using a contrastive objective. MVGRL [21] extends these ideas by performing node diffusion and contrasting node representations with augmented graph representations, while GRACE [62] maximizes agreement of node embeddings across two corrupted views of the graph. Bootstrapped Graph Latents (BGRL) [48] adapts the BYOL [17] methodology to the graph domain, and eliminates the need for negative sampling by minimizing an invariance-based loss for augmented graphs within a batch. While these methods are able to obtain more powerful embeddings, the augmentations and objective function setup become expensive, and hence they are hard to scale to large datasets beyond ∼1M nodes. In contrast, S3GC is able to provide competitive or better clustering accuracy, while still being scalable to graphs of size 100M nodes.
3 S3GC: Scalable Self-Supervised Graph Contrastive Clustering
In this section, we first formally introduce the problem of graph clustering and notations. Then we discuss challenges faced by current methods and outline the framework of our method S3GC. Finally, we detail each component of our method and highlight the overall training methodology.
3.1 Problem Statement and Notations
Consider a graph G = (V, E) with the vertex set V = {v1, · · · , vn} and the edge set E ⊆ V × V, where |E| = m. Let A ∈ ℝ^(n×n) be the adjacency matrix of G, where A_ij = 1 if (v_i, v_j) ∈ E, else A_ij = 0. Let X ∈ ℝ^(n×d) be the node attributes or feature matrix, where the i-th row X_i denotes the d-dimensional feature vector of node i. Given the graph G and attributes X, the aim is to partition the graph G into k partitions {G1, G2, G3, ..., Gk} such that nodes in the same cluster are similar/close to each other in terms of the graph structure as well as in terms of attributes.
Now, in general, one can define several loss functions to evaluate the quality of a clustering, but these might not reflect the underlying ground truth. So, to evaluate the quality of clustering, we use standard benchmarks which have ground-truth labels a priori. Furthermore, Normalized Mutual Information (NMI) between the ground-truth labels and the estimated cluster labels is used as the key metric. NMI between two labellings Y1 and Y2 is defined as:
NMI(Y1, Y2) = 2 · I(Y1, Y2) / (H(Y1) + H(Y2))   (1)
where I(Y1, Y2) is the Mutual Information between labellings Y1 and Y2, and H(·) is the entropy.
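For illustration, NMI as in Eq. (1) can be computed with scikit-learn, whose default arithmetic averaging matches this definition; the label arrays below are hypothetical:

```python
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [1, 1, 0, 0, 2, 2]
# NMI is invariant to relabeling, so identical partitions give 1.0
print(normalized_mutual_info_score(labels_true, labels_pred))
```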
Normalized Adjacency Matrix is denoted by Ã = D^(−1/2) A D^(−1/2) ∈ ℝ^(n×n), where D = diag(A·1_n) is the degree matrix. We also compute a k-hop Diffusion Matrix, denoted by S_k = Σ_{i=0}^{k} α_i Ã^i ∈ ℝ^(n×n), where α_i ∈ [0, 1] ∀i ∈ [k] and Σ_i α_i ≤ 1. Intuitively, the k-hop diffusion matrix captures a weighted average of the k-hop neighbourhood around every node. For specific α_i and for k = ∞, the diffusion matrix can be computed in closed form [30, 36]. However, in this work we focus on finite k.
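A minimal sketch of these precomputations, assuming a scipy.sparse adjacency matrix A and a dense feature matrix X; the diffusion weights below are illustrative:

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return d_inv_sqrt @ A @ d_inv_sqrt          # A_tilde = D^{-1/2} A D^{-1/2}

def diffusion_features(A, X, alphas=(0.5, 0.3, 0.2)):
    A_norm = normalized_adjacency(A)
    out, AkX = alphas[0] * X, X
    for a in alphas[1:]:
        AkX = A_norm @ AkX                      # one sparse-dense product per hop
        out = out + a * AkX
    return out                                  # S_k X without forming S_k explicitly
```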
3.2 Challenges in Graph Clustering
Clustering in general is a challenging problem, as the underlying function to evaluate the quality of a clustering solution is unknown a priori. However, graph partitioning/clustering with attributes poses several more challenges. In particular, scaling the methods is challenging, as graphs are sparse data structures while neural-network-based approaches produce dense artifacts. Furthermore, it is challenging to effectively combine information from the two data views: the graph and the feature attributes. Node2vec [18] uses only graph structure information, while DGI [53] and related methods [21, 39] are highly dependent upon attribute quality. Motivated by the above-mentioned challenges, we propose S3GC, which uses a self-supervised variant of GNNs.
3.3 S3GC: Scalable Self Supervised Graph Clustering – Methodology
At a high level, S3GC uses a Graph Convolution Network (GCN) based encoder and optimizes it using a contrastive loss where the nodes are sampled via a random walk. Below we describe the three components of S3GC and then provide the resulting training algorithm.
Graph Convolutional Encoder: We use a 1-layer Graph Convolutional Network [28] to encode the graph and feature information for each node:
X̄ = PReLU(ÃXΘ) + PReLU(S_kXΘ′) + I   (2)
where X̄ ∈ ℝ^(n×d) stores the learned d-dimensional representation of each node. Recall that Ã is the normalized adjacency matrix and S_k is the k-hop diffusion matrix. I ∈ ℝ^(n×d) is a learnable matrix. {Θ, Θ′} are the weights of the GCN layer, and PReLU is the parametric ReLU activation function [23]:
f(z_i) = z_i if z_i ≥ 0, f(z_i) = a · z_i otherwise,   (3)
where a is a learnable parameter. Our choice of encoder makes the method scalable, as a 1-layer GCN requires storing only the learnable parameters in the GPU/memory, which is small (O(d²), where d is the dimensionality of the node attributes). The parameter I scales only linearly with the number of nodes n. More importantly, we use mini-batches that reduce the memory requirement of the forward and backward pass to order O(rsd + d²), where r is the batch size in consideration and s is the average degree of nodes, therefore making our method scalable to graphs of very large sizes as well. We provide further discussion on the memory requirement of our method in Section 3.4.
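A minimal PyTorch sketch of the encoder in Eq. (2), assuming the rows of ÃX and S_kX needed for a batch are passed in as dense tensors; the class and argument names are illustrative, not the official implementation:

```python
import torch
import torch.nn as nn

class S3GCEncoder(nn.Module):
    def __init__(self, n_nodes, d_in, d_out):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out, bias=False)        # Theta
        self.theta_prime = nn.Linear(d_in, d_out, bias=False)  # Theta'
        self.act1, self.act2 = nn.PReLU(), nn.PReLU()          # learnable slope a
        self.I = nn.Parameter(torch.zeros(n_nodes, d_out))     # learnable per-node term

    def forward(self, AX, SX, node_ids):
        # AX, SX: rows of (A_tilde X) and (S_k X) for the nodes in node_ids
        return (self.act1(self.theta(AX))
                + self.act2(self.theta_prime(SX))
                + self.I[node_ids])
```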
Random Walk Sampler: Next, inspired by [40, 18], we utilise biased second-order random walks with restarts to generate points similar to a given node and thus capture the local neighborhood of each node. Formally, following [18], we start with a source node u and simulate a random walk of length l. We use c_i to denote the i-th node in the random walk starting from c0 = u. Every subsequent node c_i in the walk is generated from the distribution:
P(c_i = x | c_{i−1} = v) = π_vx / Z if (v, x) ∈ E, and P(c_i = x | c_{i−1} = v) = 0 otherwise,   (4)
where π_vx is the unnormalized transition probability between nodes v and x, and Z is the normalization constant. To bias the random walks and compute the next node x, we follow a methodology similar to [18]: from node v, after traversing the edge (t, v), the transition probability π_vx is set to α_pq(t, x) · w_vx, where w_vx is the weight of the edge between v and x, and the bias parameter α is defined by:
α_pq(t, x) = 1/p if d_tx = 0,  α_pq(t, x) = 1 if d_tx = 1,  α_pq(t, x) = 1/q if d_tx = 2,   (5)
where p is the return parameter, controlling the likelihood of immediately revisiting a node; q is the in-out parameter [18], allowing the search to differentiate between “inward" and “outward" nodes; and d_tx denotes the shortest path distance between nodes t and x. We note that d_tx from node t to x can only take values in {0, 1, 2}. Setting p to a high value (> max(q, 1)) ensures a lesser likelihood of revisiting a node, and setting it to a low value (< min(q, 1)) makes the walk more “local". Similarly, setting q > 1 biases the random walk towards nodes near t and obtains a local view of the graph, encouraging BFS-like behaviour, whereas q < 1 biases the walk towards nodes further away from t, encouraging DFS-like behaviour.
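A minimal sketch of the biased second-order walk of Eqs. (4) and (5), assuming an adjacency list adj (a dict from node to list of neighbours) and unit edge weights:

```python
import random

def biased_walk(adj, u, length, p=1.0, q=1.0):
    walk = [u]
    while len(walk) < length + 1:
        v = walk[-1]
        if len(walk) == 1:                     # first step is unbiased
            walk.append(random.choice(adj[v]))
            continue
        t = walk[-2]
        weights = []
        for x in adj[v]:
            if x == t:                         # d_tx = 0: return to previous node
                weights.append(1.0 / p)
            elif x in adj[t]:                  # d_tx = 1: common neighbour of t and v
                weights.append(1.0)
            else:                              # d_tx = 2: move outward
                weights.append(1.0 / q)
        walk.append(random.choices(adj[v], weights=weights)[0])
    return walk
```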
Contrastive Loss Formulation: Now, to learn the encoder parameters, we use a SimCLR-style loss function, where nodes generated from the random walk are considered positives while the rest of the nodes are considered negatives. That is, we use graph neighborhood information to produce augmentations of a node. Formally, let C(u) = {c0, c1, ..., cl} be the nodes generated by a random walk starting at c0 = u. Then, C(u) is the set of positive samples p_u^+, while the set of negatives p_u^− is generated by sampling l nodes from the remaining set of nodes [n] \ p_u^+. Given p_u^+ and p_u^−, we can now define the loss for each u as:
L_SimCLR(u) = −log [ Σ_{v∈p_u^+} exp(sim(X̄_u, X̄_v)) / ( Σ_{v∈p_u^+} exp(sim(X̄_u, X̄_v)) + Σ_{v′∈p_u^−} exp(sim(X̄_u, X̄_v′)) ) ]   (6)
where sim is some similarity function, for example the normalized inner product sim(u, v) = u⊤v / (‖u‖ ‖v‖).
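A minimal PyTorch sketch of the loss in Eq. (6) for a batch of anchors, assuming embeddings are L2-normalized so that inner products equal the cosine similarity; the tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def simclr_loss(z, z_pos, z_neg):
    # z: (B, d) anchors; z_pos: (B, l, d) walk nodes; z_neg: (B, l, d) random nodes
    z = F.normalize(z, dim=-1)
    pos = torch.exp((F.normalize(z_pos, dim=-1) @ z.unsqueeze(-1)).squeeze(-1)).sum(dim=1)
    neg = torch.exp((F.normalize(z_neg, dim=-1) @ z.unsqueeze(-1)).squeeze(-1)).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()
```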
Note that SimCLR-style loss functions have been shown to lead to "linearly separable" representations [20] and hence align well with the clustering objective [55, 10]. In contrast, loss functions like those used in Node2vec [18] might not necessarily lead to "clusterable" representations, which is also indicated by their performance on synthetic as well as real-world datasets.
3.4 Algorithm
Now that we have discussed the individual components of our method, we describe the overall training methodology in Algorithm 1. We begin with the initialization of the learnable parameters in line 1. In lines 4-5, we generate the positive and negative samples for each node in the current batch. Since we operate with embeddings of only the nodes in the batch and their positive/negative samples, we take a union of these to create a “node set” in line 6. This helps in reducing the memory requirements of our algorithm, since we do not do a forward/backward pass on the entire ÃX, but only on the nodes needed for the current batch. Once we have the node set, we compute representations for the nodes in the current batch using a forward pass in line 8, compute the loss for nodes in this set in line 9, and perform back-propagation to generate the gradient updates for the learnable parameters in line 10. Finally, we update the learnable parameters in line 11 and repeat the process for the next batch.
Space Complexity: The space complexity of the forward and backward pass of our algorithm is O(rsd + d²), where r is the batch size, s is the average degree of nodes, and d is the attribute dimension. The process of random walk generation is fast and can be done in memory, which is abundantly available and highly parallelizable. Therefore, storing the graph structure in memory for sampling of positives doesn’t create a memory bottleneck and takes O(m) space. For all the datasets other than ogbn-papers100M, we store ÃX, S_kX, and I in the GPU memory as well, requiring an additional O(nd) space. However, for very large-scale datasets, one can conveniently store these in main memory and interface with the GPU when required, thereby restricting the GPU memory requirement to O(rsd + d²).
Time Complexity: The forward and backward computation for a given batch takes O(rsd²) time. Hence, for n nodes, a batch size of r, and K epochs, the time complexity is O(Knsd²).
Embedding property: Detecting communities ideally requires nodes to be clustered based on their position, rather than structural similarities. We show in Appendix C that S3GC produces positional embeddings [46]. Code: Implementation code of S3GC is available at: https://github.com/devvrit/S3GC
Algorithm 1 S3GC: Training and Backpropagation
Input: Graph G, matrices ÃX ∈ ℝ^(n×d) and S_kX ∈ ℝ^(n×d), number of epochs K, batched inputs of nodes B, self-supervised loss formulation L_SimCLR, encoder definition ENC, learning rate η
1: Initialize model parameters: Θ, Θ′, I
2: for epoch = 1, 2, . . . , K do
3:   for each batch b ∈ B do
4:     Generate positive samples p_v^+ using biased random walks (Section 3.3) ∀v ∈ b
5:     Generate negative samples p_v^− using random sampling ∀v ∈ b
6:     Compute the node set N_b := UNION(p_v^+, p_v^−) ∀v
7:     Select the subset of rows (ÃX)_{N_b} and (S_kX)_{N_b} corresponding to the node set N_b
8:     Forward pass to compute the representations: X̄ ← ENC((ÃX)_{N_b}, (S_kX)_{N_b}, Θ, Θ′, I)
9:     Compute loss using the self-supervised formulation: L(X̄)
10:    Compute gradients for the learnable parameters at time t: u_t(Θ, Θ′, I) ← ∇_{Θ,Θ′,I} L(X̄)
11:    Update the parameters: (Θ, Θ′, I)_{t+1} ← (Θ, Θ′, I)_t − (η/|b|) · u_t(Θ, Θ′, I)
Output: X̄; Θ, Θ′, I
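A minimal sketch of one training iteration of Algorithm 1, reusing the S3GCEncoder, biased_walk and simclr_loss sketches above; encoder, batches, adj, AX, SX, n_nodes and walk_len are assumed to be given:

```python
import torch

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for batch in batches:                             # batch: 1-D LongTensor of node ids
    pos = torch.tensor([biased_walk(adj, int(u), walk_len)[1:] for u in batch])
    neg = torch.randint(0, n_nodes, pos.shape)    # l random negatives per node

    def embed(ids):                               # forward pass on required rows only
        flat = ids.reshape(-1)
        return encoder(AX[flat], SX[flat], flat).reshape(*ids.shape, -1)

    loss = simclr_loss(embed(batch), embed(pos), embed(neg))
    opt.zero_grad(); loss.backward(); opt.step()
```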
3.5 Synthetic Dataset – Stochastic Blockmodel with Gaussian Features
To better understand the working of our method in scenarios with varied quality of the graph structure and node attributes, we propose a study on a synthetic dataset using Stochastic Block Models (SBM) [1] with Gaussian features. For a given parameter k, the SBM [45] constructs a graph G = (V, E) with k partitions of nodes V. The probability of an intra-cluster edge is p and that of an inter-cluster edge is q, where p > q.³ Similar studies have been proposed for benchmarking of GNNs [13] and graph clustering methods [14, 49] using SBM. In this work, we create an attributed SBM model, where each node has an s-dimensional attribute associated with it. Following the setup in [49], for k clusters (partitions) we generate k cluster centers using s-multivariate normal distributions N(0_s, σ_c² · I_s), where σ_c² is a hyperparameter we define. Then the attributes of nodes of a given cluster are sampled from an s-multivariate Gaussian distribution with the corresponding cluster center and σ²·I_s variance. The ratio σ_c²/σ² controls the expected value of the classical between- vs. within-sum-of-squares of the clusters.
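A minimal sketch of this attributed SBM generator; all sizes and parameters below are illustrative:

```python
import numpy as np

def attributed_sbm(n=1000, k=10, p=0.05, q=0.01, s=32, sigma_c=1.0, sigma=1.0,
                   rng=np.random.default_rng(0)):
    labels = np.repeat(np.arange(k), n // k)
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p, q)                       # edge probability matrix
    A = (rng.random((n, n)) < prob).astype(np.int8)
    A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
    centers = rng.normal(0.0, sigma_c, size=(k, s))   # cluster centers ~ N(0, sigma_c^2 I)
    X = centers[labels] + rng.normal(0.0, sigma, size=(n, s))
    return A, X, labels
```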
We compare our method with: k-means on the attributes, Spectral Clustering [43], DGI [53], and Node2vec [18]. This choice of baseline methods focuses on different facets of graph data and clustering across which we want to assess the performance of our method. k-means on attributes utilizes only the node attribute information. Spectral Clustering is a non-trainable classical algorithm commonly used for solving SBMs, but uses only the graph-structure information. Similarly, Node2vec is a common trainable graph-embedding algorithm that utilizes only the structural information. DGI is a scalable SOTA self-supervised graph representation learning algorithm that uses both structure and node attributes. To demonstrate the effectiveness of our choice of loss formulation, we also run our method without using any attribute information, using only the learnable embedding I ∈ ℝ^(n×d), i.e., X̄ = I.
³Note that these are parameters for the SBM dataset generation, unrelated to the random walk sampling parameters in the S3GC model.
Setup and Observations: We set the number of nodes n = 1000 and the number of clusters k = 10, where each cluster contains n/k = 100 nodes, and we vary p and q to generate graphs of different structural qualities. Varying σ_c²/σ² controls the quality of the attributes. The first row in Table 1 represents a graph with high structural as well as attribute quality. The second row represents low structural as well as low attribute quality, while the last row represents low structural but high attribute quality. We make several observations: 1) Even without using any attribute information, our method performs significantly better compared to other structure-only methods like Spectral Clustering and Node2vec, which demonstrates the effectiveness of our loss formulation and training methodology that promotes clusterability, in line with recent observations [10, 55]. 2) We observe that DGI depends highly on the quality of the attributes and is not able to utilize the high-quality graph structure when the attributes are noisy. In contrast, our method uses both sources of information effectively and performs reasonably well even when only one of the structure or attribute quality is high (first and last rows in the table).
Visualization of the Embeddings: We further observe the quality of the generated embeddings using t-SNE [51] projections in 2 dimensions. Figure 1 corresponds to the second setting with a weak graph and weak attributes, where we observe that S3GC generates representations which are more cluster-like as compared to the other methods. Additionally, we note that S3GC shows similar behaviour in the other two settings as well, the plots for which are provided in the Appendix.
4 Empirical Evaluation
We conduct extensive experiments on several node classification benchmark datasets to evaluate the performance of S3GC as compared to key state-of-the-art (SOTA) baselines across multiple facets associated with Graph Clustering.
4.1 Datasets and Setup
Datasets: We use 3 small scale, 3 moderate/large scale, and 1 extra large scale dataset from GCN [28], GraphSAGE [19] and the OGB-suite [25] to demonstrate the efficacy of our method. The details of the datasets are given in Table 2 and additional details of the sources are mentioned in Appendix.
Baselines: We compare our method with k-means on features and 8 recent state-of-the-art baseline algorithms, including MinCutPool [3], METIS [27], Node2vec [18], DGI [53], DMoN [49], GRACE [62], BGRL [48] and MVGRL [21]. We choose baseline methods from a broad spectrum of methodologies, namely methods that utilize only the graph structure, methods that utilize only the features, and specific methods that utilize a combination of the graph structure and attribute information, to provide an exhaustive comparison across important facets of graph learning and clustering. METIS [27] is a well-known and scalable classical method for graph partitioning using only the structural information. Similarly, Node2vec [18] is another scalable graph-embedding technique that utilizes random walks on the graph structure. MinCutPool [3] and DMoN [49] are graph clustering techniques motivated by the normalized MinCut objective [42] and Modularity [35], respectively. DGI is a SOTA self-supervised method utilizing both graph structure and features that motivated a line of work [21, 39] based on entropy maximization between local and global views of a graph. GRACE [62], in contrast to DGI’s methodology, contrasts embeddings at the node level itself, by forming two views of the graph and maximizing the agreement between embeddings of the same nodes in the two views. BGRL [48] and MVGRL [21] are recent SOTA methods for performing self-supervised graph representation learning.
Metrics: We measure 5 metrics which are relevant for evaluating the quality of the cluster assignments, following the evaluation setup of [56, 21]: Accuracy, Normalized Mutual Information (NMI), Completeness Score (CS), Macro-F1 Score (F1), and Adjusted Rand Index (ARI). For all these metrics, a higher value indicates better clustering performance. We generate the
representations using each representation-learning method and then perform k-means clustering on the embeddings to generate the cluster assignments used for evaluation of these metrics.
Detailed Setup. We consider the unsupervised learning setting for all seven datasets, where the graph and features corresponding to all the datasets are available. We use the labels only for evaluating the quality of the cluster assignments generated by each method. For the baselines, we use the official implementations provided by the authors without any modifications. All experiments are repeated 3 times and the mean values are reported in Table 3. We highlight the highest value as well as any other values within 1 standard deviation of the mean of the best performing method, and report the results with standard deviations in the Appendix, due to space constraints. We utilize a single Nvidia A100 GPU with 40GB memory for training each method for a maximum duration of 1 hour for each experiment in Table 3. For ogbn-papers100M we allow up to ∼24 hours of training and up to 300GB main memory in addition. We provide a mini-batched and highly scalable implementation of our method S3GC in PyTorch, such that experiments on all datasets other than ogbn-papers100M easily fit in the aforementioned GPU. For the ogbn-papers100M dataset, the forward and backward passes in S3GC are performed in the GPU, with an interfacing with the CPU memory to store the graph, ÃX, and S_kX, and to maintain and update I, with minimal overheads. We also provide a comparison of the time and space complexity for each method in the Appendix.
Hyperparameter Tuning: S3GC requires selection of minimal hyperparameters: we use k = 2 for the k-hop Diffusion Matrix S_k, which offers the following advantages: 1) S_2X = α_0X + α_1ÃX + α_2Ã²X is a finite computation which can be pre-computed and only requires 2 sparse-dense matrix multiplications. 2) We chose α_0 > α_1 > α_2, giving a higher weight to the 0-hop neighbourhood attributes X, which allows S3GC to exploit the rich information from good-quality attributes even when the structural information is not very informative. 3) The two-hop neighbourhood intuitively captures all the features of nodes with similar attributes while maintaining scalability. This is motivated by the 2-hop and 3-hop choice of neighborhoods in [19] and [25] for these datasets. We additionally tune the learning rate, batch size and random walk parameters, namely the walk length l, while using the default values of p = 1 and q = 1 for the bias parameters in the walk. We perform model selection based on the NMI on the validation set and evaluate all the metrics for this model. Additional details regarding the hyperparameters are mentioned in the Appendix due to space restrictions.
4.2 Results
Table 3 compares the clustering performance of S3GC to a number of baseline methods on datasets of three different scales. For the small-scale datasets, namely Cora, Citeseer and Pubmed, we observe that MVGRL outperforms all methods. We also note that MVGRL’s performance in our experiments, using the authors' official implementation with extensive hyperparameter tuning, is slightly lower than the reported values, as has been reported by other works as well [60]. Nonetheless, we use these values for comparison and observe that S3GC also performs either competitively or is slightly inferior to MVGRL’s accuracy. For example, on the Cora dataset, S3GC is within ∼2% of MVGRL’s performance and outperforms all the other baseline methods, while on the Pubmed dataset, S3GC is within ∼1.5% of MVGRL’s performance. Next, we observe the performance on moderate/large-scale datasets and note that S3GC significantly outperforms baselines such as k-means, MinCutPool, METIS, Node2vec, DGI and DMoN. Notably, S3GC is ∼5% better on ogbn-arxiv, ∼1.5% better on Reddit, and ∼4% better on ogbn-products in terms of clustering NMI as compared to the next best method. The official implementations of GRACE, BGRL, and MVGRL do not scale to datasets with >200k nodes, running into Out of Memory (OOM) errors due to non-scalable implementations, sub-optimal memory utilization, or a non-scalable methodology. For example, MVGRL proposes the diffusion matrix as the alternate view of the graph structure, which is a dense n×n matrix - hence, not scalable.
We also note that S3GC performs reasonably well in settings where the node attributes are not very informative while the graph structure is useful, as evident from the performance on the Reddit dataset. k-means on the node attributes gives an NMI of only ~10%, while methods like METIS and Node2vec perform well using the graph structure. Methods like DGI, which depend heavily on the quality of the attributes, thus suffer a degradation in performance, having a clustering NMI of only ~30%, while S3GC, which uses both the attributes and graph information effectively, outperforms all the other methods and generates a clustering with an NMI of ~80%.
ogbn-papers100M: Finally, we compare the performance of S3GC on the extra-large scale dataset with 111M nodes and 1.6B edges in Table 4, and note that only k-means, Node2vec and DGI scale to this dataset size and run in a reasonable time of ~24 hours. We observe that S3GC seamlessly scales to this dataset and significantly outperforms methods utilizing only the features (k-means) by ~8.5%, only the graph structure (Node2vec) by ~7%, and both (DGI) by ~4% in terms of clustering NMI on the ogbn-papers100M dataset.
Ablation Study on Hyperparameters: We perform detailed ablation studies to investigate the stability of S3GC’s clustering and provide them in the Appendix. We find that S3GC is robust to its few hyperparameters, such as walk length and batch size, enabling a near-optimal choice. We note that smaller walk lengths (~5) are an optimal choice across datasets, since they are able to include the “right” positive examples in the batch, while using larger walk lengths may degrade the performance due to the inclusion of nodes belonging to other classes in the positive samples. This helps in scalability as well, as we need to sample only a few positives per node. While small batches take more time per epoch, they converge in fewer epochs; larger batch sizes have better per-epoch training time but require more epochs to converge. Both, however, enjoy similar performance in terms of the quality of the clustering.
4.3 Novelty of S3GC’s Design Choices
Our design choices have unique roles to play, which make S3GC both scalable and accurate by effectively utilizing structure as well as node attribute information for learning clusterable representations. We describe their importance and contrast them with other possible design choices in this section, highlighting how these choices, put together, work the most effectively empirically, contributing to S3GC’s novelty.
Encoder: Using a multi-layered GCN [28] to capture local graph structure along with attribute information increases the space required to O(nnz(A)), making any method non-scalable for very large graphs. This issue is also faced by existing methods like MVGRL [21], which computes the entire diffusion matrix and hence runs into OOM errors on larger datasets, as discussed earlier. Hence, we use a 1-layer GCN and precompute ÃX, requiring only O(nd) space. Intuitively, it is important to utilize both the attribute information and the structural information in the encoder. With this motivation, we design S3GC to capture attribute information using a 1-layer GCN and capture structural information using a learnable parameter I (eq. (2)). We empirically verify this intuition with experiments using either just the attribute information (for example, on the Reddit dataset) or only the structural information (on the ogbn-arxiv dataset) and summarize our findings in Table 5. We find that the design choice of S3GC’s encoder is optimal for effectively capturing both sources of information, and removing either source leads to considerably suboptimal performance.
Positive and Negative nodes sampler: Using a random walk sampler offers several advantages: it can be computed in a scalable fashion and it samples nodes from a k-hop neighbourhood. We considered several intuitive sampling approaches and discuss the most intuitive and simple one here: for a given node, we consider all its k-hop neighbourhood nodes as positives, and r randomly sampled nodes as negatives. This becomes non-scalable, since calculating a k-hop neighbourhood for all the nodes in the graph has a significant computation cost (it is equivalent to computing the non-zero elements of A^k). Hence, for simplicity, we experiment with k = 1 and note that on the ogbn-arxiv dataset, using this sampling scheme and a learnable embedding as the encoder (X̄ = I) gives only 0.252 NMI. This is significantly lower than using a random walk generator for sampling with the same encoding scheme X̄ = I, which gives an NMI of 0.444.
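A minimal sketch of this simple 1-hop baseline (the function name is ours); for k > 1, the positive sets would require the non-zeros of A^k, which is what makes the scheme non-scalable:

```python
import numpy as np
import scipy.sparse as sp

def one_hop_sampler(A: sp.csr_matrix, node: int, num_neg: int,
                    rng: np.random.Generator):
    """Positives: all 1-hop neighbours of `node`; negatives: uniformly random
    nodes (collisions with positives are ignored here for brevity)."""
    positives = A.indices[A.indptr[node]:A.indptr[node + 1]]
    negatives = rng.choice(A.shape[0], size=num_neg, replace=False)
    return positives, negatives
```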
Loss function: As we observed in Section 3.5, using the same encoder X̄ = I but a different loss function gives rise to more clusterable embeddings when using the SimCLR loss as compared to the Node2vec loss. The experiments on real-world datasets also reinforce these observations: we see in Table 5 that S3GC-I performs better than Node2vec on the ogbn-arxiv and ogbn-products datasets, where both methods use the same encoder and random sampler but different loss functions.
5 Discussion and Future Work
We introduced S3GC, a new method for scalable graph clustering with node feature side-information. S3GC is a simple method based on contrastive learning along with a careful encoding of graph and node features, but it is an effective approach across all scales of data. In particular, we showed that S3GC is able to scale to graphs with 100M nodes while still ensuring SOTA clustering performance.
Limitations and Future Work: S3GC demonstrates empirically that on Stochastic Block Models with mixture-of-Gaussian features, it is able to identify the clusters accurately. Further theoretical investigation into this standard setting and establishing error bounds for S3GC is of interest. S3GC can be applied to graphs with heterogeneous nodes, but it cannot explicitly exploit this heterogeneity. Extending S3GC to cluster graphs while directly exploiting the heterogeneity of nodes is another open problem. Finally, S3GC, like all deep learning methods, is susceptible to being unfairly biased by a few “important” nodes. Ensuring stable clustering techniques with minimal bias from a small number of nodes is another interesting direction. | 1. What is the focus of the paper regarding scalability in graph neural networks?
2. What are the strengths of the proposed approach, particularly in combining information from both the graph and node features?
3. What are the weaknesses of the paper, especially regarding its applicability in real-world settings?
4. Do you have any concerns about the method's ability to handle heterophilous graphs?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper proposes a Scalable Self-Supervised method that uses graph neural networks (hence called S3GC) and node features to learn clusterable representations using contrastive learning.
S3GC uses a 1-layer graph convolutional network to encode feature and structural information. To encode the structure, the authors propose to use a normalized adjacency matrix and a k-hop diffusion matrix. To use the contrastive loss for generating clusterable representations, positive and negative samples are needed. S3GC uses a biased random walk sampler to get similar nodes. Negative nodes are generated randomly from the remaining nodes.
The authors have shown that it can scale well to graphs with 100M nodes, which are challenging for current graph-neural-network-based approaches. S3GC is shown to gain as much as 5% in NMI.
Strengths And Weaknesses
Strengths
The proposed approach is simple and intuitive.
The paper is well written and easy to understand.
The proposed approach combines information from both the graph and the node features and hence is applicable in settings where one or the other is noisy. Extensive experiments are conducted to validate this using a synthetic dataset.
This method only uses a single GCN layer with a normalized adjacency and diffusion matrix; thus it can take into account higher-order neighborhood information and still scale to very large graphs, which allows for its application in real-world settings.
Extensive experimentation is done to show the effectiveness of the approach on small-scale as well as large-scale datasets. The space and time complexity analyses provided add to the clarity of the central idea of the paper about scalability.
Weaknesses
IMO this is more of an applied work and I think perhaps an applied conference is a better place for this work for a larger impact. All the insights, for instance for linear separability and clusterable contrastive representations, are known ([1]). The part about scaling that uses a diffusion matrix has also been used as a scalable way of getting higher-order information ([2]). I think perhaps (to the best of my knowledge) the idea of enforcing closeness of neighbors is new, but it doesn’t contribute to the main idea of the paper, which is scalability.
It will be interesting to see a baseline that compares the baseline contrastive learning approaches (DGI, MVGRL, etc.) with just one GCN layer with 1-hop neighborhood perturbations. I think MVGRL will become pretty scalable in that setting as well, and because it also uses a diffusion matrix, my hunch is that it will be pretty competitive on larger datasets. From Table 3 it seems like MVGRL is better than the proposed approach on average (on NMI at least, which is considered to be the metric for clustering) whenever it didn’t have OOM errors.
Another potential weakness is that all of the datasets are homophilous. I am not sure how this will perform on heterophilous graphs.
Another weakness is that it is not clear how this approach is directly applicable to heterogeneous graphs (which authors mention as well).
As the authors point out, there is no theoretical justification for the proposed method. Perhaps one can borrow ideas from [1].
[1] Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
[2] SIGN: Scalable Inception Graph Neural Networks
Questions
Is there an ablation that uses only the normalized adjacency/diffusion matrix in the GCN? I didn’t find this in the paper (or in the appendix).
This paper refers to DGI as SOTA (page 6, lines 233-235, 273-274). Is that true? Perhaps the authors meant to say DGI-style methods? MVGRL has been shown to work better, and that is validated in the experiments as well (Table 3). Thus, I think Table 1 should have results with MVGRL. I understand the authors have concerns that the reported values are higher in MVGRL (lines 317-318), but I think it is good to add for completeness.
Limitations
See Weaknesses + questions. |
NIPS | Title
S3GC: Scalable Self-Supervised Graph Clustering
Abstract
We study the problem of clustering graphs with additional side-information of node features. The problem is extensively studied, and several existing methods exploit Graph Neural Networks to learn node representations [29]. However, most of the existing methods focus on generic representations instead of their clusterability, or do not scale to large-scale graph datasets. In this work, we propose S3GC, which uses contrastive learning along with Graph Neural Networks and node features to learn clusterable features. We empirically demonstrate that S3GC is able to learn the correct cluster structure even when graph information or node features are individually not informative enough to learn correct clusters. Finally, using extensive evaluation on a variety of benchmarks, we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy – with as much as 5% gain in NMI – while being scalable to graphs of size 100M.
1 Introduction
Graphs are commonplace data structures to store information about entities/users, and have been investigated for decades [5, 15, 54, 31, 8, 57]. In modern ML systems, the entities/nodes are often equipped with vector embeddings from different sources. For example, authors are nodes in a citation graph and can be equipped with embeddings of the title/content of the authored papers [16, 41] as relevant side information. Owing to the utility of graphs in large-scale systems, tremendous progress has been made in the domain of supervised learning from graphs and node features, with Graph Neural Networks (GNNs) headlining the state-of-the-art methods [28, 19, 52]. However, typical real-world ML workflows start with unsupervised data analysis to better understand the data and design supervised methods accordingly. In fact, clustering is often a key tool to ensure scalability to web-scale data [26]. Furthermore, even independent of supervised learning, clustering graph data with node features is critical for a variety of real-world applications like recommendation, routing, triaging [6, 2, 32], etc.
Effective graph clustering methods should be scalable, especially with respect to the number of nodes, which can be in millions even for a moderate-scale system [57]. Furthermore, in the presence of side-information, the system should be able to use both the views – node features and graph information – of the data “effectively”. For example, the method should be more accurate than single-view methods that either consider only the graph information [27] or only the node feature
information [33, 43, 7]. This problem of graph clustering with side information has been extensively studied in the literature [61]; see Section 2 for a review of the existing and recent methods. Most methods map the problem to that of learning vector embeddings and then apply standard k-means [33] style clustering techniques. However, such methods – like Node2vec [18] – don’t explicitly optimize for clusterability; therefore, the resulting embeddings might not be suitable for effective clustering. Furthermore, several existing methods tend to be highly reliant on the graph information and thus tend to perform poorly when the graph information is noisy/incomplete. Finally, several existing methods such as GraphCL [58] propose expensive augmentation and training modules, and thus do not scale to realistic web-scale datasets.
We propose S3GC, which uses a one-layer GNN encoder to combine both the graph and node-feature information, along with graph-only and node-feature-only encodings. S3GC applies contrastive learning to ensure that the embedding of a node is close to “near-by” nodes – obtained by random walk – while being far away from all other nodes. That is, S3GC explicitly addresses the three challenges mentioned above: a) S3GC is based on contrastive learning, which is known to promote linear separability and hence clustering [20]; b) S3GC carefully combines information from both the graph view and the feature view, and thus performs well when one of the views is highly noisy/incomplete; c) S3GC uses a light-weight encoder and a simple random-walk-based sampler/augmentation, and can be scaled to hundreds of millions of nodes on a single virtual machine (VM).
For example, consider a dataset where the adjacency matrix of the graph is sampled from a stochastic block model with 10 clusters; let the probability of an edge between nodes from the same cluster be p and from different clusters be q. Furthermore, features of each node are sampled from a mixture of 10 Gaussians, where c is the distance between any two cluster centers and σ is the standard deviation of each Gaussian. Now, consider a setting where p > q but p, q are close, hence the information from the graph structure is weak. Similarly, c < σ but they are close. Figure 1 plots two-dimensional t-SNE projections [51] of embeddings learned by the state-of-the-art Node2vec [18] and DGI [53] methods, along with S3GC. Note that while Node2vec’s objective function is optimized well, the embeddings do not appear to be separable. DGI’s embeddings are better separated, but there is still significant overlap. In contrast, S3GC is able to produce well-separated embeddings due to the contrastive learning objective along with explicit utilization of both data views.
We conduct extensive empirical evaluation of S3GC and compare it to a variety of baselines and standard state-of-the-art benchmarks, particularly: Spectral Clustering [43], k-means [33], METIS [27], Node2vec [18], DGI [53], GRACE [62], MVGRL [21] and BGRL [48]. Overall, we observe that our method consistently outperforms Node2vec and DGI – SOTA scalable methods – on all seven datasets, achieving as much as 5% higher NMI than both methods. For two small-scale datasets, our method is competitive with the MVGRL method, but MVGRL does not scale to even moderately sized datasets with about 2.5M nodes and 61M edges, while our method scales to datasets with 111M nodes and 1.6B edges.
2 Related Work
Below, we discuss works related to various aspects of graph clustering and self-supervised learning, and place our contribution in the context of these related works.
Graph OR features-only clustering: Graph clustering is a well-studied problem, and several techniques address it, including Spectral Clustering (SC) [43], Graclus [12], METIS [27], Node2vec [18], and DeepWalk [40]. In particular, Node2vec [18] is a probabilistic framework that extends DeepWalk and maps nodes to low-dimensional feature spaces such that the likelihood of preserving the local and global neighborhoods of the nodes is maximized. In the setting of node-features-only data, k-means clustering is one of the classical methods, in addition to several others like agglomerative clustering [44], density-based clustering [59], and deep clustering [7].
As demonstrated in Figure 1 and Table 1, S3GC attempts to exploit both the views, and if both views are meaningful then it can be significantly more accurate than single-view methods.
Self Supervised Learning: Self-supervised learning methods have demonstrated that they can learn linearly separable features/representations in the absence of any labeled information. The typical approach is to define instance-wise “augmentations” and then pose the problem as that of learning contrastive representations that map instance augmentations close to the instance embedding, while pushing it far apart from all other instance embeddings. Popular examples include MoCo [22], MoCo v2 [11], SimCLR [9], and BYOL [17]. Such methods require augmentations, and as such do not apply directly to the graph+node-features clustering problem. S3GC uses simple random-walk-based augmentations to enable contrastive-learning-based techniques.
Graph Clustering with Node Features: To exploit both the graph and feature information, several existing works use the autoencoder approach. That is, they encode nodes using Graph Neural Networks (GNNs) [28], with the goal that the inner product of encodings can reconstruct the graph structure; GAE and VGAE [29] use this technique. GALA [38], ARGA and ARVGA [37] extend the idea by using Laplacian sharpening and generative adversarial learning. Structural Deep Clustering Network (SDCN) [4] jointly learns an Auto-Encoder (AE) along with a Graph Auto-Encoder (GAE) for better node representations, while Deep Fusion Clustering Network (DFCN) [50] merges the representations learned by AE and GAE for consensus representation learning. Since AE-type approaches attempt to solve a much harder problem, their accuracy in practice lags significantly behind the state-of-the-art; for example, see Table 3 in [21], which shows that such techniques can be 5-8% less accurate. MinCutPool [42] and DMoN [49] extend spectral clustering with graph encoders, but the resulting problem is somewhat unstable and leads to relatively poor partitions; see Table 3.
Graph Contrastive Learning: Recently, several papers have explored contrastive graph representation learning approaches and have demonstrated state-of-the-art performance. Deep Graph Infomax (DGI) [53] is based on the MINE [24] method, and is one of the most scalable methods with nearly SOTA performance. It uses edge permutations to learn augmentations and embeddings. InfoGraph [47] extends the DGI idea to learn unsupervised representations for entire graphs as well. GraphCL [58] designs a framework with four types of graph augmentations for learning unsupervised representations of graph data using a contrastive objective. MVGRL [21] extends these ideas by performing node diffusion and contrasting node representations with augmented graph representations, while GRACE [62] maximizes agreement of node embeddings across two corrupted views of the graph. Bootstrapped Graph Latents (BGRL) [48] adapts the BYOL [17] methodology to the graph domain and eliminates the need for negative sampling by minimizing an invariance-based loss for augmented graphs within a batch. While these methods are able to obtain more powerful embeddings, the augmentations and objective function setup become expensive, and hence they are hard to scale to large datasets beyond ~1M nodes. In contrast, S3GC is able to provide competitive or better clustering accuracy while still being scalable to graphs of 100M nodes.
3 S3GC: Scalable Self-Supervised Graph Contrastive Clustering

In this section, we first formally introduce the problem of graph clustering and notations. Then we discuss challenges faced by current methods and outline the framework of our method, S3GC. Finally, we detail each component of our method and highlight the overall training methodology.
3.1 Problem Statement and Notations
Consider a graph G = (V, E) with vertex set V = {v_1, ..., v_n} and edge set E ⊆ V × V, where |E| = m. Let A ∈ R^{n×n} be the adjacency matrix of G, where A_ij = 1 if (v_i, v_j) ∈ E and A_ij = 0 otherwise. Let X ∈ R^{n×d} be the node attribute or feature matrix, where the i-th row X_i denotes the d-dimensional feature vector of node i. Given the graph G and attributes X, the aim is to partition G into k partitions {G_1, G_2, G_3, ..., G_k} such that nodes in the same cluster are similar/close to each other in terms of the graph structure as well as in terms of attributes.
Now, in general, one can define several loss functions to evaluate the quality of a clustering, but these might not reflect the underlying ground truth. So, to evaluate the quality of clustering, we use standard benchmarks for which ground-truth labels are available a priori. Furthermore, Normalized Mutual Information (NMI) between the ground-truth labels and the estimated cluster labels is used as the key metric. NMI between two labellings Y_1 and Y_2 is defined as:
$$\mathrm{NMI}(Y_1, Y_2) = \frac{2 \cdot I(Y_1, Y_2)}{H(Y_1) + H(Y_2)} \qquad (1)$$
where $I(Y_1, Y_2)$ is the mutual information between labellings $Y_1$ and $Y_2$, and $H(\cdot)$ is the entropy. The normalized adjacency matrix is denoted by $\tilde{A} = D^{-1/2} A D^{-1/2} \in \mathbb{R}^{n \times n}$, where $D = \mathrm{diag}(A \mathbf{1}_n)$ is the degree matrix. We also compute a k-hop diffusion matrix, denoted by $S_k = \sum_{i=0}^{k} \alpha_i \tilde{A}^i \in \mathbb{R}^{n \times n}$, where $\alpha_i \in [0, 1]\ \forall i \in [k]$ and $\sum_i \alpha_i \leq 1$. Intuitively, the k-hop diffusion matrix captures a weighted average of the k-hop neighbourhood around every node. For specific $\alpha_i$ and for $k = \infty$, the diffusion matrix can be computed in closed form [30, 36]. However, in this work we focus on finite k.
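In practice, eq. (1) corresponds to scikit-learn's NMI with arithmetic averaging, since $I / \frac{H(Y_1)+H(Y_2)}{2} = \frac{2I}{H(Y_1)+H(Y_2)}$; a small sketch (the library choice is ours):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

y_true = np.array([0, 0, 1, 1, 2, 2])  # ground-truth labels
y_pred = np.array([1, 1, 0, 0, 2, 2])  # estimated cluster assignments
# average_method="arithmetic" matches eq. (1) exactly.
nmi = normalized_mutual_info_score(y_true, y_pred, average_method="arithmetic")
print(nmi)  # 1.0: the clustering matches the labels up to a relabelling
```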
3.2 Challenges in Graph Clustering
Clustering in general is a challenging problem, as the underlying function to evaluate the quality of a clustering solution is unknown a priori. However, graph partitioning/clustering with attributes poses several further challenges. In particular, scaling such methods is challenging, as graphs are sparse data structures while neural-network-based approaches produce dense artifacts. Furthermore, it is challenging to effectively combine information from the two data views: the graph and the feature attributes. Node2vec [18] uses only graph structure information, while DGI [53] and related methods [21, 39] are highly dependent upon attribute quality. Motivated by the above challenges, we propose S3GC, which uses a self-supervised variant of GNNs.
3.3 S3GC: Scalable Self-Supervised Graph Clustering – Methodology
At a high level, S3GC uses a Graph Convolutional Network (GCN) based encoder and optimizes it using a contrastive loss where the nodes are sampled via random walks. Below, we describe the three components of S3GC and then present the resulting training algorithm.
Graph Convolutional Encoder: We use a 1-layer Graph Convolutional Network [28] to encode the graph and feature information for each node:
$$\bar{X} = \left( \mathrm{PReLU}(\tilde{A} X \Theta) + \mathrm{PReLU}(S_k X \Theta') + I \right) \qquad (2)$$
where $\bar{X} \in \mathbb{R}^{n \times d}$ stores the learned d-dimensional representation of each node. Recall that à is the normalized adjacency matrix and S_k is the k-hop diffusion matrix. I ∈ R^{n×d} is a learnable matrix, {Θ, Θ'} are the weights of the GCN layer, and PReLU is the parametric ReLU activation function [23]:

$$f(z_i) = z_i \ \text{if } z_i \geq 0, \qquad f(z_i) = a \cdot z_i \ \text{otherwise}, \qquad (3)$$

where a is a learnable parameter. Our choice of encoder makes the method scalable, as a 1-layer GCN requires storing only the learnable parameters in GPU memory, which is small (O(d²), where d is the dimensionality of the node attributes). The parameter I scales only linearly with the number of nodes n. More importantly, we use mini-batches that reduce the memory requirement of the forward and backward pass to order O(rsd + d²), where r is the batch size under consideration and s is the average degree of nodes, therefore making our method scalable to graphs of very large sizes as well. We provide further discussion on the memory requirement of our method in Section 3.4.
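A minimal PyTorch sketch of the encoder in eq. (2); the module structure and names are our assumptions rather than the official implementation:

```python
import torch
import torch.nn as nn

class S3GCEncoder(nn.Module):
    """X_bar = PReLU(A~X Theta) + PReLU(S_k X Theta') + I (eq. (2)).
    A~X and S_k X are precomputed, so a forward pass only needs the
    rows corresponding to the nodes in the current batch."""
    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)        # Theta
        self.theta_prime = nn.Linear(in_dim, out_dim, bias=False)  # Theta'
        self.I = nn.Parameter(torch.zeros(num_nodes, out_dim))     # per-node embedding
        self.act1, self.act2 = nn.PReLU(), nn.PReLU()

    def forward(self, AX_rows, SX_rows, node_ids):
        # AX_rows, SX_rows: (batch, in_dim) slices of the precomputed matrices
        return (self.act1(self.theta(AX_rows))
                + self.act2(self.theta_prime(SX_rows))
                + self.I[node_ids])
```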
Random Walk Sampler: Next, inspired by [40, 18], we utilise biased second-order random walks with restarts to generate points similar to a given node and thus capture the local neighborhood of each node. Formally, following [18], we start with a source node u and simulate a random walk of length l. We use c_i to denote the i-th node in the random walk, starting from c_0 = u. Every other node c_i in the walk is generated from the distribution:
$$P(c_i = x \mid c_{i-1} = v) = \frac{\pi_{vx}}{Z} \ \text{if } (v, x) \in E, \qquad P(c_i = x \mid c_{i-1} = v) = 0 \ \text{otherwise} \qquad (4)$$
where $\pi_{vx}$ is the unnormalized transition probability between nodes v and x, and Z is the normalization constant. To bias the random walks and compute the next node x, we follow a methodology similar to [18]: from node v, after traveling the edge (t, v), the transition probability $\pi_{vx}$ is set to $\alpha_{pq}(t, x) \cdot w_{vx}$, where $w_{vx}$ is the weight on the edge between v and x, and the bias parameter α is defined by:
$$\alpha_{pq}(t, x) = \begin{cases} \frac{1}{p} & \text{if } d_{tx} = 0 \\ 1 & \text{if } d_{tx} = 1 \\ \frac{1}{q} & \text{if } d_{tx} = 2 \end{cases} \qquad (5)$$
where p is the return parameter, controlling the likelihood of immediately revisiting a node, q is the in-out parameter [18], allowing the search to differentiate between “inward” and “outward” nodes, and $d_{tx}$ denotes the shortest-path distance between nodes t and x. We note that $d_{tx}$ from node t to x can only take values in {0, 1, 2}. Setting p to a high value (> max(q, 1)) ensures a lower likelihood of revisiting a node, and setting it to a low value (< min(q, 1)) makes the walk more “local”. Similarly, setting q > 1 biases the random walk to nodes near t and obtains a local view of the graph, encouraging BFS-like behaviour, whereas q < 1 biases the walk towards nodes further away from t and encourages DFS-like behaviour.
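A minimal, unoptimized sketch of the second-order walk of eqs. (4)-(5) for an unweighted graph (production implementations typically precompute alias tables for speed):

```python
import numpy as np

def biased_random_walk(adj: dict, start: int, length: int,
                       p: float = 1.0, q: float = 1.0, seed: int = 0):
    """adj maps a node to the set of its neighbours. After traversing t -> v,
    the unnormalized weight of moving to x is 1/p if x == t (d_tx = 0),
    1 if x neighbours t (d_tx = 1), and 1/q otherwise (d_tx = 2)."""
    rng = np.random.default_rng(seed)
    walk = [start]
    while len(walk) < length + 1:
        v = walk[-1]
        nbrs = sorted(adj[v])
        if not nbrs:
            break
        if len(walk) == 1:                       # first step: uniform
            walk.append(int(rng.choice(nbrs)))
            continue
        t = walk[-2]
        w = np.array([1.0 / p if x == t else (1.0 if x in adj[t] else 1.0 / q)
                      for x in nbrs])
        walk.append(int(rng.choice(nbrs, p=w / w.sum())))
    return walk

# Example: a 4-cycle
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(biased_random_walk(adj, start=0, length=5, p=4.0, q=0.5))
```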
Contrastive Loss Formulation: Now, to learn the encoder parameters, we use a SimCLR-style loss function where nodes generated from the random walk are considered positives, while the rest of the samples are considered negatives. That is, we use graph neighborhood information to produce augmentations of a node. Formally, let $C(u)^+ = \{c_0, c_1, \dots, c_l\}$ be the nodes generated by a random walk starting at $c_0 = u$. Then $C(u)^+$ is the set of positive samples $p_u^+$, while the set of negatives $p_u^-$ is generated by sampling l nodes from the remaining set of nodes $[n] \setminus p_u^+$. Given $p_u^+$ and $p_u^-$, we can now define the loss for each u as:
$$\mathcal{L}_{\mathrm{SimCLR}}(u) = \frac{\sum_{v \in p_u^+} \exp(\mathrm{sim}(\bar{X}_u, \bar{X}_v))}{\sum_{v \in p_u^+} \exp(\mathrm{sim}(\bar{X}_u, \bar{X}_v)) + \sum_{v' \in p_u^-} \exp(\mathrm{sim}(\bar{X}_u, \bar{X}_{v'}))} \qquad (6)$$

where sim is some similarity function, for example the normalized inner product (cosine similarity): $\mathrm{sim}(u, v) = \frac{u^T v}{\|u\| \|v\|}$.
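A PyTorch sketch of (the negative log of) eq. (6) for a single anchor; the temperature τ is our addition and defaults to 1:

```python
import torch
import torch.nn.functional as F

def simclr_loss(z_u: torch.Tensor, z_pos: torch.Tensor,
                z_neg: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """z_u: (d,) anchor; z_pos: (l, d) positives; z_neg: (l, d) negatives.
    sim(u, v) is the normalized inner product; returns -log of the ratio
    in eq. (6), which is minimized during training."""
    z_u = F.normalize(z_u, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)
    pos = torch.exp(z_pos @ z_u / tau).sum()
    neg = torch.exp(z_neg @ z_u / tau).sum()
    return -torch.log(pos / (pos + neg))
```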
Note that SimCLR-style loss functions have been shown to lead to "linearly separable" representations [20] and hence align well with the clustering objective [55, 10]. In contrast, loss functions like those used in Node2vec [18] might not necessarily lead to "clusterable" representations, which is also indicated by their performance on synthetic as well as real-world datasets.
3.4 Algorithm
Now that we have discussed the individual components of our method, we describe the overall training methodology in Algorithm 1. We begin with the initialization of the learnable parameters in line 1. In lines 4-5 we generate the positive and negative samples for each node in the current batch. Since we operate with embeddings of only the nodes in the batch and their positive/negative samples, we take a union of these to create a “node set” in line 6. This helps in reducing the memory requirements of our algorithm, since we do not do a forward/backward pass on the entire ÃX, but only on the nodes needed for the current batch. Once we have the node set, we compute representations for the nodes in the current batch using a forward pass in line 8, compute the loss for nodes in this set in line 9, and perform back-propagation to generate the gradient updates for the learnable parameters in line 10. Finally, we update the learnable parameters in line 11 and repeat the process for the next batch.
Space Complexity: The space complexity of the forward and backward pass of our algorithm is O(rsd + d²), where r is the batch size, s is the average degree of nodes, and d is the attribute dimension. The process of random walk generation is fast and can be done in main memory, which is abundantly available and highly parallelizable. Therefore, storing the graph structure in memory for sampling of positives does not create a memory bottleneck and takes O(m) space. For all the datasets other than ogbn-papers100M, we store ÃX, S_kX, and I in GPU memory as well, requiring additional O(nd) space. However, for very large-scale datasets, one can conveniently store these in main memory and interface with the GPU when required, thereby restricting the GPU memory requirement to O(rsd + d²).
Time Complexity: The forward and backward computation for a given batch takes O(rsd²) time. Hence, for n nodes, batch size r, and K epochs, the time complexity is O(Knsd²).
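As an illustrative back-of-envelope check of these bounds (the concrete numbers are ours, not from the paper):

```python
r, s, d = 1024, 50, 128            # batch size, avg. degree, feature dim (illustrative)
floats = r * s * d + d * d         # O(rsd + d^2) values touched per step
print(f"~{4 * floats / 1e6:.1f} MB at float32")  # ~26.3 MB: fits easily on one GPU
```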
Embedding property: Detecting communities ideally requires nodes to be clustered based on their position rather than structural similarities. We show in Appendix C that S3GC produces positional embeddings [46].
Code: The implementation code of S3GC is available at: https://github.com/devvrit/S3GC
Algorithm 1 S3GC: Training and Backpropagation
Input: Graph G, matrices ÃX ∈ R^{n×d} and S_kX ∈ R^{n×d}, number of epochs K, batched inputs of nodes B, self-supervised loss formulation L_SimCLR, encoder definition ENC, learning rate η
1: Initialize model parameters: Θ, Θ', I
2: for epoch = 1, 2, ..., K do
3:   for each batch b ∈ B do
4:     Generate positive samples p_v^+ using biased random walks (Section 3.3) ∀ v ∈ b
5:     Generate negative samples p_v^- using random sampling ∀ v ∈ b
6:     Compute the node set N_b := UNION(p_v^+, p_v^-) ∀ v
7:     Select the subset of rows (ÃX)_{N_b} and (S_kX)_{N_b} corresponding to the node set N_b
8:     Forward pass to compute the representations: X̄ ← ENC((ÃX)_{N_b}, (S_kX)_{N_b}, Θ, Θ', I)
9:     Compute the loss using the self-supervised formulation: L(X̄)
10:    Compute gradients for the learnable parameters at time t: u_t(Θ, Θ', I) ← ∇_{Θ,Θ',I} L(X̄)
11:    Refresh the parameters: (Θ, Θ', I)_{t+1} ← (Θ, Θ', I)_t − (η/|b|) · u_t(Θ, Θ', I)
Output: X̄; Θ, Θ', I
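A skeleton of Algorithm 1 in PyTorch; the encoder, samplers, and per-anchor loss are assumed to follow the sketches above, and the index bookkeeping is simplified for readability:

```python
import torch

def train_s3gc(encoder, AX, SX, batches, sample_pos, sample_neg, loss_fn,
               epochs: int, lr: float):
    """Skeleton of Algorithm 1. sample_pos/sample_neg return LongTensors of
    node ids; AX and SX are the precomputed (A~X) and (S_k X) matrices."""
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(epochs):                                    # line 2
        for b in batches:                                      # line 3
            pos = {v: sample_pos(v) for v in b}                # line 4
            neg = {v: sample_neg(v) for v in b}                # line 5
            node_set = torch.unique(torch.cat(                 # line 6
                [torch.as_tensor(b)] + list(pos.values()) + list(neg.values())))
            z = encoder(AX[node_set], SX[node_set], node_set)  # lines 7-8
            idx = {int(n): i for i, n in enumerate(node_set)}
            loss = sum(loss_fn(z[idx[v]],
                               z[[idx[int(u)] for u in pos[v]]],
                               z[[idx[int(u)] for u in neg[v]]])
                       for v in b) / len(b)                    # line 9
            opt.zero_grad()
            loss.backward()                                    # line 10
            opt.step()                                         # line 11
    return encoder
```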
3.5 Synthetic Dataset – Stochastic Blockmodel with Gaussian Features
To better understand the working of our method in scenarios with varied quality of graph structure and node attributes, we propose a study on a synthetic dataset using Stochastic Block Models (SBM) [1] with Gaussian features. For a given parameter k, the SBM [45] constructs a graph G = (V, E) with k partitions of the nodes V. The probability of an intra-cluster edge is p and of an inter-cluster edge is q, where p > q.³ Similar studies have been proposed for benchmarking GNNs [13] and graph clustering methods [14, 49] using SBM. In this work, we create an attributed SBM model, where each node has an s-dimensional attribute associated with it. Following the setup in [49], for k clusters (partitions) we generate k cluster centers using s-multivariate normal distributions N(0_s, σ_c² · I_s), where σ_c² is a hyperparameter we define. The attributes of the nodes of a given cluster are then sampled from an s-multivariate Gaussian distribution with the corresponding cluster center and σ²I variance. The ratio σ_c²/σ² controls the expected value of the classical between- vs. within-sum-of-squares of the clusters.
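A minimal generator for this attributed-SBM benchmark (the parameter defaults below are placeholders; the paper's exact settings are listed in Table 1):

```python
import numpy as np

def sbm_with_features(n=1000, k=10, p=0.1, q=0.01, s=32,
                      sigma_c=1.0, sigma=1.0, seed=0):
    """SBM graph with mixture-of-Gaussian features: intra-cluster edge prob. p,
    inter-cluster q; cluster centers ~ N(0_s, sigma_c^2 I_s), node attributes
    ~ N(center, sigma^2 I_s). Assumes k divides n."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), n // k)
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(np.int8)   # symmetric, no self-loops
    centers = rng.normal(0.0, sigma_c, size=(k, s))
    X = centers[labels] + rng.normal(0.0, sigma, size=(n, s))
    return A, X, labels
```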
We compare our method with: k-means on the attributes, Spectral Clustering [43], DGI [53], and Node2vec [18]. This choice of baseline methods focuses on different facets of graph data and clustering across which we want to assess the performance of our method. k-means on attributes utilizes only the node attribute information. Spectral Clustering is a non-trainable classical algorithm commonly used for solving SBMs, but it uses only the graph-structure information. Similarly, Node2vec is a common trainable graph-embedding algorithm that utilizes only the structural information. DGI is a scalable SOTA self-supervised graph representation learning algorithm that uses both structure and node attributes. To demonstrate the effectiveness of our choice of loss formulation, we also run our method without using any attribute information, using only the learnable embedding I ∈ R^{n×d}, i.e., X̄ = I.
³Note that these are parameters for the SBM dataset generation, unrelated to the random walk sampling parameters in the S3GC model.
Setup and Observations: We set the number of nodes n = 1000 and the number of clusters k = 10, where each cluster contains n/k = 100 nodes, and we vary p and q to generate graphs of different structural quality. Varying σ_c²/σ² controls the quality of the attributes. The first row in Table 1 represents a graph with high structural as well as attribute quality. The second row represents low structural and low attribute quality, while the last row represents low structural but high attribute quality. We make several observations: 1) Even without using any attribute information, our method performs significantly better compared to other structure-only methods like Spectral Clustering and Node2vec, which demonstrates the effectiveness of our loss formulation and training methodology that promotes clusterability; this is also in line with recent observations [10, 55]. 2) We observe that DGI depends highly on the quality of the attributes and is not able to utilize the high-quality graph structure when the attributes are noisy. In contrast, our method uses both sources of information effectively and performs reasonably well even when only one of the structure or attribute quality is high (the first and the last rows in the table).
Visualization of the Embeddings: We further inspect the quality of the generated embeddings using t-SNE [51] projections in 2 dimensions. Figure 1 corresponds to the second setting with a weak graph and weak attributes, where we observe that S3GC generates representations which are more cluster-like compared to the other methods. Additionally, we note that S3GC shows similar behaviour in the other two settings as well; the plots for these are provided in the Appendix.
4 Empirical Evaluation
We conduct extensive experiments on several node classification benchmark datasets to evaluate the performance of S3GC as compared to key state-of-the-art (SOTA) baselines across multiple facets associated with Graph Clustering.
4.1 Datasets and Setup
Datasets: We use 3 small-scale, 3 moderate/large-scale, and 1 extra-large-scale dataset from GCN [28], GraphSAGE [19] and the OGB suite [25] to demonstrate the efficacy of our method. The details of the datasets are given in Table 2 and additional details of the sources are mentioned in the Appendix.
Baselines: We compare our method with k-means on features and 8 recent state-of-the-art baseline algorithms, including MinCutPool [3], METIS [27], Node2vec [18], DGI [53], DMoN [49], GRACE [62], BGRL [48] and MVGRL [21]. We choose baseline methods from a broad spectrum of methodologies, namely methods that utilize only the graph structure, methods that utilize only the features, and specific methods that utilize a combination of the graph structure and attribute information, to provide an exhaustive comparison across important facets of graph learning and clustering. METIS [27] is a well-known and scalable classical method for graph partitioning using only structural information. Similarly, Node2vec [18] is another scalable graph-embedding technique that utilizes random walks on the graph structure. MinCutPool [3] and DMoN [49] are graph clustering techniques motivated by the normalized MinCut objective [42] and modularity [35], respectively. DGI is a SOTA self-supervised method utilizing both graph structure and features that motivated a line of work [21, 39] based on entropy maximization between local and global views of a graph. GRACE [62], in contrast to DGI’s methodology, contrasts embeddings at the node level itself, by forming two views of the graph and maximizing the agreement of the embeddings of the same nodes in the two views. BGRL [48] and MVGRL [21] are recent SOTA methods for performing self-supervised graph representation learning.
Metrics: We measure 5 metrics which are relevant for evaluating the quality of the cluster assignments, following the evaluation setup of [56, 21]: Accuracy, Normalized Mutual Information (NMI), Completeness Score (CS), Macro-F1 Score (F1), and Adjusted Rand Index (ARI). For all these metrics, a higher value indicates better clustering performance. We generate the representations using each representation-learning method and then perform k-means clustering on the embeddings to generate the cluster assignments used for evaluation of these metrics.
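A sketch of this evaluation step; clustering accuracy requires matching cluster ids to labels, for which we assume the usual Hungarian assignment (our implementation choice):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def evaluate_clustering(embeddings: np.ndarray, y_true: np.ndarray, k: int):
    """k-means on the learned embeddings, then label-permutation-invariant metrics."""
    y_pred = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    # Hungarian matching: cluster ids are only defined up to permutation.
    C = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    rows, cols = linear_sum_assignment(-C)   # maximize matched counts
    acc = C[rows, cols].sum() / len(y_true)
    return {"ACC": acc,
            "NMI": normalized_mutual_info_score(y_true, y_pred),
            "ARI": adjusted_rand_score(y_true, y_pred)}
```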
2. What are the strengths and weaknesses of the proposed method, particularly in comparison with other works?
3. Do you have any questions or suggestions regarding the notation and presentation of the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or areas for improvement in the paper that the reviewer would like to highlight? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studies the problem of graph clustering using additional information such as node features. Previous works focus on learning generic graph embeddings and don’t explicitly optimize for clustering, are highly reliant on graph information (which falls short when it is noisy), or involve expensive modules which do not scale well. This method utilizes both node attribute information and graph structure information to generate embeddings that are more linearly separable into k clusters. The experiments and results show that the proposed S3GC method outperforms several clustering baselines and produces better-quality embedding clusters.
Strengths And Weaknesses
Strengths:
Related works are well covered.
The proposed method is simple and neat. It scales well to large graphs of the order of 100 million nodes.
The standard experiment setup has been followed. Synthetic data generation uses the well-studied SBM algorithm.
Experiments on large-scale datasets show the superior performance of the proposed method in comparison with the baselines.
The paper is written in a simple and easy-to-follow manner.
Weaknesses:
The novelty in the proposed method is weak. It is rather similar to Node2Vec but with the SimCLR loss function which improves linear separability in the node representations.
Considering that there are several hyperparameters involved in the proposed algorithm, the paper does not include a systematic study of different ranges for the parameter values.
Questions
In equation 5, both α and alpha are used to indicate the same variable. It is important to be consistent with the notations.
In Table 1, the name S3GC-I is a little confusing because it usually implies the S3GC method without (minus) I, while it actually means using only I. So it would be good to consider changing the name to something more intuitive (e.g., S3GC_I or Only-I, etc.).
The second observation under the section “Setup and Observations”, line 247, talks about DGI not performing well when attributes are noisy but does not reference any result table or figure. If the authors were referring to Table 2, it is not clear how they have arrived at that conclusion. Which parameter controls the quality of the node attributes?
Similar comment for observation 3 (line 249).
Can the choice of parameter values (e.g. in Table 1) be justified? The p, q & σ values chosen for the experiments are arbitrary and it would be good to experiment with a bigger range of values, e.g., similar to Fig. 5a in [1].
Why does MVGRL perform better than S3GC on the small-scale datasets? Is this just an artifact of the citation networks or is similar behavior observed on other small-scale datasets? It would be interesting to look at which components of MVGRL are contributing to its performance, especially on these datasets.
Gemsec [2] is another related work that has not been included in this paper but might be worth including in the baselines. It is similar in terms of learning clusterable graph embeddings. The embeddings are learned while considering the cluster information.
[1] Grover, Aditya, and Jure Leskovec. "node2vec: Scalable feature learning for networks." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.
[2] Rozemberczki, Benedek, et al. "GEMSEC: Graph embedding with self clustering." Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. 2019.
Limitations
The authors have mentioned some limitations of their work. It does not particularly consider different types of nodes in the learning of embeddings. It does not discount several “important” nodes that may cause the model to be unfairly biased. However, there are several things that need to be improved in this paper (mentioned above) to make it a strong and novel contribution to the graph clustering literature. |
NIPS | Title
S3GC: Scalable Self-Supervised Graph Clustering
Abstract
We study the problem of clustering graphs with additional side-information of node features. The problem is extensively studied, and several existing methods exploit Graph Neural Networks to learn node representations [29]. However, most of the existing methods focus on generic representations instead of their cluster-ability or do not scale to large scale graph datasets. In this work, we propose S3GC which uses contrastive learning along with Graph Neural Networks and node features to learn clusterable features. We empirically demonstrate that S3GC is able to learn the correct cluster structure even when graph information or node features are individually not informative enough to learn correct clusters. Finally, using extensive evaluation on a variety of benchmarks, we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy – with as much as 5% gain in NMI – while being scalable to graphs of size 100M.
1 Introduction
Graphs are commonplace data structures to store information about entities/users, and have been investigated for decades [5, 15, 54, 31, 8, 57]. In modern ML systems, the entities/nodes are often equipped with vector embeddings from different sources. For example, authors are nodes in a citation graph and can be equipped with embeddings of the title/content of the authored papers [16, 41] as relevant side information. Owing to the utlility of graphs in large-scale systems, tremendous progress has been made in the domain of supervised learning from graphs and node features, with Graph Neural Networks (GNNs) headlining the state-of-the-art methods [28, 19, 52]. However, typical realworld ML workflows start with unsupervised data analysis to better understand the data and design supervised methods accordingly. In fact, many times clustering is a key tool to ensure scalability to web-scale data [26]. Furthermore, even independent of supervised learning, clustering the graph data with node features is critical for a variety of real-world applications like recommendation, routing, triaging [6, 2, 32] etc.
Effective graph clustering methods should be scalable, especially with respect to the number of nodes, which can be in millions even for a moderate-scale system[57]. Furthermore, in the presence of side-information, the system should be able to use both the views – node features and graph information – of the data “effectively". For example, the method should be more accurate than single-view methods that either consider only the graph information [27] or only the node feature
⇤work done while the author was an intern at Google Research †Now at University of Illinois, Urbana-Champaign
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
information [33, 43, 7]. This problem of graph clustering with side information has been extensively studied in the literature [61]; see Section 2 for a review of the existing and recent methods. Most methods map the problem to that of learning vector embeddings and then apply standard k-means [33] style clustering techniques. However, such methods – like Node2vec [18] – don’t explicitly optimize for clusterability, therefore the resulting embeddings might not be suitable for effective clustering. Furthermore, several existing methods tend to be highly reliant on the graph information and thus tend to perform poorly when graph information is noisy/incomplete. Finally, several existing methods such as GraphCL [58] propose expensive augmentation and training modules, and thus do not scale to realistic web-scale datasets.
We propose S3GC, which uses a one-layer GNN encoder to combine both the graph and node-feature information, along with graph-only and node-feature-only encodings. S3GC applies contrastive learning to ensure that the embedding of a node is close to "nearby" nodes – obtained via random walks – while being far away from all other nodes. That is, S3GC explicitly addresses the three challenges mentioned above: a) S3GC is based on contrastive learning, which is known to promote linear separability and hence clustering [20]; b) S3GC carefully combines information from both the graph view and the feature view, and thus performs well even when one of the views is highly noisy/incomplete; c) S3GC uses a light-weight encoder and a simple random walk based sampler/augmentation, and can be scaled to hundreds of millions of nodes on a single virtual machine (VM).
For example, consider a dataset where the adjacency matrix of the graph is sampled from a stochastic block model with 10 clusters; let the probability of an edge between nodes from the same cluster be p and from different clusters be q. Furthermore, the features of each node are sampled from a mixture of 10 Gaussians, where c is the distance between any two cluster centers and σ is the standard deviation of each Gaussian. Now, consider a setting where p > q but p, q are close, so the information from the graph structure is weak. Similarly, c < σ but they are close. Figure 1 plots two-dimensional t-SNE projections [51] of the embeddings learned by the state-of-the-art Node2vec [18] and DGI [53] methods, along with S3GC. Note that while Node2vec's objective function is optimized well, the embeddings do not appear to be separable. DGI's embeddings are better separated, but there is still significant overlap. In contrast, S3GC is able to produce well-separated embeddings due to the contrastive learning objective along with explicit utilization of both data views.
We conduct extensive empirical evaluation of S3GC and compare it to a variety of baselines and standard state-of-the-art benchmarks, particularly: Spectral Clustering [43], k-means [33], METIS [27], Node2vec [18], DGI [53], GRACE [62], MVGRL [21] and BGRL [48]. Overall, we observe that our method consistently outperforms Node2vec and DGI – the SOTA scalable methods – on all seven datasets, achieving as much as 5% higher NMI than both methods. For two small-scale datasets, our method is competitive with MVGRL, but MVGRL does not scale even to moderate-sized datasets with about 2.5M nodes and 61M edges, while our method scales to datasets with 111M nodes and 1.6B edges.
2 Related Work
Below, we discuss works related to various aspects of graph clustering and self-supervised learning, and place our contribution in the context of these related works.
Graph OR features-only clustering: Graph clustering is a well-studied problem, and several techniques address the problem including Spectral Clustering (SC) [43], Graclus [12], METIS [27], Node2vec [18], and DeepWalk [40]. In particular, Node2Vec [18] is a probabilistic framework that is an extension to DeepWalk, and maps nodes to low-dimensional feature spaces such that the likelihood of preserving the local and global neighborhood of the nodes is maximized. In the setting of node-features only data, k-means clustering is one of the classical methods, in addition to several others like agglomerative clustering [44], density based clustering [59], and deep clustering [7].
As demonstrated in Figure 1 and Table 1, S3GC attempts to exploit both the views, and if both views are meaningful then it can be significantly more accurate than single-view methods.
Self Supervised Learning: Self-supervised learning methods have demonstrated that they can learn linearly separable features/representations in the absence of any labeled information. The typical approach is to define instance-wise "augmentations" and then pose the problem as learning contrastive representations that map instance augmentations close to the instance embedding, while pushing them far apart from all other instance embeddings. Popular examples include MoCo [22], MoCo v2 [11], SimCLR [9], and BYOL [17]. Such methods require augmentations, and as such do not apply directly to the graph+node-features clustering problem. S3GC uses simple random walk based augmentations to enable contrastive learning based techniques.
Graph Clustering with Node Features: To exploit both the graph and feature information, several existing works use an autoencoder approach. That is, they encode nodes using Graph Neural Networks (GNN) [28], with the goal that inner products of the encodings can reconstruct the graph structure; GAE and VGAE [29] use this technique. GALA [38], ARGA and ARVGA [37] extend the idea by using Laplacian sharpening and generative adversarial learning. Structural Deep Clustering Network (SDCN) [4] jointly learns an Auto-Encoder (AE) along with a Graph Auto-Encoder (GAE) for better node representations, while Deep Fusion Clustering Network (DFCN) [50] merges the representations learned by the AE and the GAE for consensus representation learning. Since AE-type approaches attempt to solve a much harder problem, their accuracy in practice lags significantly behind the state-of-the-art; for example, see Table 3 in [21], which shows that such techniques can be 5-8% less accurate. MinCutPool [42] and DMoN [49] extend spectral clustering with graph encoders, but the resulting problem is somewhat unstable and leads to relatively poor partitions; see Table 3.
Graph Contrastive Learning: Recently several papers have explored contrastive graph representation learning approaches and have demonstrated state-of-the-art performance. Deep Graph Infomax (DGI) [53] is based on the MINE [24] method, and is one of the most scalable methods with nearly SOTA performance. It uses edge permutations to learn augmentations and embeddings. InfoGraph [47] extends the DGI idea to learn unsupervised representations for whole graphs as well. GraphCL [58] designs a framework with four types of graph augmentations for learning unsupervised representations of graph data using a contrastive objective. MVGRL [21] extends these ideas by performing node diffusion and contrasting node representations with augmented graph representations, while GRACE [62] maximizes agreement of node embeddings across two corrupted views of the graph. Bootstrapped Graph Latents (BGRL) [48] adapts the BYOL [17] methodology to the graph domain, and eliminates the need for negative sampling by minimizing an invariance based loss for augmented graphs within a batch. While these methods are able to obtain more powerful embeddings, the augmentations and objective function setup become expensive, and hence they are hard to scale to large datasets beyond ∼1M nodes. In contrast, S3GC is able to provide competitive or better clustering accuracy while still being scalable to graphs with 100M nodes.
3 S3GC: Scalable Self-Supervised Graph Contrastive Clustering
In this section, we first formally introduce the problem of graph clustering and the notation. Then we discuss challenges faced by current methods and outline the framework of our method S3GC. Finally, we detail each component of our method and highlight the overall training methodology.
3.1 Problem Statement and Notations
Consider a graph G = (V, E) with the vertex set V = {v_1, …, v_n} and the edge set E ⊆ V × V, where |E| = m. Let A ∈ R^{n×n} be the adjacency matrix of G, where A_{ij} = 1 if (v_i, v_j) ∈ E, else A_{ij} = 0. Let X ∈ R^{n×d} be the node attributes or feature matrix, where the i-th row X_i denotes the d-dimensional feature vector of node i. Given the graph G and attributes X, the aim is to partition G into k partitions {G_1, G_2, G_3, …, G_k} such that nodes in the same cluster are similar/close to each other in terms of the graph structure as well as in terms of attributes.
Now, in general, one can define several loss functions to evaluate the quality of a clustering, but they might not reflect the underlying ground truth. So, to evaluate the quality of clustering, we use standard benchmarks which have ground truth labels a priori. Furthermore, Normalized Mutual Information (NMI) between the ground truth labels and the estimated cluster labels is used as the key metric. NMI between two labellings Y_1 and Y_2 is defined as:
NMI(Y_1, Y_2) = 2 · I(Y_1, Y_2) / (H(Y_1) + H(Y_2))    (1)
where I(Y_1, Y_2) is the Mutual Information between labellings Y_1 and Y_2, and H(·) is the entropy. The Normalized Adjacency Matrix is denoted by Ã = D^{−1/2} A D^{−1/2} ∈ R^{n×n}, where D = diag(A 1_n) is the degree matrix. We also compute a k-hop Diffusion Matrix, denoted by S_k = Σ_{i=0}^{k} α_i Ã^i ∈ R^{n×n}, where α_i ∈ [0, 1] ∀ i ∈ [k] and Σ_i α_i ≤ 1. Intuitively, the k-hop diffusion matrix captures a weighted average of the k-hop neighbourhood around every node. For specific α_i and for k = ∞, the diffusion matrix can be computed in closed form [30, 36]. However, in this work we focus on finite k.
3.2 Challenges in Graph Clustering
Clustering in general is a challenging problem, as the underlying function to evaluate the quality of a clustering solution is unknown a priori. However, graph partitioning/clustering with attributes poses several additional challenges. In particular, scaling such methods is challenging, as graphs are sparse data structures while neural network based approaches produce dense artifacts. Furthermore, it is challenging to effectively combine information from the two data views: the graph and the feature attributes. Node2vec [18] uses only graph structure information, while DGI [53] and related methods [21, 39] are highly dependent on attribute quality. Motivated by the above challenges, we propose S3GC, which uses a self-supervised variant of GNNs.
3.3 S3GC: Scalable Self Supervised Graph Clustering – Methodology
At a high level, S3GC uses a Graph Convolution Network (GCN) based encoder and optimizes it using a contrastive loss where the nodes are sampled via a random walk. Below we describe the three components of S3GC and then provide the resulting training algorithm.
Graph Convolutional Encoder: We use a 1-layer Graph Convolutional Network [28] to encode the graph and feature information for each node:
X̄ = PReLU(ÃXΘ) + PReLU(S_kXΘ′) + I    (2)
where X̄ ∈ R^{n×d} stores the learned d-dimensional representation of each node. Recall that Ã is the normalized adjacency matrix and S_k is the k-hop diffusion matrix. I ∈ R^{n×d} is a learnable matrix. {Θ, Θ′} are the weights of the GCN layer, and PReLU is the parametric ReLU activation function [23]:
f(z_i) = z_i if z_i ≥ 0, f(z_i) = a · z_i otherwise,    (3)
where a is a learnable parameter. Our choice of encoder makes the method scalable, as a 1-layer GCN requires storing only the learnable parameters in GPU memory, which is small (O(d^2), where d is the dimensionality of the node attributes); the parameter I scales only linearly with the number of nodes n. More importantly, we use mini-batches that reduce the memory requirement of the forward and backward pass to O(rsd + d^2), where r is the batch size and s is the average degree of the nodes, therefore making our method scalable to graphs of very large sizes as well. We provide further discussion on the memory requirement of our method in Section 3.4.
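A PyTorch sketch of the encoder of Eq. (2) is given below. The class and argument names are our own, and AX_rows/SkX_rows denote the precomputed rows of ÃX and S_kX for the current node set, matching the mini-batched usage described above; this is a sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class S3GCEncoder(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)        # Θ
        self.theta_prime = nn.Linear(in_dim, out_dim, bias=False)  # Θ'
        self.act1, self.act2 = nn.PReLU(), nn.PReLU()              # Eq. (3), learnable a
        self.I = nn.Parameter(torch.empty(num_nodes, out_dim))     # per-node embedding I
        nn.init.xavier_uniform_(self.I)

    def forward(self, AX_rows, SkX_rows, node_ids):
        # X̄ = PReLU(ÃXΘ) + PReLU(S_kXΘ') + I, restricted to the batch rows
        return (self.act1(self.theta(AX_rows))
                + self.act2(self.theta_prime(SkX_rows))
                + self.I[node_ids])
```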
Random Walk Sampler: Next, inspired by [40, 18], we utilise biased second-order random walks with restarts to generate points similar to a given node and thus capture the local neighborhood of each node. Formally, following [18], we start with a source node u and simulate a random walk of length l. We use c_i to denote the i-th node in the random walk, starting from c_0 = u. Every node c_i in the walk is generated from the distribution:
P(c_i = x | c_{i−1} = v) = π_{vx}/Z if (v, x) ∈ E,  and  P(c_i = x | c_{i−1} = v) = 0 otherwise,    (4)
where π_{vx} is the unnormalized transition probability between nodes v and x, and Z is the normalization constant. To bias the random walks and compute the next node x, we follow a methodology similar to [18]: from node v, after traversing the edge (t, v), the transition probability π_{vx} is set to α_{pq}(t, x) · w_{vx}, where w_{vx} is the weight on the edge between v and x, and the bias parameter α is defined by:
α_{pq}(t, x) = 1/p if d_{tx} = 0,  α_{pq}(t, x) = 1 if d_{tx} = 1,  α_{pq}(t, x) = 1/q if d_{tx} = 2,    (5)
where p is the return parameter, controlling the likelihood of immediately revisiting a node; q is the in-out parameter [18], allowing the search to differentiate between "inward" and "outward" nodes; and d_{tx} denotes the shortest path distance between nodes t and x. We note that d_{tx} from node t to x can only take values in {0, 1, 2}. Setting p to a high value (> max(q, 1)) ensures a lower likelihood of revisiting a node, and setting it to a low value (< min(q, 1)) makes the walk more "local". Similarly, setting q > 1 biases the random walk towards nodes near t, obtaining a local view of the graph and encouraging BFS-like behaviour, whereas q < 1 biases the walk towards nodes further away from t, encouraging DFS-like behaviour.
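As an illustration, here is a minimal sketch of the biased second-order walk of Eqs. (4)-(5) for an unweighted graph (i.e., assuming w_{vx} = 1); the function and variable names are ours.

```python
import random

def biased_random_walk(neighbors, start, length, p=1.0, q=1.0):
    # neighbors: dict mapping each node to the set of its adjacent nodes
    walk = [start]
    for _ in range(length):
        v = walk[-1]
        nbrs = list(neighbors[v])
        if not nbrs:
            break
        if len(walk) == 1:            # first step: uniform over neighbors of u
            walk.append(random.choice(nbrs))
            continue
        t = walk[-2]                  # previous node; defines d_tx in Eq. (5)
        weights = []
        for x in nbrs:
            if x == t:                # d_tx = 0: return to the previous node
                weights.append(1.0 / p)
            elif x in neighbors[t]:   # d_tx = 1: stay near t (BFS-like)
                weights.append(1.0)
            else:                     # d_tx = 2: move away from t (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk
```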
Contrastive Loss Formulation: To learn the encoder parameters, we use a SimCLR-style loss function, where nodes generated from the random walk are considered positives while the rest of the samples are considered negatives. That is, we use graph neighborhood information to produce augmentations of a node. Formally, let C(u) = {c_0, c_1, …, c_l} be the nodes generated by a random walk starting at c_0 = u. Then C(u) is the set of positive samples p_u^+, while the set of negatives p_u^− is generated by sampling l nodes from the remaining set of nodes [n] \ p_u^+. Given p_u^+ and p_u^−, we can now define the loss for each u as:
L_SimCLR(u) = Σ_{v∈p_u^+} exp(sim(X̄_u, X̄_v)) / ( Σ_{v∈p_u^+} exp(sim(X̄_u, X̄_v)) + Σ_{v′∈p_u^−} exp(sim(X̄_u, X̄_{v′})) )    (6)
where sim is some similarity function, for example the normalized inner product (cosine similarity): sim(u, v) = u^T v / (‖u‖ ‖v‖).
Note that SimCLR-style loss functions have been shown to lead to "linearly separable" representations [20] and hence align well with the clustering objective [55, 10]. In contrast, loss functions like those used in Node2vec [18] might not necessarily lead to "clusterable" representations, which is also indicated by their performance on synthetic as well as real-world datasets.
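Below is a PyTorch sketch of this formulation for a batch of anchor nodes. Since the ratio in Eq. (6) lies in [0, 1] and is to be maximized, the sketch minimizes its negative log, as in SimCLR; the tensor names and the batching scheme are our own assumptions.

```python
import torch
import torch.nn.functional as F

def s3gc_loss(X, anchors, pos, neg):
    # X: (n, d) node representations X̄; anchors: (b,) node indices;
    # pos, neg: (b, l) indices of the sampled p_u^+ and p_u^- for each anchor.
    Xn = F.normalize(X, dim=-1)                     # so inner product = cosine sim
    a = Xn[anchors].unsqueeze(1)                    # (b, 1, d)
    sim_pos = (a * Xn[pos]).sum(-1).exp().sum(-1)   # Σ_{v in p_u^+} exp(sim(u, v))
    sim_neg = (a * Xn[neg]).sum(-1).exp().sum(-1)   # Σ_{v' in p_u^-} exp(sim(u, v'))
    return -torch.log(sim_pos / (sim_pos + sim_neg)).mean()
```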
3.4 Algorithm
Now that we have discussed the individual components of our method, we describe the overall training methodology in Algorithm 1. We begin with the initialization of the learnable parameters in line 1. In lines 4 and 5 we generate the positive and negative samples for each node in the current batch. Since we operate on the embeddings of only the nodes in the batch and their positive/negative samples, we take their union to create a "node set" in line 6. This helps in reducing the memory requirements of our algorithm, since we do not perform the forward/backward pass on the entire ÃX, but only on the nodes needed for the current batch. Once we have the node set, we compute representations for the nodes in the current batch using a forward pass in line 8, compute the loss for nodes in this set in line 9, and perform back-propagation to generate the gradient updates for the learnable parameters in line 10. Finally, we update the learnable parameters in line 11 and repeat the process for the next batch.
Space Complexity: The space complexity of the forward and backward pass of our algorithm is O(rsd + d^2), where r is the batch size, s is the average degree of nodes, and d is the attribute dimension. The process of random walk generation is fast and can be done in memory, which is abundantly available and highly parallelizable. Therefore, storing the graph structure in memory for sampling of positives doesn't create a memory bottleneck and takes O(m) space. For all the datasets other than ogbn-papers100M, we store ÃX, S_kX, and I in the GPU memory as well, requiring additional O(nd) space. However, for very large-scale datasets, one can conveniently store these in main memory and interface with the GPU when required, thereby restricting the GPU memory requirement to O(rsd + d^2).
Time Complexity: The forward and backward computation for a given batch takes O(rsd^2) time. Hence, for n nodes, batch size r, and K epochs, the time complexity is O(Knsd^2).
Embedding property: Detecting communities ideally requires nodes to be clustered based on their position, rather than structural similarities. We show in Appendix C that S3GC produces positional embeddings [46]. Code: Implementation code of S3GC is available at: https://github.com/devvrit/S3GC
Algorithm 1 S3GC: Training and Backpropagation
Input: Graph G, matrices ÃX ∈ R^{n×d} and S_kX ∈ R^{n×d}, number of epochs K, batched inputs of nodes B, self-supervised loss formulation L_SimCLR, encoder definition ENC, learning rate η
1: Initialize model parameters: Θ, Θ′, I
2: for epoch = 1, 2, …, K do
3:   for each batch b ∈ B do
4:     Generate positive samples p_v^+ using biased random walks (Section 3.3) ∀ v ∈ b
5:     Generate negative samples p_v^− using random sampling ∀ v ∈ b
6:     Compute the node set N_b = UNION(p_v^+, p_v^−) ∀ v ∈ b
7:     Select the subset of rows (ÃX)_{N_b} and (S_kX)_{N_b} corresponding to the node set N_b
8:     Forward pass to compute the representations: X̄ ← ENC((ÃX)_{N_b}, (S_kX)_{N_b}, Θ, Θ′, I)
9:     Compute the loss using the self-supervised formulation: L(X̄)
10:    Compute gradients for the learnable parameters at time t: u_t(Θ, Θ′, I) ← ∇_{Θ,Θ′,I} L(X̄)
11:    Update the parameters: (Θ, Θ′, I)_{t+1} ← (Θ, Θ′, I)_t − (η/|b|) · u_t(Θ, Θ′, I)
Output: X̄; Θ, Θ′, I
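The following condensed sketch ties Algorithm 1 to the pieces sketched earlier (S3GCEncoder, biased_random_walk, and s3gc_loss are the hypothetical helpers from above); it assumes a connected graph so that every walk has a fixed length, and it is not the authors' released implementation.

```python
import torch

def train_s3gc(encoder, AX, SkX, neighbors, batches, epochs, lr, walk_len):
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    n = AX.shape[0]
    for _ in range(epochs):
        for batch in batches:  # batch: list of node ids (line 3)
            # lines 4-5: positives from biased walks, negatives by random sampling
            pos = [biased_random_walk(neighbors, u, walk_len)[1:] for u in batch]
            neg = [torch.randint(0, n, (walk_len,)).tolist() for _ in batch]
            # line 6: node set = batch ∪ positives ∪ negatives
            node_set = sorted(set(batch) | {v for w in pos + neg for v in w})
            remap = {u: i for i, u in enumerate(node_set)}
            ids = torch.tensor(node_set)
            # lines 7-8: forward pass on the selected rows only
            X = encoder(AX[ids], SkX[ids], ids)
            anchors = torch.tensor([remap[u] for u in batch])
            pos_t = torch.tensor([[remap[v] for v in w] for w in pos])
            neg_t = torch.tensor([[remap[v] for v in w] for w in neg])
            # lines 9-11: loss, gradients, parameter update
            loss = s3gc_loss(X, anchors, pos_t, neg_t)
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder
```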
3.5 Synthetic Dataset – Stochastic Blockmodel with Gaussian Features
To better understand the behaviour of our method in scenarios with varied quality of graph structure and node attributes, we propose a study on a synthetic dataset using Stochastic Block Models (SBM) [1] with Gaussian features. For a given parameter k, the SBM [45] constructs a graph G = (V, E) with k partitions of the nodes V. The probability of an intra-cluster edge is p and of an inter-cluster edge is q, where p > q.³ Similar studies have been proposed for benchmarking of GNNs [13] and graph clustering methods [14, 49] using SBM. In this work, we create an attributed SBM model, where each node has an s-dimensional attribute associated with it. Following the setup in [49], for k clusters (partitions) we generate k cluster centers using s-multivariate normal distributions N(0_s, σ_c² · I_s), where σ_c² is a hyperparameter we define. Then the attributes of the nodes of a given cluster are sampled from an s-multivariate Gaussian distribution with the corresponding cluster center and σ²I variance. The ratio σ_c²/σ² controls the expected value of the classical between- vs. within-sum-of-squares of the clusters.
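A sketch of this generator (with our own function name and the assumption that k divides n) follows; it returns the adjacency matrix, the Gaussian node attributes, and the ground-truth labels.

```python
import numpy as np

def attributed_sbm(n, k, p, q, s, sigma_c, sigma, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), n // k)           # k equal-size clusters
    same = labels[:, None] == labels[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(np.int8)
    A = np.triu(A, 1)
    A = A + A.T                                        # symmetric, no self-loops
    centers = rng.normal(0.0, sigma_c, size=(k, s))    # centers ~ N(0_s, σ_c² I_s)
    X = centers[labels] + rng.normal(0.0, sigma, size=(n, s))  # attributes ~ N(μ, σ² I)
    return A, X, labels
```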
We compare our method with: k-means on the attributes, Spectral Clustering [43], DGI [53], and Node2vec [18]. This choice of baseline methods focuses on different facets of graph data and clustering across which we want to assess the performance of our method. k-means on attributes utilizes only the node attribute information. Spectral Clustering is a non-trainable classical algorithm commonly used for solving SBMs, but uses only the graph-structure information. Similarly, Node2vec is a common trainable graph-embedding algorithm that utilizes only the structural information. DGI is a scalable SOTA self-supervised graph representation learning algorithm that uses both structure and node attributes. To demonstrate the effectiveness of our choice of loss formulation, we also run our method without using any attribute information, using only the learnable embedding I ∈ R^{n×d}, i.e., X̄ = I.
³Note that these are parameters for the SBM dataset generation, unrelated to the random walk sampling parameters in the S3GC model.
Setup and Observations: We set the number of nodes n = 1000 and the number of clusters k = 10, where each cluster contains n/k = 100 nodes, and vary p and q to generate graphs of different structural quality. Varying σ_c²/σ² controls the quality of the attributes. The first row in Table 1 represents a graph with high structural as well as attribute quality. The second row represents low structural and low attribute quality, while the last row represents low structural but high attribute quality. We make several observations: 1) Even without using any attribute information, our method performs significantly better than other structure-only methods like Spectral Clustering and Node2vec, which demonstrates the effectiveness of our loss formulation and training methodology in promoting clusterability; this is also in line with recent observations [10, 55]. 2) We observe that DGI depends highly on the quality of the attributes and is not able to utilize the high-quality graph structure when the attributes are noisy. In contrast, our method uses both sources of information effectively and performs reasonably well even when only one of the structure or attribute quality is high (first and last rows in the table).
Visualization of the Embeddings: We further inspect the quality of the generated embeddings using t-SNE [51] projections in 2 dimensions. Figure 1 corresponds to the second setting with a weak graph and weak attributes, where we observe that S3GC generates representations which are more cluster-like compared to the other methods. Additionally, we note that S3GC shows similar behaviour in the other two settings as well; the corresponding plots are provided in the Appendix.
4 Empirical Evaluation
We conduct extensive experiments on several node classification benchmark datasets to evaluate the performance of S3GC as compared to key state-of-the-art (SOTA) baselines across multiple facets associated with Graph Clustering.
4.1 Datasets and Setup
Datasets: We use 3 small scale, 3 moderate/large scale, and 1 extra large scale dataset from GCN [28], GraphSAGE [19] and the OGB-suite [25] to demonstrate the efficacy of our method. The details of the datasets are given in Table 2 and additional details of the sources are mentioned in Appendix.
Baselines: We compare our method with k-means on features and 8 recent state-of-the-art baseline algorithms, including MinCutPool [3], METIS [27], Node2vec [18], DGI [53], DMoN [49], GRACE [62], BGRL [48] and MVGRL [21]. We choose baseline methods from a broad spectrum of methodologies – methods that utilize only the graph structure, methods that utilize only the features, and methods that utilize a combination of graph structure and attribute information – to provide an exhaustive comparison across important facets of graph learning and clustering. METIS [27] is a well-known and scalable classical method for graph partitioning using only the structural information. Similarly, Node2vec [18] is another scalable graph embedding technique that utilizes random walks on the graph structure. MinCutPool [3] and DMoN [49] are graph clustering techniques motivated by the normalized MinCut objective [42] and Modularity [35], respectively. DGI is a SOTA self-supervised method utilizing both graph structure and features that motivated a line of work [21, 39] based on entropy maximization between local and global views of a graph. GRACE [62], in contrast to DGI's methodology, contrasts embeddings at the node level itself, by forming two views of the graph and maximizing agreement between the embeddings of the same nodes in the two views. BGRL [48] and MVGRL [21] are recent SOTA methods for self-supervised graph representation learning.
Metrics: We measure 5 metrics which are relevant for evaluating the quality of the cluster assignments, following the evaluation setup of [56, 21]: Accuracy, Normalized Mutual Information (NMI), Completeness Score (CS), Macro-F1 Score (F1), and Adjusted Rand Index (ARI). For all these metrics, a higher value indicates better clustering performance. We generate the representations using each representation-learning method and then perform k-means clustering on the embeddings to generate the cluster assignments used for evaluation of these metrics.
Detailed Setup. We consider the unsupervised learning setting for all seven datasets, where the graph and features corresponding to all the datasets are available. We use the labels only for evaluating the quality of the cluster assignments generated by each method. For the baselines, we use the official implementations provided by the authors without any modifications. All experiments are repeated 3 times and the mean values are reported in Table 3. We highlight the highest value as well as any other values within 1 standard deviation of the mean of the best performing method, and report the results with standard deviations in the Appendix, due to space constraints. We utilize a single Nvidia A100 GPU with 40GB memory for training each method for a maximum duration of 1 hour for each experiment in Table 3. For ogbn-papers100M we allow up to ∼24 hours of training and up to 300GB of main memory in addition. We provide a mini-batched and highly scalable implementation of our method S3GC in PyTorch, such that experiments on all datasets other than ogbn-papers100M easily fit in the aforementioned GPU. For the ogbn-papers100M dataset, the forward and backward passes in S3GC are performed in the GPU, with an interface to CPU memory to store the graph, ÃX, and S_kX, and to maintain and update I, with minimal overheads. We also provide a comparison of the time and space complexity of each method in the Appendix.
Hyperparameter Tuning: S3GC requires selection of minimal hyperparameters: we use k = 2 for the k-hop diffusion matrix S_k, which offers the following advantages: 1) S_2X = α_0X + α_1ÃX + α_2Ã²X is a finite computation which can be pre-computed and only requires 2 sparse-dense matrix multiplications. 2) We chose α_0 > α_1 > α_2, giving a higher weight to the 0-hop neighbourhood attributes X, which allows S3GC to exploit the rich information from good-quality attributes even when the structural information is not very informative. 3) The two-hop neighbourhood intuitively captures all the features of nodes with similar attributes while maintaining scalability. This is motivated by the 2-hop and 3-hop choice of neighborhoods in [19] and [25] for these datasets. We additionally tune the learning rate, batch size, and random walk parameters, namely the walk length l, while using the default values p = 1 and q = 1 for the bias parameters of the walk. We perform model selection based on NMI on the validation set and evaluate all metrics for this model. Additional details regarding the hyperparameters are mentioned in the Appendix due to space restrictions.
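To illustrate, a sketch of this precomputation with scipy sparse matrices is given below (function names and the default weights are ours); note that S_kX is built with one sparse-dense multiplication per hop, never materializing the dense n × n matrix Ã^i.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    # Ã = D^{-1/2} A D^{-1/2}; A is a scipy sparse adjacency matrix
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)

def diffused_features(A, X, alphas=(0.5, 0.3, 0.2)):
    # S_kX = Σ_i alpha_i Ã^i X with k = len(alphas) - 1; the default weights
    # follow the α_0 > α_1 > α_2 choice described above (values are assumptions)
    A_norm = normalized_adjacency(A)
    out = alphas[0] * X
    prop = X
    for alpha in alphas[1:]:
        prop = A_norm @ prop          # one sparse-dense multiplication per hop
        out = out + alpha * prop
    return out
```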
4.2 Results
Table 3 compares the clustering performance of S3GC to a number of baseline methods on datasets of three different scales. For the small-scale datasets, namely Cora, Citeseer and Pubmed, we observe that MVGRL outperforms all methods. We also note that MVGRL's performance in our experiments, using the authors' official implementation with extensive hyperparameter tuning, is slightly lower than the reported values, as has been noted by other works as well [60]. Nonetheless, we use these values for comparison and observe that S3GC performs either competitively or slightly below MVGRL's accuracy. For example, on the Cora dataset, S3GC is within ∼2% of MVGRL's performance and outperforms all the other baseline methods, while on the Pubmed dataset, S3GC is within ∼1.5% of MVGRL's performance. Next, we observe the performance on moderate/large-scale datasets and note that S3GC significantly outperforms baselines such as k-means, MinCutPool, METIS, Node2vec, DGI and DMoN. Notably, S3GC is ∼5% better on ogbn-arxiv, ∼1.5% better on Reddit and ∼4% better on ogbn-products in terms of clustering NMI as compared to the next best method. The official implementations of GRACE, BGRL, and MVGRL do not scale to datasets with >200k nodes, running into Out of Memory (OOM) errors due to non-scalable implementations, sub-optimal memory utilization, or the non-scalable methodology. For example, MVGRL proposes the diffusion matrix as the alternate view of the graph structure, which is a dense n × n matrix – hence, not scalable.
We also note that S3GC performs reasonably well in settings where the node attributes are not very informative while the graph structure is useful, as evident from the performance on the Reddit dataset. k-means on the node attributes gives an NMI of only ∼10%, while methods like METIS and Node2vec perform well using the graph structure. Methods like DGI which depend heavily on the quality of the attributes thus suffer a degradation in performance, having a clustering NMI of only ∼30%, while S3GC, which uses both the attributes and graph information effectively, outperforms all the other methods and generates a clustering with an NMI of ∼80%.
ogbn-papers100M: Finally, we compare the performance of S3GC on the extra-large-scale dataset with 111M nodes and 1.6B edges in Table 4, and note that only k-means, Node2vec and DGI scale to this dataset size and run in a reasonable time of ∼24 hours. We observe that S3GC seamlessly scales to this dataset and significantly outperforms methods utilizing only the features (k-means) by ∼8.5%, only the graph structure (Node2vec) by ∼7%, and both (DGI) by ∼4% in terms of clustering NMI on the ogbn-papers100M dataset.
Ablation Study on Hyperparameters: We perform detailed ablation studies to investigate the stability of S3GC's clustering and provide them in the Appendix. We find that S3GC is robust to its few hyperparameters, such as walk length and batch size, enabling a near-optimal choice. We note that smaller walk lengths (∼5) are an optimal choice across datasets, since they are able to include the "right" positive examples in the batch, while larger walk lengths may degrade performance due to the inclusion of nodes belonging to other classes in the positive samples. This helps with scalability as well, as we need to sample only a few positives per node. While small batches take more time per epoch but converge faster, larger batch sizes are better in per-epoch training time but require more epochs to converge. Both, however, enjoy similar performance in terms of the quality of the clustering.
4.3 Novelty of S3GC’s Design Choices
Our design choices have unique roles to play which make S3GC both scalable and accurate, by effectively utilizing structure as well as node attribute information for learning clusterable representations. We describe their importance and contrast them with other possible design choices in this section, highlighting how these choices, put together, work most effectively empirically, contributing to S3GC's novelty.
Encoder: Using a multi-layered GCN [28] to capture local graph structure along with attribute information increases the required space to O(nnz(A)), making any method non-scalable for very large graphs. This issue is also faced by existing methods like MVGRL [21], which compute the entire diffusion matrix, and hence run into OOM errors on larger datasets as discussed earlier. Hence, we use a 1-layer GCN and precompute ÃX, requiring only O(nd) space. Intuitively, it is important to utilize both the attribute information and the structural information in the encoder. With this motivation, we design S3GC to capture attribute information using a 1-layer GCN and capture structural information using a learnable parameter I (Eq. (2)). We empirically verify this intuition with experiments using either just the attribute information (for example, on the Reddit dataset) or only the structural information (on the ogbn-arxiv dataset) and summarize our findings in Table 5. We find that the design of S3GC's encoder effectively captures both sources of information, and removing either source leads to considerably suboptimal performance.
Positive and negative node sampler: Using a random walk sampler offers several advantages – it can be computed in a scalable fashion and it samples nodes from a k-hop neighbourhood. We considered several intuitive sampling approaches and discuss the most intuitive and simple one here: for a given node, consider all its k-hop neighbourhood nodes as positives, and r randomly sampled nodes as negatives. This becomes non-scalable, since calculating the k-hop neighbourhood for all nodes in the graph has a significant computation cost (it is equivalent to computing the non-zero elements of A^k). Hence, for simplicity, we experiment with k = 1 and note that on the ogbn-arxiv dataset, using this sampling scheme and a learnable embedding as the encoder (X̄ = I) gives only 0.252 NMI. This is significantly lower than using a random walk generator for sampling with the same encoding scheme X̄ = I, which gives an NMI of 0.444.
Loss function: As we already observed in Section 3.5, using the same encoder X̄ = I but a different loss function gives rise to more clusterable embeddings when using the SimCLR loss as compared to the Node2vec loss. The experiments on real-world datasets also reinforce these observations: as we see in Table 5, S3GC with X̄ = I performs better than Node2vec on the ogbn-arxiv and ogbn-products datasets, where both methods use the same encoder and random sampler but different loss functions.
5 Discussion and Future Work
We introduced S3GC, a new method for scalable graph clustering with node feature side-information. S3GC is a simple method based on contrastive learning along with a careful encoding of graph and node features, but it is an effective approach across all scales of data. In particular, we showed that S3GC is able to scale to graphs with 100M nodes while still ensuring SOTA clustering performance.
Limitations and Future Work: S3GC demonstrates empirically that on Stochastic Block Models along with mixture-of-Gaussian features, it is able to identify the clusters accurately. Further theoretical investigation into this standard setting and establishing error bounds for S3GC is of interest. S3GC can be applied to graphs with heterogeneous nodes, but it cannot explicitly exploit this information. Extension of S3GC to cluster graphs while directly exploiting the heterogeneity of nodes is another open problem. Finally, S3GC, like all deep learning methods, is susceptible to being unfairly biased by a few "important" nodes. Ensuring stable clustering techniques with minimal bias from a small number of nodes is another interesting direction.
The paper presents a new method, S3GC, for scalable graph clustering. S3GC takes both the graph structure information and node-feature information into consideration by using a one-layer GNN encoder and performs self-supervised contrastive learning. There are several contributions. First, S3GC implements self-supervised learning on the graph clustering problem. Second, S3GC combines graph structure information and node feature information. Third, systematic experiments have been conducted.
S3GC has three key components, namely graph convolutional encoder, random walk sampler, and contrastive loss formulation. The graph convolutional encoder is a 1-layer Graph Convolutional Network, which is used to combine both graph structures and node features. One important design in the encoder is that the author uses mini-batches to reduce the memory requirement, which makes S3GC scalable to the large graph. Random walk sampler uses biased second order Random Walks with restarts to generate points similar, and thus capture the local neighborhood of each node. Contrastive loss formulation is a SimCLR style loss function, which is used for learning the encoder parameters. With this formulation, the authors use graph neighborhood information to produce augmentations of a node.
Strengths And Weaknesses
S1. The paper is in general easy to follow.
S2. Scalable clustering is important in practice.
S3. The authors apply self-supervised learning to graph clustering, and present an effective and scalable model. The result shows that the model can address some existing limitations.
S4. The proposed model can be applied to billion-edge graphs for clustering.
S5. The author conducts extensive experiments and gives correlated analysis. The experimental result proves that S3GC can solve graph clustering problems with better performance in different aspects.
W1. The three components of S3GC all use the existing designs partly. The first part, the graph convolutional encoder, uses a 1-layer Graph Convolutional Network, and the novelty here seems to be the way of combining both graph information and node features. The second part, the random walk sampler, is inspired by previous work and formally follows previous work. The third part, contrastive loss formulation, uses a SimCLR style loss function.
W2. Seems there is a minor error in line 214. The author states that the time complexity is O(Knsd2), seemingly forgetting “r”.
W3. I did not find detailed running-time results for the different methods (they are only roughly mentioned).
Questions
Q1. Please check Line 214 and see whether "r" is missed.
Q2. Please report the details of running times for each method. As this paper studies scalability, these efficiency results are important.
Q3. In abstract, the authors state that "we demonstrate that S3GC is able to significantly outperform state-of-the-art methods in terms of clustering accuracy with as much as 5% gain in NMI". However, from Table 3 this seems not always true (e.g., compared with MVGRL). Please explain the reason.
Q4. The three components of S3GC use the existing designs partly. Please explain more exactly where the novelty lies.
Limitations
This paper is in general elegant in terms of method design. However, the novelty is a bit unclear. All three modules are directly inspired by existing work. To make their combination a novelty, I believe a deeper discussion is needed, i.e., why such a combination maintains effectiveness. For example, the 1-layer GCN seems simple, so why is its combination with random walks and a contrastive loss effective? Which part is the dominating factor? The related ablation study that checks the effectiveness of each component should be included in the main paper. The ablation study in the Appendix only includes some parameter testing.
Shallow RNN: Accurate Time-series Classification on Resource Constrained Devices
Abstract
Recurrent Neural Networks (RNNs) capture long dependencies and context, and hence are the key component of typical sequential data based tasks. However, the sequential nature of RNNs dictates a large inference cost for long sequences even if the hardware supports parallelization. To induce long-term dependencies, and yet admit parallelization, we introduce novel shallow RNNs. In this architecture, the first layer splits the input sequence and runs several independent RNNs. The second layer consumes the output of the first layer using a second RNN thus capturing long dependencies. We provide theoretical justification for our architecture under weak assumptions that we verify on real-world benchmarks. Furthermore, we show that for time-series classification, our technique leads to substantially improved inference time over standard RNNs without compromising accuracy. For example, we can deploy audio-keyword classification on tiny Cortex M4 devices (100MHz processor, 256KB RAM, no DSP available) which was not possible using standard RNN models. Similarly, using ShaRNN in the popular Listen-Attend-Spell (LAS) architecture for phoneme classification [4], we can reduce the lag in phoneme classification by 10-12x while maintaining state-of-the-art accuracy.
1 Introduction
We focus on the challenging task of time-series classification on tiny devices, a problem arising in several industrial and consumer applications [25, 22, 30], where tiny edge-devices perform sensing, monitoring and prediction in a limited time and resource budget. A prototypical example is an interactive cane for people with visual impairment, capable of recognizing gestures that are observed as time-traces on a sensor embedded onto the cane [24].
Time series or sequential data naturally exhibit temporal dependencies. Sequential models such as RNNs are particularly well-suited in this context because they can account for temporal dependencies by attempting to derive relations from the previous inputs. Nevertheless, directly leveraging RNNs for prediction in constrained scenarios mentioned above is challenging. As observed by several authors [28, 14, 29, 9], the sequential nature by which RNNs process data fundamentally limits parallelization leading to large training and inference costs. In particular, in time-series classification, at inference time, the processing time scales with the size, T , of the receptive window, which is unacceptable in resource constrained settings.
*Work done as a Research Fellow at Microsoft Research India. †Work done during internships at Microsoft Research India.
A solution proposed in the literature [28, 14, 29, 9] is to replace sequential processing with parallelizable feed-forward and convolutional networks. A key insight exploited here is that most applications require a relatively small receptive window, and that this size can be increased with tree-structured networks and dilated convolutions. Nevertheless, feed-forward/convolutional networks utilize substantial working memory, which makes them difficult to deploy on tiny devices. For this reason, other methods such as [32, 2] are also not applicable to our setting. For example, a standard audio keyword detection task with a relatively modest setup of 32 conv filters would itself need a working memory of 500KB and about 32x more computation than a baseline RNN model (see Section 5).
Shallow RNNs. To address these challenges, we design a novel layered RNN architecture that is parallelizable and has limited recurrence while still maintaining the receptive field length (T) and the size of the baseline RNN. Concretely, we propose a simple 2-layer architecture that we refer to as ShaRNN. Both layers of ShaRNN are composed of a collection of shallow recurrent neural networks that operate independently. More precisely, each sequential data point (receptive window) is divided into independent parts called bricks of size k, and a shared RNN operates on each brick independently, thus ensuring a small model size and short recurrence. That is, ShaRNN's bottom layer restarts from an initial state after every k ≪ T steps, and hence only has a short recurrence. The outputs of the T/k parallel RNNs are input as a sequence into a second-layer RNN, which then outputs a prediction after T/k time steps. In this way, for k ≈ O(√T) we obtain a speedup of O(√T) in inference time in the following two settings:
(a) Parallelization: here we parallelize inference over the T/k independent RNNs, thus admitting speed-ups on multi-threaded architectures,
(b) Streaming: here we utilize receptive (sliding) windows and reuse computation from older sliding windows/receptive fields.
We also note that, in contrast to the proposed feed-forward methods or truncated RNN methods [23], our proposal admits full receptive fields and thus does not result in loss of information. We further enhance ShaRNN by combining it with the recent MI-RNN method [10] to reduce the receptive window sizes; we call the resulting method MI-ShaRNN.
While a feedforward layer could be used in lieu of the RNN in our second layer, such layers lead to a significant increase in model size and working RAM, making them inadmissible on tiny devices.
Performance and Deployability. We compare the two-layer MI-ShaRNN approach against other state-of-the-art methods on a variety of benchmark datasets, tabulating both accuracy and budgets. We show that the proposed 2-layer MI-ShaRNN exhibits significant improvement in inference time while also improving accuracy. For example, on the Google-13 dataset, MI-ShaRNN achieves 1% higher accuracy than baseline methods while providing a 5-10x improvement in inference cost. A compelling aspect of the architecture is that it allows for reuse of most of the computation, which leads to its deployability on the tiniest of devices. In particular, we show empirically that the method can be deployed for real-time time-series classification on devices such as those based on the tiny ARM Cortex M4 microprocessor³ with just 256KB RAM, 100MHz clock speed and no dedicated Digital Signal Processing (DSP) hardware. Finally, we demonstrate that we can replace the bi-LSTM based encoder-decoder of the LAS architecture [4] by ShaRNN while maintaining close to the best accuracy on the publicly available TIMIT dataset [13]. This enables us to deploy the LAS architecture in a streaming fashion with a lag of 1 second in phoneme prediction and O(1) amortized cost per time-step; the standard LAS model would incur a lag of about 8 seconds as it processes the entire 8 seconds of audio before producing predictions.
Theory. We provide theoretical justification for the ShaRNN architecture and show that significant parallelization can be achieved if the network satisfies some relatively weak assumptions. We also point out that additional layers can be introduced in the architecture leading to hierarchical processing. While we do not experiment with this concept here, we note that, it offers potential for exponential improvement in inference time.
3https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M4
In summary, the following are our main contributions:
• We show that under relatively weak assumptions, recurrence in RNNs, and consequently the inference cost, can be reduced significantly.
• We demonstrate this inference efficiency via a two-layer ShaRNN (and MI-ShaRNN) architecture that uses only shallow RNNs with a small amount of recurrence.
• We benchmark MI-ShaRNN (an enhancement of ShaRNN with MI-RNN) on several datasets and observe that it learns models nearly as accurate as standard RNNs and MI-RNN. Due to limited recurrence, ShaRNN saves 5-10x computation cost over baseline methods. We deploy an MI-ShaRNN model on a tiny microcontroller for real-time audio keyword detection, which, prior to this work, was not possible with standard RNNs due to the large inference cost with receptive (sliding) windows. We also deploy ShaRNN in the LAS architecture to enable streaming phoneme classification with less than 1 second of lag in prediction.
2 Related Work
Stacked Architecture. Our multi-layered RNN resembles the stacked RNNs studied in the literature [15, 16, 27], but the two are unrelated. The goal of stacked RNNs is to produce complex models that subsume conventional RNNs. Each layer is fully recurrent and feeds its output to the next level, which is another fully recurrent RNN. As such, stacked RNN architectures lead to increased model size and recurrence, which results in worse inference time than standard RNNs.
Recurrent Nets (Training). Conventional works on RNNs primarily address challenges arising during training. In particular, for a large receptive window T, RNNs suffer from vanishing and exploding gradient issues. Several works propose to circumvent this issue in a number of ways, such as gated architectures [7, 17], adding residual connections to RNNs [18, 1, 21], or constraining the learnt parameters [31]. Several recent works attempt to reduce the number of gates and parameters [8, 6, 21] to reduce model size, but as such suffer from poor inference time, since they are still fully recurrent. Different from these works, our focus is on reducing model size as well as inference time, and we view these works as complementary to our paper.
Recurrent Nets (Inference Time). Recent works have begun to focus on RNN inference cost. [3] proposes to learn skip connections that can avoid evaluating all the hidden states. [10] exploits domain knowledge that true signature is significantly shorter than the time-trace to trim down length of the sliding windows. Both of these approaches are complementary and we indeed leverage the second in our approach. A recent work on dilated RNNs [5] is interesting. While it could serve as a potential solution, we note that, in its original form, dilated RNN also has a fully recurrent first layer, which is therefore infeasible. One remedy is to introduce dilation in the first layer to improve inference time. But, dilation skips steps and hence can miss out on critical local context.
Finally, CNN based methods [28, 14, 29, 9, 2] allow higher parallelization in the sequential tasks but as discussed in Section 1, also lead to significantly larger working RAM requirement when compared to RNNs, thus cannot be considered for deployment on tiny devices (see Section 5).
3 Problem Formulation and Proposed ShaRNN Method
In this paper, we primarily focus on the time-series classification problem, although the techniques apply to more general sequence-to-sequence problems like the phoneme classification problem discussed in Section 5. Let Z = {(X_1, y_1), …, (X_n, y_n)}, where X_i is the i-th sequential data point with X_i = [x_{i,1}, x_{i,2}, …, x_{i,T}] ∈ R^{d×T}, and x_{i,t} ∈ R^d is the t-th time-step data point. y_i ∈ [C] is the label of X_i, where C is the number of class labels. x_{i,t:t+k} is shorthand for x_{i,t:t+k} = [x_{i,t}, …, x_{i,t+k}]. Given training data Z, the goal is to learn a classifier f : R^{d×T} → [C] that can be used for efficient inference, especially on tiny devices. Recurrent Neural Networks (RNNs) are popularly used for modeling such sequential problems and maintain a hidden state h_{t−1} ∈ R^{d̂} at the t-th step that is updated using: h_t = R(h_{t−1}, x_t), t ∈ [T], ŷ = f(h_T), where ŷ is the prediction obtained by applying a classifier f to h_T, and d̂ is the dimensionality of the hidden state. Due to the sequential nature of RNNs, the inference cost of an RNN is Ω(T) even if the hardware supports a large amount of parallelization. Furthermore, practical applications require handling a continuous stream of data, e.g., a smart speaker listening for certain audio keywords.
A standard approach is to use sliding windows (receptive fields) to form a stream of test points on which inference can be applied. That is, given a stream X = [x_1, x_2, …], we form sliding windows X^s = x_{(s−1)·ω+1 : (s−1)·ω+T} ∈ R^{d×T} which stride by ω > 0 time-steps after each inference. The RNN is then applied to each sliding window X^s, which implies that the amortized cost of processing each time-step data point (x_t) is Θ(T/ω). To ensure high resolution in prediction, ω is required to be a fairly small constant independent of T. Thus, the amortized inference cost for each time-step point is O(T), which is prohibitively large for tiny devices. So, we study the following key question: "Can we process each time-step point in a data stream in o(T) computational steps?"
3.1 ShaRNN
Shallow RNNs (ShaRNN) are a hierarchical collection of RNNs organized at two levels. The T/k RNNs at the ground layer operate completely in parallel with fully shared parameters and activation functions, thus ensuring small model size and parallel execution. An RNN at the next level takes inputs from the ground layer and subsequently outputs a prediction.
Formally, given a sequential point X = [x_1, …, x_T] (e.g., a sliding window in streaming data), we split it into bricks of size k, where k is a parameter of the algorithm. That is, we form T/k bricks: B = [B_1, …, B_{T/k}] where B_j = x_{((j−1)·k+1) : (j·k)}. Now, ShaRNN applies a standard recurrent model R^(1) : R^{d×k} → R^{d̂_1} to each brick, where d̂_1 is the dimensionality of the hidden states of R^(1). That is,

ν_j^(1) = R^(1)(B_j),  j ∈ [T/k].

Note that R^(1) can be any standard RNN model like a GRU, LSTM, etc. We now feed the outputs of the first layer into another RNN to produce the final state/feature vector, which is then fed into a feed-forward layer. That is,

ν_{T/k}^(2) = R^(2)([ν_1^(1), …, ν_{T/k}^(1)]),  ŷ = f(ν_{T/k}^(2)),

where R^(2) is the second-layer RNN and can also be any standard RNN model. ν_{T/k}^(2) ∈ R^{d̂_2} is the hidden state obtained by applying R^(2) to ν_{1:T/k}^(1). f applies the standard feed-forward network to ν_{T/k}^(2). See Figure 1 for a block diagram of the architecture. That is, ShaRNN is defined by parameters Λ composed of the shared RNN parameters at the ground level, the RNN parameters at the next level, and the classifier weights for making a prediction. We train ShaRNN by minimizing an empirical loss function over the training set Z.
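A compact PyTorch sketch of this two-layer architecture is shown below; the choice of GRUs for R^(1) and R^(2) and all names are our own assumptions. Note that the bricks are processed by R^(1) as one large batch, which is exactly where the parallel speedup comes from.

```python
import torch
import torch.nn as nn

class ShaRNN(nn.Module):
    def __init__(self, d, d1, d2, num_classes, k):
        super().__init__()
        self.k = k
        self.rnn1 = nn.GRU(d, d1, batch_first=True)   # R^(1), shared across bricks
        self.rnn2 = nn.GRU(d1, d2, batch_first=True)  # R^(2)
        self.fc = nn.Linear(d2, num_classes)          # classifier f

    def forward(self, x):
        # x: (batch, T, d) with T divisible by the brick size k
        b, T, d = x.shape
        bricks = x.reshape(b * (T // self.k), self.k, d)   # T/k bricks per example
        _, h1 = self.rnn1(bricks)                          # each brick summarized independently
        nu1 = h1.squeeze(0).reshape(b, T // self.k, -1)    # sequence of ν^(1) vectors
        _, h2 = self.rnn2(nu1)                             # ν^(2)_{T/k}
        return self.fc(h2.squeeze(0))                      # ŷ = f(ν^(2)_{T/k})
```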
Naturally, ShaRNN is an approximation of a true RNN and in principle has less modeling power (and recurrence). But as discussed in Section 4 and shown by our empirical results in Section 5, ShaRNN can still capture enough context from the entire sequence to effectively model a variety of time-series classification problems with large T (typically T ≥ 100). Due to the T/k parallel RNNs in the bottom layer, whose outputs are processed by R^(2) in the second layer, the ShaRNN inference cost can be reduced to O(T/k + k) on multi-threaded architectures with sufficient parallelization; k = √T leads to the smallest inference cost.
Streaming. Recall that in the streaming setting, we form sliding windows X^s = x_{s·ω+1 : s·ω+T} ∈ R^{d×T} by striding each window by ω > 0 time-steps. Hence, if ω = k · q for q ∈ N, then the inference cost of X^{s+1} can be reduced by reusing the previously computed ν_j^(1) vectors ∀ j ∈ [q+1, T/k] from X^s.
The claim below provides a formal result for this.
Claim 1. Let both layer RNNs R^(1) and R^(2) of ShaRNN have the same hidden size and per-time-step computation complexity C_1. Then, given T and ω, the additional cost of applying ShaRNN to X^{s+1} given X^s is O(T/k + q·k) · C_1, where X^s = x_{(s−1)·ω+1 : (s−1)·ω+T}, ω is the stride length of the sliding window, and the brick size k satisfies ω = q·k for some integer q ≥ 1. Consequently, the total amortized cost can be bounded by O(√(q·T) · C_1) if k = √(T/q).
See Appendix A for a proof of the claim.
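For illustration, here is a sketch of the streaming inference implied by Claim 1 for the simplest case ω = k (i.e., q = 1), using the hypothetical ShaRNN module sketched above: a deque caches the first-layer brick summaries, so each new window only requires running R^(1) on the single newly completed brick plus one pass of R^(2) over the cache.

```python
from collections import deque
import torch

@torch.no_grad()
def streaming_predict(model, new_brick, cache, num_bricks):
    # model: the ShaRNN sketched earlier; new_brick: (1, k, d) raw inputs of the
    # newest brick; cache: deque of cached ν^(1) vectors; num_bricks = T/k.
    _, h1 = model.rnn1(new_brick)
    cache.append(h1.squeeze(0))              # ν^(1) for the new brick
    if len(cache) > num_bricks:
        cache.popleft()                      # drop the brick that slid out of the window
    nu1 = torch.stack(list(cache), dim=1)    # (1, #bricks, d1)
    _, h2 = model.rnn2(nu1)                  # rerun only the second layer
    return model.fc(h2.squeeze(0))
```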
Figure 1: (a) Block diagram of the ShaRNN architecture: outputs of the first-layer bricks are fed as a sequence into the next layer. Note that ν_2^(1), ν_3^(1) can be reused for evaluating the next window. (b), (c): Mean squared approximation error and prediction accuracy of ShaRNN with zeroth- and first-order approximation (M = 1, 2 respectively in Claim 3) for different brick sizes k (on Google-13). Note the large error with M = 1 (same as the truncation method of [23]). M = 2 brings a significant improvement, especially for small k, but clearly larger M is needed to achieve better accuracy. (d): Comparison of the norm of the gradient vs. the Hessian of R(h_t, x_{t+1:t+k}) with varying k. R is FastRNN [21] with swish activation. The smaller Hessian norm indicates that the first-order approximation of R (Claim 3) by ShaRNN is more accurate than the 0-th order one (ShaRNN with M = 1) suggested by [23].
3.2 Multi-layer ShaRNN
The above claim shows that selecting a small k leads to a large number of bricks and hence a large number of points to be processed by the second-layer RNN R^(2), which then becomes the bottleneck in inference. However, using the same approach, we can replace the second layer with another layer of ShaRNN to bring down the cost. By repeating the same process, we can design a general L-layer architecture where each layer is equipped with an RNN model R^(l) and the output of the j-th brick of the l-th layer is given by:

ν_j^(l) = R^(l)([ν_{(j−1)·k+1}^(l−1), …, ν_{(j−1)·k+k}^(l−1)]),

for all 1 ≤ j ≤ T/k^l, where ν_j^(0) = x_j. The predicted label is given by ŷ = f(ν_{T/k^{L−1}}^(L)). Using an argument similar to the claim in the previous section, we can reduce the total inference cost to O(log T) by using k = O(1) and L = log T.

Claim 2. Let all layers of multi-layer ShaRNN have the same hidden size and per-time-step complexity C_1, and let k = ω. Then, the additional cost of applying ShaRNN to X^{s+1} is O(T/k^L + L·k) · C_1, where X^s = x_{(s−1)·ω+1 : (s−1)·ω+T}. Consequently, selecting L = log(T), k = O(1), and assuming ω = O(1), the total amortized cost is O(C_1 · log(T)).
That is, we can achieve an exponential speed-up over the O(T) cost of a standard RNN. However, such a model can lead to a large loss in accuracy. Moreover, the constants in the cost for large L are large enough that a network with smaller L might be more efficient for typical values of T.
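A sketch of this L-layer generalization follows (again with GRUs and names of our choosing); each layer compresses its input sequence by a factor of k, so the sketch assumes T = k^L for simplicity.

```python
import torch
import torch.nn as nn

class MultiLayerShaRNN(nn.Module):
    def __init__(self, d, hidden, num_classes, k, L):
        super().__init__()
        dims = [d] + [hidden] * L
        self.k = k
        self.rnns = nn.ModuleList(
            [nn.GRU(dims[l], dims[l + 1], batch_first=True) for l in range(L)])
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, T, d) with T = k^L; layer l maps a length-T_l sequence
        # to T_l / k brick summaries, i.e., the ν^(l) vectors.
        for rnn in self.rnns:
            b, T, d = x.shape
            bricks = x.reshape(b * (T // self.k), self.k, d)
            _, h = rnn(bricks)                     # summarize each brick
            x = h.squeeze(0).reshape(b, T // self.k, -1)
        return self.fc(x.squeeze(1))               # single summary left at the top
```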
3.3 MI-ShaRNN
Recently, [10] showed that several time-series training datasets are coarse and that the sliding window size T can be decreased significantly by using their multi-instance based algorithm (MI-RNN). MI-RNN finds tight windows around the actual signature of the class, which leads to significantly smaller models and reduces inference cost. Our ShaRNN architecture is orthogonal to MI-RNN and can be combined with it to obtain even higher inference savings. That is, MI-RNN takes the dataset
Z = {(X_1, y_1), …, (X_n, y_n)}, with X_i being a sequential data point over T steps, and produces a new set of points X′_j with labels y′_j, where each X′_j is a sequential data point over T′ steps and T′ ≤ T. MI-ShaRNN applies ShaRNN to the output of MI-RNN so that the inference cost depends only on T′ ≤ T, and captures the key signal in each data point.
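At the pipeline level, the combination is simply function composition; mi_trim below is a hypothetical stand-in for MI-RNN's window-tightening step (its internals are described in [10]):

def mi_sharnn_predict(window, mi_trim, sharnn_forward, classifier):
    trimmed = mi_trim(window)          # (T', d) with T' <= T, per MI-RNN
    state = sharnn_forward(trimmed)    # two-layer ShaRNN on the shorter window
    return classifier(state)           # final feed-forward prediction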
4 Analysis
In this section, we provide theoretical underpinnings of the ShaRNN approach and place it in the context of the work of [23], which discusses RNN models for which almost all of the recurrence can be removed.
Let R : R^{d+d̂} → R^{d̂} be a standard RNN model that maps the given hidden state h_{t−1} ∈ R^{d̂} and data point x_t ∈ R^d to the next hidden state h_t = R(h_{t−1}, x_t). Overloading notation, we write R(h_0, x_1, …, x_t) = R(h_{t−1}, x_t). We define a function to be recurrent if the following holds: R(h_0, x_1, …, x_t) = R(R(h_0, x_1, …, x_{t−1}), x_t). The final class prediction using the feed-forward layer is given by ŷ = f(h_T) = f(R(h_0, x_{1:T})). Now, ShaRNN attempts to untangle and approximate the dependency of f(h_T) and R(h_0, x_{1:T}) on h_0 by using Taylor's theorem. The claim below gives a condition under which the approximation error introduced by ShaRNN is small.
Claim 3. Let R(h_0, x_1, …, x_t) be an RNN and let ‖∇^M_h R(h, x_{t:t+k})‖ ≤ O(ε·M!) for some ε ≥ 0, where ∇^M_h is the M-th order derivative with respect to h. Also let ‖R(h_0, x_{1:t}) − h_0‖ = O(1) and ‖∇^m_h R(h_0, x_{t+1:t+k})‖ = O(m!) for all t ∈ [T]. Then, there exists a ShaRNN defined by functions R^{(1)}, R^{(2)} and brick size k such that:
‖R^{(2)}(ν^{(1)}_1, …, ν^{(1)}_{T/k}) − R(h_0, x_{1:T})‖ ≤ ε·M·T, where ν^{(1)}_j = R^{(1)}(h_0, x_{(j−1)·k+1 : j·k}).
See Appendix A for a detailed proof of the claim.
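To convey the mechanism schematically (the careful error accounting is in Appendix A): writing h_t = R(h_0, x_{1:t}) for the state entering a brick, recurrence gives R(h_0, x_{1:t+k}) = R(h_t, x_{t+1:t+k}), and Taylor-expanding around h_0 yields
\[
R(h_t, x_{t+1:t+k}) = \sum_{m=0}^{M-1} \frac{1}{m!}\,\nabla_h^m R(h_0, x_{t+1:t+k})\,\big[(h_t - h_0)^{\otimes m}\big] + \mathcal{R}_M,
\qquad \|\mathcal{R}_M\| \le \frac{\|\nabla_h^M R\|\,\|h_t - h_0\|^M}{M!} = O(\epsilon).
\]
Under the stated bounds, each retained term depends only on h_0 and the brick, so R^{(2)} can reproduce it from the brick summaries ν^{(1)}_j = R^{(1)}(h_0, ·); only the remainders are lost, and accumulating them across the splice points gives the ε·M·T bound.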
The above claim shows that the hidden state computed by ShaRNN is close to the state computed by a fully recurrent RNN; hence the final output ŷ is also close. We now compare this result to that of [23], which showed that ‖R(h_0, x_{1:T}) − R(h_0, x_{T−k+1:T})‖ ≤ ε for large enough k if R satisfies a contraction property, i.e., if ‖R(h_{t−1}, x_t) − R(h′_{t−1}, x_t)‖ ≤ λ‖h_{t−1} − h′_{t−1}‖ with λ < 1. However, λ < 1 is a strict requirement and often does not hold in practice. As a result, if we only compute R(h_0, x_{T−k+1:T}) as suggested by that result (for some reasonable values of k), the resulting accuracy on several datasets drops significantly (see Figure 1(b),(c)).
In the context of Claim 3, the result of [23] is the special case M = 1, i.e., it applies only a 0-th order Taylor series expansion. Figure 1(d) shows that the norm of the gradient, which bounds the error of the 0-th order expansion, is significantly larger than the norm of the Hessian, which bounds the error of the 1-st order expansion.
Case study with FastRNN: We now instantiate Claim 3 for a simple FastRNN model [21] with a first-order approximation, i.e., with M = 2 in Claim 3.
Claim 4. Let R(h_0, x_1, …, x_t) be a FastRNN model with parameters U, W. Let ‖U‖ ≤ O(1) and ‖∇^2_h R(h_0, x_{t:t+k})‖ ≤ O(ε) for any k-length sequence. Then, there exists a ShaRNN defined by functions R^{(1)}, R^{(2)} and brick size k such that:
‖R^{(2)}(ν^{(1)}_1, …, ν^{(1)}_{T/k}) − R(h_0, x_{1:T})‖ ≤ ε, where ν^{(1)}_j = R^{(1)}(h_0, x_{(j−1)·k+1 : j·k}).
Note that ‖U‖ = O(1) holds for all the benchmarks tried in [21]. Moreover, this assumption is significantly weaker than the typical ‖U‖ < 1 assumption required by [23]. Finally, the Hessian term is significantly smaller than the gradient term (Figure 1(d)); hence the approximation error and prediction error should be significantly smaller than those obtained with the 0-th order approximation (see Figure 1(b),(c)).
5 Empirical Results
We conduct experiments to study: a) the performance of MI-ShaRNN with varying hidden-state dimensions at both layers R^{(1)} and R^{(2)}, to understand how its accuracy stacks up against baseline models across different model sizes; b) the inference-cost improvement that MI-ShaRNN produces on standard time-series classification problems over baseline models and MI-RNN models; and c) whether MI-ShaRNN can enable certain time-series classification tasks on devices based on the tiny Cortex M4 with only a 100MHz processor and 256KB of RAM. Recall that MI-ShaRNN applies ShaRNN on top of the trimmed data points produced by MI-RNN. MI-RNN is known to perform better than baseline LSTMs, so naturally MI-ShaRNN performs better than ShaRNN alone. Hence, we present results for MI-ShaRNN and compare them to MI-RNN to demonstrate the advantage of the ShaRNN technique.
Datasets: We benchmark our method on standard datasets from different domains: audio keyword detection (Google-13), wake-word detection (STCI-2), activity recognition (HAR-6), sports activity recognition (DSA-19), and gesture recognition (GesturePod-5). The number after the hyphen in a dataset name indicates its number of classes. See Table 3 in the appendix for more details about the datasets. All the datasets are available online (see Table 3) except STCI-2, which is a proprietary wake-word detection dataset.
Baselines: We compare our algorithm MI-ShaRNN (LSTM) against the baseline LSTM method as well as the MI-RNN (LSTM) method. Note that MI-RNN as well as MI-ShaRNN build upon an RNN cell. For simplicity and consistency, we selected LSTM as the base cell for all the methods, but each of them can be trained with other RNN cells such as GRU [7] or FastRNN [21]. We implemented all the algorithms in TensorFlow and used Adam for training the models [19]. The inference code for the Cortex M4 device was written in C and compiled onto the device. All the presented numbers are averaged over 5 independent runs. The implementation of our algorithm is released as part of the EdgeML [11] library.
Hyperparameter selection: The main hyperparameters are: a) the hidden-state sizes for both layers of MI-ShaRNN, and b) the brick size k for MI-ShaRNN. In addition, the number of time-steps T is associated with each dataset; MI-RNN prunes down T and works with T′ ≤ T time-steps. We provide results with varying hidden-state sizes to illustrate the trade-offs involved in selecting this hyperparameter (Figure 2). We select k ≈ √T, with some variation to optimize with respect to the stride length ω for each dataset; we also provide an ablation study illustrating the impact of different choices of k on accuracy and inference cost (Figure 3, Appendix).
Comparison of accuracies: Table 1 compares the accuracy of MI-ShaRNN against the baselines and MI-RNN for different hidden dimensions at R^{(1)} and R^{(2)}. In terms of prediction accuracy, MI-ShaRNN performs much better than the baselines and is competitive with MI-RNN on all the datasets. For example, with only k = 8, MI-ShaRNN achieves 94% accuracy on the Google-13 dataset, whereas the MI-RNN model is applied over T = 49 steps and the baseline LSTM over T = 99 steps. That is, with only 8-deep recurrence, MI-ShaRNN competes with the accuracies of 49- and 99-deep LSTMs.
For inference cost, we study the amortized cost per data point in the sliding-window setting (see Section 3). The baseline and MI-RNN recompute the entire prediction from scratch for each sliding window, whereas MI-ShaRNN can reuse computation in the first layer (see Section 3), leading to significant savings in inference cost. We report inference cost as the additional floating point operations (flops) each model must execute for every new inference. For simplicity, we treat addition and multiplication as having the same cost. The number of non-linearity computations is small and nearly the same for all methods, so we ignore it.
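This accounting can be reproduced in a few lines of Python; the LSTM step cost and the model sizes below are illustrative assumptions (adds and multiplies weighted equally, non-linearities ignored), not the exact figures behind Table 1:

def lstm_step_flops(d_in, d_hidden):
    # 4 gates, each an affine map (d_in + d_hidden) -> d_hidden,
    # ~2 flops per multiply-accumulate.
    return 4 * 2 * (d_in + d_hidden) * d_hidden

def baseline_flops_per_window(T, d_in, d_hidden):
    return T * lstm_step_flops(d_in, d_hidden)          # full recompute every window

def sharnn_flops_per_window(T, k, omega, d_in, d1, d2):
    q = omega // k
    fresh_bricks = q * k * lstm_step_flops(d_in, d1)    # only q new bricks per stride
    second_layer = (T // k) * lstm_step_flops(d1, d2)   # R^{(2)} rerun over summaries
    return fresh_bricks + second_layer

print(baseline_flops_per_window(99, 32, 16))            # baseline LSTM, T = 99
print(sharnn_flops_per_window(48, 8, 8, 32, 16, 16))    # illustrative ShaRNN setting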
Table 1 clearly shows that, to achieve the best accuracy, MI-ShaRNN is up to 10x faster than the baselines and up to 5x faster than MI-RNN, even on a single-threaded hardware architecture. Figure 2 shows the computation-vs-accuracy trade-off for three datasets. We observe that, over a range of desired accuracy values, MI-ShaRNN is 5-10x faster than the baselines.
Next, we compute accuracy and flops for MI-ShaRNN with different brick sizes k (see Figure 3 of the Appendix). As expected, the setting k ≈ √T requires the fewest flops for inference, but the story for accuracy is more complicated: for this dataset we do not observe any particular trend; all the accuracy values are similar, irrespective of k.
Deployment of Google-13 on Cortex M4: We use ShaRNN to deploy a real-time keyword-spotting model (Google-13) on a Cortex M4 device. For time-series classification (Section 3), we need to slide windows and infer a class for each window. Due to the small working RAM of M4 devices (256KB), real-time recognition requires the method to finish the following tasks within a budget of 120ms: collect data from the microphone buffer, process it, produce an ML-based inference, and smooth the predictions into one final output.
Standard LSTM models for this task work on 1s windows, whose featurization generates a 32 × 99 feature vector; here T = 99. So even a relatively small LSTM (hidden size 16) takes 456ms to process one window, exceeding the time budget (Table 2). MI-RNN is faster but still requires 225ms. Recently, a few CNN-based methods have also been designed for low-resource keyword spotting [26, 20]. However, with just 40 filters applied to the standard 32 × 99 filter-bank features, the working-memory requirement balloons to ≈ 500KB, which is beyond a typical M4 device's memory budget. Similarly, the compute requirements of such architectures easily exceed the latency budget of 120ms. See Figure 4 in the Appendix for a comparison between CNN models and ShaRNN.
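As a back-of-the-envelope check of the ≈ 500KB figure (assuming 4-byte floats and 'same'-sized feature maps, which are our assumptions rather than details from [26, 20]):
\[
40 \times 32 \times 99 \times 4\ \text{bytes} \;=\; 506{,}880\ \text{bytes} \;\approx\; 500\,\text{KB}.
\]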
In contrast, our method produces an inference in only 70ms and is thus well within the M4's latency budget. MI-ShaRNN holds two arrays in working RAM: a) the input features for one brick and b) the buffered final states from previous bricks. For the deployed MI-ShaRNN model, with T = 49 time-steps and brick size k = 8, the working-RAM requirement is just 1.5KB.
ShaRNN for Streaming Listen Attend Spell (LAS): LAS is a popular architecture for phoneme classification in a given audio stream. It forms non-overlapping time windows of length 784 (≈ 8 seconds) and applies an encoder-decoder architecture to predict a sequence of phonemes. We study LAS applied to the TIMIT dataset [13]. We enhance the standard LAS architecture to exploit the time-annotated ground truth available in the TIMIT dataset, which improved the baseline phoneme error rate from the publicly reported 0.271 to 0.22. Both the encoder and decoder layers in standard and enhanced LAS consist of fully recurrent bi-LSTMs. So, for each time window (of length 784), we would need to apply the entire encoder-decoder architecture to predict the phoneme sequence, implying a potential lag of ≈ 8 seconds (784 steps) in prediction. Instead, using ShaRNN we can divide both the encoder and decoder layers into bricks of size k, which makes it possible to emit a phoneme classification every k steps, bringing the lag down from 784 steps to k steps. Due to the small brick size k, we might in principle lose a significant amount of context information, but thanks to the corrective second layer in ShaRNN (Figure 1) we observe little loss in accuracy. Figure 2 shows the performance of two variants of ShaRNN + LAS: a) ShaRNN Listener, which uses ShaRNN only in the encoding layer, and b) ShaRNN Listener + Speller, which uses ShaRNN in both the encoding and decoding layers. Figure 2(d) shows that using ShaRNN in both the encoder and decoder is more beneficial than using it only in the encoder layer. Furthermore, decreasing k from 784 to 64 leads to a marginal increase in error from 0.22 to 0.238 while reducing the lag significantly, from 8 seconds to 0.6 seconds. In fact, even at k = 64 this model's performance is significantly better than the reported error of standard LAS (0.27) [12]. See Appendix C for details. | 1. What is the novelty and significance of the proposed "shallow" two layer RNN architecture?
2. How does the author ensure the quality of their claims and theoretical analysis?
3. What are the strengths and weaknesses of the experimental results provided by the author?
4. How does the reviewer assess the clarity and readability of most of the paper?
5. What are some minor typos and suggestions for improving certain sentences or phrases in the paper?
6. Can the author provide more details on how SRNN is combined with MI-RNN, and why did they choose this approach?
7. Does the author have any concerns regarding the variance observed when running over 5 random seeds?
8. Is there any specific reason why the author chose T/k as the first index instead of 1?
9. Can the author explain the source of the limit of 120ms in the latency budget?
10. Are there any differences between GesturePod-5 and GesturePod-6, and if so, how do they impact the benchmarking results? | Review | Review
Originality: The authors propose a novel and general architecture that, to the best of my knowledge, has not been described before. Thus the idea of the "shallow" two-layer RNN architecture, as well as the accompanying theoretical analysis and experimental results, are all novel.
Quality: The claims appear correct, although I have some confidence in not having missed important issues only for Claims 1 and 2. The experiments are comprehensive and instill confidence in the proposed architecture and theoretical guarantees. The code they provide appears about average for this type of research prototype.
Clarity: Most of the paper is clear and easy to follow. There are, however, a few typos and sentences that could be improved with some additional proofreading (see below for some of the typos I spotted).
Significance: The simplicity of the method, combined with the well-motivated use case of embedded devices with constrained resources, means that I see this paper as a useful contribution from which many are likely to benefit, and thus worthy of NeurIPS.
Questions and comments:
- When running over 5 random seeds, what kind of variance is observed? It would be worth mentioning this, at least in the supplementary material, to get a sense of the statistical relevance of the results.
- 46: "ensuring a small model size": I believe the model size would not be smaller than that of a standard RNN; if so, the claim appears a bit misleading.
- Claim 1 appears correct as stated, but the formulation is a bit convoluted, in the sense that one typically would be given T and w and can decide on a k; whereas in the current formulation it appears as if you are given a T and q and can pick an arbitrary k based on that, which is not really the case.
- Line 199: from this sentence it is not very clear how SRNN is combined with MI-RNN; it would be good to give a little more detail, given that all results using this model are based on a shallow extension of MI-RNN. In the same vein, the empirical analysis would be a little stronger if the results of SRNN without MI-RNN were reported too.
Minor:
- 37: standard -> a standard
- 81: main contributions -> our main contributions
- 90: receptive(sliding) -> [space is missing]
- 135: it looks like s starts at 0 where all other indices start at 1; including line 171 where s starts at 0
- 137: fairly small constant -> a fairly small constant
- 138: that is -> which is
- 139: tiny-devices -> tiny devices
- 152: I would find it slightly more readable if the first index v^{(2)} was 1 instead of T/k, if you need an index at all at this point
- 154: should be v^{(1)} not v^{(2)}
- 159: tru RNN -> a true RNN
- 159: principal -> principle
- 172: for integer -> for some integer
- 240: it's -> its
- 267: ablation study -> an ablation study
- latency budget of 120ms -> it's not clear to me where this exact limit comes from; is it a limit of the device itself somehow?
- 318: steps of pionts threby
- 314: ully -> fully
- In the MI-RNN paper [10] they benchmark against GesturePod-6, where the current paper benchmarks against GesturePod-5; are they different? If so, in what way?
NIPS | Title
Shallow RNN: Accurate Time-series Classification on Resource Constrained Devices
Abstract
Recurrent Neural Networks (RNNs) capture long dependencies and context, and hence are the key component of typical sequential data based tasks. However, the sequential nature of RNNs dictates a large inference cost for long sequences even if the hardware supports parallelization. To induce long-term dependencies, and yet admit parallelization, we introduce novel shallow RNNs. In this architecture, the first layer splits the input sequence and runs several independent RNNs. The second layer consumes the output of the first layer using a second RNN thus capturing long dependencies. We provide theoretical justification for our architecture under weak assumptions that we verify on real-world benchmarks. Furthermore, we show that for time-series classification, our technique leads to substantially improved inference time over standard RNNs without compromising accuracy. For example, we can deploy audio-keyword classification on tiny Cortex M4 devices (100MHz processor, 256KB RAM, no DSP available) which was not possible using standard RNN models. Similarly, using ShaRNN in the popular Listen-Attend-Spell (LAS) architecture for phoneme classification [4], we can reduce the lag in phoneme classification by 10-12x while maintaining state-of-the-art accuracy.
1 Introduction
We focus on the challenging task of time-series classification on tiny devices, a problem arising in several industrial and consumer applications [25, 22, 30], where tiny edge-devices perform sensing, monitoring and prediction in a limited time and resource budget. A prototypical example is an interactive cane for people with visual impairment, capable of recognizing gestures that are observed as time-traces on a sensor embedded onto the cane [24].
Time series or sequential data naturally exhibit temporal dependencies. Sequential models such as RNNs are particularly well-suited in this context because they can account for temporal dependencies by attempting to derive relations from the previous inputs. Nevertheless, directly leveraging RNNs for prediction in the constrained scenarios mentioned above is challenging. As observed by several authors [28, 14, 29, 9], the sequential nature by which RNNs process data fundamentally limits parallelization, leading to large training and inference costs. In particular, in time-series classification, the inference-time processing cost scales with the size T of the receptive window, which is unacceptable in resource-constrained settings.
* Work done as a Research Fellow at Microsoft Research India. † Work done during internships at Microsoft Research India.
A solution proposed in the literature [28, 14, 29, 9] is to replace sequential processing with parallelizable feed-forward and convolutional networks. A key insight exploited here is that most applications require a relatively small receptive window, and that this size can be increased with tree-structured networks and dilated convolutions. Nevertheless, feed-forward/convolutional networks utilize substantial working memory, which makes them difficult to deploy on tiny devices. For this reason, other methods such as [32, 2] are also not applicable in our setting. For example, a standard audio keyword detection task with a relatively modest setup of 32 conv filters would itself need a working memory of 500KB and about 32x more computation than a baseline RNN model (see Section 5).
Shallow RNNs. To address these challenges, we design a novel layered RNN architecture that is parallelizable/limited-recurrence while still maintaining the receptive field length (T) and the size of the baseline RNN. Concretely, we propose a simple 2-layer architecture that we refer to as ShaRNN. Both layers of ShaRNN are composed of a collection of shallow recurrent neural networks that operate independently. More precisely, each sequential data point (receptive window) is divided into independent parts called bricks of size k, and a shared RNN operates on each brick independently, thus ensuring a small model size and short recurrence. That is, ShaRNN's bottom layer restarts from an initial state after every k ≪ T steps, and hence only has a short recurrence. The outputs of the T/k parallel RNNs are input as a sequence into a second-layer RNN, which then outputs a prediction after T/k time. In this way, for k ≈ O(√T) we obtain a speedup of O(√T) in inference time in the following two settings:
(a) Parallelization: here we parallelize inference over T/k independent RNNs, thus admitting speed-ups on multi-threaded architectures,
(b) Streaming: here we utilize receptive (sliding) windows and reuse computation from older sliding windows/receptive fields.
We also note that, in contrast to the proposed feed-forward methods or truncated RNN methods [23], our proposal admits fully receptive fields and thus does not result in loss of information. We further enhance ShaRNN by combining it with the recent MI-RNN method [10] to reduce the receptive window sizes; we call the resulting method MI-ShaRNN.
While a feed-forward layer could be used in lieu of the RNN in the second layer, such layers lead to a significant increase in model size and working RAM, making them inadmissible on tiny devices.
Performance and Deployability. We compare the two-layer MI-ShaRNN approach against other state-of-the-art methods on a variety of benchmark datasets, tabulating both accuracy and budgets. We show that the proposed 2-layer MI-ShaRNN exhibits significant improvement in inference time while also improving accuracy. For example, on the Google-13 dataset, MI-ShaRNN achieves 1% higher accuracy than baseline methods while providing a 5-10x improvement in inference cost. A compelling aspect of the architecture is that it allows for reuse of most of the computation, which makes it deployable on the tiniest of devices. In particular, we show empirically that the method can be deployed for real-time time-series classification on devices such as those based on the tiny ARM Cortex M4 microprocessor (see footnote 3) with just 256KB RAM, a 100MHz clock speed, and no dedicated Digital Signal Processing (DSP) hardware. Finally, we demonstrate that we can replace the bi-LSTM based encoder-decoder of the LAS architecture [4] with ShaRNN while maintaining close-to-best accuracy on the publicly available TIMIT dataset [13]. This enables us to deploy the LAS architecture in a streaming fashion with a lag of 1 second in phoneme prediction and O(1) amortized cost per time-step; the standard LAS model would incur a lag of about 8 seconds, as it processes the entire 8 seconds of audio before producing predictions.
Theory. We provide theoretical justification for the ShaRNN architecture and show that significant parallelization can be achieved if the network satisfies some relatively weak assumptions. We also point out that additional layers can be introduced in the architecture, leading to hierarchical processing. While we do not experiment with this concept here, we note that it offers potential for exponential improvement in inference time.
3 https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M4
In summary, the following are our main contributions:
• We show that under relatively weak assumptions, the recurrence in RNNs, and consequently the inference cost, can be reduced significantly.
• We demonstrate this inference efficiency via a two-layer ShaRNN (and MI-ShaRNN) architecture that uses only shallow RNNs with a small amount of recurrence.
• We benchmark MI-ShaRNN (an enhancement of ShaRNN with MI-RNN) on several datasets and observe that it learns models nearly as accurate as standard RNNs and MI-RNN. Due to its limited recurrence, ShaRNN saves 5-10x computation cost over baseline methods. We deploy an MI-ShaRNN model on a tiny microcontroller for real-time audio keyword detection, which, prior to this work, was not possible with standard RNNs due to the large inference cost with receptive (sliding) windows. We also deploy ShaRNN in the LAS architecture to enable streaming phoneme classification with less than 1 second of lag in prediction.
2 Related Work
Stacked Architecture. Our multi-layered RNN resembles the stacked RNNs studied in the literature [15, 16, 27], but the two are quite different. The goal of stacked RNNs is to produce complex models that subsume conventional RNNs: each layer is fully recurrent and feeds its output to the next level, which is another fully recurrent RNN. As such, stacked RNN architectures lead to increased model size and recurrence, which results in worse inference time than standard RNNs.
Recurrent Nets (Training). Conventional works on RNNs primarily address challenges arising during training. In particular, for a large receptive window T, RNNs suffer from vanishing and exploding gradients. A number of works propose to circumvent this issue in various ways, such as gated architectures [7, 17], adding residual connections to RNNs [18, 1, 21], or constraining the learnt parameters [31]. Several recent works attempt to reduce the number of gates and parameters [8, 6, 21] to shrink model size, but they still suffer from poor inference time, since they remain fully recurrent. Different from these works, our focus is on reducing model size as well as inference time, and we view these works as complementary to ours.
Recurrent Nets (Inference Time). Recent works have begun to focus on RNN inference cost. [3] proposes to learn skip connections that can avoid evaluating all the hidden states. [10] exploits the domain knowledge that the true signature is significantly shorter than the time-trace to trim the length of the sliding windows. Both of these approaches are complementary, and we indeed leverage the second in our approach. A recent work on dilated RNNs [5] is interesting. While it could serve as a potential solution, we note that, in its original form, the dilated RNN also has a fully recurrent first layer, which is infeasible in our setting. One remedy is to introduce dilation in the first layer to improve inference time; but dilation skips steps and hence can miss critical local context.
Finally, CNN-based methods [28, 14, 29, 9, 2] allow higher parallelization in sequential tasks but, as discussed in Section 1, also lead to a significantly larger working-RAM requirement compared to RNNs, and thus cannot be considered for deployment on tiny devices (see Section 5).
3 Problem Formulation and Proposed ShaRNN Method
In this paper, we primarily focus on the time-series classification problem, although the techniques apply to more general sequence-to-sequence problems such as the phoneme classification problem discussed in Section 5. Let Z = {(X_1, y_1), …, (X_n, y_n)}, where X_i is the i-th sequential data point with X_i = [x_{i,1}, x_{i,2}, …, x_{i,T}] ∈ R^{d×T} and x_{i,t} ∈ R^d is the t-th time-step data point. y_i ∈ [C] is the label of X_i, where C is the number of class labels. x_{i,t:t+k} is shorthand for x_{i,t:t+k} = [x_{i,t}, …, x_{i,t+k}]. Given training data Z, the goal is to learn a classifier f : R^{d×T} → [C] that can be used for efficient inference, especially on tiny devices. Recurrent Neural Networks (RNNs) are popularly used for modeling such sequential problems; they maintain a hidden state h_{t−1} ∈ R^{d̂} at the t-th step that is updated as h_t = R(h_{t−1}, x_t), t ∈ [T], with ŷ = f(h_T), where ŷ is the prediction obtained by applying a classifier f to h_T and d̂ is the dimensionality of the hidden state. Due to the sequential nature of an RNN, its inference cost is Ω(T) even if the hardware supports a large amount of parallelization. Furthermore, practical applications require handling a continuous stream of data, e.g., a smart speaker listening for certain audio keywords.
A standard approach is to use sliding windows (receptive fields) to form a stream of test points on which inference can be applied. That is, given a stream X = [x_1, x_2, …], we form sliding windows X^s = x_{(s−1)·ω+1 : (s−1)·ω+T} ∈ R^{d×T} that stride by ω > 0 time-steps after each inference. The RNN is then applied to each sliding window X^s, which implies that the amortized cost of processing each time-step data point (x_t) is Θ(T/ω). To ensure high resolution in prediction, ω is required to be a fairly small constant independent of T. Thus, the amortized inference cost per time-step point is O(T), which is prohibitively large for tiny devices. So, we study the following key question: "Can we process each time-step point in a data stream in o(T) computational steps?"
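As a quick NumPy sketch of the windows X^s just defined (the stream length and feature dimension below are placeholders):

import numpy as np

def sliding_windows(stream, T, omega):
    # stream: (length, d) array; returns (n_windows, T, d).
    n = (len(stream) - T) // omega + 1
    return np.stack([stream[s*omega : s*omega + T] for s in range(n)])

stream = np.random.randn(1000, 32)                    # e.g. 32-dim filter-bank features
print(sliding_windows(stream, T=99, omega=8).shape)   # (113, 99, 32)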
3.1 ShaRNN
Shallow RNNs (ShaRNN) are a hierarchical collection of RNNs organized at two levels. The T/k RNNs at the ground layer operate completely in parallel with fully shared parameters and activation functions, thus ensuring a small model size and parallel execution. An RNN at the next level takes inputs from the ground layer and subsequently outputs a prediction.
Formally, given a sequential point X = [x_1, …, x_T] (e.g., a sliding window in streaming data), we split it into bricks of size k, where k is a parameter of the algorithm. That is, we form T/k bricks: B = [B_1, …, B_{T/k}], where B_j = x_{((j−1)·k+1) : (j·k)}. Now, ShaRNN applies a standard recurrent model R^{(1)} : R^{d×k} → R^{d̂_1} to each brick, where d̂_1 is the dimensionality of the hidden states of R^{(1)}. That is,
ν^{(1)}_j = R^{(1)}(B_j), j ∈ [T/k].
Note that R^{(1)} can be any standard RNN model, such as a GRU or LSTM. We then feed the outputs of the first layer into another RNN to produce the final state/feature vector, which is in turn fed into a feed-forward layer. That is,
ν^{(2)}_{T/k} = R^{(2)}([ν^{(1)}_1, …, ν^{(1)}_{T/k}]), ŷ = f(ν^{(2)}_{T/k}),
where R^{(2)} is the second-layer RNN and can also be any standard RNN model. ν^{(2)}_{T/k} ∈ R^{d̂_2} is the hidden state obtained by applying R^{(2)} to ν^{(1)}_{1:T/k}, and f applies the standard feed-forward network to ν^{(2)}_{T/k}. See Figure 1 for a block diagram of the architecture. That is, ShaRNN is defined by parameters Λ composed of the shared RNN parameters at the ground level, the RNN parameters at the next level, and the classifier weights for making a prediction. We train ShaRNN by minimizing an empirical loss function over the training set Z.
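A minimal NumPy rendering of this two-layer forward pass, with a plain tanh cell standing in for any RNN cell (the weights are random placeholders, purely for shape-checking the architecture; the trained models in Section 5 use LSTM cells):

import numpy as np

rng = np.random.default_rng(0)
d, d1, d2, T, k = 32, 16, 16, 96, 8
W1, U1 = 0.1 * rng.standard_normal((d1, d1)), 0.1 * rng.standard_normal((d1, d))
W2, U2 = 0.1 * rng.standard_normal((d2, d2)), 0.1 * rng.standard_normal((d2, d1))

def run_cell(W, U, xs, h):
    for x in xs:
        h = np.tanh(W @ h + U @ x)    # h_t = R(h_{t-1}, x_t)
    return h

def sharnn_forward(X):                # X: (T, d)
    bricks = [run_cell(W1, U1, X[j*k:(j+1)*k], np.zeros(d1))   # nu^{(1)}_j, restart at h0
              for j in range(T // k)]
    return run_cell(W2, U2, bricks, np.zeros(d2))              # nu^{(2)}_{T/k}

print(sharnn_forward(rng.standard_normal((T, d))).shape)       # (16,) = (d2,)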
| 1. What is the main contribution of the paper regarding time series classification?
2. What are the strengths and weaknesses of the proposed shallow RNNs architecture?
3. How does the reviewer assess the clarity and significance of the paper's content, particularly in its symbol definitions, claims, and implications?
4. What are the specific points that the reviewer finds unclear or distracting in the paper?
5. How does the reviewer evaluate the usefulness and tightness of the bounds provided in Claims 3 and 4?
6. Does the reviewer think the paper adequately compares the proposed approach with other works, such as convolutional networks?
7. What is the reviewer's overall opinion of the paper's value and impact? | Review | Review
The authors propose shallow RNNs (SRNN), an efficient architecture for time-series classification. Shallow RNNs can be parallelized, as the time sequences are broken down into subsequences that can be processed independently of each other by copies of the same RNN. Their outputs are then passed to a similarly structured second layer. Multi-layer SRNN extends this to more than two layers. The paper includes both a runtime analysis (Claims 1 and 2) and an analysis of the approximation accuracy of the shallow RNN compared to a traditional RNN.
The idea is straightforward, but the paper scores very low on clarity. The authors opt for symbol definitions instead of clear descriptions, especially in the claims. The claims are a central contribution of the paper BUT UNNECESSARILY HARD TO PARSE. The implications of the claims are not described by the authors. That's why I scored their significance as low.
Here are specific points that are unclear from the paper:
- l.133-140: Shouldn't the amortized inference cost for each time step be C1, i.e., O(1)? Why would you rerun the RNN on each sliding window?
- l.165: The heavy use of notation distracts from getting an understanding of what window size w and partition size k you usually use. Is k usually larger than w, or the other way around? This makes it hard to understand how the SRNN architecture interacts with streaming. When the data is coming in in streams, are the streams partitioned and the partitions distributed, or are the streams distributed?
- Claim 1:
  * You already defined $X^s$. Defining it here again just distracts from the claim.
  * q is the ratio between w and k (hence it depends on k). It is weird that your statement relates k to q, which depends on k. Please explain.
- Claim 2:
  * The choice of k in Claim 2 seems incompatible with Claim 1. In Claim 1, k = O(sqrt(T)); in Claim 2, k = O(1).
- Claim 3:
  * What is M? What is $\nabla^M_h$?
- Claims 3 and 4:
  * Are those bounds tight enough to be useful? Given a specific problem, can we compute how much accuracy we expect to lose by using a specific SRNN?
  * Can we use these bounds together with the runtime analysis of Claims 1 and 2 to draw a tradeoff between accuracy and inference cost, as in Figure 2?
To me, the strength of this paper is the proposed model and its implementation on small chips (video in the supplement), as well as the empirical study. I would have been curious for a discussion of how the proposed architecture relates to convolutional networks. It seems to me that by setting w small, k small, and L large, you almost have a convolutional network where the filter is a small RNN instead of a typical filter. In the introduction, it is mentioned that CNNs are considered impractical. I am curious: could it be that in the regimes for which the accuracy of SRNN is acceptable (Claims 3 and 4), they are actually also impractical? Complexity similar to CNNs?
NIPS | Title
Shallow RNN: Accurate Time-series Classification on Resource Constrained Devices
Abstract
Recurrent Neural Networks (RNNs) capture long dependencies and context, and hence are the key component of typical sequential data based tasks. However, the sequential nature of RNNs dictates a large inference cost for long sequences even if the hardware supports parallelization. To induce long-term dependencies, and yet admit parallelization, we introduce novel shallow RNNs. In this architecture, the first layer splits the input sequence and runs several independent RNNs. The second layer consumes the output of the first layer using a second RNN thus capturing long dependencies. We provide theoretical justification for our architecture under weak assumptions that we verify on real-world benchmarks. Furthermore, we show that for time-series classification, our technique leads to substantially improved inference time over standard RNNs without compromising accuracy. For example, we can deploy audio-keyword classification on tiny Cortex M4 devices (100MHz processor, 256KB RAM, no DSP available) which was not possible using standard RNN models. Similarly, using ShaRNN in the popular Listen-Attend-Spell (LAS) architecture for phoneme classification [4], we can reduce the lag in phoneme classification by 10-12x while maintaining state-of-the-art accuracy.
1 Introduction
We focus on the challenging task of time-series classification on tiny devices, a problem arising in several industrial and consumer applications [25, 22, 30], where tiny edge-devices perform sensing, monitoring and prediction in a limited time and resource budget. A prototypical example is an interactive cane for people with visual impairment, capable of recognizing gestures that are observed as time-traces on a sensor embedded onto the cane [24].
Time series or sequential data naturally exhibit temporal dependencies. Sequential models such as RNNs are particularly well-suited in this context because they can account for temporal dependencies by attempting to derive relations from the previous inputs. Nevertheless, directly leveraging RNNs for prediction in constrained scenarios mentioned above is challenging. As observed by several authors [28, 14, 29, 9], the sequential nature by which RNNs process data fundamentally limits parallelization leading to large training and inference costs. In particular, in time-series classification, at inference time, the processing time scales with the size, T , of the receptive window, which is unacceptable in resource constrained settings.
˚Work done as a Research Fellow at Microsoft Research India. :Work done during internships at Microsoft Research India.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
A solution proposed in literature [28, 14, 29, 9] is to replace sequential processing with parallelizable feed-forward and convolutional networks. A key insight exploited here is that most applications require relatively small receptive window, and that this size can be increased with tree-structured networks and dilated convolutions. Nevertheless, feedforward/convolutional networks utilize substantial working memory, which makes them difficult to deploy on tiny devices. For this reason, other methods such as [32, 2] also are not applicable for our setting. For example, a standard audio keyword detection task with a relatively modest setup of 32 conv filters would itself need a working memory of 500KB and about 32X more computation than a baseline RNN model (see Section 5).
Shallow RNNs. To address these challenges, we design a novel layered RNN architecture that is parallelizable/limited-recurrence while still maintaining the receptive field length (T ) and the size of the baseline RNN. Concretely, we propose a simple 2-layer architecture that we refer to as ShaRNN. Both the layers of ShaRNN are composed of a collection of shallow recurrent neural networks that operate independently. More precisely, each sequential data point (receptive window) is divided into independent parts called bricks of size k, and a shared RNN operates on each brick independently, thus ensuring a small model size and short recurrence. That is, ShaRNN’s bottom layer restarts from an initial state after every k ăă T steps, and hence only has a short recurrence. The outputs of T {k parallel RNNs are input as a sequence into a second layer RNN, which then outputs a prediction after T {k time. In this way, for k « Op ? T q we obtain a speedup of Op ? T q in inference time in the following two settings:
(a) Parallelization: here we parallelize inference over T {k independent RNNs thus admitting speed-ups on multi-threaded architectures,
(b) Streaming: here we utilize receptive (sliding) windows and reuse computation from older sliding window/receptive fields.
We also note that, in contrast to the proposed feed-forward methods or truncated RNN methods [23], our proposal admits fully receptive fields and thus does not result in loss of information. We further enhance ShaRNN by combining it with the recent MI-RNN method [10] to reduce the receptive window sizes; we call the resulting method MI-ShaRNN.
While a feedforward layer could be used in lieu of our RNN in the next layer, such layers lead to significant increase in model size and working RAM to be admissible in tiny devices.
Performance and Deployability. We compare the two-layer MI-ShaRNN approach against other state-of-art methods, on a variety of benchmark datasets, tabulating both accuracy and budgets. We show that the proposed 2-layer MI-ShaRNN exhibits significant improvement in inference time while also improving accuracy. For example, on Google-13 dataset, MI-ShaRNN achieves 1% higher accuracy than baseline methods while providing 5-10x improvement in inference cost. A compelling aspect of the architecture is that it allows for reuse of most of the computation, which leads to its deployability on the tiniest of devices. In particular, we show empirically that the method can be deployed for real-time time-series classification on devices as those based on the tiny ARM Cortex M4 microprocessor3 with just 256KB RAM, 100MHz clock-speed and no dedicated Digital Signal Processing (DSP) hardware. Finally, we demonstrate that we can replace bi-LSTM based encoder-decoder of the LAS architecture [4] by ShaRNN while maintaining close to best accuracy on publicly-available TIMIT dataset [13]. This enables us to deploy LAS architecture in streaming fashion with a lag of 1 second in phoneme prediction and Op1q amortized cost per time-step; standard LAS model would incur lag of about 8 seconds as it processes the entire 8 seconds of audio before producing predictions.
Theory. We provide theoretical justification for the ShaRNN architecture and show that significant parallelization can be achieved if the network satisfies some relatively weak assumptions. We also point out that additional layers can be introduced in the architecture leading to hierarchical processing. While we do not experiment with this concept here, we note that, it offers potential for exponential improvement in inference time.
3https://en.wikipedia.org/wiki/ARM_Cortex-M#Cortex-M4
In summary, the following are our main contributions:
• We show that under relatively weak assumptions, recurrence in RNNs and consequently, the inference cost can be reduced significantly. • We demonstrate this inference efficiency via a two-layer ShaRNN (and MI-ShaRNN) architecture that uses only shallow RNNs with a small amount of recurrence. • We benchmark MI-ShaRNN (enhancement of ShaRNN with MI-RNN) on several datasets and observe that it learns nearly as accurate models as standard RNNs and MI-RNN. Due to limited recurrence, ShaRNN saves 5-10x computation cost over baseline methods. We deploy MI-ShaRNN model on a tiny microcontroller for real-time audio keyword detection, which, prior to this work, was not possible with standard RNNs due to large inference cost with receptive (sliding) windows. We also deploy ShaRNN in LAS architecture to enable streaming phoneme classification with less than 1 second of lag in prediction.
2 Related Work
Stacked Architecture. Our multi-layered RNN resembles the stacked RNNs studied in the literature [15, 16, 27], but they are unrelated. The goal of stacked RNNs is to produce complex models that subsume conventional RNNs. Each layer is fully recurrent and feeds its output to the next level, which is another fully recurrent RNN. As such, stacked RNN architectures lead to increased model size and recurrence, which results in worse inference time than standard RNNs.
Recurrent Nets (Training). Conventional works on RNNs primarily address challenges arising during training. In particular, for a large receptive window T, RNNs suffer from vanishing and exploding gradient issues. A number of works propose to circumvent this issue in various ways, such as gated architectures [7, 17], adding residual connections to RNNs [18, 1, 21], or constraining the learnt parameters [31]. Several recent works attempt to reduce the number of gates and parameters [8, 6, 21] to reduce model size, but as such suffer from poor inference time, since they are still fully recurrent. Different from these works, our focus is on reducing model size as well as inference time, and we view these works as complementary to our paper.
Recurrent Nets (Inference Time). Recent works have begun to focus on RNN inference cost. [3] proposes to learn skip connections that can avoid evaluating all the hidden states. [10] exploits the domain knowledge that the true signature is significantly shorter than the time-trace to trim down the length of the sliding windows. Both of these approaches are complementary, and we indeed leverage the second in our approach. A recent work on dilated RNNs [5] is also relevant. While it could serve as a potential solution, we note that, in its original form, a dilated RNN also has a fully recurrent first layer, which is therefore infeasible for our setting. One remedy is to introduce dilation in the first layer to improve inference time, but dilation skips steps and hence can miss out on critical local context.
Finally, CNN based methods [28, 14, 29, 9, 2] allow higher parallelization in sequential tasks but, as discussed in Section 1, also lead to a significantly larger working RAM requirement when compared to RNNs, and thus cannot be considered for deployment on tiny devices (see Section 5).
3 Problem Formulation and Proposed ShaRNN Method
In this paper, we primarily focus on the time-series classification problem, although the techniques apply to more general sequence-to-sequence problems like the phoneme classification problem discussed in Section 5. Let Z = {(X_1, y_1), . . . , (X_n, y_n)}, where X_i is the i-th sequential data point with X_i = [x_{i,1}, x_{i,2}, . . . , x_{i,T}] ∈ R^{d×T} and x_{i,t} ∈ R^d is the t-th time-step data point. y_i ∈ [C] is the label of X_i, where C is the number of class labels. x_{i,t:t+k} is shorthand for x_{i,t:t+k} = [x_{i,t}, . . . , x_{i,t+k}]. Given training data Z, the goal is to learn a classifier f : R^{d×T} → [C] that can be used for efficient inference, especially on tiny devices. Recurrent Neural Networks (RNNs) are popularly used for modeling such sequential problems and maintain a hidden state h_{t−1} ∈ R^{d̂} at the t-th step that is updated using: h_t = R(h_{t−1}, x_t), t ∈ [T], ŷ = f(h_T), where ŷ is the prediction obtained by applying a classifier f to h_T and d̂ is the dimensionality of the hidden state. Due to the sequential nature of RNNs, the inference cost of an RNN is Ω(T) even if the hardware supports a large amount of parallelization. Furthermore, practical applications require handling a continuous stream of data, e.g., a smart speaker listening for certain audio keywords.
A standard approach is to use sliding windows (receptive fields) to form a stream of test points on which inference can be applied. That is, given a stream X = [x_1, x_2, . . .], we form sliding windows X^s = x_{(s−1)·ω+1 : (s−1)·ω+T} ∈ R^{d×T} which stride by ω > 0 time-steps after each inference. The RNN is then applied to each sliding window X^s, which implies that the amortized cost of processing each time-step data point (x_t) is Θ(T/ω). To ensure high resolution in prediction, ω is required to be a fairly small constant independent of T. Thus, the amortized inference cost for each time-step point is O(T), which is prohibitively large for tiny devices. So, we study the following key question: “Can we process each time-step point in a data stream in o(T) computational steps?”
3.1 ShaRNN
Shallow RNNs (ShaRNN) are a hierarchical collection of RNNs organized at two levels. T/k RNNs at the ground layer operate completely in parallel with fully shared parameters and activation functions, thus ensuring small model size and parallel execution. An RNN at the next level takes inputs from the ground layer and subsequently outputs a prediction.
Formally, given a sequential point X = [x_1, . . . , x_T] (e.g., a sliding window in streaming data), we split it into bricks of size k, where k is a parameter of the algorithm. That is, we form T/k bricks: B = [B_1, . . . , B_{T/k}] where B_j = x_{((j−1)·k+1) : (j·k)}. Now, ShaRNN applies a standard recurrent model R^{(1)} : R^{d×k} → R^{d̂_1} on each brick, where d̂_1 is the dimensionality of the hidden states of R^{(1)}. That is,
ν^{(1)}_j = R^{(1)}(B_j),   j ∈ [T/k].
Note that R^{(1)} can be any standard RNN model like a GRU, LSTM, etc. We now feed the output of each brick into another RNN to produce the final state/feature vector, which is then fed into a feed-forward layer. That is,
ν^{(2)}_{T/k} = R^{(2)}([ν^{(1)}_1, . . . , ν^{(1)}_{T/k}]),   ŷ = f(ν^{(2)}_{T/k}),
where R^{(2)} is the second-layer RNN and can also be any standard RNN model. ν^{(2)}_{T/k} ∈ R^{d̂_2} is the hidden state obtained by applying R^{(2)} to ν^{(1)}_{1:T/k}, and f applies a standard feed-forward network to ν^{(2)}_{T/k}. See Figure 1 for a block diagram of the architecture. That is, ShaRNN is defined by parameters Λ composed of shared RNN parameters at the ground level, RNN parameters at the next level, and classifier weights for making a prediction. We train ShaRNN by minimizing an empirical loss function over the training set Z.
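For concreteness, the following is a minimal NumPy sketch of this two-layer forward pass. The callables rnn1_cell, rnn2_cell, and classifier are placeholders for any standard RNN cell (GRU, LSTM, FastRNN) and feed-forward classifier; this is an illustrative sketch, not the paper's released implementation.

import numpy as np

def run_rnn(cell, xs, h0):
    # Unroll a recurrent cell over the columns of xs (shape d x steps).
    h = h0
    for t in range(xs.shape[1]):
        h = cell(h, xs[:, t])
    return h

def sharnn_forward(X, k, rnn1_cell, rnn2_cell, classifier, h1_0, h2_0):
    # Two-layer ShaRNN: split X (d x T) into T/k bricks, run the shared
    # first-layer cell on each brick (independently, hence parallelizable),
    # then run the second-layer cell over the T/k brick summaries.
    d, T = X.shape
    assert T % k == 0, "T must be a multiple of the brick size k"
    nu1 = [run_rnn(rnn1_cell, X[:, j * k:(j + 1) * k], h1_0)
           for j in range(T // k)]          # nu^(1)_j for each brick B_j
    h2 = h2_0
    for v in nu1:                           # nu^(2)_{T/k}
        h2 = rnn2_cell(h2, v)
    return classifier(h2)                   # y_hat = f(nu^(2)_{T/k})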
Naturally, ShaRNN is an approximation of a true RNN and in principle has less modeling power (and recurrence). But as discussed in Section 4 and shown by our empirical results in Section 5, ShaRNN can still capture enough context from the entire sequence to effectively model a variety of time-series classification problems with large T (typically T ≥ 100). Due to the T/k parallel RNNs in the bottom layer, whose outputs are processed by R^{(2)} in the second layer, the ShaRNN inference cost can be reduced to O(T/k + k) on multi-threaded architectures with sufficient parallelization; k = √T leads to the smallest inference cost.
Streaming. Recall that in the streaming setting, we form sliding windows X^s = x_{s·ω+1 : s·ω+T} ∈ R^{d×T} by striding each window by ω > 0 time-steps. Hence, if ω = k·q for q ∈ N, then the inference cost for X^{s+1} can be reduced by reusing the previously computed ν^{(1)}_j vectors, for all j ∈ [q+1, T/k], from X^s.
The claim below provides a formal result for this.
Claim 1. Let both layers' RNNs R^{(1)} and R^{(2)} of ShaRNN have the same hidden size and per-time-step computation complexity C_1. Then, given T and ω, the additional cost of applying ShaRNN to X^{s+1} given X^s is O(T/k + q·k)·C_1, where X^s = x_{(s−1)·ω+1 : (s−1)·ω+T}, ω is the stride length of the sliding window, and the stride ω = q·k for some integer q ≥ 1. Consequently, the total amortized cost can be bounded by O(√(q·T)·C_1) if k = √(T/q).
See Appendix A for a proof of the claim.
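To make the reuse pattern behind Claim 1 explicit, here is a small sketch of the streaming inference loop, under the assumption that the stride is a multiple of the brick size (ω = q·k). The callables rnn1 (brick → summary vector), rnn2 (list of summaries → final state), and classifier are hypothetical placeholders.

from collections import deque

class StreamingShaRNN:
    # Caches the T/k first-layer summaries; on each stride of omega = q*k
    # time-steps, only the q newest bricks are (re)computed.
    def __init__(self, k, q, num_bricks, rnn1, rnn2, classifier):
        self.k, self.q = k, q
        self.cache = deque(maxlen=num_bricks)    # nu^(1) vectors, oldest first
        self.rnn1, self.rnn2, self.classifier = rnn1, rnn2, classifier

    def step(self, new_steps):
        # new_steps: array of shape d x (q*k), the freshly arrived points.
        for j in range(self.q):                  # q new bricks: cost O(q*k)
            brick = new_steps[:, j * self.k:(j + 1) * self.k]
            self.cache.append(self.rnn1(brick))  # evicts the oldest summary
        # Second layer re-runs over all cached summaries: cost O(T/k).
        return self.classifier(self.rnn2(list(self.cache)))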
[Figure 1 caption, partially recovered: (a) ShaRNN feeds the brick outputs ν^{(1)}_j to the next layer; note that ν^{(1)}_2, ν^{(1)}_3 can be reused for evaluating the next window. (b), (c): Mean squared approximation error and prediction accuracy of ShaRNN with zeroth- and first-order approximations (M = 1, 2 respectively in Claim 3) for different brick sizes k (on Google-13). Note the large error with M = 1 (the same as the truncation method of [23]); M = 2 introduces a significant improvement, especially for small k, but clearly needs larger M to achieve better accuracy. (d): Comparison of the norm of the gradient vs. the Hessian of R(h_t, x_{t+1:t+k}) with varying k, where R is FastRNN [21] with swish activation. The smaller Hessian norm indicates that the first-order approximation of R (Claim 3) by ShaRNN is more accurate than the zeroth-order one (ShaRNN with M = 1) suggested by [23].]
3.2 Multi-layer ShaRNN
The claim above shows that selecting a small k leads to a large number of bricks and hence a large number of points to be processed by the second-layer RNN R^{(2)}, which then becomes the bottleneck in inference. However, using the same approach, we can replace the second layer with another layer of ShaRNN to bring down the cost. By repeating this process, we can design a general L-layer architecture where each layer is equipped with an RNN model R^{(l)} and the output of the j-th brick at layer l is given by:
ν^{(l)}_j = R^{(l)}([ν^{(l−1)}_{(j−1)·k+1}, . . . , ν^{(l−1)}_{(j−1)·k+k}]),
for all 1 ≤ j ≤ T/k^l, where ν^{(0)}_j = x_j. The predicted label is given by ŷ = f(ν^{(L)}_{T/k^{L−1}}). Using an argument similar to the claim in the previous section, we can reduce the total inference cost to O(log T) by using k = O(1) and L = log T.

Claim 2. Let all layers of the multi-layer ShaRNN have the same hidden size and per-time-step complexity C_1, and let k = ω. Then, the additional cost of applying ShaRNN to X^{s+1} is O(T/k^L + L·k)·C_1, where X^s = x_{(s−1)·ω+1 : (s−1)·ω+T}. Consequently, selecting L = log(T), k = O(1), and assuming ω = O(1), the total amortized cost is O(C_1 · log(T)).
That is, we can achieve an exponential speed-up over the O(T) cost of a standard RNN. However, such a model can lead to a large loss in accuracy. Moreover, the constants in the cost for large L are so large that a network with smaller L might be more efficient for typical values of T.
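A minimal sketch of this L-layer recursion follows; rnns[l] is a hypothetical callable that summarizes a list of k vectors with the l-th layer's RNN, and the names are placeholders rather than the paper's code.

def multilayer_sharnn(xs, k, rnns, classifier):
    # L-layer ShaRNN sketch: at each layer, group the current sequence
    # into bricks of size k and summarize each with that layer's RNN.
    level = list(xs)                        # level 0: raw time-step vectors
    for rnn in rnns:                        # one pass per layer, L in total
        level = [rnn(level[j * k:(j + 1) * k])
                 for j in range(len(level) // k)]
    # After L layers, T/k^L summaries remain; we assume the last one is
    # the final state fed to the classifier, mirroring y_hat = f(nu^(L)).
    return classifier(level[-1])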
3.3 MI-ShaRNN
Recently, [10] showed that several time-series training datasets are coarse and the sliding window size T can be decreased significantly by using their multi-instance based algorithm (MI-RNN). MI-RNN finds tight windows around the actual signature of the class, which leads to significantly smaller models and reduces inference cost. Our ShaRNN architecture is orthogonal to MI-RNN and can be combined with it to obtain an even higher amount of inference savings. That is, MI-RNN takes the dataset Z = {(X_1, y_1), . . . , (X_n, y_n)}, with X_i being a sequential data point over T steps, and produces a new set of points X′_j with labels y′_j, where each X′_j is a sequential data point over T′ steps and T′ ≤ T. MI-ShaRNN applies ShaRNN to the output of MI-RNN so that the inference cost depends only on T′ ≤ T, while capturing the key signal in each data point.
4 Analysis
In this section, we provide theoretical underpinnings of the ShaRNN approach, and we also put it in the context of the work of [23], which discusses RNN models for which almost all of the recurrence can be removed.
Let R : R^{d+d̂} → R^{d̂} be a standard RNN model that maps the given hidden state h_{t−1} ∈ R^{d̂} and data point x_t ∈ R^d into the next hidden state h_t = R(h_{t−1}, x_t). Overloading notation, R(h_0, x_1, . . . , x_t) = R(h_{t−1}, x_t). We define a function to be recurrent if the following holds: R(h_0, x_1, . . . , x_t) = R(R(h_0, x_1, . . . , x_{t−1}), x_t). The final class prediction using the feed-forward layer is given by: ŷ = f(h_T) = f(R(h_0, x_{1:T})). Now, ShaRNN attempts to untangle and approximate the dependency of f(h_T) and R(h_0, x_{1:T}) on h_0 by using Taylor's theorem. The claim below shows the condition under which the approximation error introduced by ShaRNN is small.

Claim 3. Let R(h_0, x_1, . . . , x_t) be an RNN and let ‖∇^M_h R(h, x_{t:t+k})‖ ≤ O(ε · M!) for some ε ≥ 0, where ∇^M_h is the M-th order derivative with respect to h. Also let ‖R(h_0, x_{1:t}) − h_0‖ = O(1) and ‖∇^m_h R(h_0, x_{t+1:t+k})‖ = O(m!) for all t ∈ [T]. Then, there exists a ShaRNN defined by functions R^{(1)}, R^{(2)} and brick size k, s.t.:

‖R^{(2)}(ν^{(1)}_1, . . . , ν^{(1)}_{T/k}) − R(h_0, x_{1:T})‖ ≤ ε · M · T,   where ν^{(1)}_j = R^{(1)}(h_0, x_{(j−1)·k+1 : j·k}).
See Appendix A for a detailed proof of the claim.
The above claim shows that the hidden state computed by ShaRNN is close to the state computed by a fully recursive RNN; hence the final output ŷ is also close. We now compare this result to the result of [23], which showed that ‖R(h_0, x_{1:T}) − R(h_0, x_{T−k+1:T})‖ ≤ ε for large enough k if R satisfies a contraction property, i.e., if ‖R(h_{t−1}, x_t) − R(h′_{t−1}, x_t)‖ ≤ λ‖h_{t−1} − h′_{t−1}‖ where λ < 1. However, λ < 1 is a strict requirement and often does not hold in practice. Due to this, if we only compute R(h_0, x_{T−k+1:T}) as suggested by that result (for reasonable values of k), the resulting accuracy on several datasets drops significantly (see Figure 1(b),(c)).
In the context of Claim 3, the result of [23] is a special case with M = 1, i.e., it only applies a zeroth-order Taylor series expansion. Figure 1(d) shows that the norm of the gradient, which bounds the error due to the zeroth-order expansion, is significantly larger than the norm of the Hessian, which bounds the error due to the first-order expansion.
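To make the orders of approximation concrete, the display below sketches (in our notation, with h* an arbitrary fixed reference state) the Taylor expansion of the brick map underlying Claim 3; the grouping into labeled terms is ours.

% Taylor sketch of the brick map around a fixed reference state h^*;
% the truncation method of [23] keeps only the zeroth-order term (M = 1),
% while ShaRNN's second layer can also correct for the first-order term.
\[
R\big(h,\, x_{t+1:t+k}\big)
  = \underbrace{R\big(h^{*},\, x_{t+1:t+k}\big)}_{\text{zeroth order } (M=1)}
  + \underbrace{\nabla_h R\big(h^{*},\, x_{t+1:t+k}\big)\,(h - h^{*})}_{\text{first order } (M=2)}
  + \underbrace{O\big(\|\nabla^2_h R\|\cdot\|h - h^{*}\|^2\big)}_{\text{remainder (small by Fig.\ 1(d))}} .
\]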
Case study with FastRNN: We now instantiate Claim 3 for a simple FastRNN model [21] with a first-order approximation, i.e., with M = 2 in Claim 3.

Claim 4. Let R(h_0, x_1, . . . , x_t) be a FastRNN model with parameters U, W. Let ‖U‖ ≤ O(1) and ‖∇^2_h R(h_0, x_{t:t+k})‖ ≤ O(ε) for any k-length sequence. Then, there exists a ShaRNN defined by functions R^{(1)}, R^{(2)} and brick size k s.t.: ‖R^{(2)}(ν^{(1)}_1, . . . , ν^{(1)}_{T/k}) − R(h_0, x_{1:T})‖ ≤ ε, where ν^{(1)}_j = R^{(1)}(h_0, x_{(j−1)·k+1 : j·k}).
Note that ‖U‖ = O(1) holds for all the benchmarks that were tried in [21]. Moreover, this assumption is significantly weaker than the typical ‖U‖ < 1 assumption required by [23]. Finally, the Hessian term is significantly smaller than the gradient term (Figure 1(d)); hence the approximation error and prediction error should be significantly smaller than the ones we would get from the zeroth-order approximation (see Figure 1(b),(c)).
5 Empirical Results
We conduct experiments to study: a) the performance of MI-ShaRNN with varying hidden state dimensions at both layers R^{(1)} and R^{(2)}, to understand how its accuracy stacks up against baseline models across different model sizes, b) the inference cost improvement that MI-ShaRNN produces for standard time-series classification problems over baseline models and MI-RNN models, and c) whether MI-ShaRNN can enable certain time-series classification tasks on devices based on the tiny Cortex M4 with only a 100MHz processor and 256KB RAM. Recall that MI-ShaRNN uses ShaRNN on top of the trimmed data points given by MI-RNN. MI-RNN is known to have better performance than baseline LSTMs, so naturally MI-ShaRNN has better performance than ShaRNN. Hence, we present results for MI-ShaRNN and compare them to MI-RNN to demonstrate the advantage of the ShaRNN technique.
Datasets: We benchmark our method on standard datasets from different domains like audio keyword detection (Google-13), wake word detection (STCI-2), activity recognition (HAR-6), sports activity recognition (DSA-19), gesture recognition (GesturePod-5). The number after hyphen in dataset name indicates the number of classes in the dataset. See Table 3 in appendix for more details about the datasets. All the datasets are available online (see Table 3) except STCI-2 which is a proprietary wake word detection dataset.
Baselines: We compare our algorithm MI-ShaRNN (LSTM) against the baseline LSTM method as well as the MI-RNN (LSTM) method. Note that MI-RNN as well as MI-ShaRNN build upon an RNN cell. For simplicity and consistency, we have selected the LSTM as the base cell for all the methods, but we can train each of them with other RNN cells like GRU [7] or FastRNN [21]. We implemented all the algorithms in TensorFlow and used Adam [19] for training the models. The inference code for the Cortex M4 device was written in C and compiled onto the device. All the presented numbers are averaged over 5 independent runs. The implementation of our algorithm is released as part of the EdgeML [11] library.
Hyperparameter selection: The main hyperparameters are: a) the hidden state sizes for both layers of MI-ShaRNN, and b) the brick size k for MI-ShaRNN. In addition, the number of time-steps T is associated with each dataset. MI-RNN prunes down T and works with T′ ≤ T time-steps. We provide results with varying hidden state sizes to illustrate the trade-offs involved in selecting this hyperparameter (Figure 2). We select k ≈ √T with some variation to optimize w.r.t. the stride length ω for each dataset; we also provide an ablation study to illustrate the impact of different choices of k on accuracy and inference cost (Figure 3, Appendix).
Comparison of accuracies: Table 1 compares the accuracy of MI-ShaRNN against the baselines and MI-RNN for different hidden dimensions at R^{(1)} and R^{(2)}. In terms of prediction accuracies, MI-ShaRNN performs much better than the baselines and is competitive with MI-RNN on all the datasets. For example, with only k = 8, MI-ShaRNN is able to achieve 94% accuracy on the Google-13 dataset, while the MI-RNN model is applied for T = 49 steps and the baseline LSTM for T = 99 steps. That is, with only 8-deep recurrence, MI-ShaRNN is able to compete with the accuracies of 49- and 99-deep LSTMs.
For inference cost, we study the amortized cost per data point in the sliding window setting (see Section 3). That is, the baseline and MI-RNN recompute the entire prediction from scratch for each sliding window, while MI-ShaRNN can reuse computation in the first layer (see Section 3), leading to significant savings in inference cost. We report inference cost as the additional floating point operations (flops) each model would need to execute for every new inference. For simplicity, we treat both addition and multiplication as having the same cost. The number of non-linearity computations is small and nearly the same for all the methods, so we ignore it.
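As a rough illustration of this accounting, the sketch below estimates per-inference flops; the LSTM step cost (one multiply and one add per weight across the four gates, i.e., 2·4·(d+h)·h flops) and the example dimensions are our assumptions, not the exact numbers behind Table 1.

def lstm_step_flops(d, h):
    # Four gates, each a (d+h) x h matrix-vector product; one multiply
    # and one add per weight (an assumed, standard accounting).
    return 2 * 4 * (d + h) * h

def baseline_flops(T, d, h):
    # The baseline recomputes all T LSTM steps for every window.
    return T * lstm_step_flops(d, h)

def sharnn_streaming_flops(T, k, q, d, h1, h2):
    # Claim 1: q new length-k bricks in layer 1, plus one length-(T/k)
    # pass in layer 2 whose inputs are the h1-dimensional summaries.
    return q * k * lstm_step_flops(d, h1) + (T // k) * lstm_step_flops(h1, h2)

# Illustrative shapes only (Google-13-like, not Table 1's exact setup):
print(baseline_flops(99, 32, 16))                    # per-window baseline
print(sharnn_streaming_flops(48, 8, 1, 32, 16, 16))  # streaming ShaRNN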
Table 1 clearly shows that, to achieve the best accuracy, MI-ShaRNN is up to 10x faster than the baselines and up to 5x faster than MI-RNN, even on a single-threaded hardware architecture. Figure 2 shows the computation vs. accuracy trade-off for three datasets. We observe that for a range of desired accuracy values, MI-ShaRNN is 5-10x faster than the baselines.
Next, we compute the accuracy and flops of MI-ShaRNN with different brick sizes k (see Figure 3 of the Appendix). As expected, the k ≈ √T setting requires the fewest flops for inference, but the story for accuracy is more complicated. For this dataset, we do not observe any particular trend for accuracy; all the accuracy values are similar, irrespective of k.
Deployment of Google-13 on Cortex M4: We use ShaRNN to deploy a real-time keyword spotting model (Google-13) on a Cortex M4 device. For time-series classification (Section 3), we need to slide windows and infer classes on each window. Due to the small working RAM of M4 devices (256KB), for real-time recognition the method needs to finish the following tasks within a budget of 120ms: collect data from the microphone buffer, process it, produce an ML-based inference, and smooth out predictions for one final output.
Standard LSTM models for this task work on 1s windows, whose featurization generates a 32 × 99 feature vector; here T = 99. So even a relatively small LSTM (hidden size 16) takes 456ms to process one window, exceeding the time budget (Table 2). MI-RNN is faster but still requires 225ms. Recently, a few CNN-based methods have also been designed for low-resource keyword spotting [26, 20]. However, with just 40 filters applied to the standard 32 × 99 filter-bank features, the working memory requirement balloons up to ≈ 500KB, which is beyond the memory budget of typical M4 devices. Similarly, the compute requirements of such architectures also easily exceed the latency budget of 120ms. See Figure 4 in the Appendix for a comparison between CNN models and ShaRNN.
In contrast, our method is able to produce an inference in only 70ms, and thus is well within the latency budget of the M4. Also, MI-ShaRNN holds two arrays in the working RAM: a) input features for 1 brick and b) buffered final states from previous bricks. For the deployed MI-ShaRNN model, with time-steps T = 49 and brick size k = 8, the working RAM requirement is just 1.5KB.

ShaRNN for Streaming Listen Attend Spell (LAS): LAS is a popular architecture for phoneme classification in a given audio stream. It forms non-overlapping time-windows of length 784 (≈ 8 seconds) and applies an encoder-decoder architecture to predict a sequence of phonemes. We study LAS applied to the TIMIT dataset [13]. We enhance the standard LAS architecture to exploit the time-annotated ground truth available in the TIMIT dataset, which improved the baseline phoneme error rate from the publicly reported 0.271 to 0.22. Both the encoder and decoder layers in standard and enhanced LAS consist of fully recurrent bi-LSTMs. So for each time window (of length 784) we would need to apply the entire encoder-decoder architecture to predict the phoneme sequence, implying a potential lag of ≈ 8 seconds (784 steps) in prediction. Instead, using ShaRNN we can divide both the encoder and decoder layers into bricks of size k. This makes it possible to produce a phoneme classification every k steps, thereby bringing the lag down from 784 steps to k steps. However, due to the small brick size k, in principle we might lose a significant amount of context information. But due to the corrective second layer in ShaRNN (Figure 1), we observe little loss in accuracy. Figure 2 shows the performance of two variants of ShaRNN + LAS: a) ShaRNN Listener, which uses ShaRNN only in the encoding layer, and b) ShaRNN Listener + Speller, which uses ShaRNN in both the encoding and decoding layers. Figure 2 (d) shows that using ShaRNN in both the encoder and decoder is more beneficial than using it only in the encoder layer. Furthermore, decreasing k from 784 to 64 leads to a marginal increase in error from 0.22 to 0.238 while reducing the lag significantly, from 8 seconds to 0.6 seconds. In fact, even at k = 64 this model's performance is significantly better than the reported error of standard LAS (0.27) [12]. See Appendix C for details. | 1. What is the main contribution of the paper, and how does it improve RNN models' inference efficiency?
2. What are the concerns regarding the bound in Claim 1 in the streaming setting?
3. Will the additional memory cost of O(T/k) in SRNN be an issue during inference?
4. How does the extension of multi-layer SRNN provide at least O(log T) inference complexity?
5. Are there any empirical improvements over LSTM and MI-RNN on multiple tasks? | Review | Review
Overall this is a well-written paper with proper motivation, clear design, and detailed theoretical and empirical analysis. The authors attempt to improve the inference efficiency of RNN models under limited computational resources while keeping the length of the receptive window. This is achieved by using a 2-layer RNN, whose first layer processes small bricks of the entire time series in parallel while the second layer gathers the outputs from all bricks. The authors also extend SRNN to the streaming setting with similar inference complexity.

One concern about the bound in Claim 1 in the streaming setting:
- In line 137: w is required to be a fairly small constant independent of T.
- In line 166: w = k * q (w is a multiple of k, and thus k needs to be a small constant).
- In line 173: The bound becomes O(\sqrt{qT} * C_1) iff k = \sqrt{T/q}, which is not o(1).
Therefore, I was expecting analysis of practical applications with large T and small w.

In SRNN, will the O(T/k) extra memory cost be an issue during inference?

The extension to multi-layer SRNN in Section 3.2 provides at least O(log T) inference complexity. The bound here is too idealized, but it would be great to see empirically how SRNN performs when adding more shallow layers.

The empirical improvements over LSTM and MI-RNN on multiple tasks are impressive.

==== Thanks for your responses. I have read the rebuttal and the other reviewers' comments. I am glad to see the experimental comparisons to CNNs and the refinement of your claims in the rebuttal, and I think including them in the manuscript or supplementary would better clarify and strengthen this paper. Overall this is a relatively simple yet effective solution for edge computing, which will keep becoming more important.
NIPS | Title
Communication-Optimal Distributed Clustering
Abstract
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster n points or n vertices in a graph distributed across s servers, for a worst-case partitioning the communication complexity in a point-to-point model is n · s, while in the broadcast model it is n+ s. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining, computer vision, and social network analysis. Example applications of clustering include grouping similar webpages by search engines, finding users with common interests in a social network, and identifying different objects in a picture or video. For these applications, one can model the objects that need to be clustered as points in Euclidean space Rd, where the similarities of two objects are represented by the Euclidean distance between the two points. Then the task of clustering is to choose k points as centers, so that the total distance between all input points to their corresponding closest center is minimized. Depending on different distance objective functions, three typical problems have been studied: k-means, k-median, and k-center.
The other popular approach for clustering is to model the input data as vertices of a graph, and the similarity between two objects is represented by the weight of the edge connecting the corresponding vertices. For this scenario, one is asked to partition the vertices into clusters so that the “highly connected” vertices belong to the same cluster. A widely-used approach for graph clustering is spectral clustering, which embeds the vertices of a graph into the points in Rk through the bottom k eigenvectors of the graph’s Laplacian matrix, and applies k-means on the embedded points.
∗Full version appears on arXiv, 2017, under the same title.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been widely used in practice, and have been the subject of extensive theoretical and experimental studies over the decades. However, these algorithms are designed for the centralized setting, and are not applicable in the setting of large-scale datasets that are maintained remotely by different sites. In particular, collecting the information from all the remote sites and performing a centralized clustering algorithm is infeasible due to high communication costs, and new distributed clustering algorithms with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the message-passing model, there is a communication channel between each pair of users. This may be impractical, and the so-called coordinator model can often be used in its place; in the coordinator model there is a centralized site called the coordinator, and all communication goes through the coordinator. This affects the total communication by a factor of two, since the coordinator can forward a message from one server to another and therefore simulate a point-to-point protocol. There is also an additional additive O(log s) bits per message, where s is the number of sites, since a server must specify to the coordinator where to forward its message. In the model with a broadcast channel, sometimes referred to as the blackboard model, the coordinator has the power to send a single message which is received by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the blackboard model is at least as powerful as the message-passing model, it is often unclear how to exploit its power to obtain better bounds for specific problems. Also, for a number of problems the communication complexity is the same in both models, such as computing the sum of s length-n bit vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20]. Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns) in the message passing model, and have communication cost Õ(n + s) in the blackboard model, where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing model has each site send a spectral sparsifier of its local data to the coordinator, who then merges them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for solving the graph clustering problem. Our algorithm in the blackboard model is technically more involved, as we show a particular recursive sampling procedure for building a spectral sparsifier can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of building spectral sparsifiers can be implemented with low communication in the blackboard model. Our algorithms demonstrate the surprising power of the blackboard model for clustering problems. Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally dominant linear systems in a distributed model. Any such system can be converted into a system involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1, computing a c-approximation for k-median, k-means, or k-center correctly with constant probability in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower bound, and show even for bicriteria clustering algorithms, which may output a constant factor more clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are based on communication and information complexity. Our results imply that existing algorithms [3] for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors. For the blackboard model, we present an algorithm for k-median and k-means that achieves an O(1)-approximation using Õ(s+ k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral clustering surprisingly well in real-world datasets. For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values
of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the ones given by the centralized algorithm, and the visualized results are almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical, distributed computation. When the number of sites is large, the blackboard model incurs significantly less communication than the message passing model, e.g., in the Twomoons dataset when there are 90 sites, the message passing model communicates 9 times as many edges as communicated in the blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means ([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices that can be used for distributed k-means. The main takeaway is that there is no previous work which develops protocols for spectral clustering in the common message passing and blackboard models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist (e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let G = (V,E,w) be an undirected graph with n vertices, m edges, and weight function w : V × V → R≥0. The set of neighbors of a vertex v is represented by N(v), and its degree is d_v = ∑_{u∼v} w(u, v). The maximum degree of G is defined to be ∆(G) = max_v{d_v}. For any set S ⊆ V, let µ(S) := ∑_{v∈S} d_v. For any sets S, T ⊆ V, we define w(S, T) := ∑_{u∈S,v∈T} w(u, v) to be the total weight of edges crossing S and T. For two sets X and Y, the symmetric difference of X and Y is defined as X△Y := (X \ Y) ∪ (Y \ X). For any matrix A ∈ R^{n×n}, let λ_1(A) ≤ · · · ≤ λ_n(A) = λ_max(A) be the eigenvalues of A. For any two matrices A, B ∈ R^{n×n}, we write A ⪯ B to represent that B − A is positive semi-definite (PSD). Notice that this condition implies that xᵀAx ≤ xᵀBx for any x ∈ R^n. Sometimes we also use a weaker notation (1 − ε)A ⪯_r B ⪯_r (1 + ε)A to indicate that (1 − ε)xᵀAx ≤ xᵀBx ≤ (1 + ε)xᵀAx for all x in the row span of A.
Graph Laplacian. The Laplacian matrix of G is an n × n matrix L_G defined by L_G = D_G − A_G, where A_G is the adjacency matrix of G defined by A_G(u, v) = w(u, v), and D_G is the n × n diagonal matrix with D_G(v, v) = d_v for any v ∈ V[G]. Alternatively, we can write L_G with respect to a signed edge-vertex incidence matrix: we assign every edge e = {u, v} an arbitrary orientation, and let B_G(e, v) = 1 if v is e's head, B_G(e, v) = −1 if v is e's tail, and B_G(e, v) = 0 otherwise. We further define a diagonal matrix W_G ∈ R^{m×m}, where W_G(e, e) = w_e for any edge e ∈ E[G]. Then, we can write L_G as L_G = B_Gᵀ W_G B_G. The normalized Laplacian matrix of G is defined by ℒ_G := D_G^{−1/2} L_G D_G^{−1/2} = I − D_G^{−1/2} A_G D_G^{−1/2}. We sometimes drop the subscript G when the underlying graph is clear from the context.
Spectral sparsification. For any undirected and weighted graph G = (V,E,w), we say a subgraph H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if

(1 − ε)L_G ⪯ L_H ⪯ (1 + ε)L_G.   (1)

By definition, it is easy to show that, if we decompose the edge set of a graph G = (V,E) into E_1, . . . , E_ℓ for a constant ℓ and H_i is a spectral sparsifier of G_i = (V, E_i) for any 1 ≤ i ≤ ℓ, then the graph formed by the union of the edge sets from the H_i is a spectral sparsifier of G. It is known that, for any undirected graph G of n vertices, there is a (1 + ε)-spectral sparsifier of G with O(n/ε²) edges, and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves the cluster structure of a graph.
Models of computation. We will study distributed clustering in two models for distributed data: the message passing model and the blackboard model. The message passing model represents those distributed computation systems with point-to-point communication, and the blackboard model represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are s sites P1, . . . ,Ps, and one coordinator. These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to
as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model up to small factors. The input is initially distributed at the s sites. The computation is in terms of rounds: at the beginning of each round, the coordinator sends a message to some of the s sites, and then each of those sites that have been contacted by the coordinator sends a message back to the coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the coordinator is simply a blackboard where these s sites P1, . . . ,Ps can share information; in other words, if one site sends a message to the coordinator/blackboard then all the other s− 1 sites can see this information without further communication. The order for the sites to speak is decided by the contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the channels. The two models are now standard in multiparty communication complexity (see, e.g., [5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing community; the main difference is that in our models we do not post any bandwidth limitations at each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph G = (V,E) can be partitioned into k clusters, where vertices in each cluster S are highly connected to each other, and there are fewer edges between S and V \ S. To formalize this notion, we define the conductance of a vertex set S by φ_G(S) := w(S, V \ S)/µ(S). Generalizing the Cheeger constant, we define the k-way expansion constant of graph G by ρ(k) := min_{partition A_1, . . . , A_k} max_{1≤i≤k} φ_G(A_i). Notice that a graph G has k clusters if the value of ρ(k) is small.
Lee et al. [12] relate the value of ρ(k) to λ_k(L_G) by the following higher-order Cheeger inequality:

λ_k(L_G)/2 ≤ ρ(k) ≤ O(k²)·√(λ_k(L_G)).
Based on this, a large gap between λ_{k+1}(L_G) and ρ(k) implies (i) the existence of a k-way partition {S_i}_{i=1}^{k} with smaller value of φ_G(S_i) ≤ ρ(k), and (ii) any (k + 1)-way partition of G contains a subset with high conductance ρ(k + 1) ≥ λ_{k+1}(L_G)/2. Hence, a large gap between λ_{k+1}(L_G) and ρ(k) ensures that G has exactly k clusters.
In the following, we assume that Υ := λ_{k+1}(L_G)/ρ(k) = Ω(k³), as this assumption was used in the literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in this section are based on the following spectral clustering algorithm: (i) compute the k eigenvectors f_1, . . . , f_k of L_G associated with λ_1(L_G), . . . , λ_k(L_G); (ii) embed every vertex v to a point in R^k through the embedding F(v) = (1/√d_v) · (f_1(v), . . . , f_k(v)); (iii) run k-means on the embedded points {F(v)}_{v∈V}, and group the vertices of G into k clusters according to the output of k-means.
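As an illustration, here is a minimal sketch of steps (i)-(iii) with standard scientific-Python routines; the routine name is ours, not the paper's code, and computing the bottom-k eigenvectors via which='SA' is one common choice among several.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_cluster(W, k):
    # W: symmetric (sparse) weighted adjacency matrix; returns k labels.
    d = np.asarray(W.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    L_norm = sp.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    # (i) bottom-k eigenvectors of the normalized Laplacian
    _, F = eigsh(L_norm, k=k, which='SA')
    # (ii) embed vertex v as F(v) = (1/sqrt(d_v)) * (f_1(v), ..., f_k(v))
    emb = F / np.sqrt(d)[:, None]
    # (iii) k-means on the embedded points
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)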
3.1 The message passing model
We assume the edges of the input graph G = (V,E) are arbitrarily allocated among s sites P_1, · · · , P_s, and we use E_i to denote the edge set maintained by site P_i. Our proposed algorithm consists of two steps: (i) every P_i computes a linear-sized (1 + c)-spectral sparsifier H_i of G_i := (V, E_i), for a small constant c ≤ 1/10, and sends the edge set of H_i, denoted by E′_i, to the coordinator; (ii) the coordinator runs a spectral clustering algorithm on the union of the received graphs H := (V, ⋃_{i=1}^{s} E′_i). The theorem below summarizes the performance of this algorithm, and shows that its approximation guarantee is as good as the provable guarantee of spectral clustering known in the centralized setting [17]. Theorem 3.1. Let G = (V,E) be an n-vertex graph with Υ = Ω(k³), and suppose the edges of G are arbitrarily allocated among s sites. Assume S_1, · · · , S_k is an optimal partition that achieves ρ(k). Then, the algorithm above computes a partition A_1, . . . , A_k satisfying vol(A_i△S_i) = O(k³ · Υ^{−1} · vol(S_i)) for any 1 ≤ i ≤ k. The total communication cost of this algorithm is Õ(ns) bits.
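Schematically, the protocol looks as follows; spectral_sparsify and build_adjacency are hypothetical placeholders (the former could be, e.g., effective-resistance sampling as in [13]), and spectral_cluster refers to the sketch above.

def site_message(local_edges, n, c=0.1):
    # Step (i): site P_i sparsifies its local graph G_i = (V, E_i) and
    # sends only the reweighted sparsifier edges E'_i to the coordinator.
    return spectral_sparsify(local_edges, n, eps=c)   # ~O(n/c^2) edges

def coordinator(messages, n, k):
    # Step (ii): the union of the s received sparsifiers is itself a
    # sparsifier of G, on which spectral clustering is run.
    union_edges = [e for msg in messages for e in msg]
    return spectral_cluster(build_adjacency(union_edges, n), k)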
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor. Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers: for any n× n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication. This follows since such a spectral sparsifier can be used to solve the spectral clustering problem. Spectral sparsification has played an important role in designing fast algorithms from different areas, e.g., machine learning, and numerical linear algebra. Hence our lower bound result for constructing spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the blackboard model. Our result is based on the observation that a spectral sparsifier preserves the structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described as follows: for any input PSD matrix K with λ_max(K) ≤ λ_u and all the non-zero eigenvalues of K at least λ_ℓ, we define d = ⌈log₂(λ_u/λ_ℓ)⌉ and construct a chain of d + 1 matrices
[K(0),K(1), . . . ,K(d)], (2)
where γ(i) = λ_u/2^i and K(i) = K + γ(i)·I. Notice that in the chain above every K(i − 1) is obtained by adding weights to the diagonal entries of K(i), and K(i − 1) approximates K(i) as long as the weights added to the diagonal entries are small. We will construct this chain recursively, so that K(0) has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since K is the Laplacian matrix of a graph G, it is easy to see that d = O(log n) as long as the edge weights of G are polynomially upper-bounded in n. Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) K ⪯_r K(d) ⪯_r 2K; (2) K(ℓ) ⪯ K(ℓ − 1) ⪯ 2K(ℓ) for all ℓ ∈ {1, . . . , d}; (3) K(0) ⪯ 2γ(0)·I ⪯ 2K(0).
Based on Lemma 3.3, we will construct a chain of matrices

[K̃(0), K̃(1), . . . , K̃(d)]   (3)

in the blackboard model, such that every K̃(ℓ) is a spectral sparsifier of K(ℓ), and every K̃(ℓ + 1) can be constructed from K̃(ℓ). The basic idea behind our construction is to use the relations among different K(ℓ) shown in Lemma 3.3 and the fact that, for any K = BᵀB, sampling rows of B with respect to their leverage scores can be used to obtain a matrix approximating K.

Theorem 3.4. Let G be an undirected graph on n vertices, where the edges of G are allocated among s sites, and the edge weights are polynomially upper bounded in n. Then, a spectral sparsifier of G can be constructed with Õ(n + s) bits of communication in the blackboard model. That is, the chain (3) can be constructed with Õ(n + s) bits of communication in the blackboard model.
Proof. Let K = BᵀB be the Laplacian matrix of the underlying graph G, where B ∈ Rm×n is the edge-vertex incidence matrix of G. We will prove that every K̃(i+ 1) can be constructed based on K̃(i) with Õ(n+ s) bits of communication. This implies that K̃(d), a (1 + ε)-spectral sparsifier of K, can be constructed with Õ(n+ s) bits of communication, as the length of the chain d = O(log n).
First of all, notice that λ_u ≤ 2n, and the value of n can be obtained with communication cost Õ(n + s) (different sites sequentially write the new IDs of the vertices on the blackboard). In the following we assume that λ_u is the upper bound of λ_max that we actually obtained on the blackboard.
Base case of ℓ = 0: By definition, K(0) = K + λ_u · I, and (1/2) · K(0) ⪯ γ(0) · I ⪯ K(0), due to Statement 3 of Lemma 3.3. Let ⊕ denote appending the rows of one matrix to another. We define B_{γ(0)} = B ⊕ √γ(0) · I, and write K(0) = K + γ(0) · I = B_{γ(0)}ᵀ B_{γ(0)}. By defining τ_i = b_iᵀ (K(0))⁺ b_i for each row b_i of B_{γ(0)}, we have τ_i ≤ b_iᵀ (γ(0) · I)⁺ b_i ≤ 2 · τ_i. Let τ̃_i = b_iᵀ (γ(0) · I)⁺ b_i be the leverage score of b_i approximated using γ(0) · I, and let τ̃ be the vector of approximate leverage scores, with the leverage scores of the n rows corresponding to √γ(0) · I rounded up to 1. Then, with high probability, sampling O(ε⁻² n log n) rows of B will give a matrix K̃(0) such that (1 − ε)K(0) ⪯ K̃(0) ⪯ (1 + ε)K(0). Notice that, as every row of B corresponds to an edge of G, the approximate leverage scores τ̃_i for different edges can be computed locally by the different sites maintaining the edges, and the sites only need to send the information of the sampled edges to the blackboard; hence the communication cost is Õ(n + s) bits.
Induction step: We assume that (1 − ε)K(ℓ) ⪯_r K̃(ℓ) ⪯_r (1 + ε)K(ℓ), and the blackboard maintains the matrix K̃(ℓ). This implies that (1 − ε)/(1 + ε) · K(ℓ) ⪯_r 1/(1 + ε) · K̃(ℓ) ⪯_r K(ℓ). Combining this with Statement 2 of Lemma 3.3, we have that

(1 − ε)/(2(1 + ε)) · K(ℓ + 1) ⪯_r 1/(2(1 + ε)) · K̃(ℓ) ⪯_r K(ℓ + 1).

We apply the same sampling procedure as in the base case, and obtain a matrix K̃(ℓ + 1) such that (1 − ε)K(ℓ + 1) ⪯_r K̃(ℓ + 1) ⪯_r (1 + ε)K(ℓ + 1). Notice that, since K̃(ℓ) is written on the blackboard, the probabilities used for sampling individual edges can be computed locally by different sites, and in each round only the sampled edges will be sent to the blackboard in order for the blackboard to obtain K̃(ℓ + 1). Hence, the total communication cost in each iteration is Õ(n + s) bits. Combining this with the fact that the chain length d = O(log n) proves the theorem.
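For intuition, here is a dense NumPy sketch of the per-level sampling step used in this proof; the oversampling constant (8 log n / ε²) and the handling of the surrogate M_approx are illustrative assumptions, and a practical implementation would use sparse solvers rather than a pseudo-inverse.

import numpy as np

def sample_level(B_gamma, M_approx, eps, n):
    # One level of the chain: B_gamma stacks B on top of sqrt(gamma)*I
    # (its last n rows), and M_approx is a coarse PSD surrogate for
    # K(l) = K + gamma*I (gamma(0)*I at the base case, a scaling of the
    # previous sparsifier plus the new diagonal term in the induction).
    Minv = np.linalg.pinv(M_approx)
    # approximate leverage scores: tau_i ~ b_i^T M_approx^+ b_i
    tau = np.einsum('ij,jk,ik->i', B_gamma, Minv, B_gamma)
    tau[-n:] = 1.0                 # rows of sqrt(gamma)*I rounded up to 1
    p = np.minimum(1.0, tau * 8.0 * np.log(n) / eps ** 2)  # assumed constant
    keep = np.random.rand(len(p)) < p
    B_s = B_gamma[keep] / np.sqrt(p[keep, None])           # reweight rows
    return B_s.T @ B_s             # whp a (1 +/- eps) approximation of K(l)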
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters, we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n+ s) bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model, since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including k-median, k-means and k-center. Let P be a set of points of size n in a metric space with distance function d(·, ·), and let k ≤ n be an integer. In the k-center problem we want to find a set C (|C| = k) such that max_{p∈P} d(p, C) is minimized, where d(p, C) = min_{c∈C} d(p, c). In k-median and k-means we replace the objective function max_{p∈P} d(p, C) with ∑_{p∈P} d(p, C) and ∑_{p∈P} (d(p, C))², respectively.
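A small sketch of the three objectives for a candidate center set follows; the function name and Euclidean distance choice are ours, for illustration.

import numpy as np

def clustering_objectives(P, C):
    # P: n x dim array of points; C: k x dim array of candidate centers.
    dist = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)  # n x k
    d_pC = dist.min(axis=1)                    # d(p, C) = min_c d(p, c)
    return {
        'k-center': d_pC.max(),                # max_p d(p, C)
        'k-median': d_pC.sum(),                # sum_p d(p, C)
        'k-means': (d_pC ** 2).sum(),          # sum_p d(p, C)^2
    }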
4.1 The message passing model
As mentioned, for constant dimensional Euclidean space and a constant c > 1, there are algorithms that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits of communication.
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due to space constraints we defer the proof to the full version of this paper. The proof uses tools from multiparty communication complexity. We in fact can prove a stronger statement that any algorithm that can differentiate whether we have k points or k + 1 points in total in the message passing model needs Ω(sk) bits of communication. Theorem 4.1. For any c > 1, computing c-approximation for k-median, k-means or k-center correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a (c_1, c_2)-approximation (c_1, c_2 > 1) if, whenever the optimal solution costs W using k centers, the output of the algorithm costs at most c_1·W while using at most c_2·k centers. We can show that for k-median and k-means, the Ω(sk) lower bound holds even for algorithms with bicriteria approximations. The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any c ∈ [1, 1.01], computing (7.1− 6c, c)-bicriteria-approximation for k-median or k-means correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s+ k) bits of communication for k-median and k-means. Due to space constraints we defer the description of the algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel guessing algorithm in the blackboard model using Õ(s+ k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and k-center correctly with probability 0.9 in the blackboard model using Õ(s+k) bits of communication.
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing and blackboard models. We will compare the following three algorithms. (1) Baseline: each site sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the quality of the results via the normalized cut, defined as ncut(A_1, . . . , A_k) = (1/2) ∑_{i∈[k]} w(A_i, V \ A_i)/vol(A_i), which is a standard objective function to be minimized by spectral clustering algorithms.
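A direct translation of this objective reads as follows; W is a symmetric weighted adjacency matrix (dense here for brevity) and labels[v] is the cluster of vertex v.

import numpy as np

def ncut(W, labels):
    total = 0.0
    for c in np.unique(labels):
        in_c = labels == c
        vol = W[in_c, :].sum()                 # vol(A_i): sum of degrees
        cut = W[np.ix_(in_c, ~in_c)].sum()     # w(A_i, V \ A_i)
        total += cut / vol
    return 0.5 * total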
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains n = 14,000 coordinates in R². We consider each point to be a vertex. For any two vertices u, v, we add an edge with weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 0.1 when one vertex is among the 7000 nearest points of the other. This construction results in a graph with about 110,000,000 edges.
• Gauss: this dataset contains n = 10,000 points in R². There are 4 clusters in this dataset, each generated using a Gaussian distribution. We construct a complete graph as the similarity graph. For any two vertices u, v, we define the weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 1. The resulting graph has about 100,000,000 edges.
• Sculpture: a photo of The Greek Slave. We use an 80 × 150 version of this photo where each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point in R⁵, i.e., (x, y, r, g, b), where the latter three coordinates are the RGB values. For any two vertices u, v, we put an edge between u, v with weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 0.5 if one of u, v is among the 5000 nearest points of the other. This results in a graph with about 70,000,000 edges.
In the distributed model edges are randomly partitioned across s sites.
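The similarity-graph constructions above all follow the same Gaussian-kernel recipe; here is a sketch of it, where the symmetrization step (keeping an edge if either endpoint is among the other's nearest points) is our reading of the construction.

import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import NearestNeighbors

def similarity_graph(points, num_neighbors, sigma):
    # Gaussian-kernel weights w(u, v) = exp(-||u - v||^2 / sigma^2) over
    # nearest neighbors, symmetrized afterwards.
    nn = NearestNeighbors(n_neighbors=num_neighbors + 1).fit(points)
    dist, idx = nn.kneighbors(points)          # column 0 is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), num_neighbors)
    cols = idx[:, 1:].ravel()
    w = np.exp(-(dist[:, 1:].ravel() ** 2) / sigma ** 2)
    W = sp.csr_matrix((w, (rows, cols)), shape=(n, n))
    return W.maximum(W.T)                      # symmetrize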
Results on clustering quality. We visualize the clustered results for the Twomoons dataset in Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar qualities. For simplicity, here we only present the visualization for s = 15. Similar results were observed when we varied the values of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms. The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the ncut value of Blackboard is independent of s.
Results on Communication Costs. We compare the communication costs of different algorithms in Figure 3. We observe that while achieving similar clustering qualities as Baseline, both MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders of magnitudes in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323. | 1. What are the main contributions and novel aspects introduced by the paper in distributed clustering?
2. What are the strengths of the paper, particularly in the theoretical analysis of communication complexity?
3. Do you have any questions or concerns regarding the paper's experimental results and their relation to the theoretical findings? | Review | Review
The paper considers the communication complexity of distributed clustering in two different models of communication: point-to-point (message passing) and broadcast (blackboard). Suppose there are n nodes to be clustered across s sites (a partition of edges). In the message passing model, an algorithm is presented that takes about ns·polylog(ns) bits of communication, and an algorithm is presented for the blackboard case that takes about (n+s)·polylog(ns) bits of communication. Ω(ns) and Ω(n+s) are also lower bounds in the respective models. Some other extensions of these results are presented, and experimental verification over real datasets is shown.

There is one point I did not understand. It is said in the description of experimental results: "For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the one given by the centralized algorithm, and the visualized results are almost identical." This somehow does not agree with the theoretical result that there should be a huge advantage for the blackboard model (n+s compared to ns). Why is there this discrepancy?
NIPS | Title
Communication-Optimal Distributed Clustering
Abstract
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster n points or n vertices in a graph distributed across s servers, for a worst-case partitioning the communication complexity in a point-to-point model is n · s, while in the broadcast model it is n+ s. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining, computer vision, and social network analysis. Example applications of clustering include grouping similar webpages by search engines, finding users with common interests in a social network, and identifying different objects in a picture or video. For these applications, one can model the objects that need to be clustered as points in Euclidean space $\mathbb{R}^d$, where the similarities of two objects are represented by the Euclidean distance between the two points. Then the task of clustering is to choose $k$ points as centers, so that the total distance from all input points to their corresponding closest centers is minimized. Depending on different distance objective functions, three typical problems have been studied: $k$-means, $k$-median, and $k$-center.
The other popular approach for clustering is to model the input data as vertices of a graph, where the similarity between two objects is represented by the weight of the edge connecting the corresponding vertices. For this scenario, one is asked to partition the vertices into clusters so that the “highly connected” vertices belong to the same cluster. A widely-used approach for graph clustering is spectral clustering, which embeds the vertices of a graph into points in $\mathbb{R}^k$ through the bottom $k$ eigenvectors of the graph's Laplacian matrix, and applies $k$-means on the embedded points.
∗Full version appears on arXiv, 2017, under the same title.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been widely used in practice, and have been the subject of extensive theoretical and experimental studies over the decades. However, these algorithms are designed for the centralized setting, and are not applicable in the setting of large-scale datasets that are maintained remotely by different sites. In particular, collecting the information from all the remote sites and performing a centralized clustering algorithm is infeasible due to high communication costs, and new distributed clustering algorithms with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the message-passing model, there is a communication channel between each pair of users. This may be impractical, and the so-called coordinator model can often be used in its place; in the coordinator model there is a centralized site called the coordinator, and all communication goes through the coordinator. This affects the total communication by a factor of two, since the coordinator can forward a message from one server to another and therefore simulate a point-to-point protocol. There is also an additional additive O(log s) bits per message, where s is the number of sites, since a server must specify to the coordinator where to forward its message. In the model with a broadcast channel, sometimes referred to as the blackboard model, the coordinator has the power to send a single message which is received by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the blackboard model is at least as powerful as the message-passing model, it is often unclear how to exploit its power to obtain better bounds for specific problems. Also, for a number of problems the communication complexity is the same in both models, such as computing the sum of s length-n bit vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20]. Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns) in the message passing model, and have communication cost Õ(n + s) in the blackboard model, where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing model has each site send a spectral sparsifier of its local data to the coordinator, who then merges them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for solving the graph clustering problem. Our algorithm in the blackboard model is technically more involved, as we show a particular recursive sampling procedure for building a spectral sparsifier can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of building spectral sparsifiers can be implemented with low communication in the blackboard model. Our algorithms demonstrate the surprising power of the blackboard model for clustering problems. Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally dominant linear systems in a distributed model. Any such system can be converted into a system involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1, computing a c-approximation for k-median, k-means, or k-center correctly with constant probability in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower bound, and show even for bicriteria clustering algorithms, which may output a constant factor more clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are based on communication and information complexity. Our results imply that existing algorithms [3] for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors. For the blackboard model, we present an algorithm for k-median and k-means that achieves an O(1)-approximation using Õ(s+ k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral clustering surprisingly well in real-world datasets. For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the ones given by the centralized algorithm, and the visualized results are almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical, distributed computation. When the number of sites is large, the blackboard model incurs significantly less communication than the message passing model, e.g., in the Twomoons dataset when there are 90 sites, the message passing model communicates 9 times as many edges as communicated in the blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means ([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices that can be used for distributed k-means. The main takeaway is that there is no previous work which develops protocols for spectral clustering in the common message passing and blackboard models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist (e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let $G = (V, E, w)$ be an undirected graph with $n$ vertices, $m$ edges, and weight function $w: V \times V \to \mathbb{R}_{\geq 0}$. The set of neighbors of a vertex $v$ is represented by $N(v)$, and its degree is $d_v = \sum_{u \sim v} w(u, v)$. The maximum degree of $G$ is defined to be $\Delta(G) = \max_v \{d_v\}$. For any set $S \subseteq V$, let $\mu(S) \triangleq \sum_{v \in S} d_v$. For any sets $S, T \subseteq V$, we define $w(S, T) \triangleq \sum_{u \in S, v \in T} w(u, v)$ to be the total weight of edges crossing $S$ and $T$. For two sets $X$ and $Y$, the symmetric difference of $X$ and $Y$ is defined as $X \triangle Y \triangleq (X \setminus Y) \cup (Y \setminus X)$. For any matrix $A \in \mathbb{R}^{n \times n}$, let $\lambda_1(A) \leq \cdots \leq \lambda_n(A) = \lambda_{\max}(A)$ be the eigenvalues of $A$. For any two matrices $A, B \in \mathbb{R}^{n \times n}$, we write $A \preceq B$ to represent that $B - A$ is positive semi-definite (PSD). Notice that this condition implies that $x^\top A x \leq x^\top B x$ for any $x \in \mathbb{R}^n$. Sometimes we also use a weaker notation $(1-\varepsilon) A \preceq_r B \preceq_r (1+\varepsilon) A$ to indicate that $(1-\varepsilon)\, x^\top A x \leq x^\top B x \leq (1+\varepsilon)\, x^\top A x$ for all $x$ in the row span of $A$.
Graph Laplacian. The Laplacian matrix of $G$ is an $n \times n$ matrix $L_G$ defined by $L_G = D_G - A_G$, where $A_G$ is the adjacency matrix of $G$ defined by $A_G(u, v) = w(u, v)$, and $D_G$ is the $n \times n$ diagonal matrix with $D_G(v, v) = d_v$ for any $v \in V[G]$. Alternatively, we can write $L_G$ with respect to a signed edge-vertex incidence matrix: we assign every edge $e = \{u, v\}$ an arbitrary orientation, and let $B_G(e, v) = 1$ if $v$ is $e$'s head, $B_G(e, v) = -1$ if $v$ is $e$'s tail, and $B_G(e, v) = 0$ otherwise. We further define a diagonal matrix $W_G \in \mathbb{R}^{m \times m}$, where $W_G(e, e) = w_e$ for any edge $e \in E[G]$. Then, we can write $L_G$ as $L_G = B_G^\top W_G B_G$. The normalized Laplacian matrix of $G$ is defined by $\mathcal{L}_G \triangleq D_G^{-1/2} L_G D_G^{-1/2} = I - D_G^{-1/2} A_G D_G^{-1/2}$. We sometimes drop the subscript $G$ when the underlying graph is clear from the context.
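As a concrete companion to these definitions, the following minimal numpy sketch builds both $L_G$ and $\mathcal{L}_G$ from a weighted adjacency matrix. The function name and the dense representation are our illustrative choices, not from the paper.

```python
import numpy as np

def laplacians(A):
    """Given a symmetric weighted adjacency matrix A (n x n), return the
    Laplacian L = D - A and the normalized Laplacian I - D^{-1/2} A D^{-1/2};
    isolated vertices contribute a zero row/column to the normalized form."""
    d = A.sum(axis=1)                                  # degrees d_v
    L = np.diag(d) - A
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5       # guard against d_v = 0
    d_inv_sqrt[d == 0] = 0.0
    L_norm = np.eye(len(d)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return L, L_norm
```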
Spectral sparsification. For any undirected and weighted graph G = (V,E,w), we say a subgraph H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if
$$(1-\varepsilon)\, L_G \preceq L_H \preceq (1+\varepsilon)\, L_G. \qquad (1)$$
By definition, it is easy to show that, if we decompose the edge set of a graph $G = (V, E)$ into $E_1, \ldots, E_\ell$ for a constant $\ell$ and $H_i$ is a spectral sparsifier of $G_i = (V, E_i)$ for any $1 \leq i \leq \ell$, then the graph formed by the union of edge sets from the $H_i$ is a spectral sparsifier of $G$. It is known that, for any undirected graph $G$ of $n$ vertices, there is a $(1+\varepsilon)$-spectral sparsifier of $G$ with $O(n/\varepsilon^2)$ edges, and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves the cluster structure of a graph.
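Condition (1) can be verified numerically by a congruence transform: assuming $H$ and $G$ share the same nullspace (same connected components), (1) holds exactly when the nonzero eigenvalues of $L_G^{+/2} L_H L_G^{+/2}$ lie in $[1-\varepsilon, 1+\varepsilon]$. A small dense sketch under that assumption, with our own naming:

```python
import numpy as np

def psd_sqrt(K):
    """Symmetric PSD square root via the eigendecomposition of K."""
    w, V = np.linalg.eigh(K)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def is_spectral_sparsifier(L_G, L_H, eps, tol=1e-8):
    """Check (1): nonzero eigenvalues of L_G^{+/2} L_H L_G^{+/2} must lie
    in [1 - eps, 1 + eps]; the shared nullspace eigenvalues are dropped."""
    P = np.linalg.pinv(psd_sqrt(L_G))
    vals = np.linalg.eigvalsh(P @ L_H @ P)
    vals = vals[np.abs(vals) > tol]
    return bool(vals.min() >= 1 - eps - tol and vals.max() <= 1 + eps + tol)
```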
Models of computation. We will study distributed clustering in two models for distributed data: the message passing model and the blackboard model. The message passing model represents those distributed computation systems with point-to-point communication, and the blackboard model represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are $s$ sites $P_1, \ldots, P_s$, and one coordinator. These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model up to small factors. The input is initially distributed at the $s$ sites. The computation is in terms of rounds: at the beginning of each round, the coordinator sends a message to some of the $s$ sites, and then each of those sites that have been contacted by the coordinator sends a message back to the coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the coordinator is simply a blackboard where the $s$ sites $P_1, \ldots, P_s$ can share information; in other words, if one site sends a message to the coordinator/blackboard then all the other $s-1$ sites can see this information without further communication. The order for the sites to speak is decided by the contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the channels. The two models are now standard in multiparty communication complexity (see, e.g., [5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing community; the main difference is that in our models we do not post any bandwidth limitations at each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph $G = (V, E)$ can be partitioned into $k$ clusters, where vertices in each cluster $S$ are highly connected to each other, and there are fewer edges between $S$ and $V \setminus S$. To formalize this notion, we define the conductance of a vertex set $S$ by $\phi_G(S) \triangleq w(S, V \setminus S)/\mu(S)$. Generalizing the Cheeger constant, we define the $k$-way expansion constant of graph $G$ by $\rho(k) \triangleq \min_{\text{partition } A_1, \ldots, A_k} \max_{1 \leq i \leq k} \phi_G(A_i)$. Notice that a graph $G$ has $k$ clusters if the value of $\rho(k)$ is small.
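For instance, once $G$ is stored as a weighted adjacency matrix, $\phi_G(S)$ is a two-line computation (naming is ours):

```python
import numpy as np

def conductance(A, S):
    """phi_G(S) = w(S, V \\ S) / mu(S), with S a boolean vertex mask."""
    cut = A[S][:, ~S].sum()      # total edge weight leaving S
    mu = A[S].sum()              # sum of degrees of vertices in S
    return cut / mu
```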
Lee et al. [12] relate the value of $\rho(k)$ to $\lambda_k(\mathcal{L}_G)$ by the following higher-order Cheeger inequality:
$$\frac{\lambda_k(\mathcal{L}_G)}{2} \leq \rho(k) \leq O(k^2) \sqrt{\lambda_k(\mathcal{L}_G)}.$$
Based on this, a large gap between $\lambda_{k+1}(\mathcal{L}_G)$ and $\rho(k)$ implies (i) the existence of a $k$-way partition $\{S_i\}_{i=1}^{k}$ with smaller value of $\phi_G(S_i) \leq \rho(k)$, and (ii) any $(k+1)$-way partition of $G$ contains a subset with high conductance $\rho(k+1) \geq \lambda_{k+1}(\mathcal{L}_G)/2$. Hence, a large gap between $\lambda_{k+1}(\mathcal{L}_G)$ and $\rho(k)$ ensures that $G$ has exactly $k$ clusters.
In the following, we assume that $\Upsilon \triangleq \lambda_{k+1}(\mathcal{L}_G)/\rho(k) = \Omega(k^3)$, as this assumption was used in the literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in this section are based on the following spectral clustering algorithm: (i) compute the $k$ eigenvectors $f_1, \ldots, f_k$ of $\mathcal{L}_G$ associated with $\lambda_1(\mathcal{L}_G), \ldots, \lambda_k(\mathcal{L}_G)$; (ii) embed every vertex $v$ to a point in $\mathbb{R}^k$ through the embedding $F(v) = \frac{1}{\sqrt{d_v}} \cdot (f_1(v), \ldots, f_k(v))$; (iii) run $k$-means on the embedded points $\{F(v)\}_{v \in V}$, and group the vertices of $G$ into $k$ clusters according to the output of $k$-means.
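A centralized reference implementation of steps (i)-(iii), using numpy's symmetric eigensolver and scikit-learn's $k$-means; it assumes the graph has no isolated vertices and fixes notation only, it is not the distributed protocol:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    """(i) bottom-k eigenvectors of the normalized Laplacian, (ii) the
    degree-scaled embedding F(v), (iii) k-means on the embedded points."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))            # assumes d_v > 0
    L_norm = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_norm)                  # eigenvalues ascending
    F = vecs[:, :k] / np.sqrt(d)[:, None]             # F(v) = (f_1(v),...,f_k(v)) / sqrt(d_v)
    return KMeans(n_clusters=k, n_init=10).fit_predict(F)
```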
3.1 The message passing model
We assume the edges of the input graph $G = (V, E)$ are arbitrarily allocated among $s$ sites $P_1, \ldots, P_s$, and we use $E_i$ to denote the edge set maintained by site $P_i$. Our proposed algorithm consists of two steps: (i) every $P_i$ computes a linear-sized $(1+c)$-spectral sparsifier $H_i$ of $G_i \triangleq (V, E_i)$, for a small constant $c \leq 1/10$, and sends the edge set of $H_i$, denoted by $E_i'$, to the coordinator; (ii) the coordinator runs a spectral clustering algorithm on the union of the received graphs, $H \triangleq \left(V, \bigcup_{i=1}^{s} E_i'\right)$. The theorem below summarizes the performance of this algorithm, and shows that its approximation guarantee is as good as the provable guarantee of spectral clustering known in the centralized setting [17]. Theorem 3.1. Let $G = (V, E)$ be an $n$-vertex graph with $\Upsilon = \Omega(k^3)$, and suppose the edges of $G$ are arbitrarily allocated among $s$ sites. Assume $S_1, \ldots, S_k$ is an optimal partition that achieves $\rho(k)$. Then, the algorithm above computes a partition $A_1, \ldots, A_k$ satisfying $\mathrm{vol}(A_i \triangle S_i) = O\left(k^3 \cdot \Upsilon^{-1} \cdot \mathrm{vol}(S_i)\right)$ for any $1 \leq i \leq k$. The total communication cost of this algorithm is $\tilde{O}(ns)$ bits.
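Schematically, the two-step protocol looks as follows. Here `sparsify` stands in for any linear-size $(1+c)$-spectral sparsification routine (the paper invokes [13]) and `cluster` for the spectral clustering procedure above; both are passed in as parameters because the protocol itself is agnostic to their implementation.

```python
import numpy as np

def message_passing_clustering(site_edge_lists, n, k, sparsify, cluster):
    """Each site P_i sparsifies its local subgraph G_i = (V, E_i) and ships
    only the sparsifier's reweighted edges (u, v, w) to the coordinator,
    which clusters the union H (Theorem 3.1)."""
    received = []
    for edges in site_edge_lists:            # runs locally at each site
        received.extend(sparsify(edges, n))  # O~(n) edges per site are sent
    A = np.zeros((n, n))                     # coordinator merges H
    for u, v, w in received:
        A[u, v] += w
        A[v, u] += w
    return cluster(A, k)
```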
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor. Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers: for any n× n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication. This follows since such a spectral sparsifier can be used to solve the spectral clustering problem. Spectral sparsification has played an important role in designing fast algorithms from different areas, e.g., machine learning, and numerical linear algebra. Hence our lower bound result for constructing spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the blackboard model. Our result is based on the observation that a spectral sparsifier preserves the structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described as follows: for any input PSD matrix $K$ with $\lambda_{\max}(K) \leq \lambda_u$ and all the non-zero eigenvalues of $K$ at least $\lambda_\ell$, we define $d = \lceil \log_2(\lambda_u/\lambda_\ell) \rceil$ and construct a chain of $d+1$ matrices
$$[K(0), K(1), \ldots, K(d)], \qquad (2)$$
where $\gamma(i) = \lambda_u/2^i$ and $K(i) = K + \gamma(i) I$. Notice that in the chain above every $K(i-1)$ is obtained by adding weights to the diagonal entries of $K(i)$, and $K(i-1)$ approximates $K(i)$ as long as the weights added to the diagonal entries are small. We will construct this chain recursively, so that $K(0)$ has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since $K$ is the Laplacian matrix of a graph $G$, it is easy to see that $d = O(\log n)$ as long as the edge weights of $G$ are polynomially upper-bounded in $n$. Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) $K \preceq_r K(d) \preceq_r 2K$; (2) $K(\ell) \preceq K(\ell-1) \preceq 2K(\ell)$ for all $\ell \in \{1, \ldots, d\}$; (3) $K(0) \preceq 2\gamma(0) I \preceq 2K(0)$.
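Materializing the chain (2) is elementary; a dense sketch, assuming the edge weights are polynomially bounded in $n$ so that $d = O(\log n)$:

```python
import numpy as np

def coarse_chain(K, lam_u, lam_l):
    """Chain (2): K(i) = K + gamma(i) * I with gamma(i) = lam_u / 2^i,
    for i = 0, ..., d, where d = ceil(log2(lam_u / lam_l))."""
    d = int(np.ceil(np.log2(lam_u / lam_l)))
    n = K.shape[0]
    return [K + (lam_u / 2 ** i) * np.eye(n) for i in range(d + 1)]
```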
Based on Lemma 3.3, we will construct a chain of matrices
$$[\tilde{K}(0), \tilde{K}(1), \ldots, \tilde{K}(d)] \qquad (3)$$
in the blackboard model, such that every $\tilde{K}(\ell)$ is a spectral sparsifier of $K(\ell)$, and every $\tilde{K}(\ell+1)$ can be constructed from $\tilde{K}(\ell)$. The basic idea behind our construction is to use the relations among the different $K(\ell)$ shown in Lemma 3.3 and the fact that, for any $K = B^\top B$, sampling rows of $B$ with respect to their leverage scores can be used to obtain a matrix approximating $K$. Theorem 3.4. Let $G$ be an undirected graph on $n$ vertices, where the edges of $G$ are allocated among $s$ sites, and the edge weights are polynomially upper bounded in $n$. Then, a spectral sparsifier of $G$ can be constructed with $\tilde{O}(n+s)$ bits of communication in the blackboard model. That is, the chain (3) can be constructed with $\tilde{O}(n+s)$ bits of communication in the blackboard model.
Proof. Let $K = B^\top B$ be the Laplacian matrix of the underlying graph $G$, where $B \in \mathbb{R}^{m \times n}$ is the edge-vertex incidence matrix of $G$. We will prove that every $\tilde{K}(i+1)$ can be constructed based on $\tilde{K}(i)$ with $\tilde{O}(n+s)$ bits of communication. This implies that $\tilde{K}(d)$, a $(1+\varepsilon)$-spectral sparsifier of $K$, can be constructed with $\tilde{O}(n+s)$ bits of communication, as the length of the chain is $d = O(\log n)$.
First of all, notice that $\lambda_u \leq 2n$, and the value of $n$ can be obtained with communication cost $\tilde{O}(n+s)$ (different sites sequentially write the new IDs of the vertices on the blackboard). In the following we assume that $\lambda_u$ is the upper bound on $\lambda_{\max}$ that we actually obtained on the blackboard.
Base case of $\ell = 0$: By definition, $K(0) = K + \lambda_u \cdot I$, and $\frac{1}{2} K(0) \preceq \gamma(0) \cdot I \preceq K(0)$, due to Statement 3 of Lemma 3.3. Let $\oplus$ denote appending the rows of one matrix to another. We define $B_{\gamma(0)} = B \oplus \sqrt{\gamma(0)} \cdot I$, and write $K(0) = K + \gamma(0) \cdot I = B_{\gamma(0)}^\top B_{\gamma(0)}$. By defining $\tau_i = b_i^\top (K(0))^{+} b_i$ for each row $b_i$ of $B_{\gamma(0)}$, we have $\tau_i \leq b_i^\top (\gamma(0) \cdot I)^{+} b_i \leq 2\tau_i$. Let $\tilde{\tau}_i = b_i^\top (\gamma(0) \cdot I)^{+} b_i$ be the leverage score of $b_i$ approximated using $\gamma(0) \cdot I$, and let $\tilde{\tau}$ be the vector of approximate leverage scores, with the leverage scores of the $n$ rows corresponding to $\sqrt{\gamma(0)} \cdot I$ rounded up to 1. Then, with high probability, sampling $O(\varepsilon^{-2} n \log n)$ rows of $B$ will give a matrix $\tilde{K}(0)$ such that $(1-\varepsilon) K(0) \preceq \tilde{K}(0) \preceq (1+\varepsilon) K(0)$. Notice that, as every row of $B$ corresponds to an edge of $G$, the approximate leverage scores $\tilde{\tau}_i$ for the different edges can be computed locally by the different sites maintaining the edges, and the sites only need to send the information of the sampled edges to the blackboard, hence the communication cost is $\tilde{O}(n+s)$ bits.
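Before moving to the induction step, here is the base-case sampling rule spelled out in code. Because $(\gamma(0) \cdot I)^{+} = \gamma(0)^{-1} I$, the crude scores simplify to $\tilde{\tau}_i = \|b_i\|^2/\gamma(0)$; the constant `c` and the seed are illustrative only.

```python
import numpy as np

def sample_rows_by_leverage(B_rows, tau, eps, n, c=8.0, seed=0):
    """Standard leverage-score sampling: keep row b_i with probability
    p_i = min(1, c * tau_i * log(n) / eps^2) and rescale by 1/sqrt(p_i),
    so the sampled matrix satisfies B~^T B~ ~ B^T B with high probability."""
    rng = np.random.default_rng(seed)
    p = np.minimum(1.0, c * tau * np.log(n) / eps ** 2)
    keep = rng.random(len(p)) < p
    return B_rows[keep] / np.sqrt(p[keep])[:, None]

def sparsify_base_case(B, gamma0, eps):
    """K~(0): augment B with sqrt(gamma(0)) I, score all rows against
    gamma(0) I, and round the n identity rows up to 1 so they are kept."""
    n = B.shape[1]
    B_aug = np.vstack([B, np.sqrt(gamma0) * np.eye(n)])   # B_{gamma(0)}
    tau = (B_aug ** 2).sum(axis=1) / gamma0               # b_i^T (gamma(0) I)^+ b_i
    tau[-n:] = 1.0
    B_smp = sample_rows_by_leverage(B_aug, tau, eps, n)
    return B_smp.T @ B_smp                                # K~(0)
```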
Induction step: We assume that $(1-\varepsilon) K(\ell) \preceq_r \tilde{K}(\ell) \preceq_r (1+\varepsilon) K(\ell)$, and the blackboard maintains the matrix $\tilde{K}(\ell)$. This implies that $\frac{1-\varepsilon}{1+\varepsilon} \cdot K(\ell) \preceq_r \frac{1}{1+\varepsilon} \cdot \tilde{K}(\ell) \preceq_r K(\ell)$. Combining this with Statement 2 of Lemma 3.3, we have that
$$\frac{1-\varepsilon}{2(1+\varepsilon)} K(\ell+1) \preceq_r \frac{1}{2(1+\varepsilon)} \tilde{K}(\ell) \preceq K(\ell+1).$$
We apply the same sampling procedure as in the base case, and obtain a matrix $\tilde{K}(\ell+1)$ such that $(1-\varepsilon) K(\ell+1) \preceq_r \tilde{K}(\ell+1) \preceq_r (1+\varepsilon) K(\ell+1)$. Notice that, since $\tilde{K}(\ell)$ is written on the blackboard, the probabilities used for sampling individual edges can be computed locally by the different sites, and in each round only the sampled edges will be sent to the blackboard in order for the blackboard to obtain $\tilde{K}(\ell+1)$. Hence, the total communication cost in each iteration is $\tilde{O}(n+s)$ bits. Combining this with the fact that the chain length is $d = O(\log n)$ proves the theorem.
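The whole recursion can then be driven as below, reusing `sparsify_base_case` and `sample_rows_by_leverage` from the sketch above. This is a single-process simulation with untuned constants; the factor $1/2$ in the proxy reflects the displayed inequality, which certifies $\tilde{K}(\ell)/2$ as a constant-factor coarse approximation of $K(\ell+1)$.

```python
import numpy as np

def chain_sparsifier(B, lam_u, lam_l, eps):
    """Theorem 3.4 end to end: at step l, rows of B_{gamma(l)} are scored
    against K~(l-1)/2 in place of the unknown K(l), which only changes
    the leverage scores by a constant factor."""
    n = B.shape[1]
    d = int(np.ceil(np.log2(lam_u / lam_l)))
    K_t = sparsify_base_case(B, lam_u, eps)          # K~(0), gamma(0) = lam_u
    for l in range(1, d + 1):
        gamma = lam_u / 2 ** l
        B_aug = np.vstack([B, np.sqrt(gamma) * np.eye(n)])
        proxy_inv = np.linalg.pinv(K_t / 2.0)        # stands in for K(l)^+
        tau = np.einsum('ij,jk,ik->i', B_aug, proxy_inv, B_aug)
        B_smp = sample_rows_by_leverage(B_aug, tau, eps, n)
        K_t = B_smp.T @ B_smp                        # K~(l)
    return K_t
```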
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters, we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n+ s) bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model, since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including $k$-median, $k$-means and $k$-center. Let $P$ be a set of points of size $n$ in a metric space with distance function $d(\cdot,\cdot)$, and let $k \leq n$ be an integer. In the $k$-center problem we want to find a set $C$ ($|C| = k$) such that $\max_{p \in P} d(p, C)$ is minimized, where $d(p, C) = \min_{c \in C} d(p, c)$. In $k$-median and $k$-means we replace the objective function $\max_{p \in P} d(p, C)$ with $\sum_{p \in P} d(p, C)$ and $\sum_{p \in P} (d(p, C))^2$, respectively.
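For concreteness, the three objectives for a finite Euclidean instance:

```python
import numpy as np

def clustering_costs(P, C):
    """k-center, k-median and k-means objectives for points P (n x d)
    and candidate centers C (k x d) under Euclidean distance."""
    dists = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)  # n x k
    nearest = dists.min(axis=1)                                    # d(p, C)
    return nearest.max(), nearest.sum(), (nearest ** 2).sum()
```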
4.1 The message passing model
As mentioned, for constant dimensional Euclidean space and a constant c > 1, there are algorithms that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits of communication.
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due to space constraints we defer the proof to the full version of this paper. The proof uses tools from multiparty communication complexity. In fact, we can prove a stronger statement: any algorithm that can differentiate whether we have $k$ points or $k+1$ points in total in the message passing model needs $\Omega(sk)$ bits of communication. Theorem 4.1. For any $c > 1$, computing a $c$-approximation for $k$-median, $k$-means or $k$-center correctly with probability 0.99 in the message passing model needs $\Omega(sk)$ bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a $(c_1, c_2)$-approximation ($c_1, c_2 > 1$) if, whenever the optimal solution using $k$ centers costs $W$, the output of the algorithm costs at most $c_1 W$ while using at most $c_2 k$ centers. We can show that for $k$-median and $k$-means, the $\Omega(sk)$ lower bound holds even for algorithms with bicriteria approximations. The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any $c \in [1, 1.01]$, computing a $(7.1 - 6c, c)$-bicriteria approximation for $k$-median or $k$-means correctly with probability 0.99 in the message passing model needs $\Omega(sk)$ bits of communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s+ k) bits of communication for k-median and k-means. Due to space constraints we defer the description of the algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel guessing algorithm in the blackboard model using Õ(s+ k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and k-center correctly with probability 0.9 in the blackboard model using Õ(s+k) bits of communication.
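For $k$-center, one standard way the parallel guessing idea plays out is sketched below as a single-process simulation: for each guessed radius on a geometric grid, sites publish to the blackboard any local point that is far from all published centers, and the first guess needing at most $k$ centers certifies a $2(1+\varepsilon)$-approximation. This assumes `r_lo` lower-bounds the optimal radius; it is our illustration, not necessarily the paper's exact protocol.

```python
import numpy as np

def kcenter_parallel_guessing(sites, k, r_lo, eps=0.005):
    """Folklore parallel guessing for k-center: if the optimum is at most r,
    points pairwise more than 2r apart number at most k, so greedily
    publishing far-away points either stops within k centers (every point
    is then within 2r of a center) or refutes the guess r."""
    r = r_lo
    while True:
        centers = []                           # contents of the blackboard
        feasible = True
        for pts in sites:                      # each site scans local points
            for p in pts:
                if all(np.linalg.norm(p - c) > 2 * r for c in centers):
                    if len(centers) == k:      # would need a (k+1)-th center
                        feasible = False
                        break
                    centers.append(p)
            if not feasible:
                break
        if feasible:
            return np.array(centers), 2 * r    # cost at most 2r
        r *= 1 + eps
```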
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing and blackboard models. We will compare the following three algorithms. (1) Baseline: each site sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the quality of the results via the normalized cut, defined as
$$\mathrm{ncut}(A_1, \ldots, A_k) = \frac{1}{2} \sum_{i \in [k]} \frac{w(A_i, V \setminus A_i)}{\mathrm{vol}(A_i)},$$
which is a standard objective function to be minimized by spectral clustering algorithms.
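Computed directly from this definition, with `labels` a vector of cluster indices in $\{0, \ldots, k-1\}$ (naming is ours):

```python
import numpy as np

def ncut(A, labels, k):
    """Normalized cut of the partition encoded by `labels` for the
    weighted adjacency matrix A."""
    total = 0.0
    for i in range(k):
        inside = labels == i
        cut = A[inside][:, ~inside].sum()   # w(A_i, V \ A_i)
        vol = A[inside].sum()               # vol(A_i): degrees inside A_i
        total += cut / vol
    return 0.5 * total
```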
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains $n = 14{,}000$ coordinates in $\mathbb{R}^2$. We consider each point to be a vertex. For any two vertices $u, v$, we add an edge with weight $w(u, v) = \exp\{-\|u-v\|_2^2/\sigma^2\}$ with $\sigma = 0.1$ when one vertex is among the 7000 nearest points of the other. This construction results in a graph with about 110,000,000 edges.

• Gauss: this dataset contains $n = 10{,}000$ points in $\mathbb{R}^2$. There are 4 clusters in this dataset, each generated using a Gaussian distribution. We construct a complete graph as the similarity graph. For any two vertices $u, v$, we define the weight $w(u, v) = \exp\{-\|u-v\|_2^2/\sigma^2\}$ with $\sigma = 1$. The resulting graph has about 100,000,000 edges.

• Sculpture: a photo of The Greek Slave. We use an $80 \times 150$ version of this photo where each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point in $\mathbb{R}^5$, i.e., $(x, y, r, g, b)$, where the latter three coordinates are the RGB values. For any two vertices $u, v$, we put an edge between $u$ and $v$ with weight $w(u, v) = \exp\{-\|u-v\|_2^2/\sigma^2\}$ with $\sigma = 0.5$ if one of $u, v$ is among the 5000 nearest points of the other. This results in a graph with about 70,000,000 edges.
In the distributed model edges are randomly partitioned across s sites.
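All three similarity graphs follow one recipe (Gaussian-kernel weights restricted to a nearest-neighbor structure), and the distributed instances assign each edge to a uniformly random site. A small dense sketch of both steps; the function names are ours:

```python
import numpy as np

def gaussian_knn_graph(X, sigma, num_nn):
    """Edge (u, v) of weight exp(-||x_u - x_v||^2 / sigma^2) whenever one
    endpoint is among the num_nn nearest points of the other."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(D2, axis=1)
    knn = np.zeros_like(D2, dtype=bool)
    np.put_along_axis(knn, order[:, 1:num_nn + 1], True, axis=1)
    A = np.where(knn | knn.T, np.exp(-D2 / sigma ** 2), 0.0)
    np.fill_diagonal(A, 0.0)
    return A

def partition_edges(A, s, seed=0):
    """Randomly assign each undirected edge (u, v, w) to one of s sites."""
    rng = np.random.default_rng(seed)
    u, v = np.triu_indices_from(A, k=1)
    keep = A[u, v] > 0
    u, v, w = u[keep], v[keep], A[u, v][keep]
    site = rng.integers(0, s, size=len(u))
    return [list(zip(u[site == i], v[site == i], w[site == i])) for i in range(s)]
```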
Results on clustering quality. We visualize the clustered results for the Twomoons dataset in Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar qualities. For simplicity, here we only present the visualization for s = 15. Similar results were observed when we varied the values of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms. The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the ncut value of Blackboard is independent of s.
Results on Communication Costs. We compare the communication costs of different algorithms in Figure 3. We observe that while achieving similar clustering qualities as Baseline, both MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders of magnitudes in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323.

1. What are the main contributions and novel aspects introduced by the paper regarding distributed clustering algorithms?
2. What are the strengths of the paper, particularly in its theoretical analysis and proof of lower and upper bounds for graph spectral clustering and geometric clustering?
3. Do you have any questions or concerns about the proposed algorithms, their implementation, and evaluation on large graphs?
4. How does the reviewer assess the clarity and presentation of the paper's content, particularly regarding the two distributed clustering algorithms and their proofs?
5. Are there any typos or technical issues that need to be addressed in the paper?

Review:
The paper proposes several distributed clustering algorithms and studies their communication complexity in two models (point-to-point and blackboard). The authors prove lower and upper bounds for graph spectral clustering and for geometric clustering (i.e., k-means, k-median, ...). The algorithms are implemented and evaluated on large graphs (with more than 70 million edges). In this paper, a large dataset is maintained across s sites that communicate in a point-to-point protocol (private message passing between sites and one coordinator) or in a blackboard protocol (broadcast messages to a centralized site). The proposed algorithm for spectral graph clustering in the message passing model is based on spectral sparsification. The contribution is to show that the sparsification can be computed locally. The proof that clustering the union of the sparse subgraphs is close to clustering the whole graph relies heavily on [19, Peng et al., COLT 2015]. The lower bound is based on a reduction to the multiparty set disjointness problem. The algorithm in the blackboard model relies on the iterative construction of a sequence of matrices of length O(log n) whose last element is a spectral sparsifier. The presentation of this algorithm is less clear than that of the previous one. In particular, it is hard to follow what can be done locally and when communication with the coordinator is needed (I think that a communication is needed at each step to compute \tau_i). In both cases, the proofs in the appendix seem to be necessary to understand the algorithms. So the presentation could be improved. The experiments confirm the theoretical results. I think that the paper is interesting and deserves publication. Typos and technical remarks:
- Lines 469/471: \gamma_u should be \lambda_u.
- Line 475: I don't understand the append operator.
- I think that a concluding theorem could be added to Section 3.2.
- It would be clearer to announce sooner that the reduction will encode X_i^j = 1 if item j *is not* in site i.
NIPS | Title
Communication-Optimal Distributed Clustering
Abstract
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster n points or n vertices in a graph distributed across s servers, for a worst-case partitioning the communication complexity in a point-to-point model is n · s, while in the broadcast model it is n+ s. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining, computer vision, and social network analysis. Example applications of clustering include grouping similar webpages by search engines, finding users with common interests in a social network, and identifying different objects in a picture or video. For these applications, one can model the objects that need to be clustered as points in Euclidean space Rd, where the similarities of two objects are represented by the Euclidean distance between the two points. Then the task of clustering is to choose k points as centers, so that the total distance between all input points to their corresponding closest center is minimized. Depending on different distance objective functions, three typical problems have been studied: k-means, k-median, and k-center.
The other popular approach for clustering is to model the input data as vertices of a graph, and the similarity between two objects is represented by the weight of the edge connecting the corresponding vertices. For this scenario, one is asked to partition the vertices into clusters so that the “highly connected” vertices belong to the same cluster. A widely-used approach for graph clustering is spectral clustering, which embeds the vertices of a graph into the points in Rk through the bottom k eigenvectors of the graph’s Laplacian matrix, and applies k-means on the embedded points.
∗Full version appears on arXiv, 2017, under the same title.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been widely used in practice, and have been the subject of extensive theoretical and experimental studies over the decades. However, these algorithms are designed for the centralized setting, and are not applicable in the setting of large-scale datasets that are maintained remotely by different sites. In particular, collecting the information from all the remote sites and performing a centralized clustering algorithm is infeasible due to high communication costs, and new distributed clustering algorithms with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the messagepassing model, there is a communication channel between each pair of users. This may be impractical, and the so-called coordinator model can often be used in place; in the coordinator model there is a centralized site called the coordinator, and all communication goes through the coordinator. This affects the total communication by a factor of two, since the coordinator can forward a message from one server to another and therefore simulate a point-to-point protocol. There is also an additional additive O(log s) bits per message, where s is the number of sites, since a server must specify to the coordinator where to forward its message. In the model with a broadcast channel, sometimes referred to as the blackboard model, the coordinator has the power to send a single message which is received by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the blackboard model is at least as powerful as the message-passing model, it is often unclear how to exploit its power to obtain better bounds for specific problems. Also, for a number of problems the communication complexity is the same in both models, such as computing the sum of s length-n bit vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20]. Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns) in the message passing model, and have communication cost Õ(n + s) in the blackboard model, where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing model has each site send a spectral sparsifier of its local data to the coordinator, who then merges them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for solving the graph clustering problem. Our algorithm in the blackboard model is technically more involved, as we show a particular recursive sampling procedure for building a spectral sparsifier can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of building spectral sparsifiers can be implemented with low communication in the blackboard model. Our algorithms demonstrate the surprising power of the blackboard model for clustering problems. Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally dominant linear systems in a distributed model. Any such system can be converted into a system involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1, computing a c-approximation for k-median, k-means, or k-center correctly with constant probability in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower bound, and show even for bicriteria clustering algorithms, which may output a constant factor more clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are based on communication and information complexity. Our results imply that existing algorithms [3] for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors. For the blackboard model, we present an algorithm for k-median and k-means that achieves an O(1)-approximation using Õ(s+ k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral clustering surprisingly well in real-world datasets. For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values
of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the ones given by the centralized algorithm, and the visualized results are almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical, distributed computation. When the number of sites is large, the blackboard model incurs significantly less communication than the message passing model, e.g., in the Twomoons dataset when there are 90 sites, the message passing model communicates 9 times as many edges as communicated in the blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means ([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices that can be used for distributed k-means. The main takeaway is that there is no previous work which develops protocols for spectral clustering in the common message passing and blackboard models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist (e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let G = (V,E,w) be an undirected graph with n vertices, m edges, and weight function V × V → R≥0. The set of neighbors of a vertex v is represented byN(v), and its degree is dv = ∑ u∼v w(u, v). The maximum degree of G is defined to be ∆(G) = maxv{dv}. For any set S ⊆ V , let µ(S) ,∑ v∈S dv . For any sets S, T ⊆ V , we define w(S, T ) , ∑ u∈S,v∈T w(u, v) to be the total weight of edges crossing S and T . For two sets X and Y , the symmetric difference of X and Y is defined as X4Y , (X \ Y ) ∪ (Y \X). For any matrix A ∈ Rn×n, let λ1(A) ≤ · · · ≤ λn(A) = λmax(A) be the eigenvalues of A. For any two matrices A,B ∈ Rn×n, we write A B to represent B − A is positive semi-definite (PSD). Notice that this condition implies that xᵀAx ≤ xᵀBx for any x ∈ Rn. Sometimes we also use a weaker notation (1− ε)A r B r (1 + ε)A to indicate that (1− ε)xᵀAx ≤ xᵀBx ≤ (1 + ε)xᵀAx for all x in the row span of A.
Graph Laplacian. The Laplacian matrix of G is an n× n matrix LG defined by LG = DG −AG, whereAG is the adjacency matrix ofG defined byAG(u, v) = w(u, v), andDG is the n×n diagonal matrix with DG(v, v) = dv for any v ∈ V [G]. Alternatively, we can write LG with respect to a signed edge-vertex incidence matrix: we assign every edge e = {u, v} an arbitrary orientation, and let BG(e, v) = 1 if v is e’s head, BG(e, v) = −1 if v is e’s tail, and BG(e, v) = 0 otherwise. We further define a diagonal matrix WG ∈ Rm×m, where WG(e, e) = we for any edge e ∈ E[G]. Then, we can write LG as LG = B ᵀ GWGBG. The normalized Laplacian matrix of G is defined by LG , D−1/2G LGD −1/2 G = I − D −1/2 G AGD −1/2 G . We sometimes drop the subscript G when the underlying graph is clear from the context.
Spectral sparsification. For any undirected and weighted graph G = (V,E,w), we say a subgraph H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if
(1− ε)LG LH (1 + ε)LG. (1) By definition, it is easy to show that, if we decompose the edge set of a graph G = (V,E) into E1, . . . , E` for a constant ` and Hi is a spectral sparsifier of Gi = (V,Ei) for any 1 ≤ i ≤ `, then the graph formed by the union of edge sets from Hi is a spectral sparsifier of G. It is known that, for any undirected graph G of n vertices, there is a (1 + ε)-spectral sparsifier of G with O(n/ε2) edges, and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves the cluster structure of a graph.
Models of computation. We will study distributed clustering in two models for distributed data: the message passing model and the blackboard model. The message passing model represents those distributed computation systems with point-to-point communication, and the blackboard model represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are s sites P1, . . . ,Ps, and one coordinator. These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to
as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model up to small factors. The input is initially distributed at the s sites. The computation is in terms of rounds: at the beginning of each round, the coordinator sends a message to some of the s sites, and then each of those sites that have been contacted by the coordinator sends a message back to the coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the coordinator is simply a blackboard where these s sites P1, . . . ,Ps can share information; in other words, if one site sends a message to the coordinator/blackboard then all the other s− 1 sites can see this information without further communication. The order for the sites to speak is decided by the contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the channels. The two models are now standard in multiparty communication complexity (see, e.g., [5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing community; the main difference is that in our models we do not post any bandwidth limitations at each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph G = (V,E) can be partitioned into k clusters, where vertices in each cluster S are highly connected to each other, and there are fewer edges between S and V \S. To formalize this notion, we define the conductance of a vertex set S by φG(S) , w(S, V \S)/µ(S). Generalizing the Cheeger constant, we define the k-way expansion constant of graphG by ρ(k) , minpartition A1, . . . , Ak max1≤i≤k φG(Ai). Notice that a graph G has k clusters if the value of ρ(k) is small.
Lee et al. [12] relate the value of ρ(k) to λk(LG) by the following higher-order Cheeger inequality:
λk(LG) 2
≤ ρ(k) ≤ O(k2) √ λk(LG).
Based on this, a large gap between λk+1(LG) and ρ(k) implies (i) the existence of a k-way partition {Si}ki=1 with smaller value of φG(Si) ≤ ρ(k), and (ii) any (k + 1)-way partition of G contains a subset with high conductance ρ(k + 1) ≥ λk+1(LG)/2. Hence, a large gap between λk+1(LG) and ρ(k) ensures that G has exactly k clusters.
In the following, we assume that Υ , λk+1(LG)/ρ(k) = Ω(k3), as this assumption was used in the literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in the section are based on the following spectral clustering algorithm: (i) compute the k eigenvectors f1, . . . , fk of LG associated with λ1(LG), . . . , λk(LG); (ii) embed every vertex v to a point in Rk through the embedding F (v) = 1√
dv · (f1(v), . . . , fk(v)); (iii) run
k-means on the embedded points {F (v)}v∈V , and group the vertices of G into k clusters according to the output of k-means.
3.1 The message passing model
We assume the edges of the input graphG = (V,E) are arbitrarily allocated among s sitesP1, · · · ,Ps, and we use Ei to denote the edge set maintained by site Pi. Our proposed algorithm consists of two steps: (i) every Pi computes a linear-sized (1 + c)-spectral sparsifier Hi of Gi , (V,Ei), for a small constant c ≤ 1/10, and sends the edge set of Hi, denoted by E′i, to the coordinator; (ii) the coordinator runs a spectral clustering algorithm on the union of received graphs H , ( V, ⋃k i=1E ′ i ) . The theorem below summarizes the performance of this algorithm, and shows the approximation guarantee of this algorithm is as good as the provable guarantee of spectral clustering known in the centralized setting [17]. Theorem 3.1. Let G = (V,E) be an n-vertex graph with Υ = Ω(k3), and suppose the edges of G are arbitrarily allocated among s sites. Assume S1, · · · , Sk is an optimal partition that achieves ρ(k). Then, the algorithm above computes a partition A1, . . . , Ak satisfying vol(Ai4Si) = O ( k3 ·Υ−1 · vol(Si) ) for any 1 ≤ i ≤ k. The total communication cost of this algorithm is Õ(ns) bits.
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor. Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers: for any n× n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication. This follows since such a spectral sparsifier can be used to solve the spectral clustering problem. Spectral sparsification has played an important role in designing fast algorithms from different areas, e.g., machine learning, and numerical linear algebra. Hence our lower bound result for constructing spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the blackboard model. Our result is based on the observation that a spectral sparsifier preserves the structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described as follows: for any input PSD matrix K with λmax(K) ≤ λu and all the non-zero eigenvalues of K at least λ`, we define d = dlog2(λu/λ`)e and construct a chain of d+ 1 matrices
[K(0),K(1), . . . ,K(d)], (2)
where γ(i) = λu/2i and K(i) = K + γ(i)I . Notice that in the chain above every K(i − 1) is obtained by adding weights to the diagonal entries of K(i), and K(i− 1) approximates K(i) as long as the weights added to the diagonal entries are small. We will construct this chain recursively, so that K(0) has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since K is the Laplacian matrix of a graph G, it is easy to see that d = O(log n) as long as the edge weights of G are polynomially upper-bounded in n. Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) K r K(d) r 2K; (2) K(`) K(`− 1) 2K(`) for all ` ∈ {1, . . . , d}; (3) K(0) 2γ(0)I 2K(0).
Based on Lemma 3.3, we will construct a chain of matrices[ K̃(0), K̃(1), . . . , K̃(d) ] (3)
in the blackboard model, such that every K̃(`) is a spectral sparsifier of K(`), and every K̃(`+ 1) can be constructed from K̃(`). The basic idea behind our construction is to use the relations among different K(`) shown in Lemma 3.3 and the fact that, for any K = BᵀB, sampling rows of B with respect to their leverage scores can be used to obtain a matrix approximating K. Theorem 3.4. LetG be an undirected graph on n vertices, where the edges ofG are allocated among s sites, and the edge weights are polynomially upper bounded in n. Then, a spectral sparsifier of G can be constructed with Õ(n+ s) bits of communication in the blackboard model. That is, the chain (3) can be constructed with Õ(n+ s) bits of communication in the blackboard model.
Proof. Let K = BᵀB be the Laplacian matrix of the underlying graph G, where B ∈ Rm×n is the edge-vertex incidence matrix of G. We will prove that every K̃(i+ 1) can be constructed based on K̃(i) with Õ(n+ s) bits of communication. This implies that K̃(d), a (1 + ε)-spectral sparsifier of K, can be constructed with Õ(n+ s) bits of communication, as the length of the chain d = O(log n).
First of all, notice that λu ≤ 2n, and the value of n can be obtained with communication cost Õ(n + s) (different sites sequentially write the new IDs of the vertices on the blackboard). In the following we assume that λu is the upper bound of λmax that we actually obtained in the blackboard.
Base case of ` = 0: By definition, K(0) = K + λu · I , and 12 · K(0) γ(0) · I K(0), due to Statement 3 of Lemma 3.3. Let ⊕ denote appending the rows of one matrix to another. We
define Bγ(0) = B ⊕ √ γ(0) · I , and write K(0) = K + γ(0) · I = Bᵀγ(0)Bγ(0). By defining τi = b ᵀ i (K(0)) ᵀ bi for each row of Bγ(0), we have τi ≤ bᵀi (γ(0) · I) bi ≤ 2 · τi. Let τ̃i = bᵀi (γ(0) · I) + bi be the leverage score of bi approximated using γ(0) · I , and let τ̃ be the vector of
approximate leverage scores, with the leverage scores of the n rows corresponding to √ γ(0) · I rounded up to 1. Then, with high probability sampling O(ε−2n log n) rows of B will give a matrix K̃(0) such that (1− ε)K(0) K̃(0) (1 + ε)K(0). Notice that, as every row of B corresponds to an edge of G, the approximate leverage scores τ̃i for different edges can be computed locally by different sites maintaining the edges, and the sites only need to send the information of the sampled edges to the blackboard, hence the communication cost is Õ(n+ s) bits.
Induction step: We assume that (1−ε)K(`) r K̃(`) r (1+ε)K(`), and the blackboard maintains the matrix K̃(`). This implies that (1− ε)/(1 + ε) ·K(`) r 1/(1 + ε) · K̃(`) r K(`). Combining this with Statement 2 of Lemma 3.3, we have that
1− ε 2(1 + ε) K(`+ 1) r 1 2(1 + ε) K̃(`) K(`+ 1).
We apply the same sampling procedure as in the base case, and obtain a matrix K̃(` + 1) such that (1 − ε)K(` + 1) r K̃(` + 1) r (1 + ε)K(` + 1). Notice that, since K̃(`) is written on the blackboard, the probabilities used for sampling individual edges can be computed locally by different sites, and in each round only the sampled edges will be sent to the blackboard in order for the blackboard to obtain K̃(`+ 1). Hence, the total communication cost in each iteration is Õ(n+ s) bits. Combining this with the fact that the chain length d = O(log n) proves the theorem.
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters, we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n+ s) bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model, since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including k-median, k-means and k-center. Let P be a set of points of size n in a metric space with distance function d(·, ·), and let k ≤ n be an integer. In the k-center problem we want to find a set C (|C| = k) such that maxp∈P d(p, C) is minimized, where d(p, C) = minc∈C d(p, c). In k-median and k-means we replace the objective function maxp∈P d(p, C) with ∑ p∈P d(p, C) and ∑ p∈P (d(p, C)) 2, respectively.
4.1 The message passing model
As mentioned, for constant dimensional Euclidean space and a constant c > 1, there are algorithms that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits of communication.
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due to space constraints we defer the proof to the full version of this paper. The proof uses tools from multiparty communication complexity. We in fact can prove a stronger statement that any algorithm that can differentiate whether we have k points or k + 1 points in total in the message passing model needs Ω(sk) bits of communication. Theorem 4.1. For any c > 1, computing c-approximation for k-median, k-means or k-center correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a (c1, c2)-approximation (c1, c2 > 1) if the optimal solution costs W when using k centers, then the
output of the algorithm costs at most c1W when using at most c2k centers. We can show that for kmedian and k-means, the Ω(sk) lower bound holds even for algorithms with bicriteria approximations. The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any c ∈ [1, 1.01], computing (7.1− 6c, c)-bicriteria-approximation for k-median or k-means correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s+ k) bits of communication for k-median and k-means. Due to space constraints we defer the description of the algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel guessing algorithm in the blackboard model using Õ(s+ k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and k-center correctly with probability 0.9 in the blackboard model using Õ(s+k) bits of communication.
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing and blackboard models. We will compare the following three algorithms. (1) Baseline: each site sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the quality of the results via the normalized cut, defined as ncut(A1, . . . , Ak) = (1/2) ∑_{i∈[k]} w(Ai, V \ Ai)/vol(Ai), which is a standard objective function to be minimized for spectral clustering algorithms.
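As a reference point, ncut can be computed directly from the weight matrix and a labeling; the sketch below is a dense NumPy illustration with names of our choosing (graphs of the sizes used here would call for sparse matrices).

```python
import numpy as np

def ncut(W, labels):
    """ncut(A_1, ..., A_k) = (1/2) * sum_i w(A_i, V \\ A_i) / vol(A_i),
    given a symmetric weight matrix W and a cluster label per vertex."""
    total = 0.0
    for c in np.unique(labels):
        inside = labels == c
        vol = W[inside].sum()                    # vol(A_i): sum of degrees in A_i
        cut = W[np.ix_(inside, ~inside)].sum()   # w(A_i, V \\ A_i)
        total += cut / vol
    return 0.5 * total
```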
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains n = 14,000 coordinates in R². We consider each point to be a vertex. For any two vertices u, v, we add an edge with weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 0.1 when one vertex is among the 7000 nearest points of the other (a sketch of this construction follows the list). This construction results in a graph with about 110,000,000 edges.
• Gauss: this dataset contains n = 10,000 points in R². There are 4 clusters in this dataset, each generated using a Gaussian distribution. We construct a complete graph as the similarity graph. For any two vertices u, v, we define the weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 1. The resulting graph has about 100,000,000 edges.
• Sculpture: a photo of The Greek Slave. We use an 80 × 150 version of this photo where each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point in R⁵, i.e., (x, y, r, g, b), where the latter three coordinates are the RGB values. For any two vertices u, v, we put an edge between u and v with weight w(u, v) = exp{−‖u − v‖₂²/σ²} with σ = 0.5 if one of u, v is among the 5000 nearest points of the other. This results in a graph with about 70,000,000 edges.
In the distributed model edges are randomly partitioned across s sites.
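The similarity-graph construction shared by the three datasets can be sketched as follows, assuming SciPy's cKDTree for the nearest-neighbor queries; the function and variable names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def similarity_graph(X, num_nn, sigma):
    """Edge list {(u, v): weight} with w(u, v) = exp(-||x_u - x_v||^2 / sigma^2)
    whenever one endpoint is among the num_nn nearest points of the other."""
    tree = cKDTree(X)
    _, nbrs = tree.query(X, k=num_nn + 1)      # +1 because each point finds itself
    edges = {}
    for u in range(len(X)):
        for v in nbrs[u, 1:]:                  # skip the point itself
            a, b = (u, int(v)) if u < v else (int(v), u)
            if (a, b) not in edges:
                d2 = float(np.sum((X[a] - X[b]) ** 2))
                edges[(a, b)] = np.exp(-d2 / sigma ** 2)
    return edges
```

For example, Twomoons corresponds to similarity_graph(X, 7000, 0.1); at that scale one would batch the queries and store the graph sparsely in practice.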
Results on clustering quality. We visualize the clustering results for the Twomoons dataset in Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar quality. For simplicity, here we only present the visualization for s = 15. Similar results were observed when we varied the value of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms. The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the ncut value of Blackboard is independent of s.
Results on Communication Costs. We compare the communication costs of different algorithms in Figure 3. We observe that while achieving similar clustering quality to Baseline, both MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders of magnitude in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323.

1. What is the focus of the paper in terms of the problem addressed and the proposed solution?
2. What are the strengths of the paper, particularly in terms of the theoretical analysis and experimental evaluation?
3. Are there any concerns or limitations regarding the approach proposed in the paper?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Are there any questions regarding the paper's content, such as the methodology, results, or conclusions?

Review
The paper presents communication-optimal algorithms for clustering in different distributed models as well as an experimental evaluation. It works in different distributed models as well as providing lower bounds on communication complexity. I really liked this paper. The problem is quite natural and the use of spectral sparsifiers in this distributed setting is a good idea. There are strong guarantees on the behavior and the experimental results are quite compelling.
1. What are the main contributions and novel aspects introduced by the paper in distributed graph and geometric clustering problems?
2. What are the weaknesses of the paper regarding its notation, presentation, and reliance on supplementary materials?
3. Do you have any questions or concerns about the paper's content, such as the definition of the embedding F(v), the use of the unit "words" in measuring communication cost, or the lack of clarity in certain sections?

Review
This paper explores the distributed graph and geometric clustering problems for the message passing and blackboard communication models. The authors proposed new distributed graph clustering schemes for both the message passing and blackboard models, and proved their optimality. For the distributed geometric clustering problem, the authors proved the optimality of the existing schemes for the message passing model and proposed a new scheme for the blackboard model that achieves an O(1)-approximation. A major problem of this paper is its heavy notation, which sometimes makes the paper really hard to follow. The authors grouped many definitions in the preliminaries section, and directly used them without referring back. Also, the paper should always be self-contained without the supplementary materials. Presenting results completely in the supplementary materials (e.g., the scheme for Theorem 4.3, Figures 6 and 7) is not acceptable. Some specific concerns and comments are given in the following.
1. The embedding F(v) in Line 170 is confusing.
2. E_i′ in Line 176 is not defined.
3. In Theorem 3.1, the communication cost is measured by the unit "words", which is undefined and inconsistent with the previously used unit "bits".
4. K^+ in Line 228 is not defined, and how exactly building a chain of matrices in (4) relates to the algorithm for distributed graph clustering in the blackboard model is not clear.
5. The second paragraph of Section 4.1 is very confusing. S_i is a subset of [n], so what is the characteristic vector of S_i?
6. Figures in the experiments section have poor quality. It is hard to read the axis labels and legends.
NIPS | Title
Communication-Optimal Distributed Clustering
Abstract
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster n points or n vertices in a graph distributed across s servers, for a worst-case partitioning the communication complexity in a point-to-point model is n · s, while in the broadcast model it is n+ s. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining, computer vision, and social network analysis. Example applications of clustering include grouping similar webpages by search engines, finding users with common interests in a social network, and identifying different objects in a picture or video. For these applications, one can model the objects that need to be clustered as points in Euclidean space Rd, where the similarities of two objects are represented by the Euclidean distance between the two points. Then the task of clustering is to choose k points as centers, so that the total distance between all input points to their corresponding closest center is minimized. Depending on different distance objective functions, three typical problems have been studied: k-means, k-median, and k-center.
The other popular approach for clustering is to model the input data as vertices of a graph, and the similarity between two objects is represented by the weight of the edge connecting the corresponding vertices. For this scenario, one is asked to partition the vertices into clusters so that the “highly connected” vertices belong to the same cluster. A widely-used approach for graph clustering is spectral clustering, which embeds the vertices of a graph into the points in Rk through the bottom k eigenvectors of the graph’s Laplacian matrix, and applies k-means on the embedded points.
∗Full version appears on arXiv, 2017, under the same title.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been widely used in practice, and have been the subject of extensive theoretical and experimental studies over the decades. However, these algorithms are designed for the centralized setting, and are not applicable in the setting of large-scale datasets that are maintained remotely by different sites. In particular, collecting the information from all the remote sites and performing a centralized clustering algorithm is infeasible due to high communication costs, and new distributed clustering algorithms with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the messagepassing model, there is a communication channel between each pair of users. This may be impractical, and the so-called coordinator model can often be used in place; in the coordinator model there is a centralized site called the coordinator, and all communication goes through the coordinator. This affects the total communication by a factor of two, since the coordinator can forward a message from one server to another and therefore simulate a point-to-point protocol. There is also an additional additive O(log s) bits per message, where s is the number of sites, since a server must specify to the coordinator where to forward its message. In the model with a broadcast channel, sometimes referred to as the blackboard model, the coordinator has the power to send a single message which is received by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the blackboard model is at least as powerful as the message-passing model, it is often unclear how to exploit its power to obtain better bounds for specific problems. Also, for a number of problems the communication complexity is the same in both models, such as computing the sum of s length-n bit vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20]. Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns) in the message passing model, and have communication cost Õ(n + s) in the blackboard model, where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing model has each site send a spectral sparsifier of its local data to the coordinator, who then merges them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for solving the graph clustering problem. Our algorithm in the blackboard model is technically more involved, as we show a particular recursive sampling procedure for building a spectral sparsifier can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of building spectral sparsifiers can be implemented with low communication in the blackboard model. Our algorithms demonstrate the surprising power of the blackboard model for clustering problems. Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally dominant linear systems in a distributed model. Any such system can be converted into a system involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1, computing a c-approximation for k-median, k-means, or k-center correctly with constant probability in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower bound, and show even for bicriteria clustering algorithms, which may output a constant factor more clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are based on communication and information complexity. Our results imply that existing algorithms [3] for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors. For the blackboard model, we present an algorithm for k-median and k-means that achieves an O(1)-approximation using Õ(s+ k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral clustering surprisingly well in real-world datasets. For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values
of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the ones given by the centralized algorithm, and the visualized results are almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical, distributed computation. When the number of sites is large, the blackboard model incurs significantly less communication than the message passing model, e.g., in the Twomoons dataset when there are 90 sites, the message passing model communicates 9 times as many edges as communicated in the blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means ([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices that can be used for distributed k-means. The main takeaway is that there is no previous work which develops protocols for spectral clustering in the common message passing and blackboard models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist (e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let G = (V,E,w) be an undirected graph with n vertices, m edges, and weight function V × V → R≥0. The set of neighbors of a vertex v is represented byN(v), and its degree is dv = ∑ u∼v w(u, v). The maximum degree of G is defined to be ∆(G) = maxv{dv}. For any set S ⊆ V , let µ(S) ,∑ v∈S dv . For any sets S, T ⊆ V , we define w(S, T ) , ∑ u∈S,v∈T w(u, v) to be the total weight of edges crossing S and T . For two sets X and Y , the symmetric difference of X and Y is defined as X4Y , (X \ Y ) ∪ (Y \X). For any matrix A ∈ Rn×n, let λ1(A) ≤ · · · ≤ λn(A) = λmax(A) be the eigenvalues of A. For any two matrices A,B ∈ Rn×n, we write A B to represent B − A is positive semi-definite (PSD). Notice that this condition implies that xᵀAx ≤ xᵀBx for any x ∈ Rn. Sometimes we also use a weaker notation (1− ε)A r B r (1 + ε)A to indicate that (1− ε)xᵀAx ≤ xᵀBx ≤ (1 + ε)xᵀAx for all x in the row span of A.
Graph Laplacian. The Laplacian matrix of G is an n× n matrix LG defined by LG = DG −AG, whereAG is the adjacency matrix ofG defined byAG(u, v) = w(u, v), andDG is the n×n diagonal matrix with DG(v, v) = dv for any v ∈ V [G]. Alternatively, we can write LG with respect to a signed edge-vertex incidence matrix: we assign every edge e = {u, v} an arbitrary orientation, and let BG(e, v) = 1 if v is e’s head, BG(e, v) = −1 if v is e’s tail, and BG(e, v) = 0 otherwise. We further define a diagonal matrix WG ∈ Rm×m, where WG(e, e) = we for any edge e ∈ E[G]. Then, we can write LG as LG = B ᵀ GWGBG. The normalized Laplacian matrix of G is defined by LG , D−1/2G LGD −1/2 G = I − D −1/2 G AGD −1/2 G . We sometimes drop the subscript G when the underlying graph is clear from the context.
Spectral sparsification. For any undirected and weighted graph G = (V,E,w), we say a subgraph H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if
(1− ε)LG LH (1 + ε)LG. (1) By definition, it is easy to show that, if we decompose the edge set of a graph G = (V,E) into E1, . . . , E` for a constant ` and Hi is a spectral sparsifier of Gi = (V,Ei) for any 1 ≤ i ≤ `, then the graph formed by the union of edge sets from Hi is a spectral sparsifier of G. It is known that, for any undirected graph G of n vertices, there is a (1 + ε)-spectral sparsifier of G with O(n/ε2) edges, and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves the cluster structure of a graph.
Models of computation. We will study distributed clustering in two models for distributed data: the message passing model and the blackboard model. The message passing model represents those distributed computation systems with point-to-point communication, and the blackboard model represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are s sites P1, . . . ,Ps, and one coordinator. These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to
as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model up to small factors. The input is initially distributed at the s sites. The computation is in terms of rounds: at the beginning of each round, the coordinator sends a message to some of the s sites, and then each of those sites that have been contacted by the coordinator sends a message back to the coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the coordinator is simply a blackboard where these s sites P1, . . . ,Ps can share information; in other words, if one site sends a message to the coordinator/blackboard then all the other s− 1 sites can see this information without further communication. The order for the sites to speak is decided by the contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the channels. The two models are now standard in multiparty communication complexity (see, e.g., [5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing community; the main difference is that in our models we do not post any bandwidth limitations at each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph G = (V,E) can be partitioned into k clusters, where vertices in each cluster S are highly connected to each other, and there are fewer edges between S and V \S. To formalize this notion, we define the conductance of a vertex set S by φG(S) , w(S, V \S)/µ(S). Generalizing the Cheeger constant, we define the k-way expansion constant of graphG by ρ(k) , minpartition A1, . . . , Ak max1≤i≤k φG(Ai). Notice that a graph G has k clusters if the value of ρ(k) is small.
Lee et al. [12] relate the value of ρ(k) to λk(LG) by the following higher-order Cheeger inequality:
λk(LG) 2
≤ ρ(k) ≤ O(k2) √ λk(LG).
Based on this, a large gap between λk+1(LG) and ρ(k) implies (i) the existence of a k-way partition {Si}ki=1 with smaller value of φG(Si) ≤ ρ(k), and (ii) any (k + 1)-way partition of G contains a subset with high conductance ρ(k + 1) ≥ λk+1(LG)/2. Hence, a large gap between λk+1(LG) and ρ(k) ensures that G has exactly k clusters.
In the following, we assume that Υ , λk+1(LG)/ρ(k) = Ω(k3), as this assumption was used in the literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in the section are based on the following spectral clustering algorithm: (i) compute the k eigenvectors f1, . . . , fk of LG associated with λ1(LG), . . . , λk(LG); (ii) embed every vertex v to a point in Rk through the embedding F (v) = 1√
dv · (f1(v), . . . , fk(v)); (iii) run
k-means on the embedded points {F (v)}v∈V , and group the vertices of G into k clusters according to the output of k-means.
3.1 The message passing model
We assume the edges of the input graphG = (V,E) are arbitrarily allocated among s sitesP1, · · · ,Ps, and we use Ei to denote the edge set maintained by site Pi. Our proposed algorithm consists of two steps: (i) every Pi computes a linear-sized (1 + c)-spectral sparsifier Hi of Gi , (V,Ei), for a small constant c ≤ 1/10, and sends the edge set of Hi, denoted by E′i, to the coordinator; (ii) the coordinator runs a spectral clustering algorithm on the union of received graphs H , ( V, ⋃k i=1E ′ i ) . The theorem below summarizes the performance of this algorithm, and shows the approximation guarantee of this algorithm is as good as the provable guarantee of spectral clustering known in the centralized setting [17]. Theorem 3.1. Let G = (V,E) be an n-vertex graph with Υ = Ω(k3), and suppose the edges of G are arbitrarily allocated among s sites. Assume S1, · · · , Sk is an optimal partition that achieves ρ(k). Then, the algorithm above computes a partition A1, . . . , Ak satisfying vol(Ai4Si) = O ( k3 ·Υ−1 · vol(Si) ) for any 1 ≤ i ≤ k. The total communication cost of this algorithm is Õ(ns) bits.
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor. Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers: for any n× n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication. This follows since such a spectral sparsifier can be used to solve the spectral clustering problem. Spectral sparsification has played an important role in designing fast algorithms from different areas, e.g., machine learning, and numerical linear algebra. Hence our lower bound result for constructing spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the blackboard model. Our result is based on the observation that a spectral sparsifier preserves the structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described as follows: for any input PSD matrix K with λmax(K) ≤ λu and all the non-zero eigenvalues of K at least λ`, we define d = dlog2(λu/λ`)e and construct a chain of d+ 1 matrices
[K(0),K(1), . . . ,K(d)], (2)
where γ(i) = λu/2i and K(i) = K + γ(i)I . Notice that in the chain above every K(i − 1) is obtained by adding weights to the diagonal entries of K(i), and K(i− 1) approximates K(i) as long as the weights added to the diagonal entries are small. We will construct this chain recursively, so that K(0) has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since K is the Laplacian matrix of a graph G, it is easy to see that d = O(log n) as long as the edge weights of G are polynomially upper-bounded in n. Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) K r K(d) r 2K; (2) K(`) K(`− 1) 2K(`) for all ` ∈ {1, . . . , d}; (3) K(0) 2γ(0)I 2K(0).
Based on Lemma 3.3, we will construct a chain of matrices

[K̃(0), K̃(1), ..., K̃(d)]  (3)

in the blackboard model, such that every K̃(ℓ) is a spectral sparsifier of K(ℓ), and every K̃(ℓ + 1) can be constructed from K̃(ℓ). The basic idea behind our construction is to use the relations among different K(ℓ) shown in Lemma 3.3 and the fact that, for any K = BᵀB, sampling rows of B with respect to their leverage scores can be used to obtain a matrix approximating K.

Theorem 3.4. Let G be an undirected graph on n vertices, where the edges of G are allocated among s sites, and the edge weights are polynomially upper bounded in n. Then, a spectral sparsifier of G can be constructed with Õ(n + s) bits of communication in the blackboard model. That is, the chain (3) can be constructed with Õ(n + s) bits of communication in the blackboard model.
Proof. Let K = BᵀB be the Laplacian matrix of the underlying graph G, where B ∈ Rm×n is the edge-vertex incidence matrix of G. We will prove that every K̃(i+ 1) can be constructed based on K̃(i) with Õ(n+ s) bits of communication. This implies that K̃(d), a (1 + ε)-spectral sparsifier of K, can be constructed with Õ(n+ s) bits of communication, as the length of the chain d = O(log n).
First of all, notice that λu ≤ 2n, and the value of n can be obtained with communication cost Õ(n + s) (different sites sequentially write the new IDs of the vertices on the blackboard). In the following we assume that λu is the upper bound of λmax that we actually obtained in the blackboard.
Base case of ℓ = 0: By definition, K(0) = K + λ_u·I, and (1/2)·K(0) ⪯ γ(0)·I ⪯ K(0), due to Statement 3 of Lemma 3.3. Let ⊕ denote appending the rows of one matrix to another. We define B_{γ(0)} = B ⊕ √(γ(0))·I, and write K(0) = K + γ(0)·I = B_{γ(0)}ᵀ B_{γ(0)}. By defining τ_i = b_iᵀ (K(0))⁺ b_i for each row of B_{γ(0)}, we have τ_i ≤ b_iᵀ (γ(0)·I)⁺ b_i ≤ 2·τ_i. Let τ̃_i = b_iᵀ (γ(0)·I)⁺ b_i be the leverage score of b_i approximated using γ(0)·I, and let τ̃ be the vector of approximate leverage scores, with the leverage scores of the n rows corresponding to √(γ(0))·I rounded up to 1. Then, with high probability sampling O(ε^{-2} n log n) rows of B will give a matrix K̃(0) such that (1 − ε)K(0) ⪯ K̃(0) ⪯ (1 + ε)K(0). Notice that, as every row of B corresponds to an edge of G, the approximate leverage scores τ̃_i for different edges can be computed locally by different sites maintaining the edges, and the sites only need to send the information of the sampled edges to the blackboard, hence the communication cost is Õ(n + s) bits.
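A minimal sketch of this base-case sampling step, assuming B already incorporates the square roots of the edge weights; sample_base_case is an illustrative name and constants are not tuned. The key point it demonstrates is that τ̃_i = ‖b_i‖²/γ(0) is computable locally, edge by edge.

```python
import numpy as np

def sample_base_case(B, gamma0, eps=0.5, rng=np.random.default_rng(1)):
    """B: (m, n) edge-vertex incidence matrix; returns K̃(0) ≈ K(0)."""
    m, n = B.shape
    # τ̃_i = b_i^T (γ(0)·I)^+ b_i = ||b_i||^2 / γ(0): a purely local quantity.
    tau = (B ** 2).sum(axis=1) / gamma0
    p = tau / tau.sum()
    q = int(np.ceil(eps ** -2 * n * np.log(max(n, 2))))
    counts = rng.multinomial(q, p)
    keep = counts > 0
    # Reweight sampled rows so E[B̃ᵀB̃] = BᵀB; the γ(0)·I block is added back
    # deterministically (its n rows have scores rounded up to 1).
    Bs = B[keep] * np.sqrt(counts[keep] / (q * p[keep]))[:, None]
    return Bs.T @ Bs + gamma0 * np.eye(n)   # K̃(0) ≈ K(0) = BᵀB + γ(0)·I
```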
Induction step: We assume that (1 − ε)K(ℓ) ⪯_r K̃(ℓ) ⪯_r (1 + ε)K(ℓ), and the blackboard maintains the matrix K̃(ℓ). This implies that (1 − ε)/(1 + ε)·K(ℓ) ⪯_r 1/(1 + ε)·K̃(ℓ) ⪯_r K(ℓ). Combining this with Statement 2 of Lemma 3.3, we have that

(1 − ε)/(2(1 + ε))·K(ℓ + 1) ⪯_r 1/(2(1 + ε))·K̃(ℓ) ⪯ K(ℓ + 1).

We apply the same sampling procedure as in the base case, and obtain a matrix K̃(ℓ + 1) such that (1 − ε)K(ℓ + 1) ⪯_r K̃(ℓ + 1) ⪯_r (1 + ε)K(ℓ + 1). Notice that, since K̃(ℓ) is written on the blackboard, the probabilities used for sampling individual edges can be computed locally by different sites, and in each round only the sampled edges will be sent to the blackboard in order for the blackboard to obtain K̃(ℓ + 1). Hence, the total communication cost in each iteration is Õ(n + s) bits. Combining this with the fact that the chain length d = O(log n) proves the theorem.
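Putting the base case and induction step together, the whole recursion looks roughly as follows. This is a hedged sketch under the assumptions that a dense pseudoinverse is affordable, that the smallest non-zero eigenvalue of K is at least lam_l, and that constant factors in the sampling rates are ignored; leverage_sample and sparsifier_chain are our names, and communication itself is not simulated, only the per-round sampling that each round's blackboard contents enable.

```python
import numpy as np

def leverage_sample(B, M, n, eps, rng):
    """Sample rows of B w.r.t. scores b_i^T M^+ b_i, reweighted to be unbiased."""
    tau = np.einsum('ij,jk,ik->i', B, np.linalg.pinv(M), B)
    p = np.clip(tau, 1e-12, None); p /= p.sum()
    q = int(np.ceil(eps ** -2 * n * np.log(max(n, 2))))
    c = rng.multinomial(q, p); keep = c > 0
    Bs = B[keep] * np.sqrt(c[keep] / (q * p[keep]))[:, None]
    return Bs.T @ Bs

def sparsifier_chain(B, lam_l=1.0, eps=0.5, rng=np.random.default_rng(2)):
    n = B.shape[1]
    lam_u = 2.0 * n                          # λ_u ≤ 2n for a graph Laplacian
    d = int(np.ceil(np.log2(lam_u / lam_l)))
    gamma = lam_u                            # base case: scores from γ(0)·I
    K_tilde = leverage_sample(B, gamma * np.eye(n), n, eps, rng) + gamma * np.eye(n)
    for _ in range(d):                       # K̃(ℓ) drives round ℓ+1's sampling
        gamma /= 2.0
        K_tilde = leverage_sample(B, K_tilde, n, eps, rng) + gamma * np.eye(n)
    return K_tilde                           # K̃(d); recall K ⪯_r K(d) ⪯_r 2K
```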
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters, we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n+ s) bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model, since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including k-median, k-means and k-center. Let P be a set of points of size n in a metric space with distance function d(·,·), and let k ≤ n be an integer. In the k-center problem we want to find a set C (|C| = k) such that max_{p∈P} d(p, C) is minimized, where d(p, C) = min_{c∈C} d(p, c). In k-median and k-means we replace the objective function max_{p∈P} d(p, C) with ∑_{p∈P} d(p, C) and ∑_{p∈P} d(p, C)², respectively.
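For concreteness, the three objectives can be written in a few lines of Python (assuming Euclidean points, as in the rest of this section; the function names are ours):

```python
import numpy as np

def dists_to_centers(P, C):
    """d(p, C) = min_{c in C} ||p - c||_2 for every p in P."""
    return np.sqrt(((P[:, None, :] - C[None, :, :]) ** 2).sum(-1)).min(axis=1)

def k_center_cost(P, C):  return dists_to_centers(P, C).max()
def k_median_cost(P, C):  return dists_to_centers(P, C).sum()
def k_means_cost(P, C):   return (dists_to_centers(P, C) ** 2).sum()
```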
4.1 The message passing model
As mentioned, for constant dimensional Euclidean space and a constant c > 1, there are algorithms that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits of communication.
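The sequential core of the guessing approach is easy to state; the following is a hedged sketch, not the distributed implementation of [8]. For a guessed radius r, greedily open a center at any point farther than 2r from all current centers, and declare the guess too small if more than k centers open. Increasing guesses geometrically by a factor (1 + δ) yields a 2(1 + δ)-approximation once the guess crosses the optimum, consistent with the 2.01 figure quoted above.

```python
import numpy as np

def k_center_guess(P, k, r):
    """Try radius r; return centers, or None if r is provably too small."""
    centers = []
    for p in P:
        if not centers or min(np.linalg.norm(p - c) for c in centers) > 2 * r:
            centers.append(p)
            if len(centers) > k:
                return None          # more than k centers: guess r too small
    return np.array(centers)

def k_center_guessing(P, k, r0, delta=0.005):
    r = r0                           # r0: any positive lower bound on the optimum
    while True:
        C = k_center_guess(P, k, r)
        if C is not None:
            return r, C              # every point is within 2r of some center
        r *= 1 + delta
```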
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due to space constraints we defer the proof to the full version of this paper. The proof uses tools from multiparty communication complexity. In fact, we can prove a stronger statement: any algorithm that can differentiate whether we have k points or k + 1 points in total in the message passing model needs Ω(sk) bits of communication.

Theorem 4.1. For any c > 1, computing a c-approximation for k-median, k-means or k-center correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a (c_1, c_2)-approximation (c_1, c_2 > 1) if, whenever the optimal solution costs W using k centers, the output of the algorithm costs at most c_1·W while using at most c_2·k centers. We can show that for k-median and k-means, the Ω(sk) lower bound holds even for algorithms with bicriteria approximations. The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any c ∈ [1, 1.01], computing (7.1− 6c, c)-bicriteria-approximation for k-median or k-means correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s+ k) bits of communication for k-median and k-means. Due to space constraints we defer the description of the algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel guessing algorithm in the blackboard model using Õ(s+ k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and k-center correctly with probability 0.9 in the blackboard model using Õ(s+k) bits of communication.
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing and blackboard models. We will compare the following three algorithms. (1) Baseline: each site sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the qualities of the results via the normalized cut, defined as ncut(A_1, ..., A_k) = (1/2) ∑_{i∈[k]} w(A_i, V∖A_i)/vol(A_i), which is a standard objective function to be minimized for spectral clustering algorithms.
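This objective is straightforward to compute from an adjacency matrix; a short sketch (our naming), assuming W is symmetric and labels assigns each vertex its cluster:

```python
import numpy as np

def ncut(W, labels):
    """ncut(A_1, ..., A_k) = (1/2) * sum_i w(A_i, V \\ A_i) / vol(A_i)."""
    deg = W.sum(axis=1)
    total = 0.0
    for c in np.unique(labels):
        inside = labels == c
        cut = W[inside][:, ~inside].sum()     # w(A_i, V \ A_i)
        total += cut / deg[inside].sum()      # ... divided by vol(A_i)
    return 0.5 * total
```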
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains n = 14,000 coordinates in R². We consider each point to be a vertex. For any two vertices u, v, we add an edge with weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 0.1 when one vertex is among the 7000 nearest points of the other. This construction results in a graph with about 110,000,000 edges.

• Gauss: this dataset contains n = 10,000 points in R². There are 4 clusters in this dataset, each generated using a Gaussian distribution. We construct a complete graph as the similarity graph. For any two vertices u, v, we define the weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 1. The resulting graph has about 100,000,000 edges.

• Sculpture: a photo of The Greek Slave. We use an 80 × 150 version of this photo where each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point in R⁵, i.e., (x, y, r, g, b), where the latter three coordinates are the RGB values. For any two vertices u, v, we put an edge between u, v with weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 0.5 if one of u, v is among the 5000 nearest points of the other. This results in a graph with about 70,000,000 edges.
In the distributed model edges are randomly partitioned across s sites.
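The graph construction and the random edge partition can be reproduced as follows (a sketch with our function names, assuming the point sets fit in memory; for the complete-graph Gauss dataset one would skip the nearest-neighbor filter):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def similarity_graph(X, K, sigma):
    """Gaussian edge weights exp(-||u - v||^2 / sigma^2) between K-NN pairs."""
    nbrs = NearestNeighbors(n_neighbors=K + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    pairs = set()
    for u in range(len(X)):
        for v in idx[u, 1:]:                  # idx[u, 0] is u itself
            pairs.add((min(u, v), max(u, v)))
    return [(u, v, np.exp(-np.sum((X[u] - X[v]) ** 2) / sigma ** 2))
            for u, v in pairs]

def partition_edges(edges, s, rng=np.random.default_rng(0)):
    """Assign each edge to one of s sites uniformly at random."""
    sites = rng.integers(0, s, size=len(edges))
    return [[e for e, t in zip(edges, sites) if t == i] for i in range(s)]
```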
Results on clustering quality. We visualize the clustered results for the Twomoons dataset in Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar qualities. For simplicity, here we only present the visualization for s = 15. Similar results were observed when we varied the values of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms. The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the ncut value of Blackboard is independent of s.
Results on Communication Costs. We compare the communication costs of different algorithms in Figure 3. We observe that while achieving similar clustering qualities as Baseline, both MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders of magnitudes in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323. | 1. What are the main contributions and findings of the paper regarding communication costs and clustering tasks?
2. What are the strengths and weaknesses of the proposed algorithms for clustering tasks?
3. How does the reviewer assess the clarity and organization of the paper, particularly with regards to the technical supplementary material?
4. What are the questions raised by the reviewer regarding the presentation of the algorithm for the graph clustering task?
5. What are the concerns about the experimental validation of the graph clustering algorithm, especially when applied to real data sets? | Review | Review
The authors prove lower bounds for communication costs for clustering tasks in a point-to-point and broadcast setting. The clustering tasks considered are clustering a graph that meets certain spectral "clusterability" properties, as well as three point clustering tasks (k-means, k-medians, k-centers). The authors then present algorithms for these tasks, and prove their algorithms are within poly-logarithmic factors of the communication cost lower bounds. Experimentally, the authors verify that their algorithms recover clusters well for both tasks and have lower communication costs than simply sending all data to one coordinator. With 6.5 pages of technical supplementary material, this feels like an attempt to cram two papers into one. As a result it was difficult to read because definitions and explanations were split between the paper and the supp mat. I think splitting this into one paper that proves the lower bounds on the communication costs, and another that presents the clustering algorithms may make sense. The algorithm for solving the graph clustering task within the point-to-point communication model isn't clearly presented. Line 175 reads "every site P_i computes a linear-sized (1+\Theta(1))-spectral sparsifier H_i of G_i." It is unclear what \Theta(1) means. Is it 0.5, is it 10, or is it 1000? What value was used in the experiments section? In the experiments section only the graph clustering algorithm is validated using synthetic data sets. Do the spectral separation properties assumed to motivate the graph clustering algorithm hold in these datasets? What happens on real data sets? Furthermore it is stated that the edges are randomly partitioned among sites. In this case, is there variance in results? What happens if edges are partitioned in an adversarial manner? |
NIPS | Title
Communication-Optimal Distributed Clustering
Abstract
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster n points or n vertices in a graph distributed across s servers, for a worst-case partitioning the communication complexity in a point-to-point model is n · s, while in the broadcast model it is n+ s. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining, computer vision, and social network analysis. Example applications of clustering include grouping similar webpages by search engines, finding users with common interests in a social network, and identifying different objects in a picture or video. For these applications, one can model the objects that need to be clustered as points in Euclidean space R^d, where the similarities of two objects are represented by the Euclidean distance between the two points. Then the task of clustering is to choose k points as centers, so that the total distance from all input points to their corresponding closest centers is minimized. Depending on different distance objective functions, three typical problems have been studied: k-means, k-median, and k-center.
The other popular approach for clustering is to model the input data as vertices of a graph, and the similarity between two objects is represented by the weight of the edge connecting the corresponding vertices. For this scenario, one is asked to partition the vertices into clusters so that the “highly connected” vertices belong to the same cluster. A widely-used approach for graph clustering is spectral clustering, which embeds the vertices of a graph into the points in Rk through the bottom k eigenvectors of the graph’s Laplacian matrix, and applies k-means on the embedded points.
∗Full version appears on arXiv, 2017, under the same title.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been widely used in practice, and have been the subject of extensive theoretical and experimental studies over the decades. However, these algorithms are designed for the centralized setting, and are not applicable in the setting of large-scale datasets that are maintained remotely by different sites. In particular, collecting the information from all the remote sites and performing a centralized clustering algorithm is infeasible due to high communication costs, and new distributed clustering algorithms with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the message-passing model, there is a communication channel between each pair of users. This may be impractical, and the so-called coordinator model can often be used in its place; in the coordinator model there is a centralized site called the coordinator, and all communication goes through the coordinator. This affects the total communication by a factor of two, since the coordinator can forward a message from one server to another and therefore simulate a point-to-point protocol. There is also an additional additive O(log s) bits per message, where s is the number of sites, since a server must specify to the coordinator where to forward its message. In the model with a broadcast channel, sometimes referred to as the blackboard model, the coordinator has the power to send a single message which is received by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the blackboard model is at least as powerful as the message-passing model, it is often unclear how to exploit its power to obtain better bounds for specific problems. Also, for a number of problems the communication complexity is the same in both models, such as computing the sum of s length-n bit vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20]. Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns) in the message passing model, and have communication cost Õ(n + s) in the blackboard model, where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing model has each site send a spectral sparsifier of its local data to the coordinator, who then merges them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for solving the graph clustering problem. Our algorithm in the blackboard model is technically more involved, as we show a particular recursive sampling procedure for building a spectral sparsifier can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of building spectral sparsifiers can be implemented with low communication in the blackboard model. Our algorithms demonstrate the surprising power of the blackboard model for clustering problems. Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally dominant linear systems in a distributed model. Any such system can be converted into a system involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1, computing a c-approximation for k-median, k-means, or k-center correctly with constant probability in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower bound, and show even for bicriteria clustering algorithms, which may output a constant factor more clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are based on communication and information complexity. Our results imply that existing algorithms [3] for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors. For the blackboard model, we present an algorithm for k-median and k-means that achieves an O(1)-approximation using Õ(s+ k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral clustering surprisingly well in real-world datasets. For example, when we partition a graph with over 70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated in the blackboard model and 8% are communicated in the message passing model, while the values
of the normalized cut (the objective function of spectral clustering) given in those two models are at most 2% larger than the ones given by the centralized algorithm, and the visualized results are almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical, distributed computation. When the number of sites is large, the blackboard model incurs significantly less communication than the message passing model, e.g., in the Twomoons dataset when there are 90 sites, the message passing model communicates 9 times as many edges as communicated in the blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means ([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices that can be used for distributed k-means. The main takeaway is that there is no previous work which develops protocols for spectral clustering in the common message passing and blackboard models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist (e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let G = (V,E,w) be an undirected graph with n vertices, m edges, and weight function w : V × V → R≥0. The set of neighbors of a vertex v is represented by N(v), and its degree is d_v = ∑_{u∼v} w(u, v). The maximum degree of G is defined to be ∆(G) = max_v{d_v}. For any set S ⊆ V, let µ(S) ≜ ∑_{v∈S} d_v. For any sets S, T ⊆ V, we define w(S, T) ≜ ∑_{u∈S, v∈T} w(u, v) to be the total weight of edges crossing S and T. For two sets X and Y, the symmetric difference of X and Y is defined as X △ Y ≜ (X∖Y) ∪ (Y∖X). For any matrix A ∈ R^{n×n}, let λ_1(A) ≤ ··· ≤ λ_n(A) = λ_max(A) be the eigenvalues of A. For any two matrices A, B ∈ R^{n×n}, we write A ⪯ B to represent that B − A is positive semi-definite (PSD). Notice that this condition implies that xᵀAx ≤ xᵀBx for any x ∈ R^n. Sometimes we also use a weaker notation (1 − ε)A ⪯_r B ⪯_r (1 + ε)A to indicate that (1 − ε)xᵀAx ≤ xᵀBx ≤ (1 + ε)xᵀAx for all x in the row span of A.
Graph Laplacian. The Laplacian matrix of G is an n × n matrix L_G defined by L_G = D_G − A_G, where A_G is the adjacency matrix of G defined by A_G(u, v) = w(u, v), and D_G is the n × n diagonal matrix with D_G(v, v) = d_v for any v ∈ V[G]. Alternatively, we can write L_G with respect to a signed edge-vertex incidence matrix: we assign every edge e = {u, v} an arbitrary orientation, and let B_G(e, v) = 1 if v is e's head, B_G(e, v) = −1 if v is e's tail, and B_G(e, v) = 0 otherwise. We further define a diagonal matrix W_G ∈ R^{m×m}, where W_G(e, e) = w_e for any edge e ∈ E[G]. Then, we can write L_G as L_G = B_Gᵀ W_G B_G. The normalized Laplacian matrix of G is defined by 𝓛_G ≜ D_G^{−1/2} L_G D_G^{−1/2} = I − D_G^{−1/2} A_G D_G^{−1/2}. We sometimes drop the subscript G when the underlying graph is clear from the context.
Spectral sparsification. For any undirected and weighted graph G = (V,E,w), we say a subgraph H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if
(1 − ε)·L_G ⪯ L_H ⪯ (1 + ε)·L_G.  (1)

By definition, it is easy to show that, if we decompose the edge set of a graph G = (V,E) into E_1, ..., E_ℓ for a constant ℓ and H_i is a spectral sparsifier of G_i = (V, E_i) for any 1 ≤ i ≤ ℓ, then the graph formed by the union of edge sets from H_i is a spectral sparsifier of G. It is known that, for any undirected graph G of n vertices, there is a (1 + ε)-spectral sparsifier of G with O(n/ε²) edges, and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves the cluster structure of a graph.
Models of computation. We will study distributed clustering in two models for distributed data: the message passing model and the blackboard model. The message passing model represents those distributed computation systems with point-to-point communication, and the blackboard model represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are s sites P1, . . . ,Ps, and one coordinator. These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to
as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model up to small factors. The input is initially distributed at the s sites. The computation is in terms of rounds: at the beginning of each round, the coordinator sends a message to some of the s sites, and then each of those sites that have been contacted by the coordinator sends a message back to the coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the coordinator is simply a blackboard where these s sites P1, . . . ,Ps can share information; in other words, if one site sends a message to the coordinator/blackboard then all the other s− 1 sites can see this information without further communication. The order for the sites to speak is decided by the contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the channels. The two models are now standard in multiparty communication complexity (see, e.g., [5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing community; the main difference is that in our models we do not post any bandwidth limitations at each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph G = (V,E) can be partitioned into k clusters, where vertices in each cluster S are highly connected to each other, and there are fewer edges between S and V∖S. To formalize this notion, we define the conductance of a vertex set S by φ_G(S) ≜ w(S, V∖S)/µ(S). Generalizing the Cheeger constant, we define the k-way expansion constant of graph G by ρ(k) ≜ min_{partitions A_1, ..., A_k} max_{1≤i≤k} φ_G(A_i). Notice that a graph G has k clusters if the value of ρ(k) is small.
Lee et al. [12] relate the value of ρ(k) to λ_k(𝓛_G) by the following higher-order Cheeger inequality:

λ_k(𝓛_G)/2 ≤ ρ(k) ≤ O(k²)·√(λ_k(𝓛_G)).
Based on this, a large gap between λ_{k+1}(𝓛_G) and ρ(k) implies (i) the existence of a k-way partition {S_i}_{i=1}^k with smaller value of φ_G(S_i) ≤ ρ(k), and (ii) any (k + 1)-way partition of G contains a subset with high conductance ρ(k + 1) ≥ λ_{k+1}(𝓛_G)/2. Hence, a large gap between λ_{k+1}(𝓛_G) and ρ(k) ensures that G has exactly k clusters.

In the following, we assume that Υ ≜ λ_{k+1}(𝓛_G)/ρ(k) = Ω(k³), as this assumption was used in the literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in the section are based on the following spectral clustering algorithm: (i) compute the k eigenvectors f_1, ..., f_k of 𝓛_G associated with λ_1(𝓛_G), ..., λ_k(𝓛_G); (ii) embed every vertex v to a point in R^k through the embedding F(v) = (1/√d_v)·(f_1(v), ..., f_k(v)); (iii) run k-means on the embedded points {F(v)}_{v∈V}, and group the vertices of G into k clusters according to the output of k-means.
3.1 The message passing model
We assume the edges of the input graph G = (V,E) are arbitrarily allocated among s sites P_1, ..., P_s, and we use E_i to denote the edge set maintained by site P_i. Our proposed algorithm consists of two steps: (i) every P_i computes a linear-sized (1 + c)-spectral sparsifier H_i of G_i ≜ (V, E_i), for a small constant c ≤ 1/10, and sends the edge set of H_i, denoted by E'_i, to the coordinator; (ii) the coordinator runs a spectral clustering algorithm on the union of received graphs H ≜ (V, ⋃_{i=1}^{s} E'_i). The theorem below summarizes the performance of this algorithm, and shows that its approximation guarantee is as good as the provable guarantee of spectral clustering known in the centralized setting [17].

Theorem 3.1. Let G = (V,E) be an n-vertex graph with Υ = Ω(k^3), and suppose the edges of G are arbitrarily allocated among s sites. Assume S_1, ..., S_k is an optimal partition that achieves ρ(k). Then, the algorithm above computes a partition A_1, ..., A_k satisfying vol(A_i △ S_i) = O(k^3 · Υ^{-1} · vol(S_i)) for any 1 ≤ i ≤ k. The total communication cost of this algorithm is Õ(ns) bits.
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor.

Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers: for any n × n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication. This follows since such a spectral sparsifier can be used to solve the spectral clustering problem. Spectral sparsification has played an important role in designing fast algorithms in different areas, e.g., machine learning and numerical linear algebra. Hence our lower bound result for constructing spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the blackboard model. Our result is based on the observation that a spectral sparsifier preserves the structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described as follows: for any input PSD matrix K with λ_max(K) ≤ λ_u and all the non-zero eigenvalues of K at least λ_ℓ, we define d = ⌈log_2(λ_u/λ_ℓ)⌉ and construct a chain of d + 1 matrices

[K(0), K(1), ..., K(d)],  (2)

where γ(i) = λ_u/2^i and K(i) = K + γ(i)·I. Notice that in the chain above every K(i − 1) is obtained by adding weights to the diagonal entries of K(i), and K(i − 1) approximates K(i) as long as the weights added to the diagonal entries are small. We will construct this chain recursively, so that K(0) has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since K is the Laplacian matrix of a graph G, it is easy to see that d = O(log n) as long as the edge weights of G are polynomially upper-bounded in n.

Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) K ⪯_r K(d) ⪯_r 2K; (2) K(ℓ) ⪯ K(ℓ − 1) ⪯ 2K(ℓ) for all ℓ ∈ {1, ..., d}; (3) K(0) ⪯ 2γ(0)·I ⪯ 2K(0).
Based on Lemma 3.3, we will construct a chain of matrices

[K̃(0), K̃(1), ..., K̃(d)]  (3)

in the blackboard model, such that every K̃(ℓ) is a spectral sparsifier of K(ℓ), and every K̃(ℓ + 1) can be constructed from K̃(ℓ). The basic idea behind our construction is to use the relations among different K(ℓ) shown in Lemma 3.3 and the fact that, for any K = BᵀB, sampling rows of B with respect to their leverage scores can be used to obtain a matrix approximating K.

Theorem 3.4. Let G be an undirected graph on n vertices, where the edges of G are allocated among s sites, and the edge weights are polynomially upper bounded in n. Then, a spectral sparsifier of G can be constructed with Õ(n + s) bits of communication in the blackboard model. That is, the chain (3) can be constructed with Õ(n + s) bits of communication in the blackboard model.
Proof. Let K = BᵀB be the Laplacian matrix of the underlying graph G, where B ∈ Rm×n is the edge-vertex incidence matrix of G. We will prove that every K̃(i+ 1) can be constructed based on K̃(i) with Õ(n+ s) bits of communication. This implies that K̃(d), a (1 + ε)-spectral sparsifier of K, can be constructed with Õ(n+ s) bits of communication, as the length of the chain d = O(log n).
First of all, notice that λu ≤ 2n, and the value of n can be obtained with communication cost Õ(n + s) (different sites sequentially write the new IDs of the vertices on the blackboard). In the following we assume that λu is the upper bound of λmax that we actually obtained in the blackboard.
Base case of ℓ = 0: By definition, K(0) = K + λ_u·I, and (1/2)·K(0) ⪯ γ(0)·I ⪯ K(0), due to Statement 3 of Lemma 3.3. Let ⊕ denote appending the rows of one matrix to another. We define B_{γ(0)} = B ⊕ √(γ(0))·I, and write K(0) = K + γ(0)·I = B_{γ(0)}ᵀ B_{γ(0)}. By defining τ_i = b_iᵀ (K(0))⁺ b_i for each row of B_{γ(0)}, we have τ_i ≤ b_iᵀ (γ(0)·I)⁺ b_i ≤ 2·τ_i. Let τ̃_i = b_iᵀ (γ(0)·I)⁺ b_i be the leverage score of b_i approximated using γ(0)·I, and let τ̃ be the vector of approximate leverage scores, with the leverage scores of the n rows corresponding to √(γ(0))·I rounded up to 1. Then, with high probability sampling O(ε^{-2} n log n) rows of B will give a matrix K̃(0) such that (1 − ε)K(0) ⪯ K̃(0) ⪯ (1 + ε)K(0). Notice that, as every row of B corresponds to an edge of G, the approximate leverage scores τ̃_i for different edges can be computed locally by different sites maintaining the edges, and the sites only need to send the information of the sampled edges to the blackboard, hence the communication cost is Õ(n + s) bits.
Induction step: We assume that (1 − ε)K(ℓ) ⪯_r K̃(ℓ) ⪯_r (1 + ε)K(ℓ), and the blackboard maintains the matrix K̃(ℓ). This implies that (1 − ε)/(1 + ε)·K(ℓ) ⪯_r 1/(1 + ε)·K̃(ℓ) ⪯_r K(ℓ). Combining this with Statement 2 of Lemma 3.3, we have that

(1 − ε)/(2(1 + ε))·K(ℓ + 1) ⪯_r 1/(2(1 + ε))·K̃(ℓ) ⪯ K(ℓ + 1).

We apply the same sampling procedure as in the base case, and obtain a matrix K̃(ℓ + 1) such that (1 − ε)K(ℓ + 1) ⪯_r K̃(ℓ + 1) ⪯_r (1 + ε)K(ℓ + 1). Notice that, since K̃(ℓ) is written on the blackboard, the probabilities used for sampling individual edges can be computed locally by different sites, and in each round only the sampled edges will be sent to the blackboard in order for the blackboard to obtain K̃(ℓ + 1). Hence, the total communication cost in each iteration is Õ(n + s) bits. Combining this with the fact that the chain length d = O(log n) proves the theorem.
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters, we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n+ s) bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model, since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including k-median, k-means and k-center. Let P be a set of points of size n in a metric space with distance function d(·,·), and let k ≤ n be an integer. In the k-center problem we want to find a set C (|C| = k) such that max_{p∈P} d(p, C) is minimized, where d(p, C) = min_{c∈C} d(p, c). In k-median and k-means we replace the objective function max_{p∈P} d(p, C) with ∑_{p∈P} d(p, C) and ∑_{p∈P} d(p, C)², respectively.
4.1 The message passing model
As mentioned, for constant dimensional Euclidean space and a constant c > 1, there are algorithms that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits of communication.
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due to space constraints we defer the proof to the full version of this paper. The proof uses tools from multiparty communication complexity. In fact, we can prove a stronger statement: any algorithm that can differentiate whether we have k points or k + 1 points in total in the message passing model needs Ω(sk) bits of communication.

Theorem 4.1. For any c > 1, computing a c-approximation for k-median, k-means or k-center correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a (c_1, c_2)-approximation (c_1, c_2 > 1) if, whenever the optimal solution costs W using k centers, the output of the algorithm costs at most c_1·W while using at most c_2·k centers. We can show that for k-median and k-means, the Ω(sk) lower bound holds even for algorithms with bicriteria approximations. The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any c ∈ [1, 1.01], computing (7.1− 6c, c)-bicriteria-approximation for k-median or k-means correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s+ k) bits of communication for k-median and k-means. Due to space constraints we defer the description of the algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel guessing algorithm in the blackboard model using Õ(s+ k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and k-center correctly with probability 0.9 in the blackboard model using Õ(s+k) bits of communication.
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing and blackboard models. We will compare the following three algorithms. (1) Baseline: each site sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the qualities of the results via the normalized cut, defined as ncut(A_1, ..., A_k) = (1/2) ∑_{i∈[k]} w(A_i, V∖A_i)/vol(A_i), which is a standard objective function to be minimized for spectral clustering algorithms.
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains n = 14,000 coordinates in R². We consider each point to be a vertex. For any two vertices u, v, we add an edge with weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 0.1 when one vertex is among the 7000 nearest points of the other. This construction results in a graph with about 110,000,000 edges.

• Gauss: this dataset contains n = 10,000 points in R². There are 4 clusters in this dataset, each generated using a Gaussian distribution. We construct a complete graph as the similarity graph. For any two vertices u, v, we define the weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 1. The resulting graph has about 100,000,000 edges.

• Sculpture: a photo of The Greek Slave. We use an 80 × 150 version of this photo where each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point in R⁵, i.e., (x, y, r, g, b), where the latter three coordinates are the RGB values. For any two vertices u, v, we put an edge between u, v with weight w(u, v) = exp(−‖u − v‖₂²/σ²) with σ = 0.5 if one of u, v is among the 5000 nearest points of the other. This results in a graph with about 70,000,000 edges.
In the distributed model edges are randomly partitioned across s sites.
Results on clustering quality. We visualize the clustered results for the Twomoons dataset in Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar qualities. For simplicity, here we only present the visualization for s = 15. Similar results were observed when we varied the values of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms. The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the ncut value of Blackboard is independent of s.
Results on Communication Costs. We compare the communication costs of different algorithms in Figure 3. We observe that while achieving similar clustering qualities as Baseline, both MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders of magnitudes in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323. | 1. What is the novel approach introduced by the paper in cluster algorithms?
2. How does the proposed method improve communication efficiency in clustering algorithms?
3. Are there any concerns regarding the experiments conducted in the paper?
4. How does the reviewer assess the technical quality, novelty, potential impact, and clarity of the paper?
5. Is there anything missing in the conclusion of the paper that makes it difficult to follow? | Review | Review
The paper describes a way to decentralize clustering algorithms. They used a message passing model and a blackboard model. Experiments on three clustering datasets showed that the proposed algorithm is more communication efficient than the baseline clustering algorithm. Technical quality: Experiments are appropriate but incomplete. It is not clear from the experiment section if the distributed clustering algorithms improve upon a centralized version. Proofs are sound. Novelty/originality: Novel method (as far as I can see). Potential impact or usefulness: The paper describes a way of making a clustering algorithm more communication efficient, so impact or usefulness depends on the use of clustering algorithms in the community. Clarity and presentation: Conclusion missing! Very hard to follow. |
NIPS | Title
Deep Networks Provably Classify Data on Curves
Abstract
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems. We study a model problem with such structure—a binary classification task that uses a deep fully-connected neural network to classify data drawn from two disjoint smooth curves on the unit sphere. Aside from mild regularity conditions, we place no restrictions on the configuration of the curves. We prove that when (i) the network depth is large relative to certain geometric properties that set the difficulty of the problem and (ii) the network width and number of samples are polynomial in the depth, randomly-initialized gradient descent quickly learns to correctly classify all points on the two curves with high probability. To our knowledge, this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties. Our analysis proceeds by a reduction to dynamics in the neural tangent kernel (NTK) regime, where the network depth plays the role of a fitting resource in solving the classification problem. In particular, via fine-grained control of the decay properties of the NTK, we demonstrate that when the network is sufficiently deep, the NTK can be locally approximated by a translationally invariant operator on the manifolds and stably inverted over smooth functions, which guarantees convergence and generalization.
1 Introduction
In applied machine learning, engineering, and the sciences, we are frequently confronted with the problem of identifying low-dimensional structure in high-dimensional data. In certain well-structured data sets, identifying a good low-dimensional model is the principal task: examples include convolutional sparse models in microscopy [43] and neuroscience [10, 16], and low-rank models in collaborative filtering [7, 8]. Even more complicated datasets from problems such as image classification exhibit some form of low-dimensionality: recent experiments estimate the effective dimension of CIFAR-10 as 26 and the effective dimension of ImageNet as 43 [61]. The variability in these datasets can be thought of as comprising two parts: a “probabilistic” variability induced by the distribution of geometries associated with a given class, and a “geometric” variability associated with physical nuisances such as pose and illumination. The former is challenging to model analytically; virtually all progress on this issue has come through the introduction of large datasets and high-capacity learning machines. The latter induces a much cleaner analytical structure: transformations of a given image lie near a low-dimensional submanifold of the image space (Figure 1). The celebrated successes of convolutional neural networks in image classification seem to derive from their ability to simultaneously handle both types of variability. Studying how neural networks compute with data lying near a low-dimensional manifold is an essential step towards understanding how neural networks achieve invariance to continuous transformations of the image domain, and towards the longer term goal of developing a more comprehensive mathematical understanding of how neural networks compute with real data. At the same time, in some scientific and engineering problems, classifying manifold-structured data is the goal—one example is in gravitational wave astronomy [22, 30], where the goal is to distinguish true events from noise, and the events are generated by relatively simple physical systems with only a few degrees of freedom.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Motivated by these long term goals, in this paper we study the multiple manifold problem (Figure 1), a mathematical model problem in which we are presented with a finite set of labeled samples lying on disjoint low-dimensional submanifolds of a high-dimensional space, and the goal is to correctly classify every point on each of the submanifolds—a strong form of generalization. The central mathematical question is how the structure of the data (properties of the manifolds such as dimension, curvature, and separation) influences the resources (data samples, and network depth and width) required to guarantee generalization. Our main contribution is the first end-to-end analysis of this problem for a nontrivial class of manifolds: one-dimensional smooth curves that are non-intersecting, cusp-free, and without antipodal pairs of points. Subject to these constraints, the curves can be oriented essentially arbitrarily (say, non-linearly-separably, as in Figure 1), and the hypotheses of our results depend only on architectural resources and intrinsic geometric properties of the data. To our knowledge, this is the first generalization result for training a deep nonlinear network to classify structured data that makes no a-priori assumptions about the representation capacity of the network or about properties of the network after training.
Our analysis proceeds in the neural tangent kernel (NTK) regime of training, where the network is wide enough to guarantee that gradient descent can make large changes in the network output while making relatively small changes to the network weights. This approach is inspired by the recent work [57], which reduces the analysis of generalization in the one-dimensional multiple manifold problem to an auxiliary problem called the certificate problem. Solving the certificate problem amounts to proving that the target label function lies near the stable range of the NTK. The existence of certificates (and more generally, the conditions under which practically-trained neural networks can fit structured data) is open, except for a few very simple geometries which we will review below—in particular, [57] leaves this question completely open. Our technical contribution is to show that setting the network depth sufficiently large relative to intrinsic properties of the data guarantees the existence of a certificate (Theorem 3.1), resolving the one-dimensional case of the multiple manifold problem for a broad class of curves (Theorem 3.2). This leads in turn to a novel perspective on the role of the network depth as a fitting resource in the classification problem, which is inaccessible to shallow networks.
1.1 Related Work
Deep networks and low dimensional structure. Modern applications of deep neural networks include numerous examples of low-dimensional manifold structure, including pose and illumination variations in image classification [1, 5], as well as detection of structured signals such as electrocardiograms [14, 20], gravitational waves [22, 30], audio signals [13], and solutions to the diffusion equation [48]. Conventionally, to compute with such data one might begin by extracting a low-dimensional representation using nonlinear dimensionality reduction (“manifold learning”) algorithms [2–4, 6, 12, 54, 56]. For supervised tasks, there is also theoretical work on kernel regression over manifolds [9, 11, 19, 51]. These results rely on very general Sobolev embedding theorems, which are not precise enough to specify the interplay between regularity of the kernel and properties of the data need to obtain concrete resource tradeoffs in the two curve problem. There is also a literature which studies the resource requirements associated with approximating functions over low-dimensional manifolds [15, 29, 38, 44]: a typical result is that for a sufficiently smooth function there exists an approximating network whose complexity is controlled by intrinsic properties such as the dimension. In contrast, we seek algorithmic guarantees that prove that we can efficiently train deep neural networks for tasks with low-dimensional structure. This requires us to grapple with how the geometry of the data influences the dynamics of optimization methods.
Neural networks and structured data—theory? Spurred by insights in asymptotic infinite width [23, 24] and non-asymptotic [18, 21] settings, there has been a surge of recent theoretical work aimed at establishing guarantees for neural network training and generalization [26–28, 34, 37, 40, 49, 55]. Here, our interest is in end-to-end generalization guarantees, which are scarce in the literature: those that exist pertain to unstructured data with general targets, in the regression setting [32, 36, 46, 59], and those that involve low-dimensional structure consider only linear structure (i.e., spheres) [46]. For less general targets, there exist numerous works that pertain to the teacher-student setting, where the target is implemented by a neural network of suitable architecture with unstructured inputs [17, 33, 40, 49, 63]. Although adding this extra structure to the target function allows one to establish interesting separations in terms of e.g. sample complexity [31, 39, 49, 62] relative to the preceding analyses, which proceed in the “kernel regime”, we leverage kernel regime techniques in our present work because they allow us to study the interactions between deep networks and data with nonlinear low-dimensional structure, which is not possible with existing teacher-student tools. Relaxing slightly from results with end-to-end guarantees, there exist ‘conditional’ guarantees which require the existence of an efficient representation of the target mapping in terms of a certain RKHS associated to the neural network [34, 53, 57, 58]. In contrast, our present work obtains unconditional, end-to-end generalization guarantees for a nontrivial class of low-dimensional data geometries.
2 Problem Formulation
Notation. We use bold notation x, A for vectors and matrices/operators (respectively). We write ‖x‖_p = (∑_{i=1}^n |x_i|^p)^{1/p} for the ℓ_p norm of x, ⟨x, y⟩ = ∑_{i=1}^n x_i y_i for the euclidean inner product, and for a measure space (X, µ), ‖g‖_{L^p_µ} = (∫_X |g(x)|^p dµ(x))^{1/p} denotes the L^p_µ norm of a function g : X → R. The unit sphere in R^n is denoted S^{n−1}, and ∠(x, y) = cos⁻¹(⟨x, y⟩) denotes the angle between unit vectors. For a kernel K : X × X → R, we write K_µ[g](x) = ∫_X K(x, x′) g(x′) dµ(x′) for the action of the associated Fredholm integral operator; an omitted subscript denotes Lebesgue measure. We write P_S to denote the orthogonal projection operator onto a (closed) subspace S. Full notation is provided in Appendix B.
2.1 The Two Curve Problem1
A natural model problem for the tasks discussed in Section 1 is the classification of low-dimensional submanifolds using a neural network. In this work, we study the one-dimensional, two-class case of this problem, which we refer to as the two curve problem. To fix ideas, let n0 ≥ 3 denote the ambient dimension, and let M+ and M− be two disjoint smooth regular simple closed curves taking values in Sn0−1, which represent the two classes (Figure 1). In addition, we require that
¹ The content of this section follows the presentation of [57]; we reproduce it here for self-containedness. We omit some nonessential definitions and derivations for concision; see Appendix C.1 for these details.
the curves lie in a spherical cap of radius π/2: for example, the intersection of the sphere and the nonnegative orthant {x ∈ Rn0 |x ≥ 0}.2 Given N i.i.d. samples {xi}Ni=1 from a density ρ supported onM =M+ ∪M−, which is bounded above and below by positive constants ρmax and ρmin and has associated measure µ, as well as their corresponding ±1 labels, we train a feedforward neural network fθ : Rn0 → R with ReLU nonlinearities, uniform width n, and depth L (and parameters θ) by minimizing the empirical mean squared error using randomly-initialized gradient descent. Our goal is to prove that this procedure yields a separator for the geometry given sufficient resources n, L, and N—i.e., that sign(fθk) = 1 onM+ and −1 onM− at some iteration k of gradient descent. To achieve this, we need an understanding of the progress of gradient descent. Let f? :M→ {±1} denote the classification function forM+ andM− that generates our labels, write ζθ(x) = fθ(x)− f?(x) for the network’s prediction error, and let θk+1 = θk − (τ/N) ∑N i=1 ζθk(xi)∇θfθk(xi) denote the gradient descent parameter sequence, where τ > 0 is the step size and θ0 represents our Gaussian initialization. Elementary calculus then implies the error dynamics equation ζθk+1 = ζθk − (τ/N) ∑N i=1 Θ N k ( · ,xi)ζθk(xi) for k = 0, 1, . . . , where ΘNk : M×M → R is a certain kernel. The precise expression for this kernel is not important for our purposes: what matters is that (i) making the width n large relative to the depth L guarantees that ΘNk remains close throughout training to its ‘initial value’ ΘNTK(x,x′) = 〈∇θfθ0(x),∇θfθ0(x′)〉, the neural tangent kernel; and (ii) taking the sample size N to be sufficiently large relative to the depth L implies that a nominal error evolution defined as ζk+1 = ζk − τΘNTKµ [ζk] with ζ0 = ζθ0 uniformly approximates the actual error ζθk throughout training. In other words: to prove that gradient descent yields a neural network classifier that separates the two manifolds, it suffices to overparameterize, sample densely, and show that the norm of ζk decays sufficiently rapidly with k. This constitutes the “NTK regime” approach to gradient descent dynamics for neural network training [23].
The evolution of ζ_k is relatively straightforward: we have ζ_k = (Id − τΘ^NTK_µ)^k[ζ_0], and Θ^NTK_µ is a positive, compact operator, so there exists an orthonormal basis of L²_µ functions v_i and eigenvalues λ_1 ≥ λ_2 ≥ · · · ≥ 0 such that ζ_k = ∑_{i=1}^∞ (1 − τλ_i)^k ⟨ζ_0, v_i⟩_{L²_µ} v_i. In particular, with bounded step size τ < λ_1^{−1}, gradient descent leads to rapid decrease of the error if and only if the initial error ζ_0 is well-aligned with the eigenvectors of Θ^NTK_µ corresponding to large eigenvalues. Arguing about this alignment explicitly is a challenging problem in geometry: although closed-form expressions for the functions v_i exist in cases where M and µ are particularly well-structured, no such expression is available for general nonlinear geometries, even in the one-dimensional case we study here. However, this alignment can be guaranteed implicitly if one can show there exists a function g : M → R of small L²_µ norm such that Θ^NTK_µ[g] ≈ ζ_0—in this situation, most of the energy of ζ_0 must be concentrated on directions corresponding to large eigenvalues. We call the construction of such a function the certificate problem [57, Eqn. (2.3)]:
Certificate Problem. Given a two curves problem instance (M, ρ), find conditions on the architectural hyperparameters (n, L) so that there exists g : M → R satisfying ‖Θ^NTK_µ[g] − ζ_0‖_{L²_µ} ≲ 1/L and ‖g‖_{L²_µ} ≲ 1/n, with constants depending on the density ρ and logarithmic factors suppressed.
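The paper reports certificate norms computed numerically (see Figure 2, top right). One simple way to search for an approximate certificate on sampled points—our sketch, not necessarily the authors' procedure—is minimum-norm least squares against a kernel matrix:

```python
import numpy as np

def certificate_norm(K, zeta0, rcond=1e-8):
    """Minimum-norm least-squares g with K g ~= zeta0 over sampled points.

    A small norm(g) indicates zeta0 is well aligned with the leading
    eigenvectors of K -- exactly what the certificate problem asks for.
    """
    g, _, _, _ = np.linalg.lstsq(K, zeta0, rcond=rcond)
    return np.linalg.norm(g), np.linalg.norm(K @ g - zeta0)
```

Scanning the returned norm across geometries of increasing difficulty would reproduce the qualitative trend discussed around Figure 2.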
The construction of certificates demands a fine-grained understanding of the integral operator Θ^NTK_µ and its interactions with the geometry M. We therefore proceed by identifying those intrinsic properties of M that will play a role in our analysis and results.
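As a numerical sanity check on the error evolution ζ_{k+1} = ζ_k − τ Θ^NTK_µ[ζ_k] described in Section 2.1, the following NumPy sketch (ours, with a generic symmetric PSD matrix standing in for the discretized operator) verifies that the iteration matches the spectral closed form ∑_i (1 − τλ_i)^k ⟨ζ_0, v_i⟩ v_i:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = rng.standard_normal((N, N))
K = (A @ A.T) / N                      # any symmetric PSD matrix stands in for Theta_mu
zeta0 = rng.standard_normal(N)

lam, V = np.linalg.eigh(K)             # eigenvalues in ascending order
tau = 0.9 / lam[-1]                    # bounded step size, tau < 1/lambda_max

zeta, k = zeta0.copy(), 50
for _ in range(k):                     # iterate zeta <- zeta - tau * K zeta
    zeta = zeta - tau * (K @ zeta)

# Spectral closed form: sum_i (1 - tau*lam_i)^k <zeta0, v_i> v_i
closed = V @ (((1 - tau * lam) ** k) * (V.T @ zeta0))
print(np.max(np.abs(zeta - closed)))   # agreement up to floating-point error
```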
2.2 Key Geometric Properties
In the NTK regime described in Section 2.1, gradient descent makes rapid progress if there exists a small certificate g satisfying Θ^NTK_µ[g] ≈ ζ_0. The NTK is a function of the network width n and depth L—in particular, we will see that the depth L serves as a fitting resource, enabling the network to accommodate more complicated geometries. Our main analytical task is to establish relationships between these architectural resources and the intrinsic geometric properties of the manifolds that guarantee existence of a certificate.
²The specific value π/2 is immaterial to our arguments: this constraint is only to avoid technical issues that arise when antipodal points are present in M, so any constant less than π would work just as well. This choice allows for some extra technical expediency, and connects with natural modeling assumptions (e.g. data corresponding to image manifolds, with nonnegative pixel intensities).
Intuitively, one would expect it to be harder to separate curves that are close together or oscillate wildly. In this section, we formalize these intuitions in terms of the curves’ curvature, and quantities which we term the angle injectivity radius and V-number, which control the separation between the curves and their tendency to self-intersect. Given that the curves are regular, we may parameterize the two curves at unit speed with respect to arc length: for σ ∈ {±}, we write len(M_σ) to denote the length of each curve, and use x_σ(s) : [0, len(M_σ)] → S^{n_0−1} to represent these parameterizations. We let x_σ^{(i)}(s) denote the i-th derivative of x_σ with respect to arc length. Because our parameterization is unit speed, ‖x_σ^{(1)}(s)‖_2 = 1 for all x_σ(s) ∈ M. We provide full details regarding this parameterization in Appendix C.2.
Curvature and Manifold Derivatives. Our curves M_σ are submanifolds of the sphere S^{n_0−1}. The curvature of M_σ at a point x_σ(s) is the norm ‖P_{x_σ(s)^⊥} x_σ^{(2)}(s)‖_2 of the component P_{x_σ(s)^⊥} x_σ^{(2)}(s) of the second derivative of x_σ(s) that lies tangent to the sphere S^{n_0−1} at x_σ(s). Geometrically, this measures the extent to which the curve x_σ(s) deviates from a geodesic (great circle) on the sphere. Our technical results are phrased in terms of the maximum curvature κ = sup_{σ,s} ‖P_{x_σ(s)^⊥} x_σ^{(2)}(s)‖_2. In stating results, we also use κ̂ = max{κ, 2π} to simplify various dependencies on κ. When κ is large, M_σ is highly curved, and we will require a larger network depth L. In addition to the maximum curvature κ, our technical arguments require x_σ(s) to be five times continuously differentiable, and use bounds M_i = sup_{σ,s} ‖x_σ^{(i)}(s)‖_2 on their higher order derivatives.
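For intuition, the maximum curvature κ can be estimated numerically from a discretized curve; the NumPy sketch below (ours, assuming uniformly spaced arc-length samples of a closed unit-speed curve) uses second differences and projects out the radial component, matching the definition above.

```python
import numpy as np

def max_curvature(X, h):
    """X: (N, n0) samples x(s_i) of a closed unit-speed curve with s_{i+1} - s_i = h."""
    # Central second differences approximate the second derivative x''(s).
    x2 = (np.roll(X, -1, axis=0) - 2 * X + np.roll(X, 1, axis=0)) / h**2
    # Project out the radial component: P_{x(s)^perp} x''(s) = x'' - <x'', x> x.
    tang = x2 - np.sum(x2 * X, axis=1, keepdims=True) * X
    return float(np.max(np.linalg.norm(tang, axis=1)))

# Sanity check: a great circle is a geodesic, so its curvature is ~0.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(max_curvature(circle, h=t[1] - t[0]))   # ~0 up to discretization error
```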
Angle Injectivity Radius. Another key geometric quantity that determines the hardness of the problem is the separation between manifolds: the problem is more difficult when M+ and M− are close together. We measure closeness through the extrinsic distance (angle) ∠(x, x′) = cos⁻¹⟨x, x′⟩ between x and x′ over the sphere. In contrast, we use d_M(x, x′) to denote the intrinsic distance between x and x′ on M, setting d_M(x, x′) = ∞ if x and x′ reside on different components M+ and M−. We set
∆ = inf_{x, x′ ∈ M} { ∠(x, x′) : d_M(x, x′) ≥ τ_1 },    (2.1)

where τ_1 = 1/(√20 κ̂), and call this quantity the angle injectivity radius. In words, the angle injectivity radius is the minimum angle between two points whose intrinsic distance exceeds τ_1. The angle injectivity radius ∆ (i) lower bounds the distance between different components M+ and M−, and (ii) accounts for the possibility that a component will “loop back,” exhibiting points with large intrinsic distance but small angle. This phenomenon is important to account for: the certificate problem is harder when one or both components of M nearly self-intersect. At an intuitive level, this increases the difficulty of the certificate problem because it introduces nonlocal correlations across the operator Θ^NTK_µ, hurting its conditioning. As we will see in Section 4, increasing depth L makes Θ^NTK better localized; setting L sufficiently large relative to ∆⁻¹ compensates for these correlations.
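The definition (2.1) is directly computable on a discretization. The following NumPy sketch (ours, assuming each component is given as ordered, uniformly spaced arc-length samples of unit vectors) estimates ∆ by minimizing the extrinsic angle over pairs whose intrinsic distance is at least τ_1; pairs on different components always count, since d_M = ∞ there.

```python
import numpy as np

def angle_injectivity_radius(curves, spacings, tau1):
    """curves: list of (N_c, n0) arrays of ordered unit vectors sampling each closed
    component; spacings[c]: arc-length step of component c. Returns the minimum
    extrinsic angle over pairs with intrinsic distance >= tau1, as in (2.1)."""
    best = np.pi
    for a, Xa in enumerate(curves):
        for b, Xb in enumerate(curves):
            ang = np.arccos(np.clip(Xa @ Xb.T, -1.0, 1.0))
            if a == b:
                Na = len(Xa)
                gap = np.abs(np.arange(Na)[:, None] - np.arange(Na)[None, :])
                dM = spacings[a] * np.minimum(gap, Na - gap)  # distance along the closed loop
                mask = dM >= tau1
            else:
                mask = np.ones_like(ang, dtype=bool)          # d_M = infinity across components
            if mask.any():
                best = min(best, float(ang[mask].min()))
    return best
```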
V-number. The conditioning of Θ^NTK_µ depends not only on how near M comes to intersecting itself, which is captured by ∆, but also on the number of times that M can “loop back” to a particular point. If M “loops back” many times, Θ^NTK_µ can be highly correlated, leading to a hard certificate problem. The V-number (verbally, “clover number”) reflects the number of near self-intersections:
V(M) = sup_{x ∈ M} N_M( { x′ : d_M(x, x′) ≥ τ_1, ∠(x, x′) ≤ τ_2 }, 1/√(1 + κ²) ),    (2.2)

with τ_2 = 19/(20√20 κ̂). The set {x′ : d_M(x, x′) ≥ τ_1, ∠(x, x′) ≤ τ_2} is the union of looping pieces, namely points that are close to x in extrinsic distance but far in intrinsic distance. N_M(T, δ) is the cardinality of a minimal δ covering of T ⊂ M in the intrinsic distance on the manifold, serving as a way to count the number of disjoint looping pieces. The V-number accounts for the maximal volume of the curve where the angle injectivity radius ∆ is active. It will generally be large if the manifolds nearly intersect multiple times, as illustrated in Fig. 2. The V-number is typically small, but can be large when the data are generated in a way that induces certain near symmetries, as in the right panel of Fig. 2.
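The covering number N_M(T, δ) in (2.2) can be approximated by a greedy cover. The sketch below (ours; it represents the looping set T simply as a list of arc-length positions on one closed component) returns a count sandwiched between N_M(T, δ) and N_M(T, δ/2) by the standard packing/covering comparison.

```python
import numpy as np

def greedy_cover_count(positions, length, delta):
    """Greedy delta-cover of T, given as arc-length positions in [0, length)
    on a closed curve of the stated length. Every point ends up within delta
    of some chosen center, so the chosen centers form a valid delta-cover."""
    remaining = np.sort(np.asarray(positions, dtype=float))
    count = 0
    while remaining.size:
        center = remaining[0]
        d = np.abs(remaining - center)
        d = np.minimum(d, length - d)      # intrinsic distance on the closed curve
        remaining = remaining[d > delta]
        count += 1
    return count
```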
Figure 2: Left: we construct curves with fixed maximum curvature and length, but decreasing V-number, by reflecting ‘petals’ of a clover about a circumscribing square. We set M+ to be a fixed circle with large radius that crosses the center of the configurations, then rescale and project the entire geometry onto the sphere to create a two curve problem instance. In the insets, we show a two-dimensional projection of each of the blue M− curves as well as a base point x ∈ M+ at the center (also highlighted in the three-dimensional plots). The intersection of M− with the neighborhood of x denoted in orange represents the set whose covering number gives the V-number of the configuration (see (2.2)). Top right: we numerically generate a certificate for each of the four geometries at left and plot its norm as a function of V-number. The trend demonstrates that increasing V-number correlates with increasing classification difficulty, measured through the certificate problem: this is in line with the intuition we have discussed. Bottom right: t-SNE projection of MNIST images (top: a “four” digit; bottom: a “one” digit) subject to rotations. Due to the approximate symmetry of the one digit under rotation by an angle π, the projection appears to nearly intersect itself. This may lead to a higher V-number compared to the embedding of the less-symmetric four digit. For experimental details for all panels, see Appendix A.
3 Main Results
Our main theorem establishes a set of sufficient resource requirements for the certificate problem under the class of geometries we consider here—by the reductions detailed in Section 2.1, this implies that gradient descent rapidly separates the two classes given a neural network of sufficient depth and width. First, we note a convenient aspect of the certificate problem, which is its amenability to approximate solutions: that is, if we have a kernel Θ that approximates Θ^NTK in the sense that ‖Θ_µ − Θ^NTK_µ‖_{L²_µ→L²_µ} ≲ n/L, and a function ζ such that ‖ζ − ζ_0‖_{L²_µ} ≲ 1/L, then by the triangle inequality and the Schwarz inequality, it suffices to solve the equation Θ_µ[g] ≈ ζ instead. In our arguments, we will exploit the fact that the random kernel Θ^NTK concentrates well for wide networks with n ≳ L, choosing Θ as
Θ(x, x′) = (n/2) ∑_{ℓ=0}^{L−1} ∏_{ℓ′=ℓ}^{L−1} ( 1 − (1/π) ϕ^{[ℓ′]}(∠(x, x′)) ),    (3.1)

where ϕ(t) = cos⁻¹((1 − t/π) cos t + (1/π) sin t) and ϕ^{[ℓ′]} denotes the ℓ′-fold composition of ϕ; as well as the fact that for wide networks with n ≳ L⁵, depth ‘smooths out’ the initial error ζ_0, choosing ζ as the piecewise-constant function ζ(x) = −f⋆(x) + ∫_M f_{θ_0}(x′) dµ(x′). We reproduce
high-probability concentration guarantees from the literature that justify these approximations in Appendix G.
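The kernel in (3.1) depends on x and x′ only through the angle t = ∠(x, x′), so it can be evaluated directly by iterating ϕ. The NumPy sketch below (ours) computes the resulting profile ψ(t) for a given depth L and width n.

```python
import numpy as np

def phi(t):
    # phi(t) = arccos((1 - t/pi) cos t + (1/pi) sin t), the angle evolution map.
    return np.arccos(np.clip((1 - t / np.pi) * np.cos(t) + np.sin(t) / np.pi, -1.0, 1.0))

def psi(t, L, n):
    """Profile psi(t) = Theta(x, x') from (3.1), with t = angle(x, x')."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    iterates = [t]                                  # phi^[l'](t) for l' = 0, ..., L-1
    for _ in range(L - 1):
        iterates.append(phi(iterates[-1]))
    total, prod = np.zeros_like(t), np.ones_like(t)
    for lp in range(L - 1, -1, -1):                 # prod_{l'=l}^{L-1} (1 - phi^[l'](t)/pi)
        prod = prod * (1 - iterates[lp] / np.pi)
        total = total + prod                        # accumulate the sum over l
    return (n / 2) * total

print(psi(0.0, L=16, n=2))   # at t = 0 every factor is 1, so psi(0) = (n/2) * L = 16
```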
Theorem 3.1 (Approximate Certificates for Curves). Let M be two disjoint smooth, regular, simple closed curves, satisfying ∠(x, x′) ≤ π/2 for all x, x′ ∈ M. There exist absolute constants C, C′, C″, C‴ and a polynomial P = poly(M_3, M_4, M_5, len(M), ∆⁻¹) of degree at most 36, with degree at most 12 in (M_3, M_4, M_5, len(M)) and degree at most 24 in ∆⁻¹, such that when

L ≥ max{ exp(C′ len(M) κ̂), (∆ √(1 + κ²))^{−C″ V(M)}, C‴ κ̂^{10}, P, ρ_max^{12} },

there exists a certificate g with ‖g‖_{L²_µ} ≤ C ‖ζ‖_{L²_µ} log L / (ρ_min n) such that ‖Θ_µ[g] − ζ‖_{L²_µ} ≤ ‖ζ‖_{L^∞} / L.
Theorem 3.1 is our main technical contribution: it provides a sufficient condition on the network depth L to resolve the approximate certificate problem for the class of geometries we consider, with the required resources depending only on the geometric properties we introduce in Section 2.2. Given the connection between certificates and gradient descent, Theorem 3.1 demonstrates that deeper networks fit more complex geometries, which shows that the network depth plays the role of a fitting resource in classifying the two curves. We provide a numerical corroboration of the interaction between the network depth, the geometry, and the size of the certificate in Figure 3. For any family of geometries with bounded V-number, Theorem 3.1 implies a polynomial dependence of the depth on the angle injectivity radius ∆, whereas we are unable to avoid an exponential dependence of the depth on the curvature κ. Nevertheless, these dependences may seem overly pessimistic in light of the existence of ‘easy’ two curve problem instances—say, linearly-separable classes, each of which is a highly nonlinear manifold—for which one would expect gradient descent to succeed without needing an unduly large depth. In fact, such geometries will not admit a small certificate norm in general unless the depth is sufficiently large: intuitively, this is a consequence of the operator Θ_µ being ill-conditioned for such geometries.³
The proof of Theorem 3.1 is novel, both in the context of kernel regression on manifolds and in the context of NTK-regime neural network training. We detail the key intuitions for the proof in Section 4. As suggested above, applying Theorem 3.1 to construct a certificate is straightforward: given a suitable setting of L for a two curve problem instance, we obtain an approximate certificate g via Theorem 3.1. Then with the triangle inequality and the Schwarz inequality, we can bound

‖Θ^NTK_µ[g] − ζ_0‖_{L²_µ} ≤ ‖Θ^NTK_µ − Θ_µ‖_{L²_µ→L²_µ} ‖g‖_{L²_µ} + ‖ζ_0 − ζ‖_{L²_µ} + ‖Θ_µ[g] − ζ‖_{L²_µ},

and leveraging suitable probabilistic control (see Appendix G) of the approximation errors in the previous expression, as well as on ‖ζ‖_{L²_µ}, then yields bounds for the certificate problem. Applying the reductions from gradient descent dynamics in the NTK regime to certificates discussed in Section 2.1, we then obtain an end-to-end guarantee for the two curve problem.
³Again, the equivalence between the difficulty of the certificate problem and the progress of gradient descent on decreasing the error is a consequence of our analysis proceeding in the kernel regime with the square loss—using alternate techniques to analyze the dynamics can allow one to prove that neural networks continue to fit such ‘easy’ classification problems efficiently (e.g. [34]).
Theorem 3.2 (Generalization). Let M be two disjoint smooth, regular, simple closed curves, satisfying ∠(x, x′) ≤ π/2 for all x, x′ ∈ M. For any 0 < δ ≤ 1/e, choose L so that

L ≥ K max{ (∆ √(1 + κ²))^{−C V(M)}, C_µ log⁹(1/δ) log²⁴(C_µ n_0 log(1/δ)), e^{C′ max{len(M) κ̂, log κ̂}}, P },
n = K′ L^{99} log⁹(1/δ) log^{18}(L n_0),
N ≥ L^{10},

and fix τ > 0 such that C″/(nL²) ≤ τ ≤ c/(nL). Then with probability at least 1 − δ, the parameters obtained at iteration ⌊L^{39/44}/(nτ)⌋ of gradient descent on the finite sample loss yield a classifier that separates the two manifolds.
The constants c, C, C′, C″, K, K′ > 0 are absolute, and C_µ = max{ρ_min^{19}, ρ_min^{−19}} (1 + ρ_max)^{12} / (min{µ(M+), µ(M−)})^{11/2} is a constant that depends only on µ. P is a polynomial poly(M_3, M_4, M_5, len(M), ∆⁻¹) of degree at most 36, with degree at most 12 when viewed as a polynomial in M_3, M_4, M_5 and len(M), and of degree at most 24 as a polynomial in ∆⁻¹.
Theorem 3.2 represents the first end-to-end guarantee for training a deep neural network to classify a nontrivial class of low-dimensional nonlinear manifolds. We call attention to the fact that the hypotheses of Theorem 3.2 are completely self-contained, making reference only to intrinsic properties of the data and the architectural hyperparameters of the neural network (as well as poly(log n_0)), and that the result is algorithmic, as it applies to training the network via constant-stepping gradient descent on the empirical square loss and guarantees generalization within L² iterations. Furthermore, Theorem 3.2 can be readily extended to the more general setting of regression on curves, given that we have focused on training with the square loss.
4 Proof Sketch
In this section, we provide an overview of the key elements of the proof of Theorem 3.1, where we show that the equation Θ_µ[g] ≈ ζ admits a solution g (the certificate) of small norm. To solve the certificate problem for M, we require a fine-grained understanding of the kernel Θ. The most natural approach is to formally set g = ∑_{i=1}^∞ λ_i^{−1} ⟨ζ, v_i⟩_{L²_µ} v_i using the eigendecomposition of Θ_µ (just as constructed in Section 2.1 for Θ^NTK_µ), and then argue that this formal expression converges by studying the rate of decay of λ_i and the alignment of ζ with eigenvectors of Θ_µ; this is the standard approach in the literature [46, 53]. However, as discussed in Section 2.1, the nonlinear structure of M makes obtaining a full diagonalization for Θ_µ intractable, and simple asymptotic characterizations of its spectrum are insufficient to prove that the solution g has small norm. Our approach will therefore be more direct: we will study the ‘spatial’ properties of the kernel Θ itself, in particular its rate of decay away from x = x′, and thereby use the network depth L as a resource to reduce the study of the operator Θ_µ to a simpler, localized operator whose invertibility can be proved using harmonic analysis. We will then use differentiability properties of Θ to transfer the solution obtained by inverting this auxiliary operator back to the operator Θ_µ. We refer readers to Appendix E for the full proof.
We simplify the proceedings using two basic reductions. First, with a small amount of auxiliary argumentation, we can reduce from the study of the operator-with-density Θµ to the density-free operator
Θ. Second, the kernel Θ(x,x′) is a function of the angle ∠(x,x′), and hence is rotationally invariant. This kernel is maximized at ∠(x,x′) = 0 and decreases monotonically as the angle increases, reaching its minimum value at ∠(x,x′) = π. If we subtract this minimum value, it should not affect our ability to fit functions, and we obtain a rotationally invariant kernel Θ◦(x,x′) = ψ◦(∠(x,x′)) that is concentrated around angle 0. In the following, we focus on certificate construction for the kernel Θ◦. Both simplifications are justified in Appendix E.3.
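The subtracted profile ψ◦(t) = ψ(t) − ψ(π) can be computed directly; the snippet below (ours, repeating the ϕ and ψ definitions from the sketch after (3.1) so that it runs standalone) illustrates the localization claim developed in the next subsection: the normalized profile decays away from t = 0, more sharply for larger depth L.

```python
import numpy as np

def phi(t):
    return np.arccos(np.clip((1 - t / np.pi) * np.cos(t) + np.sin(t) / np.pi, -1.0, 1.0))

def psi(t, L, n=2):
    t = np.atleast_1d(np.asarray(t, dtype=float))
    its = [t]
    for _ in range(L - 1):
        its.append(phi(its[-1]))
    total, prod = np.zeros_like(t), np.ones_like(t)
    for lp in range(L - 1, -1, -1):
        prod = prod * (1 - its[lp] / np.pi)
        total = total + prod
    return (n / 2) * total

def psi_circ(t, L):
    # Subtract the minimum value psi(pi) to get the localized profile of Theta_circ.
    return psi(t, L) - psi(np.pi, L)

t = np.linspace(0.0, np.pi / 2, 6)
for L in (8, 32, 128):
    # Normalized profile on [0, pi/2]; compare the decay rates across depths.
    print(L, np.round(psi_circ(t, L) / psi_circ(0.0, L), 3))
```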
4.1 The Importance of Depth: Localization of the Neural Tangent Kernel
The first problem one encounters when attempting to directly establish (a property like) invertibility of the operator Θ◦ is its action across connected components of M: the operator Θ◦ acts by integrating against functions defined on M = M+ ∪ M−, and although it is intuitive that most of its image’s values on each component will be due to integration of the input over the same component, there will always be some ‘cross-talk’ corresponding to integration over the opposite component that interferes with our ability to apply harmonic analysis tools. To work around this basic issue (as well as others we will see below), our argument proceeds via a localization approach: we will exploit the fact that as the depth L increases, the kernel Θ◦ sharpens and concentrates around its value at x = x′, to the extent that we can neglect its action across components of M and even pass to the analysis of an auxiliary localized operator. This reduction is enabled by new sharp estimates for the decay of the angle function ψ◦ that we establish in Appendix F.3. Moreover, the perspective of using the network depth as a resource to localize the kernel Θ◦ and exploiting this to solve the classification problem appears to be new: this localization is typically presented as a deficiency in the literature (e.g. [47]).
At a more formal level, when the network is deep enough compared to geometric properties of the curves, for each point x, the majority of the mass of the kernel Θ◦(x, x′) is taken within a small neighborhood d_M(x, x′) ≤ r of x. When d_M(x, x′) is small relative to the curvature scale 1/κ, we have d_M(x, x′) ≈ ∠(x, x′). This allows us to approximate the local component by the following invariant operator:
M̂[f](x_σ(s)) = ∫_{s−r}^{s+r} ψ◦(|s − s′|) f(x_σ(s′)) ds′.    (4.1)
This approximation has two main benefits: (i) the operator M̂ is defined by intrinsic distance s′ − s, and (ii) it is highly localized. In fact, (4.1) takes the form of a convolution over the arc length parameter s. This implies that M̂ diagonalizes in the Fourier basis, giving an explicit characterization of its eigenvalues and eigenvectors. Moreover, because M̂ is localized, the eigenvalues corresponding to slowly oscillating Fourier basis functions are large, and M̂ is stably invertible over such functions. Both of these benefits can be seen as consequences of depth: depth leads to localization, which facilitates approximation by M̂ , and renders that approximation invertible over low-frequency functions. In our proofs, we will work with a subspace S spanned by low-frequency basis functions that are nearly constant over a length 2r interval (this subspace ends up having dimension proportional to 1/r; see Appendix C.3 for a formal definition), and use Fourier arguments to prove invertibility of M̂ over S (see Lemma E.6).
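Concretely, on a uniform grid over a closed curve, (4.1) becomes a circular convolution, and the eigenvalues of a circular convolution are the discrete Fourier transform of its kernel row. The NumPy sketch below (ours, with a generic sharply decaying profile standing in for ψ◦) illustrates why M̂ is stably invertible over the low-frequency subspace S while it is poorly conditioned on highly oscillatory functions.

```python
import numpy as np

N, length, r = 512, 2 * np.pi, 0.1
s = np.linspace(0, length, N, endpoint=False)
h = length / N

# Stand-in for psi_circ(|s - s'|): any sharply decaying profile truncated at radius r.
d = np.minimum(s, length - s)                      # circular distance from s' = 0
row = np.where(d <= r, np.exp(-d / (r / 4)), 0.0)

# Circular convolution diagonalizes in the Fourier basis: the eigenvalue of each
# Fourier mode is the DFT of the kernel row (times the grid step for the integral).
eigvals = np.real(np.fft.fft(row)) * h
freqs = np.fft.fftfreq(N, d=h)                     # frequencies in cycles per unit length
low = np.abs(freqs) <= 1.0 / (2 * r)               # modes nearly constant on length-2r intervals
print("min eigenvalue on S:    ", eigvals[low].min())
print("max |eigenvalue| off S: ", np.abs(eigvals[~low]).max())
```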
4.2 Stable Inversion over Smooth Functions
Our remaining task is to leverage the invertibility of M̂ over S to argue that Θ◦ is also invertible. In doing so, we need to account for the residual Θ◦ − M̂. We accomplish this directly, using a Neumann series argument: when setting r ≲ L^{−1/2} and the dimension of the subspace S proportional to 1/r, the minimum eigenvalue of M̂ over S exceeds the norm of the residual operator Θ◦ − M̂ (Lemma E.2). This argument leverages a decomposition of the domain into “near”, “far” and “winding” pieces, whose contribution to Θ◦ is controlled using the curvature, angle injectivity radius and V-number (Lemma E.8, Lemma E.9, Lemma E.10). This guarantees the strict invertibility of Θ◦ over the subspace S, and yields a unique solution g_S to the restricted equation P_S Θ◦[g_S] = ζ (Theorem E.1).
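The Neumann-series step can be illustrated numerically: if the residual is smaller in operator norm than the minimum eigenvalue of the invertible part, fixed-point iteration converges. A toy NumPy sketch (ours; random matrices stand in for M̂ restricted to S and for the residual Θ◦ − M̂):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = np.diag(1.0 + rng.random(n))                    # invertible part: min eigenvalue >= 1
R = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)  # residual, operator norm ~0.6 < 1
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(80):                                 # Neumann / fixed-point iteration
    x = np.linalg.solve(M, b - R @ x)               # x <- M^{-1} (b - R x)
print(np.linalg.norm((M + R) @ x - b))              # ~1e-14: x solves (M + R) x = b
```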
This does not yet solve the certificate problem, which demands near solutions to the unrestricted equation Θ◦[g] = ζ. To complete the argument, we set g = g_S and use harmonic analysis considerations to show that Θ◦[g] is very close to S. The subspace S contains functions that do not oscillate rapidly, and hence whose derivatives are small relative to their norm (Lemma E.23). We prove that Θ◦[g] is close to S by controlling the first three derivatives of Θ◦[g], which introduces dependencies on M_1, . . . , M_5 in the final statement of our results (Lemma E.27). In controlling these derivatives, we leverage the assumption that sup_{x,x′∈M} ∠(x, x′) ≤ π/2 to avoid issues that arise at antipodal points—we believe the removal of this constraint is purely technical, given our sharp characterization of the decay of ψ◦ and its derivatives. Finally, we move from Θ◦ back to Θ by combining near solutions to Θ◦[g] = ζ and Θ◦[g₁] = 1, and iterating the construction to reduce the approximation error to an acceptable level (Appendix E.3).
5 Discussion
A role for depth. In the setting of fitting functions on the sphere Sn0−1 in the NTK regime with unstructured (e.g., uniformly random) data, it is well-known that there is very little marginal benefit to using a deeper network: for example, [32, 46, 59] show that the risk lower bound for RKHS methods is nearly met by kernel regression with a 2-layer network’s NTK in an asymptotic (n0 →∞) setting, and results for fitting degree-1 functions in the nonasymptotic setting [52] are suggestive of a similar phenomenon. In a similar vein, fitting in the NTK regime with a deeper network does not change the kernel’s RKHS [41, 42, 45], and in a certain “infinite-depth” limit, the corresponding NTK for networks with ReLU activations, as we consider here, is a spike, guaranteeing that it fails to generalize [47, 50]. Our results are certainly not in contradiction to these facts—we consider a setting where the data are highly structured, and our proofs only show that an appropriate choice of the depth relative to this structure is sufficient to guarantee generalization, not necessary—but they nonetheless highlight an important role for the network depth in the NTK regime that has not been explored in the existing literature. In particular, the localization phenomenon exhibited by the deep NTK is completely inaccessible by fixed-depth networks, and simultaneously essential to our arguments to proving Theorem 3.2, as we have described in Section 4. It is an interesting open problem to determine whether there exist low-dimensional geometries that cannot be efficiently separated without a deep NTK, or whether the essential sufficiency of the depth-two NTK persists.
Closing the gap to real networks and data. Theorem 3.2 represents an initial step towards understanding the interaction between neural networks and data with low-dimensional structure, and identifying network resource requirements sufficient to guarantee generalization. There are several important avenues for future work. First, although the resource requirements in Theorem 3.1, and by extension Theorem 3.2, reflect only intrinsic properties of the data, the rates are far from optimal—improvements here will demand a more refined harmonic analysis argument beyond the localization approach we take in Section 4.1. A more fundamental advance would consist of extending the analysis to the setting of a model for image data, such as cartoon articulation manifolds, and the NTK of a convolutional neural network with architectural settings that impose translation invariance [25, 35]—recent results show asymptotic statistical efficiency guarantees with the NTK of a simple convolutional architecture, but only in the context of generic data [60]. The approach to certificate construction we develop in Theorem 3.1 will be of use in establishing guarantees analogous to Theorem 3.2 here, as our approach does not require an explicit diagonalization of the NTK.
In addition, extending our certificate construction approach to smooth manifolds of dimension larger than one is a natural next step. We believe our localization argument generalizes to this setting: as our bounds for the kernel ψ are sharp with respect to depth and independent of the manifold dimension, one could seek to prove guarantees analogous to Theorem 3.1 with a similar subspace-restriction argument for sufficiently regular manifolds, such as manifolds diffeomorphic to spheres, where the geometric parameters of Section 2.2 have natural extensions. Such a generalization would incur at best an exponential dependence of the network on the manifold dimension for localization in high dimensions.
More broadly, the localization phenomena at the core of our argument appear to be relevant beyond the regime in which the hypotheses of Theorem 3.2 hold: we provide a preliminary numerical experiment to this end in Appendix A.3. Training fully-connected networks with gradient descent on a simple manifold classification task, low training error appears to be easily achievable only when the decay scale of the kernel is small relative to the inter-manifold distance even at moderate depth and width, and this decay scale is controlled by the depth of the network.
Funding Transparency Statement and Acknowledgements
This work was supported by a Swartz fellowship (DG), by a fellowship award (SB) through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, sponsored by the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR) and the Army Research Office (ARO), and by the National Science Foundation through grants NSF 1733857, NSF 1838061, NSF 1740833, and NSF 174039. We thank Alberto Bietti for bringing to our attention relevant prior art on kernel regression on manifolds. | 1. What is the main contribution of the paper regarding implicit bias in neural networks?
2. What are the strengths of the proposed method in terms of provable learning guarantees?
3. What are the limitations of the paper, particularly in comparison to classical kernel methods?
4. What are some potential future research directions related to the paper's results?
5. Is there any confusion or unclear points in the paper that require further explanation or clarification? | Summary Of The Paper
Review | Summary Of The Paper
This paper fits into an important line of work studying implicit bias of neural networks. In order to address the question of what do neural networks learn, the paper develops a "theory of data" -- it identifies a geometric property of data that allows neural networks to provably learn with gradient descent. This property is that the data lies on two disjoint one-dimensional curves -- the label is +1 on one of the curves, and -1 on the other curve. Guarantees on learning are given in terms of the separation Delta of these curves, and their curvature kappa.
Review
Overall impression: The result is presented clearly and is easy to follow. As far as I know, the result is new.
The result is based on the framework of the recent paper "DEEP NETWORKS AND THE MULTIPLE MANIFOLD PROBLEM" by Buchanan et al. That paper reduces learning in the neural tangent kernel regime to proving that there is a "certificate" for the data. The certificate problem amounts to finding a low-norm approximate solution to a certain linear system involving the neural tangent kernel.
However, the Buchanan et al. paper does not prove the existence of a certificate. This is the main new technical element in this paper: a constructive proof for the certificate problem in a setting with nonlinear data. The proof uses the fact that as the depth of the network increases, the kernel localizes.
I am on the fence about this paper, but recommend acceptance, since I found the results illuminating about the power of deep networks in the NTK regime. Nevertheless, a substantial counterargument against acceptance could be that the learning proved in this paper can just as simply be achieved by classical kernel methods with kernels that are sufficiently local -- instead of deep neural nets.
In any case, this paper opens some interesting future directions: (A) trying to extend this result to scenarios in which data lies on d-dimensional submanifolds, for d > 1. This would be substantially more convincing, and would match practice more closely. It seems that the current technique to find a certificate has an exponential dependence on the dimension d. What extra assumptions could be placed on the geometric structure of the data in order to avoid such an exponential dependence? (B) does high depth provably help in the NTK regime? E.g., can the nonlinear data learned in this paper be provably learned with a depth-2 NTK?
Minor comments — Line 139-140: “In particular, with bounded step size tau < lambda_1”. Why isn’t it tau < 1 / (lambda_1)?
— Lines 149-151: when you introduce the certificate problem, it could be helpful to the reader to refer to the result in https://arxiv.org/pdf/2008.11245.pdf. |
NIPS | Title
Deep Networks Provably Classify Data on Curves
Abstract
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems. We study a model problem with such structure—a binary classification task that uses a deep fully-connected neural network to classify data drawn from two disjoint smooth curves on the unit sphere. Aside from mild regularity conditions, we place no restrictions on the configuration of the curves. We prove that when (i) the network depth is large relative to certain geometric properties that set the difficulty of the problem and (ii) the network width and number of samples are polynomial in the depth, randomly-initialized gradient descent quickly learns to correctly classify all points on the two curves with high probability. To our knowledge, this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties. Our analysis proceeds by a reduction to dynamics in the neural tangent kernel (NTK) regime, where the network depth plays the role of a fitting resource in solving the classification problem. In particular, via fine-grained control of the decay properties of the NTK, we demonstrate that when the network is sufficiently deep, the NTK can be locally approximated by a translationally invariant operator on the manifolds and stably inverted over smooth functions, which guarantees convergence and generalization.
1 Introduction
In applied machine learning, engineering, and the sciences, we are frequently confronted with the problem of identifying low-dimensional structure in high-dimensional data. In certain well-structured data sets, identifying a good low-dimensional model is the principal task: examples include convolutional sparse models in microscopy [43] and neuroscience [10, 16], and low-rank models in collaborative filtering [7, 8]. Even more complicated datasets from problems such as image classification exhibit some form of low-dimensionality: recent experiments estimate the effective dimension of CIFAR-10 as 26 and the effective dimension of ImageNet as 43 [61]. The variability in these datasets can be thought of as comprising two parts: a “probabilistic” variability induced by the distribution of geometries associated with a given class, and a “geometric” variability associated with physical nuisances such as pose and illumination. The former is challenging to model analytically; virtually all progress on this issue has come through the introduction of large datasets and high-capacity learning machines. The latter induces a much cleaner analytical structure: transformations of a given image lie near a low-dimensional submanifold of the image space (Figure 1). The celebrated successes of convolutional neural networks in image classification seem to derive from their ability to simultaneously handle both types of variability. Studying how neural networks compute with data lying near a low-dimensional manifold is an essential step towards understanding how neural
networks achieve invariance to continuous transformations of the image domain, and towards the longer term goal of developing a more comprehensive mathematical understanding of how neural networks compute with real data. At the same time, in some scientific and engineering problems, classifying manifold-structured data is the goal—one example is in gravitational wave astronomy [22, 30], where the goal is to distinguish true events from noise, and the events are generated by relatively simple physical systems with only a few degrees of freedom.
Motivated by these long term goals, in this paper we study the multiple manifold problem (Figure 1), a mathematical model problem in which we are presented with a finite set of labeled samples lying on disjoint low-dimensional submanifolds of a high-dimensional space, and the goal is to correctly classify every point on each of the submanifolds—a strong form of generalization. The central mathematical question is how the structure of the data (properties of the manifolds such as dimension, curvature, and separation) influences the resources (data samples, and network depth and width) required to guarantee generalization. Our main contribution is the first end-to-end analysis of this problem for a nontrivial class of manifolds: one-dimensional smooth curves that are non-intersecting, cusp-free, and without antipodal pairs of points. Subject to these constraints, the curves can be oriented essentially arbitrarily (say, non-linearly-separably, as in Figure 1), and the hypotheses of our results depend only on architectural resources and intrinsic geometric properties of the data. To our knowledge, this is the first generalization result for training a deep nonlinear network to classify structured data that makes no a-priori assumptions about the representation capacity of the network or about properties of the network after training.
Our analysis proceeds in the neural tangent kernel (NTK) regime of training, where the network is wide enough to guarantee that gradient descent can make large changes in the network output while making relatively small changes to the network weights. This approach is inspired by the recent work [57], which reduces the analysis of generalization in the one-dimensional multiple manifold problem to an auxiliary problem called the certificate problem. Solving the certificate problem amounts to proving that the target label function lies near the stable range of the NTK. The existence of certificates (and more generally, the conditions under which practically-trained neural networks can fit structured data) is open, except for a few very simple geometries which we will review below—in particular, [57] leaves this question completely open. Our technical contribution is to show that setting the network depth sufficiently large relative to intrinsic properties of the data guarantees the existence of a certificate (Theorem 3.1), resolving the one-dimensional case of the multiple manifold problem for a broad class of curves (Theorem 3.2). This leads in turn to a novel perspective on the role of the network depth as a fitting resource in the classification problem, which is inaccessible to shallow networks.
1.1 Related Work
Deep networks and low dimensional structure. Modern applications of deep neural networks include numerous examples of low-dimensional manifold structure, including pose and illumination variations in image classification [1, 5], as well as detection of structured signals such as electrocardiograms [14, 20], gravitational waves [22, 30], audio signals [13], and solutions to the diffusion equation [48]. Conventionally, to compute with such data one might begin by extracting a low-dimensional representation using nonlinear dimensionality reduction (“manifold learning”) algorithms [2–4, 6, 12, 54, 56]. For supervised tasks, there is also theoretical work on kernel regression over manifolds [9, 11, 19, 51]. These results rely on very general Sobolev embedding theorems, which are not precise enough to specify the interplay between regularity of the kernel and properties of the data needed to obtain concrete resource tradeoffs in the two curve problem. There is also a literature which studies the resource requirements associated with approximating functions over low-dimensional manifolds [15, 29, 38, 44]: a typical result is that for a sufficiently smooth function there exists an approximating network whose complexity is controlled by intrinsic properties such as the dimension. In contrast, we seek algorithmic guarantees that prove that we can efficiently train deep neural networks for tasks with low-dimensional structure. This requires us to grapple with how the geometry of the data influences the dynamics of optimization methods.
Neural networks and structured data—theory? Spurred by insights in asymptotic infinite width [23, 24] and non-asymptotic [18, 21] settings, there has been a surge of recent theoretical work aimed at establishing guarantees for neural network training and generalization [26–28, 34, 37, 40, 49, 55]. Here, our interest is in end-to-end generalization guarantees, which are scarce in the literature: those that exist pertain to unstructured data with general targets, in the regression setting [32, 36, 46, 59], and those that involve low-dimensional structure consider only linear structure (i.e., spheres) [46]. For less general targets, there exist numerous works that pertain to the teacher-student setting, where the target is implemented by a neural network of suitable architecture with unstructured inputs [17, 33, 40, 49, 63]. Although adding this extra structure to the target function allows one to establish interesting separations in terms of e.g. sample complexity [31, 39, 49, 62] relative to the preceding analyses, which proceed in the “kernel regime”, we leverage kernel regime techniques in our present work because they allow us to study the interactions between deep networks and data with nonlinear low-dimensional structure, which is not possible with existing teacher-student tools. Relaxing slightly from results with end-to-end guarantees, there exist ‘conditional’ guarantees which require the existence of an efficient representation of the target mapping in terms of a certain RKHS associated to the neural network [34, 53, 57, 58]. In contrast, our present work obtains unconditional, end-to-end generalization guarantees for a nontrivial class of low-dimensional data geometries.
2 Problem Formulation
Notation. We use bold notation x,A for vectors and matrices/operators (respectively). We write ‖x‖p = ( ∑n i=1|xi|p)1/p for the `p norm of x, 〈x,y〉 = ∑n i=1 xiyi for the euclidean inner product,
and for a measure space (X,µ), ‖g‖Lpµ = ( ∫ X |g(x)|p dµ(x))1/p denotes the Lpµ norm of a function g : X → R. The unit sphere in Rn is denoted Sn−1, and ∠(x,y) = cos-1(〈x,y〉) denotes the angle between unit vectors. For a kernel K : X×X → R, we writeKµ[g](x) = ∫ X K(x, x′)g(x′) dµ(x′) for the action of the associated Fredholm integral operator; an omitted subscript denotes Lebesgue measure. We write PS to denote the orthogonal projection operator onto a (closed) subspace S. Full notation is provided in Appendix B.
2.1 The Two Curve Problem1
A natural model problem for the tasks discussed in Section 1 is the classification of low-dimensional submanifolds using a neural network. In this work, we study the one-dimensional, two-class case of this problem, which we refer to as the two curve problem. To fix ideas, let n0 ≥ 3 denote the ambient dimension, and let M+ and M− be two disjoint smooth regular simple closed curves taking values in Sn0−1, which represent the two classes (Figure 1). In addition, we require that
1The content of this section follows the presentation of [57]; we reproduce it here for self-containedness. We omit some nonessential definitions and derivations for concision; see Appendix C.1 for these details.
the curves lie in a spherical cap of radius π/2: for example, the intersection of the sphere and the nonnegative orthant {x ∈ Rn0 |x ≥ 0}.2 Given N i.i.d. samples {xi}Ni=1 from a density ρ supported onM =M+ ∪M−, which is bounded above and below by positive constants ρmax and ρmin and has associated measure µ, as well as their corresponding ±1 labels, we train a feedforward neural network fθ : Rn0 → R with ReLU nonlinearities, uniform width n, and depth L (and parameters θ) by minimizing the empirical mean squared error using randomly-initialized gradient descent. Our goal is to prove that this procedure yields a separator for the geometry given sufficient resources n, L, and N—i.e., that sign(fθk) = 1 onM+ and −1 onM− at some iteration k of gradient descent. To achieve this, we need an understanding of the progress of gradient descent. Let f? :M→ {±1} denote the classification function forM+ andM− that generates our labels, write ζθ(x) = fθ(x)− f?(x) for the network’s prediction error, and let θk+1 = θk − (τ/N) ∑N i=1 ζθk(xi)∇θfθk(xi) denote the gradient descent parameter sequence, where τ > 0 is the step size and θ0 represents our Gaussian initialization. Elementary calculus then implies the error dynamics equation ζθk+1 = ζθk − (τ/N) ∑N i=1 Θ N k ( · ,xi)ζθk(xi) for k = 0, 1, . . . , where ΘNk : M×M → R is a certain kernel. The precise expression for this kernel is not important for our purposes: what matters is that (i) making the width n large relative to the depth L guarantees that ΘNk remains close throughout training to its ‘initial value’ ΘNTK(x,x′) = 〈∇θfθ0(x),∇θfθ0(x′)〉, the neural tangent kernel; and (ii) taking the sample size N to be sufficiently large relative to the depth L implies that a nominal error evolution defined as ζk+1 = ζk − τΘNTKµ [ζk] with ζ0 = ζθ0 uniformly approximates the actual error ζθk throughout training. In other words: to prove that gradient descent yields a neural network classifier that separates the two manifolds, it suffices to overparameterize, sample densely, and show that the norm of ζk decays sufficiently rapidly with k. This constitutes the “NTK regime” approach to gradient descent dynamics for neural network training [23].
The evolution of ζk is relatively straightforward: we have ζk+1 = (Id−τΘNTKµ )k[ζ0], and ΘNTKµ is a positive, compact operator, so there exist an orthonormal basis of L2µ functions vi and eigenvalues λ1 ≥ λ2 ≥ · · · ≥ 0 such that ζk+1 = ∑∞ i=1(1− τλi)k〈ζ0, vi〉L2µvi. In particular, with bounded step size τ < λ−11 , gradient descent leads to rapid decrease of the error if and only if the initial error ζ0 is well-aligned with the eigenvectors of ΘNTKµ corresponding to large eigenvalues. Arguing about this alignment explicitly is a challenging problem in geometry: although closed-form expressions for the functions vi exist in cases whereM and µ are particularly well-structured, no such expression is available for general nonlinear geometries, even in the one-dimensional case we study here. However, this alignment can be guaranteed implicitly if one can show there exists a function g : M → R of small L2µ norm such that Θ NTK µ [g] ≈ ζ0—in this situation, most of the energy of ζ0 must be concentrated on directions corresponding to large eigenvalues. We call the construction of such a function the certificate problem [57, Eqn. (2.3)]:
Certificate Problem. Given a two curves problem instance (M, ρ), find conditions on the architectural hyperparameters (n,L) so that there exists g :M→ R satisfying ‖ΘNTKµ [g]− ζ0‖L2µ . 1/L and ‖g‖L2µ . 1/n, with constants depending on the density ρ and logarithmic factors suppressed.
The construction of certificates demands a fine-grained understanding of the integral operator ΘNTKµ and its interactions with the geometry M. We therefore proceed by identifying those intrinsic properties ofM that will play a role in our analysis and results.
2.2 Key Geometric Properties
In the NTK regime described in Section 2.1, gradient descent makes rapid progress if there exists a small certificate g satisfying ΘNTKµ [g] ≈ ζ0. The NTK is a function of the network width n and depth L—in particular, we will see that the depth L serves as a fitting resource, enabling the network to accommodate more complicated geometries. Our main analytical task is to establish relationships between these architectural resources and the intrinsic geometric properties of the manifolds that guarantee existence of a certificate.
2The specific value π/2 is immaterial to our arguments: this constraint is only to avoid technical issues that arise when antipodal points are present in M, so any constant less than π would work just as well. This choice allows for some extra technical expediency, and connects with natural modeling assumptions (e.g. data corresponding to image manifolds, with nonnegative pixel intensities).
Intuitively, one would expect it to be harder to separate curves that are close together or oscillate wildly. In this section, we formalize these intuitions in terms of the curves’ curvature, and quantities which we term the angle injectivity radius andV-number, which control the separation between the curves and their tendency to self-intersect. Given that the curves are regular, we may parameterize the two curves at unit speed with respect to arc length: for σ ∈ {±}, we write len(Mσ) to denote the length of each curve, and use xσ(s) : [0, len(Mσ)]→ Sn0−1 to represent these parameterizations. We let x(i)σ (s) denote the i-th derivative of xσ with respect to arc length. Because our parameterization is unit speed, ‖x(1)σ (s)‖2 = 1 for all xσ(s) ∈M. We provide full details regarding this parameterization in Appendix C.2.
Curvature and Manifold Derivatives. Our curvesMσ are submanifolds of the sphere Sn0−1. The curvature ofMσ at a point xσ(s) is the norm ‖Pxσ(s)⊥x (2) σ (s)‖2 of the component Pxσ(s)⊥x (2) σ (s) of the second derivative of xσ(s) that lies tangent to the sphere Sn0−1 at xσ(s). Geometrically, this measures the extent to which the curvexσ(s) deviates from a geodesic (great circle) on the sphere. Our technical results are phrased in terms of the maximum curvature κ = supσ,s{‖Pxσ(s)⊥x (2) σ (s)‖2}. In stating results, we also use κ̂ = max{κ, 2π} to simplify various dependencies on κ. When κ is large,Mσ is highly curved, and we will require a larger network depth L. In addition to the maximum curvature κ, our technical arguments require xσ(s) to be five times continuously differentiable, and use bounds Mi = supσ,s{‖x(i)σ (s)‖2} on their higher order derivatives.
Angle Injectivity Radius. Another key geometric quantity that determines the hardness of the problem is the separation between manifolds: the problem is more difficult whenM+ andM− are close together. We measure closeness through the extrinsic distance (angle) ∠(x,x′) = cos−1 〈x,x′〉 between x and x′ over the sphere. In contrast, we use dM(x,x′) to denote the intrinsic distance between x and x′ onM, setting dM(x,x′) =∞ if x and x′ reside on different componentsM+ andM−. We set
∆ = inf x,x′∈M
{∠(x,x′) | dM(x,x′) ≥ τ1}, (2.1)
where τ1 = 1√20κ̂ , and call this quantity the angle injectivity radius. In words, the angle injectivity radius is the minimum angle between two points whose intrinsic distance exceeds τ1. The angle injectivity radius ∆ (i) lower bounds the distance between different components M+ and M−, and (ii) accounts for the possibility that a component will “loop back,” exhibiting points with large intrinsic distance but small angle. This phenomenon is important to account for: the certificate problem is harder when one or both components ofM nearly self-intersect. At an intuitive level, this increases the difficulty of the certificate problem because it introduces nonlocal correlations across the operator ΘNTKµ , hurting its conditioning. As we will see in Section 4, increasing depth L makes ΘNTK better localized; setting L sufficiently large relative to ∆−1 compensates for these correlations.
V-number The conditioning of ΘNTKµ depends not only on how nearM comes to intersecting itself, which is captured by ∆, but also on the number of times thatM can “loop back” to a particular point. IfM “loops back” many times, ΘNTKµ can be highly correlated, leading to a hard certificate problem. TheV-number (verbally, “clover number”) reflects the number of near self-intersections:
V(M) = sup x∈M
{ NM ( {x′ | dM(x,x′) ≥ τ1,∠(x,x′) ≤ τ2},
1√ 1 + κ2
)} (2.2)
with τ2 = 1920√20κ̂ . The set {x ′ | dM(x,x′) ≥ τ1,∠(x,x′) ≤ τ2} is the union of looping pieces, namely points that are close to x in extrinsic distance but far in intrinsic distance. NM(T, δ) is the cardinality of a minimal δ covering of T ⊂M in the intrinsic distance on the manifold, serving as a way to count the number of disjoint looping pieces. TheV-number accounts for the maximal volume of the curve where the angle injectivity radius ∆ is active. It will generally be large if the manifolds nearly intersect multiple times, as illustrated in Fig. 2. TheV-number is typically small, but can be large when the data are generated in a way that induces certain near symmetries, as in the right panel of Fig. 2.
curves with fixed maximum curvature and length, but decreasingV-number, by reflecting ‘petals’ of a clover about a circumscribing square. We setM+ to be a fixed circle with large radius that crosses the center of the configurations, then rescale and project the entire geometry onto the sphere to create a two curve problem instance. In the insets, we show a two-dimensional projection of each of the blueM− curves as well as a base point x ∈ M+ at the center (also highlighed in the three-dimensional plots). The intersection ofM− with the neighborhood of x denoted in orange represents the set whose covering number gives theV-number of the configuration (see (2.2)). Top right: We numerically generate a certificate for each of the four geometries at left and plot its norm as a function ofV-number. The trend demonstrates that increasingV-number correlates with increasing classification difficulty, measured through the certificate problem: this is in line with the intuition we have discussed. Bottom right: t-SNE projection of MNIST images (top: a “four” digit; bottom: a “one” digit) subject to rotations. Due to the approximate symmetry of the one digit under rotation by an angle π, the projection appears to nearly intersect itself. This may lead to a higherV-number compared to the embedding of the less-symmetric four digit. For experimental details for all panels, see Appendix A.
3 Main Results
Our main theorem establishes a set of sufficient resource requirements for the certificate problem under the class of geometries we consider here—by the reductions detailed in Section 2.1, this implies that gradient descent rapidly separates the two classes given a neural network of sufficient depth and width. First, we note a convenient aspect of the certificate problem, which is its amenability to approximate solutions: that is, if we have a kernel Θ that approximates ΘNTK in the sense that ‖Θµ −ΘNTKµ ‖L2µ→L2µ . n/L, and a function ζ such that ‖ζ − ζ0‖L2µ . 1/L, then by the triangle inequality and the Schwarz inequality, it suffices to solve the equation Θµ[g] ≈ ζ instead. In our arguments, we will exploit the fact that the random kernel ΘNTK concentrates well for wide networks with n & L, choosing Θ as
Θ(x,x′) = (n/2) L−1∑
`=0
L−1∏
`′=`
( 1− (1/π)ϕ[`′](∠(x,x′) ) , (3.1)
where ϕ(t) = cos-1((1 − t/π) cos t + (1/π) sin t) and ϕ[`′] denotes `′-fold composition of ϕ; as well as the fact that for wide networks with n & L5, depth ‘smooths out’ the initial error ζ0, choosing ζ as the piecewise-constant function ζ(x) = −f?(x) + ∫ M fθ0(x ′) dµ(x′). We reproduce
high-probability concentration guarantees from the literature that justify these approximations in Appendix G.
Theorem 3.1 (Approximate Certificates for Curves). LetM be two disjoint smooth, regular, simple closed curves, satisfying ∠(x,x′) ≤ π/2 for all x,x′ ∈ M. There exist absolute constants C,C ′, C ′′, C ′′′ and a polynomial P = poly(M3,M4,M5, len(M),∆−1) of degree at most 36, with degree at most 12 in (M3,M4,M5, len(M)) and degree at most 24 in ∆−1, such that when
L ≥ max { exp(C ′ len(M)κ̂), ( ∆ √ 1 + κ2 )−C′′V(M)
, C ′′′κ̂10, P, ρ12max
} ,
there exists a certificate g with ‖g‖L2µ ≤ C‖ζ‖L2µ ρminn logL such that ‖Θµ[g]− ζ‖L2µ ≤ ‖ζ‖L∞ L .
Theorem 3.1 is our main technical contribution: it provides a sufficient condition on the network depth L to resolve the approximate certificate problem for the class of geometries we consider, with the required resources depending only on the geometric properties we introduce in Section 2.2. Given the connection between certificates and gradient descent, Theorem 3.1 demonstrates that deeper networks fit more complex geometries, which shows that the network depth plays the role of a fitting resource in classifying the two curves. We provide a numerical corroboration of the interaction between the network depth, the geometry, and the size of the certificate in Figure 3. For any family of geometries with boundedV-number, Theorem 3.1 implies a polynomial dependence of the depth on the angle injectivity radius ∆, whereas we are unable to avoid an exponential dependence of the depth on the curvature κ. Nevertheless, these dependences may seem overly pessimistic in light of the existence of ‘easy’ two curve problem instances—say, linearly-separable classes, each of which is a highly nonlinear manifold—for which one would expect gradient descent to succeed without needing an unduly large depth. In fact, such geometries will not admit a small certificate norm in general unless the depth is sufficiently large: intuitively, this is a consequence of the operator Θµ being ill-conditioned for such geometries.3
The proof of Theorem 3.1 is novel, both in the context of kernel regression on manifolds and in the context of NTK-regime neural network training. We detail the key intuitions for the proof in
3Again, the equivalence between the difficulty of the certificate problem and the progress of gradient descent on decreasing the error is a consequence of our analysis proceeding in the kernel regime with the square loss—using alternate techniques to analyze the dynamics can allow one to prove that neural networks continue to fit such ‘easy’ classification problems efficiently (e.g. [34]).
Section 4. As suggested above, applying Theorem 3.1 to construct a certificate is straightforward: given a suitable setting of L for a two curve problem instance, we obtain an approximate certificate g via Theorem 3.1. Then with the triangle inequality and the Schwarz inequality, we can bound
‖ΘNTKµ [g]− ζ0‖L2µ ≤ ‖Θ NTK µ −Θµ‖L2µ→L2µ‖g‖L2µ + ‖ζ0 − ζ‖L2µ + ‖Θµ[g]− ζ‖L2µ ,
and leveraging suitable probabilistic control (see Appendix G) of the approximation errors in the previous expression, as well as on ‖ζ‖L2µ , then yields bounds for the certificate problem. Applying the reductions from gradient descent dynamics in the NTK regime to certificates discussed in Section 2.1, we then obtain an end-to-end guarantee for the two curve problem.
Theorem 3.2 (Generalization). LetM be two disjoint smooth, regular, simple closed curves, satisfying ∠(x,x′) ≤ π/2 for all x,x′ ∈M. For any 0 < δ ≤ 1/e, choose L so that
L ≥ K max
1 ( ∆ √ 1 + κ2 )CV(M) , Cµ log 9( 1δ ) log 24(Cµn0 log( 1 δ )), e C′max{len(M)κ̂,log(κ̂)}, P
n = K ′L99 log9(1/δ) log18(Ln0)
N ≥ L10,
and fix τ > 0 such that C ′′
nL2 ≤ τ ≤ cnL . Then with probability at least 1 − δ, the parameters obtained at iteration bL39/44/(nτ)c of gradient descent on the finite sample loss yield a classifier that separates the two manifolds.
The constants c, C,C ′, C ′′,K,K ′ > 0 are absolute, and Cµ equals to max{ρ19min,ρ−19min }(1+ρmax)12 (min {µ(M+),µ(M−)})11/2 is a constant only depends on µ. P is a polynomial poly{M3,M4,M5, len(M),∆−1} of degree at most 36, with degree at most 12 when viewed as a polynomial in M3,M4,M5 and len(M), and of degree at most 24 as a polynomial in ∆−1.
Theorem 3.2 represents the first end-to-end guarantee for training a deep neural network to classify a nontrivial class of low-dimensional nonlinear manifolds. We call attention to the fact that the hypotheses of Theorem 3.2 are completely self-contained, making reference only to intrinsic properties of the data and the architectural hyperparameters of the neural network (as well as poly(log n0)), and that the result is algorithmic, as it applies to training the network via constant-stepping gradient descent on the empirical square loss and guarantees generalization within L2 iterations. Furthermore, Theorem 3.2 can be readily extended to the more general setting of regression on curves, given that we have focused on training with the square loss.
4 Proof Sketch
In this section, we provide an overview of the key elements of the proof of Theorem 3.1, where we show that the equation $\Theta_\mu[g] \approx \zeta$ admits a solution g (the certificate) of small norm. To solve the certificate problem for $\mathcal{M}$, we require a fine-grained understanding of the kernel Θ. The most natural approach is to formally set $g = \sum_{i=1}^{\infty} \lambda_i^{-1}\langle \zeta, v_i\rangle_{L^2_\mu} v_i$ using the eigendecomposition of $\Theta_\mu$ (just as constructed in Section 2.1 for $\Theta^{\mathrm{NTK}}_\mu$), and then argue that this formal expression converges by studying the rate of decay of $\lambda_i$ and the alignment of ζ with eigenvectors of $\Theta_\mu$; this is the standard approach in the literature [46, 53]. However, as discussed in Section 2.1, the nonlinear structure of $\mathcal{M}$ makes obtaining a full diagonalization for $\Theta_\mu$ intractable, and simple asymptotic characterizations of its spectrum are insufficient to prove that the solution g has small norm. Our approach will therefore be more direct: we will study the 'spatial' properties of the kernel Θ itself, in particular its rate of decay away from x = x′, and thereby use the network depth L as a resource to reduce the study of the operator $\Theta_\mu$ to a simpler, localized operator whose invertibility can be proved using harmonic analysis. We will then use differentiability properties of Θ to transfer the solution obtained by inverting this auxiliary operator back to the operator $\Theta_\mu$. We refer readers to Appendix E for the full proof.
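On a discretization, the formal eigendecomposition construction is a few lines of linear algebra, which also makes plain why decay and alignment information is needed to control the certificate norm. The kernel matrix below is a smooth stand-in for $\Theta_\mu$, not the NTK; the sketch is illustrative only.

```python
# Toy version of the formal construction g = sum_i lambda_i^{-1} <zeta, v_i> v_i,
# truncated to the well-conditioned eigenpairs of a stand-in kernel matrix.
import numpy as np

s = np.linspace(0, 2 * np.pi, 256, endpoint=False)
K = np.exp(-8 * (1 - np.cos(s[:, None] - s[None, :])))  # stand-in for Theta_mu
zeta = np.sign(np.sin(s))                               # piecewise-constant target

lam, V = np.linalg.eigh(K)              # eigenpairs, ascending eigenvalues
keep = lam > 1e-8 * lam[-1]             # drop directions with tiny eigenvalues
g = V[:, keep] @ ((V[:, keep].T @ zeta) / lam[keep])
print("residual :", np.linalg.norm(K @ g - zeta) / np.sqrt(len(s)))
print("cert norm:", np.linalg.norm(g) / np.sqrt(len(s)))
```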
We simplify the proceedings using two basic reductions. First, with a small amount of auxiliary argumentation, we can reduce from the study of the operator-with-density $\Theta_\mu$ to the density-free operator Θ. Second, the kernel Θ(x, x′) is a function of the angle ∠(x, x′), and hence is rotationally invariant. This kernel is maximized at ∠(x, x′) = 0 and decreases monotonically as the angle increases, reaching its minimum value at ∠(x, x′) = π. Subtracting this minimum value does not affect our ability to fit functions, and it yields a rotationally invariant kernel Θ◦(x, x′) = ψ◦(∠(x, x′)) that is concentrated around angle 0. In the following, we focus on certificate construction for the kernel Θ◦. Both simplifications are justified in Appendix E.3.
4.1 The Importance of Depth: Localization of the Neural Tangent Kernel
The first problem one encounters when attempting to directly establish (a property like) invertibility of the operator Θ◦ is its action across connected components of $\mathcal{M}$: the operator Θ◦ acts by integrating against functions defined on $\mathcal{M} = \mathcal{M}_+ \cup \mathcal{M}_-$, and although it is intuitive that most of its image's values on each component will be due to integration of the input over the same component, there will always be some 'cross-talk' corresponding to integration over the opposite component that interferes with our ability to apply harmonic analysis tools. To work around this basic issue (as well as others we will see below), our argument proceeds via a localization approach: we will exploit the fact that as the depth L increases, the kernel Θ◦ sharpens and concentrates around its value at x = x′, to the extent that we can neglect its action across components of $\mathcal{M}$ and even pass to the analysis of an auxiliary localized operator. This reduction is enabled by new sharp estimates for the decay of the angle function ψ◦ that we establish in Appendix F.3. Moreover, the perspective of using the network depth as a resource to localize the kernel Θ◦ and exploiting this to solve the classification problem appears to be new: this localization is typically presented as a deficiency in the literature (e.g. [47]).
At a more formal level, when the network is deep enough compared to geometric properties of the curves, for each point x, the majority of the mass of the kernel Θ◦(x, x′) is taken within a small neighborhood $d_{\mathcal{M}}(x, x') \le r$ of x. When $d_{\mathcal{M}}(x, x')$ is small relative to the curvature scale $1/\kappa$, we have $d_{\mathcal{M}}(x, x') \approx \angle(x, x')$. This allows us to approximate the local component by the following invariant operator:
$$\hat{M}[f](x_\sigma(s)) = \int_{s-r}^{s+r} \psi^\circ(|s - s'|)\, f(x_\sigma(s'))\, ds'. \tag{4.1}$$
This approximation has two main benefits: (i) the operator M̂ is defined by intrinsic distance s′ − s, and (ii) it is highly localized. In fact, (4.1) takes the form of a convolution over the arc length parameter s. This implies that M̂ diagonalizes in the Fourier basis, giving an explicit characterization of its eigenvalues and eigenvectors. Moreover, because M̂ is localized, the eigenvalues corresponding to slowly oscillating Fourier basis functions are large, and M̂ is stably invertible over such functions. Both of these benefits can be seen as consequences of depth: depth leads to localization, which facilitates approximation by M̂ , and renders that approximation invertible over low-frequency functions. In our proofs, we will work with a subspace S spanned by low-frequency basis functions that are nearly constant over a length 2r interval (this subspace ends up having dimension proportional to 1/r; see Appendix C.3 for a formal definition), and use Fourier arguments to prove invertibility of M̂ over S (see Lemma E.6).
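A discretized version of (4.1) is a circulant matrix, so its eigenvalues are the DFT of a single kernel row, and the dominance of the low-frequency eigenvalues can be checked in a few lines. The triangular bump below is a stand-in for ψ◦; the sketch is illustrative only.

```python
# Discretized (4.1): circular convolution with a localized bump. The DFT of
# the kernel row gives the eigenvalues; low frequencies dominate.
import numpy as np

n, r = 512, 0.05
s = np.linspace(0, 2 * np.pi, n, endpoint=False)
d = np.minimum(s, 2 * np.pi - s)                   # circular distance to 0
row = np.where(d <= r, 1.0 - d / r, 0.0) * (2 * np.pi / n)

eigs = np.real(np.fft.fft(row))                    # circulant eigenvalues
print("frequencies 0..3   :", np.round(eigs[:4], 5))
print("highest frequencies:", np.round(eigs[n // 2 - 2:n // 2 + 2], 5))
```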
4.2 Stable Inversion over Smooth Functions
Our remaining task is to leverage the invertibility of M̂ over S to argue that Θ is also invertible. In doing so, we need to account for the residual Θ − M̂. We accomplish this directly, using a Neumann series argument: when setting $r \lesssim L^{-1/2}$ and the dimension of the subspace S proportional to 1/r, the minimum eigenvalue of M̂ over S exceeds the norm of the residual operator Θ◦ − M̂ (Lemma E.2). This argument leverages a decomposition of the domain into "near", "far" and "winding" pieces, whose contribution to Θ◦ is controlled using the curvature, angle injectivity radius and V-number (Lemma E.8, Lemma E.9, Lemma E.10). This guarantees the strict invertibility of Θ◦ over the subspace S, and yields a unique solution $g_S$ to the restricted equation $P_S\Theta^\circ[g_S] = \zeta$ (Theorem E.1).
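The eigenvalue-versus-residual comparison at the heart of this Neumann argument can be probed numerically; in the sketch below, a decaying kernel stands in for Θ◦ (an assumption; this is not the actual NTK surrogate).

```python
# Toy check of the Neumann-series comparison: min eigenvalue of M_hat
# restricted to low frequencies vs. the norm of the residual Theta - M_hat.
import numpy as np

n, r = 512, 0.05
s = np.linspace(0, 2 * np.pi, n, endpoint=False)
D = np.abs(s[:, None] - s[None, :])
D = np.minimum(D, 2 * np.pi - D)                 # circular distances
Theta = np.exp(-D / r) * (2 * np.pi / n)         # decaying stand-in kernel
M_hat = np.where(D <= r, Theta, 0.0)             # localized truncation

dim_S = int(1 / r)                               # dim(S) proportional to 1/r
basis = [np.ones(n) / np.sqrt(n)]
for k in range(1, dim_S // 2 + 1):
    basis.append(np.cos(k * s) * np.sqrt(2 / n))
    basis.append(np.sin(k * s) * np.sqrt(2 / n))
B = np.stack(basis[:dim_S], axis=1)              # orthonormal basis of S

min_eig_S = np.linalg.eigvalsh(B.T @ M_hat @ B).min()
residual = np.linalg.norm(Theta - M_hat, 2)
print(f"min eig on S: {min_eig_S:.4f} vs residual norm: {residual:.4f}")
```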
This does not yet solve the certificate problem, which demands near solutions to the unrestricted equation Θ◦[g] = ζ. To complete the argument, we set $g = g_S$ and use harmonic analysis considerations to show that Θ◦[g] is very close to S. The subspace S contains functions that do not oscillate rapidly, and hence whose derivatives are small relative to their norm (Lemma E.23). We prove that Θ◦[g] is close to S by controlling the first three derivatives of Θ◦[g], which introduces dependencies on $M_1, \dots, M_5$ in the final statement of our results (Lemma E.27). In controlling these derivatives, we leverage the assumption that $\sup_{x,x'\in\mathcal{M}} \angle(x, x') \le \pi/2$ to avoid issues that arise at antipodal points; we believe the removal of this constraint is purely technical, given our sharp characterization of the decay of ψ◦ and its derivatives. Finally, we move from Θ◦ back to Θ by combining near solutions to Θ◦[g] = ζ and Θ◦[g₁] = 1, and iterating the construction to reduce the approximation error to an acceptable level (Appendix E.3).
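The final iteration is, in effect, iterative refinement; a schematic sketch follows, where approx_solve is a hypothetical handle standing in for the restricted subspace solve of Theorem E.1.

```python
# Schematic of the iteration in Appendix E.3: repeatedly apply an approximate
# solver to the current residual. If each solve shrinks the residual by a
# factor q < 1, the error after k rounds is O(q^k).
import numpy as np

def iterate_certificate(K, zeta, approx_solve, rounds=6):
    g = np.zeros_like(zeta)
    for _ in range(rounds):
        g = g + approx_solve(zeta - K @ g)   # correct with the residual
    return g
```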
5 Discussion
A role for depth. In the setting of fitting functions on the sphere $S^{n_0-1}$ in the NTK regime with unstructured (e.g., uniformly random) data, it is well known that there is very little marginal benefit to using a deeper network: for example, [32, 46, 59] show that the risk lower bound for RKHS methods is nearly met by kernel regression with a 2-layer network's NTK in an asymptotic ($n_0 \to \infty$) setting, and results for fitting degree-1 functions in the nonasymptotic setting [52] are suggestive of a similar phenomenon. In a similar vein, fitting in the NTK regime with a deeper network does not change the kernel's RKHS [41, 42, 45], and in a certain "infinite-depth" limit, the corresponding NTK for networks with ReLU activations, as we consider here, is a spike, guaranteeing that it fails to generalize [47, 50]. Our results are certainly not in contradiction with these facts (we consider a setting where the data are highly structured, and our proofs only show that an appropriate choice of the depth relative to this structure is sufficient to guarantee generalization, not necessary), but they nonetheless highlight an important role for the network depth in the NTK regime that has not been explored in the existing literature. In particular, the localization phenomenon exhibited by the deep NTK is completely inaccessible to fixed-depth networks, and simultaneously essential to our argument for proving Theorem 3.2, as we have described in Section 4. It is an interesting open problem to determine whether there exist low-dimensional geometries that cannot be efficiently separated without a deep NTK, or whether the essential sufficiency of the depth-two NTK persists.
Closing the gap to real networks and data. Theorem 3.2 represents an initial step towards understanding the interaction between neural networks and data with low-dimensional structure, and identifying network resource requirements sufficient to guarantee generalization. There are several important avenues for future work. First, although the resource requirements in Theorem 3.1, and by extension Theorem 3.2, reflect only intrinsic properties of the data, the rates are far from optimal—improvements here will demand a more refined harmonic analysis argument beyond the localization approach we take in Section 4.1. A more fundamental advance would consist of extending the analysis to the setting of a model for image data, such as cartoon articulation manifolds, and the NTK of a convolutional neural network with architectural settings that impose translation invariance [25, 35]—recent results show asymptotic statistical efficiency guarantees with the NTK of a simple convolutional architecture, but only in the context of generic data [60]. The approach to certificate construction we develop in Theorem 3.1 will be of use in establishing guarantees analogous to Theorem 3.2 here, as our approach does not require an explicit diagonalization of the NTK.
In addition, extending our certificate construction approach to smooth manifolds of dimension larger than one is a natural next step. We believe our localization argument generalizes to this setting: as our bounds for the kernel ψ are sharp with respect to depth and independent of the manifold dimension, one could seek to prove guarantees analogous to Theorem 3.1 with a similar subspace-restriction argument for sufficiently regular manifolds, such as manifolds diffeomorphic to spheres, where the geometric parameters of Section 2.2 have natural extensions. Such a generalization would incur at best an exponential dependence of the network resources on the manifold dimension, the price of localization in high dimensions.
More broadly, the localization phenomena at the core of our argument appear to be relevant beyond the regime in which the hypotheses of Theorem 3.2 hold: we provide a preliminary numerical experiment to this end in Appendix A.3. When training fully-connected networks with gradient descent on a simple manifold classification task, low training error appears to be easily achievable only when the decay scale of the kernel is small relative to the inter-manifold distance, even at moderate depth and width, and this decay scale is controlled by the depth of the network.
Funding Transparency Statement and Acknowledgements
This work was supported by a Swartz fellowship (DG), by a fellowship award (SB) through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, sponsored by the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR) and the Army Research Office (ARO), and by the National Science Foundation through grants NSF 1733857, NSF 1838061, NSF 1740833, and NSF 174039. We thank Alberto Bietti for bringing to our attention relevant prior art on kernel regression on manifolds.

1. What is the focus of the paper regarding deep neural networks and classification tasks?
2. What are the strengths of the paper, particularly in terms of its significance, originality, and technical aspects?
3. What are the weaknesses of the paper regarding its readability and the correlation between the lower bound of depth and the clover number?

Summary Of The Paper
This paper studies the problem of classifying data drawn from two disjoint smooth curves on the unit sphere using a deep fully-connected neural network. The work proves that certificates exist as long as the network depth is sufficiently large. The main tool used in the paper is the NTK, and the authors claim that this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties.
Review
Strengths:
The paper solves the open problem left by [55], namely the existence of certificates in multiple manifold problems. The significance and the originality of the paper are fairly high.
The paper is generally well-written, with sufficient analysis after each theorem and also figure demonstrations.
The proof is highly technical. The idea of localizing the NTK and then using harmonic analysis to prove its invertibility may be of independent interest to the theory community.
Weaknesses:
The paper could enhance its readability by providing a fuller preliminary introduction to the NTK in manifold problems. The current version is dense and was difficult to follow on a first read.
In both Theorems 3.1 and 3.2, the lower bound on the depth $L$ is negatively correlated with the clover number. However, the clover number is usually very small, so will this make the lower bound on $L$ extremely large?
=================
After reading the rebuttal and other reviewers' comments, I decide to maintain my score. |
NIPS | Title
Deep Networks Provably Classify Data on Curves
Abstract
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems. We study a model problem with such structure—a binary classification task that uses a deep fully-connected neural network to classify data drawn from two disjoint smooth curves on the unit sphere. Aside from mild regularity conditions, we place no restrictions on the configuration of the curves. We prove that when (i) the network depth is large relative to certain geometric properties that set the difficulty of the problem and (ii) the network width and number of samples are polynomial in the depth, randomly-initialized gradient descent quickly learns to correctly classify all points on the two curves with high probability. To our knowledge, this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties. Our analysis proceeds by a reduction to dynamics in the neural tangent kernel (NTK) regime, where the network depth plays the role of a fitting resource in solving the classification problem. In particular, via fine-grained control of the decay properties of the NTK, we demonstrate that when the network is sufficiently deep, the NTK can be locally approximated by a translationally invariant operator on the manifolds and stably inverted over smooth functions, which guarantees convergence and generalization.
1 Introduction
In applied machine learning, engineering, and the sciences, we are frequently confronted with the problem of identifying low-dimensional structure in high-dimensional data. In certain well-structured data sets, identifying a good low-dimensional model is the principal task: examples include convolutional sparse models in microscopy [43] and neuroscience [10, 16], and low-rank models in collaborative filtering [7, 8]. Even more complicated datasets from problems such as image classification exhibit some form of low-dimensionality: recent experiments estimate the effective dimension of CIFAR-10 as 26 and the effective dimension of ImageNet as 43 [61]. The variability in these datasets can be thought of as comprising two parts: a "probabilistic" variability induced by the distribution of geometries associated with a given class, and a "geometric" variability associated with physical nuisances such as pose and illumination. The former is challenging to model analytically; virtually all progress on this issue has come through the introduction of large datasets and high-capacity learning machines. The latter induces a much cleaner analytical structure: transformations of a given image lie near a low-dimensional submanifold of the image space (Figure 1). The celebrated successes of convolutional neural networks in image classification seem to derive from their ability to simultaneously handle both types of variability. Studying how neural networks compute with data lying near a low-dimensional manifold is an essential step towards understanding how neural
networks achieve invariance to continuous transformations of the image domain, and towards the longer term goal of developing a more comprehensive mathematical understanding of how neural networks compute with real data. At the same time, in some scientific and engineering problems, classifying manifold-structured data is the goal—one example is in gravitational wave astronomy [22, 30], where the goal is to distinguish true events from noise, and the events are generated by relatively simple physical systems with only a few degrees of freedom.
Motivated by these long-term goals, in this paper we study the multiple manifold problem (Figure 1), a mathematical model problem in which we are presented with a finite set of labeled samples lying on disjoint low-dimensional submanifolds of a high-dimensional space, and the goal is to correctly classify every point on each of the submanifolds: a strong form of generalization. The central mathematical question is how the structure of the data (properties of the manifolds such as dimension, curvature, and separation) influences the resources (data samples, and network depth and width) required to guarantee generalization. Our main contribution is the first end-to-end analysis of this problem for a nontrivial class of manifolds: one-dimensional smooth curves that are non-intersecting, cusp-free, and without antipodal pairs of points. Subject to these constraints, the curves can be oriented essentially arbitrarily (say, non-linearly-separably, as in Figure 1), and the hypotheses of our results depend only on architectural resources and intrinsic geometric properties of the data. To our knowledge, this is the first generalization result for training a deep nonlinear network to classify structured data that makes no a-priori assumptions about the representation capacity of the network or about properties of the network after training.
Our analysis proceeds in the neural tangent kernel (NTK) regime of training, where the network is wide enough to guarantee that gradient descent can make large changes in the network output while making relatively small changes to the network weights. This approach is inspired by the recent work [57], which reduces the analysis of generalization in the one-dimensional multiple manifold problem to an auxiliary problem called the certificate problem. Solving the certificate problem amounts to proving that the target label function lies near the stable range of the NTK. The existence of certificates (and more generally, the conditions under which practically-trained neural networks can fit structured data) is open, except for a few very simple geometries which we will review below—in particular, [57] leaves this question completely open. Our technical contribution is to show that setting the network depth sufficiently large relative to intrinsic properties of the data guarantees the existence of a certificate (Theorem 3.1), resolving the one-dimensional case of the multiple manifold problem for a broad class of curves (Theorem 3.2). This leads in turn to a novel perspective on the role of the network depth as a fitting resource in the classification problem, which is inaccessible to shallow networks.
1.1 Related Work
Deep networks and low dimensional structure. Modern applications of deep neural networks include numerous examples of low-dimensional manifold structure, including pose and illumination variations in image classification [1, 5], as well as detection of structured signals such as electrocardiograms [14, 20], gravitational waves [22, 30], audio signals [13], and solutions to the diffusion equation [48]. Conventionally, to compute with such data one might begin by extracting a low-dimensional representation using nonlinear dimensionality reduction ("manifold learning") algorithms [2–4, 6, 12, 54, 56]. For supervised tasks, there is also theoretical work on kernel regression over manifolds [9, 11, 19, 51]. These results rely on very general Sobolev embedding theorems, which are not precise enough to specify the interplay between regularity of the kernel and the properties of the data needed to obtain concrete resource tradeoffs in the two curve problem. There is also a literature which studies the resource requirements associated with approximating functions over low-dimensional manifolds [15, 29, 38, 44]: a typical result is that for a sufficiently smooth function there exists an approximating network whose complexity is controlled by intrinsic properties such as the dimension. In contrast, we seek algorithmic guarantees that prove that we can efficiently train deep neural networks for tasks with low-dimensional structure. This requires us to grapple with how the geometry of the data influences the dynamics of optimization methods.
Neural networks and structured data—theory? Spurred by insights in asymptotic infinite width [23, 24] and non-asymptotic [18, 21] settings, there has been a surge of recent theoretical work aimed at establishing guarantees for neural network training and generalization [26–28, 34, 37, 40, 49, 55]. Here, our interest is in end-to-end generalization guarantees, which are scarce in the literature: those that exist pertain to unstructured data with general targets, in the regression setting [32, 36, 46, 59], and those that involve low-dimensional structure consider only linear structure (i.e., spheres) [46]. For less general targets, there exist numerous works that pertain to the teacher-student setting, where the target is implemented by a neural network of suitable architecture with unstructured inputs [17, 33, 40, 49, 63]. Although adding this extra structure to the target function allows one to establish interesting separations in terms of e.g. sample complexity [31, 39, 49, 62] relative to the preceding analyses, which proceed in the “kernel regime”, we leverage kernel regime techniques in our present work because they allow us to study the interactions between deep networks and data with nonlinear low-dimensional structure, which is not possible with existing teacher-student tools. Relaxing slightly from results with end-to-end guarantees, there exist ‘conditional’ guarantees which require the existence of an efficient representation of the target mapping in terms of a certain RKHS associated to the neural network [34, 53, 57, 58]. In contrast, our present work obtains unconditional, end-to-end generalization guarantees for a nontrivial class of low-dimensional data geometries.
2 Problem Formulation
Notation. We use bold notation x, A for vectors and matrices/operators (respectively). We write $\|x\|_p = (\sum_{i=1}^n |x_i|^p)^{1/p}$ for the $\ell^p$ norm of x, $\langle x, y\rangle = \sum_{i=1}^n x_i y_i$ for the Euclidean inner product, and for a measure space (X, µ), $\|g\|_{L^p_\mu} = (\int_X |g(x)|^p\, d\mu(x))^{1/p}$ denotes the $L^p_\mu$ norm of a function $g : X \to \mathbb{R}$. The unit sphere in $\mathbb{R}^n$ is denoted $S^{n-1}$, and $\angle(x, y) = \cos^{-1}(\langle x, y\rangle)$ denotes the angle between unit vectors. For a kernel $K : X \times X \to \mathbb{R}$, we write $K_\mu[g](x) = \int_X K(x, x')g(x')\, d\mu(x')$ for the action of the associated Fredholm integral operator; an omitted subscript denotes Lebesgue measure. We write $P_S$ to denote the orthogonal projection operator onto a (closed) subspace S. Full notation is provided in Appendix B.
2.1 The Two Curve Problem¹

A natural model problem for the tasks discussed in Section 1 is the classification of low-dimensional submanifolds using a neural network. In this work, we study the one-dimensional, two-class case of this problem, which we refer to as the two curve problem. To fix ideas, let $n_0 \ge 3$ denote the ambient dimension, and let $\mathcal{M}_+$ and $\mathcal{M}_-$ be two disjoint smooth regular simple closed curves taking values in $S^{n_0-1}$, which represent the two classes (Figure 1). In addition, we require that the curves lie in a spherical cap of radius π/2: for example, the intersection of the sphere and the nonnegative orthant $\{x \in \mathbb{R}^{n_0} \mid x \ge 0\}$.²

¹ The content of this section follows the presentation of [57]; we reproduce it here for self-containedness. We omit some nonessential definitions and derivations for concision; see Appendix C.1 for these details.

Given N i.i.d. samples $\{x_i\}_{i=1}^N$ from a density ρ supported on $\mathcal{M} = \mathcal{M}_+ \cup \mathcal{M}_-$, which is bounded above and below by positive constants $\rho_{\max}$ and $\rho_{\min}$ and has associated measure µ, as well as their corresponding ±1 labels, we train a feedforward neural network $f_\theta : \mathbb{R}^{n_0} \to \mathbb{R}$ with ReLU nonlinearities, uniform width n, and depth L (and parameters θ) by minimizing the empirical mean squared error using randomly-initialized gradient descent. Our goal is to prove that this procedure yields a separator for the geometry given sufficient resources n, L, and N; that is, that $\mathrm{sign}(f_{\theta_k}) = 1$ on $\mathcal{M}_+$ and −1 on $\mathcal{M}_-$ at some iteration k of gradient descent. To achieve this, we need an understanding of the progress of gradient descent. Let $f_\star : \mathcal{M} \to \{\pm 1\}$ denote the classification function for $\mathcal{M}_+$ and $\mathcal{M}_-$ that generates our labels, write $\zeta_\theta(x) = f_\theta(x) - f_\star(x)$ for the network's prediction error, and let $\theta_{k+1} = \theta_k - (\tau/N)\sum_{i=1}^N \zeta_{\theta_k}(x_i)\nabla_\theta f_{\theta_k}(x_i)$ denote the gradient descent parameter sequence, where τ > 0 is the step size and $\theta_0$ represents our Gaussian initialization. Elementary calculus then implies the error dynamics equation $\zeta_{\theta_{k+1}} = \zeta_{\theta_k} - (\tau/N)\sum_{i=1}^N \Theta^N_k(\cdot, x_i)\zeta_{\theta_k}(x_i)$ for k = 0, 1, …, where $\Theta^N_k : \mathcal{M}\times\mathcal{M} \to \mathbb{R}$ is a certain kernel. The precise expression for this kernel is not important for our purposes: what matters is that (i) making the width n large relative to the depth L guarantees that $\Theta^N_k$ remains close throughout training to its 'initial value' $\Theta^{\mathrm{NTK}}(x, x') = \langle \nabla_\theta f_{\theta_0}(x), \nabla_\theta f_{\theta_0}(x')\rangle$, the neural tangent kernel; and (ii) taking the sample size N to be sufficiently large relative to the depth L implies that a nominal error evolution defined as $\zeta_{k+1} = \zeta_k - \tau\Theta^{\mathrm{NTK}}_\mu[\zeta_k]$ with $\zeta_0 = \zeta_{\theta_0}$ uniformly approximates the actual error $\zeta_{\theta_k}$ throughout training. In other words: to prove that gradient descent yields a neural network classifier that separates the two manifolds, it suffices to overparameterize, sample densely, and show that the norm of $\zeta_k$ decays sufficiently rapidly with k. This constitutes the "NTK regime" approach to gradient descent dynamics for neural network training [23].
The evolution of $\zeta_k$ is relatively straightforward: we have $\zeta_k = (\mathrm{Id} - \tau\Theta^{\mathrm{NTK}}_\mu)^k[\zeta_0]$, and $\Theta^{\mathrm{NTK}}_\mu$ is a positive, compact operator, so there exist an orthonormal basis of $L^2_\mu$ functions $v_i$ and eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$ such that $\zeta_k = \sum_{i=1}^{\infty}(1 - \tau\lambda_i)^k \langle \zeta_0, v_i\rangle_{L^2_\mu} v_i$. In particular, with bounded step size $\tau < \lambda_1^{-1}$, gradient descent leads to rapid decrease of the error if and only if the initial error $\zeta_0$ is well-aligned with the eigenvectors of $\Theta^{\mathrm{NTK}}_\mu$ corresponding to large eigenvalues. Arguing about this alignment explicitly is a challenging problem in geometry: although closed-form expressions for the functions $v_i$ exist in cases where $\mathcal{M}$ and µ are particularly well-structured, no such expression is available for general nonlinear geometries, even in the one-dimensional case we study here. However, this alignment can be guaranteed implicitly if one can show there exists a function $g : \mathcal{M} \to \mathbb{R}$ of small $L^2_\mu$ norm such that $\Theta^{\mathrm{NTK}}_\mu[g] \approx \zeta_0$; in this situation, most of the energy of $\zeta_0$ must be concentrated on directions corresponding to large eigenvalues. We call the construction of such a function the certificate problem [57, Eqn. (2.3)]:
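The dependence of the decay on alignment is easy to see in simulation (toy symmetric kernel, not the NTK; illustrative only):

```python
# Toy nominal dynamics zeta_{k+1} = (Id - tau * K) zeta_k: components of the
# initial error along top eigenvectors decay fast; bottom ones barely move.
import numpy as np

rng = np.random.default_rng(2)
m = 128
A = rng.standard_normal((m, m))
K = A @ A.T / m                        # symmetric PSD stand-in kernel
lam, V = np.linalg.eigh(K)             # ascending eigenvalues
tau = 0.9 / lam[-1]                    # step size below 1/lambda_1

zeta = V[:, -1] + 0.1 * V[:, 0]        # mostly aligned with the top eigenvector
for _ in range(200):
    zeta = zeta - tau * (K @ zeta)
print("remaining error:", np.linalg.norm(zeta))  # ~ the misaligned 0.1 part
```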
Certificate Problem. Given a two curves problem instance (M, ρ), find conditions on the architectural hyperparameters (n, L) so that there exists $g : \mathcal{M} \to \mathbb{R}$ satisfying $\|\Theta^{\mathrm{NTK}}_\mu[g] - \zeta_0\|_{L^2_\mu} \lesssim 1/L$ and $\|g\|_{L^2_\mu} \lesssim 1/n$, with constants depending on the density ρ and logarithmic factors suppressed.
The construction of certificates demands a fine-grained understanding of the integral operator $\Theta^{\mathrm{NTK}}_\mu$ and its interactions with the geometry $\mathcal{M}$. We therefore proceed by identifying those intrinsic properties of $\mathcal{M}$ that will play a role in our analysis and results.
2.2 Key Geometric Properties
In the NTK regime described in Section 2.1, gradient descent makes rapid progress if there exists a small certificate g satisfying $\Theta^{\mathrm{NTK}}_\mu[g] \approx \zeta_0$. The NTK is a function of the network width n and depth L; in particular, we will see that the depth L serves as a fitting resource, enabling the network to accommodate more complicated geometries. Our main analytical task is to establish relationships between these architectural resources and the intrinsic geometric properties of the manifolds that guarantee existence of a certificate.
² The specific value π/2 is immaterial to our arguments: this constraint is only to avoid technical issues that arise when antipodal points are present in $\mathcal{M}$, so any constant less than π would work just as well. This choice allows for some extra technical expediency, and connects with natural modeling assumptions (e.g. data corresponding to image manifolds, with nonnegative pixel intensities).
Intuitively, one would expect it to be harder to separate curves that are close together or oscillate wildly. In this section, we formalize these intuitions in terms of the curves' curvature, and quantities which we term the angle injectivity radius and V-number, which control the separation between the curves and their tendency to self-intersect. Given that the curves are regular, we may parameterize the two curves at unit speed with respect to arc length: for σ ∈ {±}, we write $\mathrm{len}(\mathcal{M}_\sigma)$ to denote the length of each curve, and use $x_\sigma(s) : [0, \mathrm{len}(\mathcal{M}_\sigma)] \to S^{n_0-1}$ to represent these parameterizations. We let $x^{(i)}_\sigma(s)$ denote the i-th derivative of $x_\sigma$ with respect to arc length. Because our parameterization is unit speed, $\|x^{(1)}_\sigma(s)\|_2 = 1$ for all $x_\sigma(s) \in \mathcal{M}$. We provide full details regarding this parameterization in Appendix C.2.
Curvature and Manifold Derivatives. Our curves $\mathcal{M}_\sigma$ are submanifolds of the sphere $S^{n_0-1}$. The curvature of $\mathcal{M}_\sigma$ at a point $x_\sigma(s)$ is the norm $\|P_{x_\sigma(s)^\perp} x^{(2)}_\sigma(s)\|_2$ of the component of the second derivative of $x_\sigma(s)$ that lies tangent to the sphere $S^{n_0-1}$ at $x_\sigma(s)$. Geometrically, this measures the extent to which the curve $x_\sigma(s)$ deviates from a geodesic (great circle) on the sphere. Our technical results are phrased in terms of the maximum curvature $\kappa = \sup_{\sigma,s}\{\|P_{x_\sigma(s)^\perp} x^{(2)}_\sigma(s)\|_2\}$. In stating results, we also use $\hat\kappa = \max\{\kappa, 2\pi\}$ to simplify various dependencies on κ. When κ is large, $\mathcal{M}_\sigma$ is highly curved, and we will require a larger network depth L. In addition to the maximum curvature κ, our technical arguments require $x_\sigma(s)$ to be five times continuously differentiable, and use bounds $M_i = \sup_{\sigma,s}\{\|x^{(i)}_\sigma(s)\|_2\}$ on their higher-order derivatives.
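For a sampled curve, these quantities are straightforward to estimate numerically; for instance, the maximum spherical curvature (finite-difference sketch, illustrative only):

```python
# Finite-difference estimate of the maximum spherical curvature
# kappa = sup_s || P_{x(s)^perp} x''(s) ||_2 for a sampled unit-speed curve.
import numpy as np

def max_spherical_curvature(s, x):
    """s: (N,) arc-length samples; x: (N, n0) points on the unit sphere."""
    x1 = np.gradient(x, s, axis=0)
    speed = np.linalg.norm(x1, axis=1, keepdims=True)
    x2 = np.gradient(x1 / speed, s, axis=0) / speed  # second arc-length derivative
    tangential = x2 - np.sum(x2 * x, axis=1, keepdims=True) * x
    return np.linalg.norm(tangential, axis=1).max()

s = np.linspace(0, 2 * np.pi, 2000)
great_circle = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)
print(max_spherical_curvature(s, great_circle))  # ~0 for a geodesic
```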
Angle Injectivity Radius. Another key geometric quantity that determines the hardness of the problem is the separation between manifolds: the problem is more difficult when $\mathcal{M}_+$ and $\mathcal{M}_-$ are close together. We measure closeness through the extrinsic distance (angle) $\angle(x, x') = \cos^{-1}\langle x, x'\rangle$ between x and x′ over the sphere. In contrast, we use $d_{\mathcal{M}}(x, x')$ to denote the intrinsic distance between x and x′ on $\mathcal{M}$, setting $d_{\mathcal{M}}(x, x') = \infty$ if x and x′ reside on different components $\mathcal{M}_+$ and $\mathcal{M}_-$. We set
$$\Delta = \inf_{x, x' \in \mathcal{M}}\left\{\angle(x, x') \;\middle|\; d_{\mathcal{M}}(x, x') \ge \tau_1\right\}, \tag{2.1}$$
where $\tau_1 = \frac{1}{\sqrt{20}\hat\kappa}$, and call this quantity the angle injectivity radius. In words, the angle injectivity radius is the minimum angle between two points whose intrinsic distance exceeds $\tau_1$. The angle injectivity radius ∆ (i) lower bounds the distance between different components $\mathcal{M}_+$ and $\mathcal{M}_-$, and (ii) accounts for the possibility that a component will "loop back," exhibiting points with large intrinsic distance but small angle. This phenomenon is important to account for: the certificate problem is harder when one or both components of $\mathcal{M}$ nearly self-intersect. At an intuitive level, this increases the difficulty of the certificate problem because it introduces nonlocal correlations across the operator $\Theta^{\mathrm{NTK}}_\mu$, hurting its conditioning. As we will see in Section 4, increasing depth L makes $\Theta^{\mathrm{NTK}}$ better localized; setting L sufficiently large relative to $\Delta^{-1}$ compensates for these correlations.
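For sampled curves, ∆ can be estimated by brute force over pairs; the sketch below handles a single closed component (cross-component pairs would be handled analogously; illustrative only):

```python
# Brute-force estimate of the angle injectivity radius Delta over one closed
# curve: the smallest angle among pairs with intrinsic distance >= tau1.
import numpy as np

def angle_injectivity_radius(x, s, length, tau1):
    """x: (N, n0) unit-sphere samples; s: (N,) arc-length positions."""
    angles = np.arccos(np.clip(x @ x.T, -1.0, 1.0))
    d = np.abs(s[:, None] - s[None, :])
    d = np.minimum(d, length - d)        # intrinsic distance on a closed curve
    return angles[d >= tau1].min()

s = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
circle = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)
print(angle_injectivity_radius(circle, s, length=2 * np.pi, tau1=0.1))  # ~0.1
```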
V-number. The conditioning of $\Theta^{\mathrm{NTK}}_\mu$ depends not only on how near $\mathcal{M}$ comes to intersecting itself, which is captured by ∆, but also on the number of times that $\mathcal{M}$ can "loop back" to a particular point. If $\mathcal{M}$ "loops back" many times, $\Theta^{\mathrm{NTK}}_\mu$ can be highly correlated, leading to a hard certificate problem. The V-number (verbally, "clover number") reflects the number of near self-intersections:
$$\mathsf{V}(\mathcal{M}) = \sup_{x \in \mathcal{M}}\left\{ N_{\mathcal{M}}\!\left(\{x' \mid d_{\mathcal{M}}(x, x') \ge \tau_1,\ \angle(x, x') \le \tau_2\},\ \frac{1}{\sqrt{1+\kappa^2}}\right)\right\} \tag{2.2}$$
with $\tau_2 = \frac{19}{20\sqrt{20}\hat\kappa}$. The set $\{x' \mid d_{\mathcal{M}}(x, x') \ge \tau_1,\ \angle(x, x') \le \tau_2\}$ is the union of looping pieces, namely points that are close to x in extrinsic distance but far in intrinsic distance. $N_{\mathcal{M}}(T, \delta)$ is the cardinality of a minimal δ-covering of $T \subset \mathcal{M}$ in the intrinsic distance on the manifold, serving as a way to count the number of disjoint looping pieces. The V-number accounts for the maximal volume of the curve where the angle injectivity radius ∆ is active. It will generally be large if the manifolds nearly intersect multiple times, as illustrated in Fig. 2. The V-number is typically small, but can be large when the data are generated in a way that induces certain near symmetries, as in the right panel of Fig. 2.
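A sample-based estimate of (2.2) follows the definition directly: gather, for each base point, the looping set and greedily cover it at scale $1/\sqrt{1+\kappa^2}$. Greedy covering gives an upper bound on the minimal covering number; the sketch is illustrative only.

```python
# Sample-based sketch of the clover number (2.2): greedy covering of each
# base point's "looping" set (intrinsically far, extrinsically close).
import numpy as np

def clover_number(x, s, length, tau1, tau2, kappa):
    angles = np.arccos(np.clip(x @ x.T, -1.0, 1.0))
    d = np.abs(s[:, None] - s[None, :])
    d = np.minimum(d, length - d)
    scale = 1.0 / np.sqrt(1.0 + kappa**2)
    worst = 0
    for i in range(len(x)):
        loop = np.flatnonzero((d[i] >= tau1) & (angles[i] <= tau2))
        count = 0
        while loop.size > 0:                 # greedy covering in d_M
            count += 1
            loop = loop[d[loop[0], loop] > scale]
        worst = max(worst, count)
    return worst
```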
Figure 2 (caption). Left: curves with fixed maximum curvature and length, but decreasing V-number, obtained by reflecting 'petals' of a clover about a circumscribing square. We set $\mathcal{M}_+$ to be a fixed circle with large radius that crosses the center of the configurations, then rescale and project the entire geometry onto the sphere to create a two curve problem instance. In the insets, we show a two-dimensional projection of each of the blue $\mathcal{M}_-$ curves as well as a base point $x \in \mathcal{M}_+$ at the center (also highlighted in the three-dimensional plots). The intersection of $\mathcal{M}_-$ with the neighborhood of x denoted in orange represents the set whose covering number gives the V-number of the configuration (see (2.2)). Top right: we numerically generate a certificate for each of the four geometries at left and plot its norm as a function of V-number. The trend demonstrates that increasing V-number correlates with increasing classification difficulty, measured through the certificate problem: this is in line with the intuition we have discussed. Bottom right: t-SNE projection of MNIST images (top: a "four" digit; bottom: a "one" digit) subject to rotations. Due to the approximate symmetry of the one digit under rotation by an angle π, the projection appears to nearly intersect itself. This may lead to a higher V-number compared to the embedding of the less-symmetric four digit. For experimental details for all panels, see Appendix A.
3 Main Results
Our main theorem establishes a set of sufficient resource requirements for the certificate problem under the class of geometries we consider here; by the reductions detailed in Section 2.1, this implies that gradient descent rapidly separates the two classes given a neural network of sufficient depth and width. First, we note a convenient aspect of the certificate problem, which is its amenability to approximate solutions: that is, if we have a kernel Θ that approximates $\Theta^{\mathrm{NTK}}$ in the sense that $\|\Theta_\mu - \Theta^{\mathrm{NTK}}_\mu\|_{L^2_\mu \to L^2_\mu} \lesssim n/L$, and a function ζ such that $\|\zeta - \zeta_0\|_{L^2_\mu} \lesssim 1/L$, then by the triangle inequality and the Schwarz inequality, it suffices to solve the equation $\Theta_\mu[g] \approx \zeta$ instead. In our arguments, we will exploit the fact that the random kernel $\Theta^{\mathrm{NTK}}$ concentrates well for wide networks with $n \gtrsim L$, choosing Θ as
$$\Theta(x, x') = \frac{n}{2}\sum_{\ell=0}^{L-1}\;\prod_{\ell'=\ell}^{L-1}\left(1 - \frac{1}{\pi}\,\varphi^{[\ell']}\big(\angle(x, x')\big)\right), \tag{3.1}$$
where $\varphi(t) = \cos^{-1}\big((1 - t/\pi)\cos t + (1/\pi)\sin t\big)$ and $\varphi^{[\ell']}$ denotes $\ell'$-fold composition of φ; as well as the fact that for wide networks with $n \gtrsim L^5$, depth 'smooths out' the initial error $\zeta_0$, choosing ζ as the piecewise-constant function $\zeta(x) = -f_\star(x) + \int_{\mathcal{M}} f_{\theta_0}(x')\, d\mu(x')$. We reproduce
high-probability concentration guarantees from the literature that justify these approximations in Appendix G.
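The recursion in (3.1) is cheap to evaluate, which makes the sharpening of Θ with depth easy to see directly (sketch; n set to 1 so that only the shape of the kernel matters):

```python
# Evaluate the surrogate kernel of (3.1) as a function of angle, normalized by
# its value at angle 0, to visualize how depth L localizes it (n = 1 here).
import numpy as np

def phi(t):
    u = (1 - t / np.pi) * np.cos(t) + np.sin(t) / np.pi
    return np.arccos(np.clip(u, -1.0, 1.0))

def theta(angle, L):
    comp = [np.asarray(angle, dtype=float)]      # comp[l] = phi^{[l]}(angle)
    for _ in range(L - 1):
        comp.append(phi(comp[-1]))
    total = np.zeros_like(comp[0])
    for ell in range(L):
        prod = np.ones_like(comp[0])
        for ellp in range(ell, L):
            prod = prod * (1 - comp[ellp] / np.pi)
        total = total + prod
    return 0.5 * total

angles = np.array([0.0, 0.1, 0.3, 1.0])
for L in (2, 8, 32, 128):
    print(L, np.round(theta(angles, L) / theta(np.array([0.0]), L), 4))
```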
Theorem 3.1 (Approximate Certificates for Curves). Let $\mathcal{M}$ be two disjoint smooth, regular, simple closed curves, satisfying $\angle(x, x') \le \pi/2$ for all $x, x' \in \mathcal{M}$. There exist absolute constants $C, C', C'', C'''$ and a polynomial $P = \mathrm{poly}(M_3, M_4, M_5, \mathrm{len}(\mathcal{M}), \Delta^{-1})$ of degree at most 36, with degree at most 12 in $(M_3, M_4, M_5, \mathrm{len}(\mathcal{M}))$ and degree at most 24 in $\Delta^{-1}$, such that when
$$L \ge \max\left\{ \exp\!\big(C'\,\mathrm{len}(\mathcal{M})\hat\kappa\big),\; \left(\Delta\sqrt{1+\kappa^2}\right)^{-C''\mathsf{V}(\mathcal{M})},\; C'''\hat\kappa^{10},\; P,\; \rho_{\max}^{12} \right\},$$
there exists a certificate g with $\|g\|_{L^2_\mu} \le \frac{C\|\zeta\|_{L^2_\mu}}{\rho_{\min}\, n}\log L$ such that $\|\Theta_\mu[g] - \zeta\|_{L^2_\mu} \le \frac{\|\zeta\|_{L^\infty}}{L}$.
1. What is the focus of the paper regarding neural networks and submanifolds?
2. What are the strengths of the paper, particularly in providing sufficient conditions for the certificate problem?
3. What are the weaknesses of the paper, especially regarding the discussion of the sufficient conditions and the requirement on the network depth?
4. Do you have any questions or suggestions regarding the paper's content, such as simple examples, simulations, or real-data examples that could support the main theorems?
5. Are there any typos or syntax mistakes in the paper that need to be addressed?

Summary Of The Paper
The paper studies using neural network to classify two disjoint low-dimensional submanifolds. Following the idea from Deep Networks and the Multiple Manifold Problem (Sam Buchanan, Dar Gilboa, and John Wright, 2021), the authors considered the certificate problem and provided some sufficient conditions for the existence of such a certificate.
Review
The paper studies the classification of two disjoint low-dimensional submanifolds. It closely follows the previous work Deep Networks and the Multiple Manifold Problem (Sam Buchanan, Dar Gilboa, and John Wright, 2021). The authors reduced to consider the certificate problem.
Overall, the paper is not good enough for acceptance. The major contribution lies in Theorems 3.1 and 3.2, which give some sufficient conditions for the existence of such a certificate. However, the authors did not provide comments or discussion on these sufficient conditions. A careful discussion of the dependence of the network depth L on the parameters related to the target manifolds is necessary. In addition, based on the lower bound on the network depth L, the theorem suggests a very strong requirement on the network depth. The authors should consider some simple examples to give a quantitative sense of these parameters of the curves and, in turn, of the depth L. Some simulations and real-data examples are also needed to support the main theorems.
The paper also has some typos. To name a few: line 122, the domain and range of f*; line 166, the range of sigma. The paper also has some syntax mistakes; the authors should revise it carefully.
NIPS | Title
Deep Networks Provably Classify Data on Curves
Abstract
Data with low-dimensional nonlinear structure are ubiquitous in engineering and scientific problems. We study a model problem with such structure—a binary classification task that uses a deep fully-connected neural network to classify data drawn from two disjoint smooth curves on the unit sphere. Aside from mild regularity conditions, we place no restrictions on the configuration of the curves. We prove that when (i) the network depth is large relative to certain geometric properties that set the difficulty of the problem and (ii) the network width and number of samples are polynomial in the depth, randomly-initialized gradient descent quickly learns to correctly classify all points on the two curves with high probability. To our knowledge, this is the first generalization guarantee for deep networks with nonlinear data that depends only on intrinsic data properties. Our analysis proceeds by a reduction to dynamics in the neural tangent kernel (NTK) regime, where the network depth plays the role of a fitting resource in solving the classification problem. In particular, via fine-grained control of the decay properties of the NTK, we demonstrate that when the network is sufficiently deep, the NTK can be locally approximated by a translationally invariant operator on the manifolds and stably inverted over smooth functions, which guarantees convergence and generalization.
1 Introduction
In applied machine learning, engineering, and the sciences, we are frequently confronted with the problem of identifying low-dimensional structure in high-dimensional data. In certain wellstructured data sets, identifying a good low-dimensional model is the principal task: examples include convolutional sparse models in microscopy [43] and neuroscience [10, 16], and low-rank models in collaborative filtering [7, 8]. Even more complicated datasets from problems such as image classification exhibit some form of low-dimensionality: recent experiments estimate the effective dimension of CIFAR-10 as 26 and the effective dimension of ImageNet as 43 [61]. The variability in these datasets can be thought of as comprising two parts: a “probabilistic” variability induced by the distribution of geometries associated with a given class, and a “geometric” variability associated with physical nuisances such as pose and illumination. The former is challenging to model analytically; virtually all progress on this issue has come through the introduction of large datasets and highcapacity learning machines. The latter induces a much cleaner analytical structure: transformations of a given image lie near a low-dimensional submanifold of the image space (Figure 1). The celebrated successes of convolutional neural networks in image classification seem to derive from their ability to simultaneously handle both types of variability. Studying how neural networks compute with data lying near a low-dimensional manifold is an essential step towards understanding how neural
networks achieve invariance to continuous transformations of the image domain, and towards the longer term goal of developing a more comprehensive mathematical understanding of how neural networks compute with real data. At the same time, in some scientific and engineering problems, classifying manifold-structured data is the goal—one example is in gravitational wave astronomy [22, 30], where the goal is to distinguish true events from noise, and the events are generated by relatively simple physical systems with only a few degrees of freedom.
Motivated by these long term goals, in this paper we study the multiple manifold problem (Figure 1), a mathematical model problem in which we are presented with a finite set of labeled samples lying on disjoint low-dimensional submanifolds of a high-dimensional space, and the goal is to correctly classify every point on each of the submanifolds—a strong form of generalization. The central mathematical question is how the structure of the data (properties of the manifolds such as dimension, curvature, and separation) influences the resources (data samples, and network depth and width) required to guarantee generalization. Our main contribution is the first end-to-end analysis of this problem for a nontrivial class of manifolds: one-dimensional smooth curves that are non-intersecting, cusp-free, and without antipodal pairs of points. Subject to these constraints, the curves can be oriented essentially arbitrarily (say, non-linearly-separably, as in Figure 1), and the hypotheses of our results depend only on architectural resources and intrinsic geometric properties of the data. To our knowledge, this is the first generalization result for training a deep nonlinear network to classify structured data that makes no a-priori assumptions about the representation capacity of the network or about properties of the network after training.
Our analysis proceeds in the neural tangent kernel (NTK) regime of training, where the network is wide enough to guarantee that gradient descent can make large changes in the network output while making relatively small changes to the network weights. This approach is inspired by the recent work [57], which reduces the analysis of generalization in the one-dimensional multiple manifold problem to an auxiliary problem called the certificate problem. Solving the certificate problem amounts to proving that the target label function lies near the stable range of the NTK. The existence of certificates (and more generally, the conditions under which practically-trained neural networks can fit structured data) is open, except for a few very simple geometries which we will review below—in particular, [57] leaves this question completely open. Our technical contribution is to show that setting the network depth sufficiently large relative to intrinsic properties of the data guarantees the existence of a certificate (Theorem 3.1), resolving the one-dimensional case of the multiple manifold problem for a broad class of curves (Theorem 3.2). This leads in turn to a novel perspective on the role of the network depth as a fitting resource in the classification problem, which is inaccessible to shallow networks.
1.1 Related Work
Deep networks and low dimensional structure. Modern applications of deep neural networks include numerous examples of low-dimensional manifold structure, including pose and illumination variations in image classification [1, 5], as well as detection of structured signals such as electrocardiograms [14, 20], gravitational waves [22, 30], audio signals [13], and solutions to the diffusion equation [48]. Conventionally, to compute with such data one might begin by extracting a low-dimensional representation using nonlinear dimensionality reduction (“manifold learning”) algorithms [2–4, 6, 12, 54, 56]. For supervised tasks, there is also theoretical work on kernel regression over manifolds [9, 11, 19, 51]. These results rely on very general Sobolev embedding theorems, which are not precise enough to specify the interplay between regularity of the kernel and properties of the data needed to obtain concrete resource tradeoffs in the two curve problem. There is also a literature which studies the resource requirements associated with approximating functions over low-dimensional manifolds [15, 29, 38, 44]: a typical result is that for a sufficiently smooth function there exists an approximating network whose complexity is controlled by intrinsic properties such as the dimension. In contrast, we seek algorithmic guarantees that prove that we can efficiently train deep neural networks for tasks with low-dimensional structure. This requires us to grapple with how the geometry of the data influences the dynamics of optimization methods.
Neural networks and structured data—theory? Spurred by insights in asymptotic infinite width [23, 24] and non-asymptotic [18, 21] settings, there has been a surge of recent theoretical work aimed at establishing guarantees for neural network training and generalization [26–28, 34, 37, 40, 49, 55]. Here, our interest is in end-to-end generalization guarantees, which are scarce in the literature: those that exist pertain to unstructured data with general targets, in the regression setting [32, 36, 46, 59], and those that involve low-dimensional structure consider only linear structure (i.e., spheres) [46]. For less general targets, there exist numerous works that pertain to the teacher-student setting, where the target is implemented by a neural network of suitable architecture with unstructured inputs [17, 33, 40, 49, 63]. Although adding this extra structure to the target function allows one to establish interesting separations in terms of e.g. sample complexity [31, 39, 49, 62] relative to the preceding analyses, which proceed in the “kernel regime”, we leverage kernel regime techniques in our present work because they allow us to study the interactions between deep networks and data with nonlinear low-dimensional structure, which is not possible with existing teacher-student tools. Relaxing slightly from results with end-to-end guarantees, there exist ‘conditional’ guarantees which require the existence of an efficient representation of the target mapping in terms of a certain RKHS associated to the neural network [34, 53, 57, 58]. In contrast, our present work obtains unconditional, end-to-end generalization guarantees for a nontrivial class of low-dimensional data geometries.
2 Problem Formulation
Notation. We use bold notation $\mathbf{x}, \mathbf{A}$ for vectors and matrices/operators (respectively). We write $\|x\|_p = (\sum_{i=1}^n |x_i|^p)^{1/p}$ for the $\ell^p$ norm of $x$, $\langle x, y\rangle = \sum_{i=1}^n x_i y_i$ for the Euclidean inner product, and for a measure space $(X, \mu)$, $\|g\|_{L^p_\mu} = (\int_X |g(x)|^p\, d\mu(x))^{1/p}$ denotes the $L^p_\mu$ norm of a function $g : X \to \mathbb{R}$. The unit sphere in $\mathbb{R}^n$ is denoted $S^{n-1}$, and $\angle(x, y) = \cos^{-1}(\langle x, y\rangle)$ denotes the angle between unit vectors. For a kernel $K : X \times X \to \mathbb{R}$, we write $K_\mu[g](x) = \int_X K(x, x') g(x')\, d\mu(x')$ for the action of the associated Fredholm integral operator; an omitted subscript denotes Lebesgue measure. We write $P_S$ to denote the orthogonal projection operator onto a (closed) subspace $S$. Full notation is provided in Appendix B.
2.1 The Two Curve Problem¹
A natural model problem for the tasks discussed in Section 1 is the classification of low-dimensional submanifolds using a neural network. In this work, we study the one-dimensional, two-class case of this problem, which we refer to as the two curve problem. To fix ideas, let n0 ≥ 3 denote the ambient dimension, and let M+ and M− be two disjoint smooth regular simple closed curves taking values in Sn0−1, which represent the two classes (Figure 1). In addition, we require that
¹The content of this section follows the presentation of [57]; we reproduce it here for self-containedness. We omit some nonessential definitions and derivations for concision; see Appendix C.1 for these details.
the curves lie in a spherical cap of radius $\pi/2$: for example, the intersection of the sphere and the nonnegative orthant $\{x \in \mathbb{R}^{n_0} \mid x \ge 0\}$.² Given $N$ i.i.d. samples $\{x_i\}_{i=1}^N$ from a density $\rho$ supported on $\mathcal{M} = \mathcal{M}_+ \cup \mathcal{M}_-$, which is bounded above and below by positive constants $\rho_{\max}$ and $\rho_{\min}$ and has associated measure $\mu$, as well as their corresponding $\pm 1$ labels, we train a feedforward neural network $f_\theta : \mathbb{R}^{n_0} \to \mathbb{R}$ with ReLU nonlinearities, uniform width $n$, and depth $L$ (and parameters $\theta$) by minimizing the empirical mean squared error using randomly-initialized gradient descent. Our goal is to prove that this procedure yields a separator for the geometry given sufficient resources $n$, $L$, and $N$—i.e., that $\mathrm{sign}(f_{\theta_k}) = 1$ on $\mathcal{M}_+$ and $-1$ on $\mathcal{M}_-$ at some iteration $k$ of gradient descent. To achieve this, we need an understanding of the progress of gradient descent. Let $f_\star : \mathcal{M} \to \{\pm 1\}$ denote the classification function for $\mathcal{M}_+$ and $\mathcal{M}_-$ that generates our labels, write $\zeta_\theta(x) = f_\theta(x) - f_\star(x)$ for the network's prediction error, and let $\theta_{k+1} = \theta_k - (\tau/N) \sum_{i=1}^N \zeta_{\theta_k}(x_i)\,\nabla_\theta f_{\theta_k}(x_i)$ denote the gradient descent parameter sequence, where $\tau > 0$ is the step size and $\theta_0$ represents our Gaussian initialization. Elementary calculus then implies the error dynamics equation $\zeta_{\theta_{k+1}} = \zeta_{\theta_k} - (\tau/N) \sum_{i=1}^N \Theta^N_k(\cdot, x_i)\,\zeta_{\theta_k}(x_i)$ for $k = 0, 1, \dots$, where $\Theta^N_k : \mathcal{M} \times \mathcal{M} \to \mathbb{R}$ is a certain kernel. The precise expression for this kernel is not important for our purposes: what matters is that (i) making the width $n$ large relative to the depth $L$ guarantees that $\Theta^N_k$ remains close throughout training to its 'initial value' $\Theta^{\mathrm{NTK}}(x, x') = \langle \nabla_\theta f_{\theta_0}(x), \nabla_\theta f_{\theta_0}(x')\rangle$, the neural tangent kernel; and (ii) taking the sample size $N$ to be sufficiently large relative to the depth $L$ implies that a nominal error evolution defined as $\zeta_{k+1} = \zeta_k - \tau\,\Theta^{\mathrm{NTK}}_\mu[\zeta_k]$ with $\zeta_0 = \zeta_{\theta_0}$ uniformly approximates the actual error $\zeta_{\theta_k}$ throughout training. In other words: to prove that gradient descent yields a neural network classifier that separates the two manifolds, it suffices to overparameterize, sample densely, and show that the norm of $\zeta_k$ decays sufficiently rapidly with $k$. This constitutes the "NTK regime" approach to gradient descent dynamics for neural network training [23].
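For concreteness, the following minimal sketch (an illustration of the nominal dynamics, not code from this work; the Gaussian kernel is a stand-in assumption for $\Theta^{\mathrm{NTK}}$) discretizes the operator on $N$ sample points and iterates $\zeta_{k+1} = (\mathrm{Id} - \tau\,\Theta)[\zeta_k]$, showing that the error decays rapidly exactly on eigendirections with large eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
X = rng.standard_normal((N, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # sample points on S^2

# Illustrative stand-in kernel for Theta^NTK (any PSD kernel shows the effect);
# the 1/N factor plays the role of the quadrature weight for mu.
D = np.arccos(np.clip(X @ X.T, -1.0, 1.0))
K = np.exp(-10.0 * D**2) / N

lam, V = np.linalg.eigh(K)                          # spectrum (ascending order)
tau = 0.9 / lam.max()                               # step size below 1/lambda_1

zeta = rng.standard_normal(N)                       # initial error zeta_0
for _ in range(500):
    zeta = zeta - tau * (K @ zeta)                  # zeta_{k+1} = (Id - tau K) zeta_k

coeffs = V.T @ zeta                                 # residual in the eigenbasis
print("energy on top-10 eigendirections   :", np.sum(coeffs[-10:] ** 2))
print("energy on bottom-10 eigendirections:", np.sum(coeffs[:10] ** 2))
```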
The evolution of $\zeta_k$ is relatively straightforward: we have $\zeta_{k+1} = (\mathrm{Id} - \tau\,\Theta^{\mathrm{NTK}}_\mu)^k[\zeta_0]$, and $\Theta^{\mathrm{NTK}}_\mu$ is a positive, compact operator, so there exist an orthonormal basis of $L^2_\mu$ functions $v_i$ and eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$ such that $\zeta_{k+1} = \sum_{i=1}^\infty (1 - \tau\lambda_i)^k \langle \zeta_0, v_i\rangle_{L^2_\mu} v_i$. In particular, with bounded step size $\tau < \lambda_1^{-1}$, gradient descent leads to rapid decrease of the error if and only if the initial error $\zeta_0$ is well-aligned with the eigenvectors of $\Theta^{\mathrm{NTK}}_\mu$ corresponding to large eigenvalues. Arguing about this alignment explicitly is a challenging problem in geometry: although closed-form expressions for the functions $v_i$ exist in cases where $\mathcal{M}$ and $\mu$ are particularly well-structured, no such expression is available for general nonlinear geometries, even in the one-dimensional case we study here. However, this alignment can be guaranteed implicitly if one can show there exists a function $g : \mathcal{M} \to \mathbb{R}$ of small $L^2_\mu$ norm such that $\Theta^{\mathrm{NTK}}_\mu[g] \approx \zeta_0$—in this situation, most of the energy of $\zeta_0$ must be concentrated on directions corresponding to large eigenvalues. We call the construction of such a function the certificate problem [57, Eqn. (2.3)]:
Certificate Problem. Given a two curves problem instance $(\mathcal{M}, \rho)$, find conditions on the architectural hyperparameters $(n, L)$ so that there exists $g : \mathcal{M} \to \mathbb{R}$ satisfying $\|\Theta^{\mathrm{NTK}}_\mu[g] - \zeta_0\|_{L^2_\mu} \lesssim 1/L$ and $\|g\|_{L^2_\mu} \lesssim 1/n$, with constants depending on the density $\rho$ and logarithmic factors suppressed.
The construction of certificates demands a fine-grained understanding of the integral operator $\Theta^{\mathrm{NTK}}_\mu$ and its interactions with the geometry $\mathcal{M}$. We therefore proceed by identifying those intrinsic properties of $\mathcal{M}$ that will play a role in our analysis and results.
2.2 Key Geometric Properties
In the NTK regime described in Section 2.1, gradient descent makes rapid progress if there exists a small certificate $g$ satisfying $\Theta^{\mathrm{NTK}}_\mu[g] \approx \zeta_0$. The NTK is a function of the network width $n$ and depth $L$—in particular, we will see that the depth $L$ serves as a fitting resource, enabling the network to accommodate more complicated geometries. Our main analytical task is to establish relationships between these architectural resources and the intrinsic geometric properties of the manifolds that guarantee existence of a certificate.
²The specific value $\pi/2$ is immaterial to our arguments: this constraint is only to avoid technical issues that arise when antipodal points are present in $\mathcal{M}$, so any constant less than $\pi$ would work just as well. This choice allows for some extra technical expediency, and connects with natural modeling assumptions (e.g. data corresponding to image manifolds, with nonnegative pixel intensities).
Intuitively, one would expect it to be harder to separate curves that are close together or oscillate wildly. In this section, we formalize these intuitions in terms of the curves' curvature, and quantities which we term the angle injectivity radius and V-number, which control the separation between the curves and their tendency to self-intersect. Given that the curves are regular, we may parameterize the two curves at unit speed with respect to arc length: for $\sigma \in \{\pm\}$, we write $\mathrm{len}(\mathcal{M}_\sigma)$ to denote the length of each curve, and use $x_\sigma(s) : [0, \mathrm{len}(\mathcal{M}_\sigma)] \to S^{n_0-1}$ to represent these parameterizations. We let $x^{(i)}_\sigma(s)$ denote the $i$-th derivative of $x_\sigma$ with respect to arc length. Because our parameterization is unit speed, $\|x^{(1)}_\sigma(s)\|_2 = 1$ for all $x_\sigma(s) \in \mathcal{M}$. We provide full details regarding this parameterization in Appendix C.2.
Curvature and Manifold Derivatives. Our curves $\mathcal{M}_\sigma$ are submanifolds of the sphere $S^{n_0-1}$. The curvature of $\mathcal{M}_\sigma$ at a point $x_\sigma(s)$ is the norm $\|P_{x_\sigma(s)^\perp} x^{(2)}_\sigma(s)\|_2$ of the component $P_{x_\sigma(s)^\perp} x^{(2)}_\sigma(s)$ of the second derivative of $x_\sigma(s)$ that lies tangent to the sphere $S^{n_0-1}$ at $x_\sigma(s)$. Geometrically, this measures the extent to which the curve $x_\sigma(s)$ deviates from a geodesic (great circle) on the sphere. Our technical results are phrased in terms of the maximum curvature $\kappa = \sup_{\sigma, s}\{\|P_{x_\sigma(s)^\perp} x^{(2)}_\sigma(s)\|_2\}$. In stating results, we also use $\hat{\kappa} = \max\{\kappa, 2\pi\}$ to simplify various dependencies on $\kappa$. When $\kappa$ is large, $\mathcal{M}_\sigma$ is highly curved, and we will require a larger network depth $L$. In addition to the maximum curvature $\kappa$, our technical arguments require $x_\sigma(s)$ to be five times continuously differentiable, and use bounds $M_i = \sup_{\sigma, s}\{\|x^{(i)}_\sigma(s)\|_2\}$ on their higher order derivatives.
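As a quick sanity check of this definition (our own numerical sketch, not from the paper), a latitude circle at polar angle $\theta_0$ on $S^2$ has spherical curvature $\cot\theta_0$, which a finite-difference computation of $\|P_{x(s)^\perp} x^{(2)}(s)\|_2$ recovers:

```python
import numpy as np

theta0 = np.pi / 3
r = np.sin(theta0)                                   # Euclidean radius of the circle

def x(s):
    # unit-speed parameterization of the latitude circle at height cos(theta0)
    return np.array([r * np.cos(s / r), r * np.sin(s / r), np.cos(theta0)])

s, h = 0.7, 1e-4
xpp = (x(s + h) - 2 * x(s) + x(s - h)) / h**2        # second derivative in arc length
p = x(s)
xpp_tan = xpp - np.dot(xpp, p) * p                   # project off the sphere normal p
print(np.linalg.norm(xpp_tan), 1 / np.tan(theta0))   # both approximately cot(theta0)
```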
Angle Injectivity Radius. Another key geometric quantity that determines the hardness of the problem is the separation between manifolds: the problem is more difficult when $\mathcal{M}_+$ and $\mathcal{M}_-$ are close together. We measure closeness through the extrinsic distance (angle) $\angle(x, x') = \cos^{-1}\langle x, x'\rangle$ between $x$ and $x'$ over the sphere. In contrast, we use $d_{\mathcal{M}}(x, x')$ to denote the intrinsic distance between $x$ and $x'$ on $\mathcal{M}$, setting $d_{\mathcal{M}}(x, x') = \infty$ if $x$ and $x'$ reside on different components $\mathcal{M}_+$ and $\mathcal{M}_-$. We set
$$\Delta = \inf_{x, x' \in \mathcal{M}} \{\angle(x, x') \mid d_{\mathcal{M}}(x, x') \ge \tau_1\}, \qquad (2.1)$$
where $\tau_1 = \frac{1}{\sqrt{20}\hat{\kappa}}$, and call this quantity the angle injectivity radius. In words, the angle injectivity radius is the minimum angle between two points whose intrinsic distance exceeds $\tau_1$. The angle injectivity radius $\Delta$ (i) lower bounds the distance between different components $\mathcal{M}_+$ and $\mathcal{M}_-$, and (ii) accounts for the possibility that a component will "loop back," exhibiting points with large intrinsic distance but small angle. This phenomenon is important to account for: the certificate problem is harder when one or both components of $\mathcal{M}$ nearly self-intersect. At an intuitive level, this increases the difficulty of the certificate problem because it introduces nonlocal correlations across the operator $\Theta^{\mathrm{NTK}}_\mu$, hurting its conditioning. As we will see in Section 4, increasing depth $L$ makes $\Theta^{\mathrm{NTK}}$ better localized; setting $L$ sufficiently large relative to $\Delta^{-1}$ compensates for these correlations.
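On a discretized instance, $\Delta$ is directly computable; the sketch below (our own construction, with two disjoint circles on $S^2$ as a stand-in geometry) forms intrinsic distances that are infinite across components and takes the minimum extrinsic angle over pairs with intrinsic distance at least $\tau_1$:

```python
import numpy as np

n = 500
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Stand-in geometry: the equator and a disjoint latitude circle on S^2.
A = np.stack([np.cos(t), np.sin(t), np.zeros(n)], axis=1)
B = np.stack([0.8 * np.cos(t), 0.8 * np.sin(t), 0.6 * np.ones(n)], axis=1)
pts = np.vstack([A, B])
lens = np.array([2 * np.pi, 2 * np.pi * 0.8])       # arc lengths of A and B

kappa_hat = 2 * np.pi                               # here kappa <= 0.75, so max{kappa, 2pi} = 2pi
tau1 = 1 / (np.sqrt(20) * kappa_hat)

ang = np.arccos(np.clip(pts @ pts.T, -1.0, 1.0))    # extrinsic angles

# Intrinsic distance: circular within each component, +inf across components.
idx = np.arange(n)
gap = np.abs(idx[:, None] - idx[None, :])
circ = np.minimum(gap, n - gap)
dM = np.full((2 * n, 2 * n), np.inf)
dM[:n, :n] = circ * (lens[0] / n)
dM[n:, n:] = circ * (lens[1] / n)

print("estimated Delta:", ang[dM >= tau1].min())
```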
V-number. The conditioning of $\Theta^{\mathrm{NTK}}_\mu$ depends not only on how near $\mathcal{M}$ comes to intersecting itself, which is captured by $\Delta$, but also on the number of times that $\mathcal{M}$ can "loop back" to a particular point. If $\mathcal{M}$ "loops back" many times, $\Theta^{\mathrm{NTK}}_\mu$ can be highly correlated, leading to a hard certificate problem. The V-number (verbally, "clover number") reflects the number of near self-intersections:
$$\mathsf{V}(\mathcal{M}) = \sup_{x \in \mathcal{M}} \left\{ N_{\mathcal{M}}\left( \{x' \mid d_{\mathcal{M}}(x, x') \ge \tau_1,\ \angle(x, x') \le \tau_2\},\ \frac{1}{\sqrt{1 + \kappa^2}} \right) \right\} \qquad (2.2)$$
with $\tau_2 = \frac{19}{20\sqrt{20}\hat{\kappa}}$. The set $\{x' \mid d_{\mathcal{M}}(x, x') \ge \tau_1,\ \angle(x, x') \le \tau_2\}$ is the union of looping pieces, namely points that are close to $x$ in extrinsic distance but far in intrinsic distance. $N_{\mathcal{M}}(T, \delta)$ is the cardinality of a minimal $\delta$-covering of $T \subset \mathcal{M}$ in the intrinsic distance on the manifold, serving as a way to count the number of disjoint looping pieces. The V-number accounts for the maximal volume of the curve where the angle injectivity radius $\Delta$ is active. It will generally be large if the manifolds nearly intersect multiple times, as illustrated in Fig. 2. The V-number is typically small, but can be large when the data are generated in a way that induces certain near symmetries, as in the right panel of Fig. 2.
Figure 2. Left: curves with fixed maximum curvature and length, but decreasing V-number, obtained by reflecting 'petals' of a clover about a circumscribing square; $\mathcal{M}_+$ is a fixed circle with large radius that crosses the center of the configurations, and the entire geometry is rescaled and projected onto the sphere to create a two curve problem instance. Insets show a two-dimensional projection of each blue $\mathcal{M}_-$ curve and a base point $x \in \mathcal{M}_+$ at the center (also highlighted in the three-dimensional plots); the intersection of $\mathcal{M}_-$ with the orange neighborhood of $x$ is the set whose covering number gives the V-number of the configuration (see (2.2)). Top right: numerically generated certificates for the four geometries at left, with certificate norm plotted as a function of V-number; the trend demonstrates that increasing V-number correlates with increasing classification difficulty, measured through the certificate problem, in line with the intuition discussed. Bottom right: t-SNE projection of MNIST images (top: a "four" digit; bottom: a "one" digit) subject to rotations. Due to the approximate symmetry of the one digit under rotation by an angle $\pi$, the projection appears to nearly intersect itself, which may lead to a higher V-number compared to the embedding of the less-symmetric four digit. For experimental details for all panels, see Appendix A.
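Continuing the discretized sketch above (and reusing its arrays pts, ang, dM, tau1, kappa_hat), the V-number can be approximated by greedily covering each looping set at intrinsic scale $1/\sqrt{1+\kappa^2}$; for the well-separated circles used there the looping sets are empty and the estimate is $0$, consistent with the intuition that only near-self-intersecting geometries have large V-number. The value of $\tau_2$ follows the reconstruction given in the text:

```python
import numpy as np

kappa = 0.75                                    # max spherical curvature of the two circles
tau2 = 19 / (20 * np.sqrt(20) * kappa_hat)
scale = 1 / np.sqrt(1 + kappa**2)

def cover_count(T, dM, scale):
    """Greedy covering of the index set T at intrinsic radius `scale`."""
    T, count = set(T), 0
    while T:
        j = T.pop()
        T -= {jp for jp in T if dM[j, jp] <= scale}
        count += 1
    return count

V = max(cover_count(np.flatnonzero((dM[i] >= tau1) & (ang[i] <= tau2)).tolist(),
                    dM, scale)
        for i in range(len(pts)))
print("estimated V-number:", V)
```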
3 Main Results
Our main theorem establishes a set of sufficient resource requirements for the certificate problem under the class of geometries we consider here—by the reductions detailed in Section 2.1, this implies that gradient descent rapidly separates the two classes given a neural network of sufficient depth and width. First, we note a convenient aspect of the certificate problem, which is its amenability to approximate solutions: that is, if we have a kernel $\Theta$ that approximates $\Theta^{\mathrm{NTK}}$ in the sense that $\|\Theta_\mu - \Theta^{\mathrm{NTK}}_\mu\|_{L^2_\mu \to L^2_\mu} \lesssim n/L$, and a function $\zeta$ such that $\|\zeta - \zeta_0\|_{L^2_\mu} \lesssim 1/L$, then by the triangle inequality and the Schwarz inequality, it suffices to solve the equation $\Theta_\mu[g] \approx \zeta$ instead. In our arguments, we will exploit the fact that the random kernel $\Theta^{\mathrm{NTK}}$ concentrates well for wide networks with $n \gtrsim L$, choosing $\Theta$ as
$$\Theta(x, x') = \frac{n}{2} \sum_{\ell=0}^{L-1} \prod_{\ell'=\ell}^{L-1} \left( 1 - \frac{1}{\pi}\,\varphi^{[\ell']}(\angle(x, x')) \right), \qquad (3.1)$$
where $\varphi(t) = \cos^{-1}\left((1 - t/\pi)\cos t + (1/\pi)\sin t\right)$ and $\varphi^{[\ell']}$ denotes $\ell'$-fold composition of $\varphi$; as well as the fact that for wide networks with $n \gtrsim L^5$, depth 'smooths out' the initial error $\zeta_0$, choosing $\zeta$ as the piecewise-constant function $\zeta(x) = -f_\star(x) + \int_{\mathcal{M}} f_{\theta_0}(x')\, d\mu(x')$. We reproduce
high-probability concentration guarantees from the literature that justify these approximations in Appendix G.
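The kernel (3.1) is straightforward to evaluate; the sketch below (our own transcription of the formula, not the authors' code) computes $\Theta$ as a function of the angle and illustrates how increasing the depth $L$ concentrates the kernel near angle $0$ (where its value is exactly $nL/2$, since $\varphi(0) = 0$):

```python
import numpy as np

def phi(t):
    return np.arccos(np.clip((1 - t/np.pi) * np.cos(t) + (1/np.pi) * np.sin(t), -1.0, 1.0))

def theta_kernel(angle, n, L):
    # iterates[l] = phi composed l times, applied to the angle (phi^[0] = identity)
    iterates = [angle]
    for _ in range(L - 1):
        iterates.append(phi(iterates[-1]))
    # accumulate prod_{l'=l}^{L-1} (1 - iterates[l']/pi) from l = L-1 down to 0
    total, running = 0.0, 1.0
    for l in range(L - 1, -1, -1):
        running *= 1 - iterates[l] / np.pi
        total += running
    return (n / 2) * total

for L in (5, 20, 80):
    vals = np.array([theta_kernel(a, n=2, L=L) for a in (0.0, 0.3, 1.0)])
    # normalized profile sharpens around angle 0 as L grows
    print(f"L={L:3d}  Theta/Theta(0) at angles (0, 0.3, 1.0):", np.round(vals / vals[0], 4))
```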
Theorem 3.1 (Approximate Certificates for Curves). Let $\mathcal{M}$ be two disjoint smooth, regular, simple closed curves, satisfying $\angle(x, x') \le \pi/2$ for all $x, x' \in \mathcal{M}$. There exist absolute constants $C, C', C'', C'''$ and a polynomial $P = \mathrm{poly}(M_3, M_4, M_5, \mathrm{len}(\mathcal{M}), \Delta^{-1})$ of degree at most 36, with degree at most 12 in $(M_3, M_4, M_5, \mathrm{len}(\mathcal{M}))$ and degree at most 24 in $\Delta^{-1}$, such that when
$$L \ge \max\left\{ \exp\left(C'\,\mathrm{len}(\mathcal{M})\,\hat{\kappa}\right),\ \left(\Delta\sqrt{1+\kappa^2}\right)^{-C''\mathsf{V}(\mathcal{M})},\ C'''\hat{\kappa}^{10},\ P,\ \rho_{\max}^{12} \right\},$$
there exists a certificate $g$ with $\|g\|_{L^2_\mu} \le \frac{C\,\|\zeta\|_{L^2_\mu}}{\rho_{\min}\, n}\log L$ such that $\|\Theta_\mu[g] - \zeta\|_{L^2_\mu} \le \frac{\|\zeta\|_{L^\infty}}{L}$.
Theorem 3.1 is our main technical contribution: it provides a sufficient condition on the network depth $L$ to resolve the approximate certificate problem for the class of geometries we consider, with the required resources depending only on the geometric properties we introduce in Section 2.2. Given the connection between certificates and gradient descent, Theorem 3.1 demonstrates that deeper networks fit more complex geometries, which shows that the network depth plays the role of a fitting resource in classifying the two curves. We provide a numerical corroboration of the interaction between the network depth, the geometry, and the size of the certificate in Figure 3. For any family of geometries with bounded V-number, Theorem 3.1 implies a polynomial dependence of the depth on the angle injectivity radius $\Delta$, whereas we are unable to avoid an exponential dependence of the depth on the curvature $\kappa$. Nevertheless, these dependences may seem overly pessimistic in light of the existence of 'easy' two curve problem instances—say, linearly-separable classes, each of which is a highly nonlinear manifold—for which one would expect gradient descent to succeed without needing an unduly large depth. In fact, such geometries will not admit a small certificate norm in general unless the depth is sufficiently large: intuitively, this is a consequence of the operator $\Theta_\mu$ being ill-conditioned for such geometries.³
The proof of Theorem 3.1 is novel, both in the context of kernel regression on manifolds and in the context of NTK-regime neural network training. We detail the key intuitions for the proof in Section 4.
³Again, the equivalence between the difficulty of the certificate problem and the progress of gradient descent on decreasing the error is a consequence of our analysis proceeding in the kernel regime with the square loss—using alternate techniques to analyze the dynamics can allow one to prove that neural networks continue to fit such 'easy' classification problems efficiently (e.g. [34]).
As suggested above, applying Theorem 3.1 to construct a certificate is straightforward: given a suitable setting of $L$ for a two curve problem instance, we obtain an approximate certificate $g$ via Theorem 3.1. Then with the triangle inequality and the Schwarz inequality, we can bound
$$\|\Theta^{\mathrm{NTK}}_\mu[g] - \zeta_0\|_{L^2_\mu} \le \|\Theta^{\mathrm{NTK}}_\mu - \Theta_\mu\|_{L^2_\mu \to L^2_\mu} \|g\|_{L^2_\mu} + \|\zeta_0 - \zeta\|_{L^2_\mu} + \|\Theta_\mu[g] - \zeta\|_{L^2_\mu},$$
and leveraging suitable probabilistic control (see Appendix G) of the approximation errors in the previous expression, as well as on $\|\zeta\|_{L^2_\mu}$, then yields bounds for the certificate problem. Applying the reductions from gradient descent dynamics in the NTK regime to certificates discussed in Section 2.1, we then obtain an end-to-end guarantee for the two curve problem.
Theorem 3.2 (Generalization). Let $\mathcal{M}$ be two disjoint smooth, regular, simple closed curves, satisfying $\angle(x, x') \le \pi/2$ for all $x, x' \in \mathcal{M}$. For any $0 < \delta \le 1/e$, choose $L$ so that
$$L \ge K \max\left\{ \frac{1}{\left(\Delta\sqrt{1+\kappa^2}\right)^{C\mathsf{V}(\mathcal{M})}},\ C_\mu \log^9\!\left(\tfrac{1}{\delta}\right)\log^{24}\!\left(C_\mu n_0 \log\left(\tfrac{1}{\delta}\right)\right),\ e^{C'\max\{\mathrm{len}(\mathcal{M})\hat{\kappa},\, \log(\hat{\kappa})\}},\ P \right\},$$
$$n = K' L^{99} \log^9(1/\delta) \log^{18}(L n_0), \qquad N \ge L^{10},$$
and fix $\tau > 0$ such that $\frac{C''}{nL^2} \le \tau \le \frac{c}{nL}$. Then with probability at least $1 - \delta$, the parameters obtained at iteration $\lfloor L^{39/44}/(n\tau) \rfloor$ of gradient descent on the finite sample loss yield a classifier that separates the two manifolds.
The constants $c, C, C', C'', K, K' > 0$ are absolute, and $C_\mu = \frac{\max\{\rho_{\min}^{19},\, \rho_{\min}^{-19}\}(1 + \rho_{\max})^{12}}{(\min\{\mu(\mathcal{M}_+),\, \mu(\mathcal{M}_-)\})^{11/2}}$ is a constant that depends only on $\mu$. $P$ is a polynomial $\mathrm{poly}\{M_3, M_4, M_5, \mathrm{len}(\mathcal{M}), \Delta^{-1}\}$ of degree at most 36, with degree at most 12 when viewed as a polynomial in $M_3, M_4, M_5$, and $\mathrm{len}(\mathcal{M})$, and of degree at most 24 as a polynomial in $\Delta^{-1}$.
Theorem 3.2 represents the first end-to-end guarantee for training a deep neural network to classify a nontrivial class of low-dimensional nonlinear manifolds. We call attention to the fact that the hypotheses of Theorem 3.2 are completely self-contained, making reference only to intrinsic properties of the data and the architectural hyperparameters of the neural network (as well as poly(log n0)), and that the result is algorithmic, as it applies to training the network via constant-stepping gradient descent on the empirical square loss and guarantees generalization within L2 iterations. Furthermore, Theorem 3.2 can be readily extended to the more general setting of regression on curves, given that we have focused on training with the square loss.
4 Proof Sketch
In this section, we provide an overview of the key elements of the proof of Theorem 3.1, where we show that the equation $\Theta_\mu[g] \approx \zeta$ admits a solution $g$ (the certificate) of small norm. To solve the certificate problem for $\mathcal{M}$, we require a fine-grained understanding of the kernel $\Theta$. The most natural approach is to formally set $g = \sum_{i=1}^\infty \lambda_i^{-1} \langle \zeta, v_i\rangle_{L^2_\mu} v_i$ using the eigendecomposition of $\Theta_\mu$ (just as constructed in Section 2.1 for $\Theta^{\mathrm{NTK}}_\mu$), and then argue that this formal expression converges by studying the rate of decay of $\lambda_i$ and the alignment of $\zeta$ with eigenvectors of $\Theta_\mu$; this is the standard approach in the literature [46, 53]. However, as discussed in Section 2.1, the nonlinear structure of $\mathcal{M}$ makes obtaining a full diagonalization of $\Theta_\mu$ intractable, and simple asymptotic characterizations of its spectrum are insufficient to prove that the solution $g$ has small norm. Our approach will therefore be more direct: we will study the 'spatial' properties of the kernel $\Theta$ itself, in particular its rate of decay away from $x = x'$, and thereby use the network depth $L$ as a resource to reduce the study of the operator $\Theta_\mu$ to a simpler, localized operator whose invertibility can be proved using harmonic analysis. We will then use differentiability properties of $\Theta$ to transfer the solution obtained by inverting this auxiliary operator back to the operator $\Theta_\mu$. We refer readers to Appendix E for the full proof.
We simplify the proceedings using two basic reductions. First, with a small amount of auxiliary argumentation, we can reduce from the study of the operator-with-density $\Theta_\mu$ to the density-free operator
$\Theta$. Second, the kernel $\Theta(x, x')$ is a function of the angle $\angle(x, x')$, and hence is rotationally invariant. This kernel is maximized at $\angle(x, x') = 0$ and decreases monotonically as the angle increases, reaching its minimum value at $\angle(x, x') = \pi$. If we subtract this minimum value, it should not affect our ability to fit functions, and we obtain a rotationally invariant kernel $\Theta^\circ(x, x') = \psi^\circ(\angle(x, x'))$ that is concentrated around angle $0$. In the following, we focus on certificate construction for the kernel $\Theta^\circ$. Both simplifications are justified in Appendix E.3.
4.1 The Importance of Depth: Localization of the Neural Tangent Kernel
The first problem one encounters when attempting to directly establish (a property like) invertibility of the operator $\Theta^\circ$ is its action across connected components of $\mathcal{M}$: the operator $\Theta^\circ$ acts by integrating against functions defined on $\mathcal{M} = \mathcal{M}_+ \cup \mathcal{M}_-$, and although it is intuitive that most of its image's values on each component will be due to integration of the input over the same component, there will always be some 'cross-talk' corresponding to integration over the opposite component that interferes with our ability to apply harmonic analysis tools. To work around this basic issue (as well as others we will see below), our argument proceeds via a localization approach: we will exploit the fact that as the depth $L$ increases, the kernel $\Theta^\circ$ sharpens and concentrates around its value at $x = x'$, to the extent that we can neglect its action across components of $\mathcal{M}$ and even pass to the analysis of an auxiliary localized operator. This reduction is enabled by new sharp estimates for the decay of the angle function $\psi^\circ$ that we establish in Appendix F.3. Moreover, the perspective of using the network depth as a resource to localize the kernel $\Theta^\circ$ and exploiting this to solve the classification problem appears to be new: this localization is typically presented as a deficiency in the literature (e.g. [47]).
At a more formal level, when the network is deep enough compared to geometric properties of the curves, for each point $x$, the majority of the mass of the kernel $\Theta^\circ(x, x')$ is taken within a small neighborhood $d_{\mathcal{M}}(x, x') \le r$ of $x$. When $d_{\mathcal{M}}(x, x')$ is small relative to $\kappa$, we have $d_{\mathcal{M}}(x, x') \approx \angle(x, x')$. This allows us to approximate the local component by the following invariant operator:
$$\hat{M}[f](x_\sigma(s)) = \int_{s'=s-r}^{s+r} \psi^\circ(|s - s'|)\, f(x_\sigma(s'))\, ds'. \qquad (4.1)$$
This approximation has two main benefits: (i) the operator $\hat{M}$ is defined by the intrinsic distance $|s - s'|$, and (ii) it is highly localized. In fact, (4.1) takes the form of a convolution over the arc length parameter $s$. This implies that $\hat{M}$ diagonalizes in the Fourier basis, giving an explicit characterization of its eigenvalues and eigenvectors. Moreover, because $\hat{M}$ is localized, the eigenvalues corresponding to slowly oscillating Fourier basis functions are large, and $\hat{M}$ is stably invertible over such functions. Both of these benefits can be seen as consequences of depth: depth leads to localization, which facilitates approximation by $\hat{M}$, and renders that approximation invertible over low-frequency functions. In our proofs, we will work with a subspace $S$ spanned by low-frequency basis functions that are nearly constant over a length $2r$ interval (this subspace ends up having dimension proportional to $1/r$; see Appendix C.3 for a formal definition), and use Fourier arguments to prove invertibility of $\hat{M}$ over $S$ (see Lemma E.6).
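A short numerical illustration (our own, with a placeholder Gaussian profile standing in for $\psi^\circ$): on a periodic grid the operator (4.1) is a circulant matrix, its eigenvalues are the DFT of one column, and the low-frequency eigenvalues dominate while the high-frequency ones are negligible:

```python
import numpy as np

n, length, r = 512, 2 * np.pi, 0.2
s = np.linspace(0, length, n, endpoint=False)
h = length / n

def psi(u):
    # placeholder decaying profile standing in for psi^o (an assumption)
    return np.exp(-(u / (r / 3)) ** 2)

d = np.minimum(s, length - s)                 # circular distance |s - s'| from 0
col = np.where(d <= r, psi(d), 0.0) * h       # first column of the circulant quadrature
eigs = np.real(np.fft.fft(col))               # circulant matrices diagonalize via DFT

print("low-frequency eigenvalues :", np.round(eigs[:4], 5))
print("high-frequency eigenvalues:", np.round(np.abs(eigs[n//2 - 2 : n//2 + 2]), 8))
```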
4.2 Stable Inversion over Smooth Functions
Our remaining task is to leverage the invertibility of $\hat{M}$ over $S$ to argue that $\Theta$ is also invertible. In doing so, we need to account for the residual $\Theta - \hat{M}$. We accomplish this directly, using a Neumann series argument: when setting $r \lesssim L^{-1/2}$ and the dimension of the subspace $S$ proportional to $1/r$, the minimum eigenvalue of $\hat{M}$ over $S$ exceeds the norm of the residual operator $\Theta^\circ - \hat{M}$ (Lemma E.2). This argument leverages a decomposition of the domain into "near", "far" and "winding" pieces, whose contribution to $\Theta^\circ$ is controlled using the curvature, angle injectivity radius and V-number (Lemma E.8, Lemma E.9, Lemma E.10). This guarantees the strict invertibility of $\Theta^\circ$ over the subspace $S$, and yields a unique solution $g_S$ to the restricted equation $P_S \Theta^\circ[g_S] = \zeta$ (Theorem E.1).
This does not yet solve the certificate problem, which demands near solutions to the unrestricted equation $\Theta^\circ[g] = \zeta$. To complete the argument, we set $g = g_S$ and use harmonic analysis considerations to show that $\Theta^\circ[g]$ is very close to $S$. The subspace $S$ contains functions that do not oscillate rapidly, and hence whose derivatives are small relative to their norm (Lemma E.23). We prove that $\Theta^\circ[g]$ is close to $S$ by controlling the first three derivatives of $\Theta^\circ[g]$, which introduces dependencies on $M_1, \dots, M_5$ in the final statement of our results (Lemma E.27). In controlling these derivatives, we leverage the assumption that $\sup_{x, x' \in \mathcal{M}} \angle(x, x') \le \pi/2$ to avoid issues that arise at antipodal points—we believe the removal of this constraint is purely technical, given our sharp characterization of the decay of $\psi^\circ$ and its derivatives. Finally, we move from $\Theta^\circ$ back to $\Theta$ by combining near solutions to $\Theta^\circ[g] = \zeta$ and $\Theta^\circ[g_1] = 1$, and iterating the construction to reduce the approximation error to an acceptable level (Appendix E.3).
5 Discussion
A role for depth. In the setting of fitting functions on the sphere $S^{n_0-1}$ in the NTK regime with unstructured (e.g., uniformly random) data, it is well-known that there is very little marginal benefit to using a deeper network: for example, [32, 46, 59] show that the risk lower bound for RKHS methods is nearly met by kernel regression with a 2-layer network's NTK in an asymptotic ($n_0 \to \infty$) setting, and results for fitting degree-1 functions in the nonasymptotic setting [52] are suggestive of a similar phenomenon. In a similar vein, fitting in the NTK regime with a deeper network does not change the kernel's RKHS [41, 42, 45], and in a certain "infinite-depth" limit, the corresponding NTK for networks with ReLU activations, as we consider here, is a spike, guaranteeing that it fails to generalize [47, 50]. Our results are certainly not in contradiction to these facts—we consider a setting where the data are highly structured, and our proofs only show that an appropriate choice of the depth relative to this structure is sufficient to guarantee generalization, not necessary—but they nonetheless highlight an important role for the network depth in the NTK regime that has not been explored in the existing literature. In particular, the localization phenomenon exhibited by the deep NTK is completely inaccessible by fixed-depth networks, and simultaneously essential to our arguments to proving Theorem 3.2, as we have described in Section 4. It is an interesting open problem to determine whether there exist low-dimensional geometries that cannot be efficiently separated without a deep NTK, or whether the essential sufficiency of the depth-two NTK persists.
Closing the gap to real networks and data. Theorem 3.2 represents an initial step towards understanding the interaction between neural networks and data with low-dimensional structure, and identifying network resource requirements sufficient to guarantee generalization. There are several important avenues for future work. First, although the resource requirements in Theorem 3.1, and by extension Theorem 3.2, reflect only intrinsic properties of the data, the rates are far from optimal—improvements here will demand a more refined harmonic analysis argument beyond the localization approach we take in Section 4.1. A more fundamental advance would consist of extending the analysis to the setting of a model for image data, such as cartoon articulation manifolds, and the NTK of a convolutional neural network with architectural settings that impose translation invariance [25, 35]—recent results show asymptotic statistical efficiency guarantees with the NTK of a simple convolutional architecture, but only in the context of generic data [60]. The approach to certificate construction we develop in Theorem 3.1 will be of use in establishing guarantees analogous to Theorem 3.2 here, as our approach does not require an explicit diagonalization of the NTK.
In addition, extending our certificate construction approach to smooth manifolds of dimension larger than one is a natural next step. We believe our localization argument generalizes to this setting: as our bounds for the kernel ψ are sharp with respect to depth and independent of the manifold dimension, one could seek to prove guarantees analogous to Theorem 3.1 with a similar subspace-restriction argument for sufficiently regular manifolds, such as manifolds diffeomorphic to spheres, where the geometric parameters of Section 2.2 have natural extensions. Such a generalization would incur at best an exponential dependence of the network on the manifold dimension for localization in high dimensions.
More broadly, the localization phenomena at the core of our argument appear to be relevant beyond the regime in which the hypotheses of Theorem 3.2 hold: we provide a preliminary numerical experiment to this end in Appendix A.3. Training fully-connected networks with gradient descent on a simple manifold classification task, low training error appears to be easily achievable only when the decay scale of the kernel is small relative to the inter-manifold distance even at moderate depth and width, and this decay scale is controlled by the depth of the network.
Funding Transparency Statement and Acknowledgements
This work was supported by a Swartz fellowship (DG), by a fellowship award (SB) through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, sponsored by the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR) and the Army Research Office (ARO), and by the National Science Foundation through grants NSF 1733857, NSF 1838061, NSF 1740833, and NSF 174039. We thank Alberto Bietti for bringing to our attention relevant prior art on kernel regression on manifolds. | 1. What is the focus of the paper regarding binary classification and nonlinear curves?
2. What are the strengths of the proposed approach, particularly in its reliance on prior works?
3. Do you have any concerns or questions about the paper's assumptions and limitations?
4. How does the reviewer assess the clarity, significance, and originality of the paper's content?
5. Are there any potential issues with the paper's focus on full gradient descent and its applicability to stochastic gradient descent?
6. Can the result be extended to provide insights into using hinge loss instead of mean squared error?
7. Is there any advantage to assuming data lies in a high-dimensional space despite originating from a one-dimensional submanifold? | Summary Of The Paper
Review | Summary Of The Paper
The paper considers the problem of binary classification where the data from the two classes lie on two disjoint nonlinear curves in the unit sphere. They show that given a sufficiently overparameterized deep neural network, gradient descent from a random initialization converges to a classifier which separates the two classes. They show a reduction to the certificate problem in Buchanan et al. and similarly utilize the NTK regime to prove convergence.
Review
Overall, I find this paper to be an excellent theoretical contribution. It relies heavily on the work by Buchanan et al. but is nevertheless a significant contribution. These are some of the first generalization bounds I have seen for data that are not necessarily linearly separable, and though they are probably not tight, they are powerful results.
Originality: The work relies heavily on the paper by Buchanan et al. and resolves some of the open problems asked there while extending their results.
Clarity: The paper is extremely well written. Despite being an extremely technical result, the main paper provides good intuition and proof sketches relegating the technical details to the appendix.
Significance: I think this is a good result that takes us in the right direction towards understanding generalization and deep learning but I have some questions:
I see that this result assumes that the gradients are exact, i.e., the optimization algorithm is full gradient descent. Does a similar result hold for Stochastic Gradient Descent? In practice, it seems that the randomness is an important element. Perhaps the low-dimensional structure is what helps you avoid local minima?
The result shows that GD eventually reaches a solution that separates the classes when one uses MSE. However, this would require sufficiently dense sampling (as is mentioned in the paper). Does the analysis provide any insight into whether it would eventually reach a max-margin classifier if one used hinge loss, say?
This is perhaps an oversight on my part, but I found the usage of the one-dimensional setting very subtle and tricky to identify even in the theorem statements. Is there a significant benefit to assuming the data to lie in a high dimensional space if it comes from a 1-dimensional submanifold? |
NIPS | Title
Metric-Free Individual Fairness in Online Learning
Abstract
We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly. Unlike prior work on individual fairness, we do not assume the similarity measure among individuals is known, nor do we assume that such measure takes a certain parametric form. Instead, we leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure. In each round, the auditor examines the learner’s decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Rothblum and Yona (2018)), despite only having access to a very restricted form of fairness feedback. Our fairness generalization bound qualitatively matches the uniform convergence bound of Rothblum and Yona (2018), while also providing a meaningful accuracy generalization guarantee. Our results resolve an open question by Gillen et al. (2018) by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure.
1 Introduction
As machine learning increasingly permeates many critical aspects of society, including education, healthcare, criminal justice, and lending, there is by now a vast literature that studies how to make machine learning algorithms fair (see, e.g., Chouldechova and Roth (2018); Podesta et al. (2014); Corbett-Davies and Goel (2018)). Most of the work in this literature tackles the problem by taking the statistical group fairness approach that first fixes a small collection of high-level groups defined by protected attributes (e.g., race or gender) and then asks for approximate parity of some statistic of the predictor, such as positive classification rate or false positive rate, across these groups (see, e.g., Hardt et al. (2016); Chouldechova (2017); Kleinberg et al. (2017); Agarwal et al. (2018)). While notions of group fairness are easy to operationalize, they are aggregate in nature without fairness guarantees for finer subgroups or individuals (Dwork et al., 2012; Hébert-Johnson et al., 2018; Kearns et al., 2018).
In contrast, the individual fairness approach aims to address this limitation by asking for explicit fairness criteria at an individual level. In particular, the compelling notion of individual fairness proposed in the seminal work of Dwork et al. (2012) requires that similar people are treated similarly. The original formulation of individual fairness assumes that the algorithm designer has access to
a task-specific fairness metric that captures how similar two individuals are in the context of the specific classification task at hand. In practice, however, such a fairness metric is rarely specified, and the lack of metrics has been a major obstacle for the wide adoption of individual fairness. There has been recent work on learning the fairness metric based on different forms of human feedback. For example, Ilvento (2019) provides an algorithm for learning the metric by presenting human arbiters with queries concerning the distance between individuals, and Gillen et al. (2018) provide an online learning algorithm that can eventually learn a Mahalanobis metric based on identified fairness violations. While these results are encouraging, they are still bound by several limitations. In particular, it might be difficult for humans to enunciate a precise quantitative similarity measure between individuals. Moreover, their similarity measure across individuals may not be consistent with any metric (e.g., it may not satisfy the triangle inequality) and is unlikely to be given by a simple parametric function (e.g., the Mahalanobis metric function).
To tackle these issues, this paper studies metric-free online learning algorithms for individual fairness that rely on a weaker form of interactive human feedback and minimal assumptions on the similarity measure across individuals. Similar to the prior work of Gillen et al. (2018), we do not assume a prespecified metric, but instead assume access to an auditor, who observes the learner’s decisions over a group of individuals that show up in each round and attempts to identify a fairness violation—a pair of individuals in the group that should have been treated more similarly by the learner. Since the auditor only needs to identify such unfairly treated pairs, there is no need for them to enunciate a quantitative measure – to specify the distance between the identified pairs. Moreover, we do not impose any parametric assumption on the underlying similarity measure, nor do we assume that it is actually a metric since we do not require that similarity measure to satisfy the triangle inequality. Under this model, we provide a general reduction framework that can take any online classification algorithm (without fairness constraint) as a black-box and obtain a learning algorithm that can simultaneously minimize cumulative classification loss and the number of fairness violations. Our results in particular remove many strong assumptions in Gillen et al. (2018), including their parametric assumptions on linear rewards and Mahalanobis distances, and thus answer several questions left open in their work.
1.1 Overview of Model and Results
We study an online classification problem: over rounds $t = 1, \dots, T$, a learner observes a small set of $k$ individuals with their feature vectors $(x^t_\tau)_{\tau=1}^k$ in space $\mathcal{X}$. The learner tries to predict the label $y^t_\tau \in \{0, 1\}$ of each individual with a "soft" predictor $\pi^t$ that predicts $\pi^t(x^t_\tau) \in [0, 1]$ on each $x^t_\tau$ and incurs classification loss $|\pi^t(x^t_\tau) - y^t_\tau|$. Then an auditor will investigate whether the learner has violated the individual fairness constraint on any pair of individuals within this round, that is, whether there exists $(\tau_1, \tau_2) \in [k]^2$ such that $|\pi^t(x^t_{\tau_1}) - \pi^t(x^t_{\tau_2})| > d(x^t_{\tau_1}, x^t_{\tau_2}) + \alpha$, where $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ is an unknown distance function and $\alpha$ denotes the auditor's tolerance. If this violation has occurred on any number of pairs, the auditor will identify one such pair and incur a fairness loss of 1; otherwise, the fairness loss is 0. Then the learner will update the predictive policy based on the observed labels and the received fairness feedback. Under this model, our results include:
A Reduction from Fair Online Classification to Standard Online Classification. Our reduction-based algorithm can take any no-regret online (batch) classification learner as a black box and achieve sub-linear cumulative fairness loss and sub-linear regret on misclassification loss compared to the most accurate policy that is fair on every round. In particular, our framework can leverage the generic exponential weights method (Freund and Schapire, 1997; Cesa-Bianchi et al., 1997; Arora et al., 2012) and also oracle-efficient methods, including variants of Follow-the-Perturbed-Leader (FTPL) (e.g., Syrgkanis et al. (2016); Suggala and Netrapalli (2019)), that further reduce online learning to standard supervised learning or optimization problems. We instantiate our framework using two online learning algorithms (exponential weights and CONTEXT-FTPL), both of which obtain $\tilde{O}(\sqrt{T})$ misclassification regret and cumulative fairness loss.
Fairness and Accuracy Generalization Guarantees. While our algorithmic results hold under adversarial arrivals of the individuals, in the stochastic arrivals setting we show that the uniform average policy over time is probably approximately correct and fair (PACF) (Rothblum and Yona, 2018)—that is, the policy is approximately fair on almost all random pairs drawn from the distribution and nearly matches the accuracy guarantee of the best fair policy. In particular, we show that the average policy $\pi_{\mathrm{avg}}$ with high probability satisfies $\Pr_{x, x'}[|\pi_{\mathrm{avg}}(x) - \pi_{\mathrm{avg}}(x')| > \alpha + 1/T^{1/4}] \le O(1/T^{1/4})$,
which qualitatively achieves similar PACF uniform convergence sample complexity as Rothblum and Yona (2018).¹ However, we establish our generalization guarantee through fundamentally different techniques. While their work assumes a fully specified metric and i.i.d. data, the learner in our setting can only access the similarity measure through an auditor's limited fairness violation feedback. The main challenge we need to overcome is that the fairness feedback is inherently adaptive—that is, the auditor only provides feedback for the sequence of deployed policies, which are updated adaptively over rounds. In comparison, a fully known metric allows the learner to evaluate the fairness guarantee of all policies simultaneously. As a result, we cannot rely on their uniform convergence result to bound the fairness generalization error, but instead we leverage a probabilistic argument that relates the learner's regret to the distributional fairness guarantee.
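As a toy reading of this guarantee (entirely synthetic stand-ins for the deployed policies and distances, not the paper's experiment), one can form the uniform average policy and estimate the violation probability $\beta$ on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, alpha = 64, 200, 0.1                    # rounds, domain size, tolerance
policies = rng.uniform(0, 1, size=(T, m))     # stand-ins for deployed policies pi^t
pi_avg = policies.mean(axis=0)                # uniform average policy

D = rng.uniform(0, 0.5, size=(m, m))
D = (D + D.T) / 2                             # synthetic symmetric distances

i = rng.integers(0, m, 5000)                  # random pairs from the "distribution"
j = rng.integers(0, m, 5000)
beta_hat = np.mean(np.abs(pi_avg[i] - pi_avg[j]) > D[i, j] + alpha)
print("empirical violation rate beta:", beta_hat)
```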
2 Related Work
Solving open problems in Gillen et al. (2018). The most related work to ours is Gillen et al. (2018), which studies the linear contextual bandit problem subject to individual fairness with an unknown Mahalanobis metric. Similar to our work, they also assume an auditor who can identify fairness violations in each round and provide an online learning algorithm with sublinear regret and a bounded number of fairness violations. Our results resolve two main questions left open in their work. First, we assume a weaker auditor who only identifies a single fairness violation (as opposed to all of the fairness violations in their setting). Second, we remove the strong parametric assumption on the Mahalanobis metric and work with a broad class of similarity functions that need not be metric.
Starting with Joseph et al. (2016), there is a different line of work that studies online learning for individual fairness, but subject to a different notion called meritocratic fairness (Jabbari et al., 2017; Joseph et al., 2018; Kannan et al., 2017). These results present algorithms that are "fair" within each round but again rely on strong realizability assumptions—their fairness guarantee depends on the assumption that the outcome variable of each individual is given by a linear function. Gupta and Kamble (2019) also study online learning subject to individual fairness but with a known metric. They formulate a one-sided fairness constraint across time, called fairness in hindsight, and provide an algorithm with regret $O(T^{M/(M+1)})$ for some distribution-dependent constant $M$.
Our work is related to several others that aim to enforce individual fairness without a known metric. Ilvento (2019) studies the problem of metric learning by asking human arbiters distance queries. Unlike Ilvento (2019), our algorithm does not explicitly learn the underlying similarity measure and does not require asking auditors numeric queries. The PAC-style fairness generalization bound in our work falls under the framework of probably approximately metric-fairness due to Rothblum and Yona (2018). However, their work assumes a pre-specified fairness metric and i.i.d. data from the distribution, while we establish our generalization through a sequence of adaptive fairness violations feedback over time. Kim et al. (2018) study a group-fairness relaxation of individual fairness, which requires that similar subpopulations are treated similarly. They do not assume a pre-specified metric for their offline learning problem, but they do assume a metric oracle that returns numeric distance values on random pairs of individuals. Jung et al. (2019) study an offline learning problem with subjective individual fairness, in which the algorithm tries to elicit subjective fairness feedback from human judges by asking them questions of the form “should this pair of individuals be treated similarly or not?" Their fairness generalization takes a different form, which involves taking averages over both the individuals and human judges. We aim to provide a fairness generalization guarantee that holds for almost all individuals from the population.
3 Model and Preliminaries
We define the instance space to be $\mathcal{X}$ and its label space to be $\mathcal{Y}$. Throughout this paper, we will restrict our attention to binary labels, that is, $\mathcal{Y} = \{0, 1\}$. We write $\mathcal{H} \subseteq \{h : \mathcal{X} \to \mathcal{Y}\}$ to denote the hypothesis class and assume that $\mathcal{H}$ contains a constant hypothesis—i.e., there exists $h$ such that $h(x) = 0$ for all $x \in \mathcal{X}$. Also, we allow for convex combinations of hypotheses for the purpose of randomizing the prediction and denote the simplex of hypotheses by $\Delta\mathcal{H}$; we call a randomized hypothesis a policy.
¹Rothblum and Yona (2018) show (Theorem 1.4 in their work) that if a policy $\pi$ is $\alpha$-fair on all pairs in an i.i.d. dataset of size $m$, then $\pi$ satisfies $\Pr_{x, x'}[|\pi(x) - \pi(x')| > \alpha + \epsilon] \le \epsilon$, as long as $m \ge \tilde{\Omega}(1/\epsilon^4)$.
Sometimes, we assume the existence of an underlying (but unknown) distribution $\mathcal{D}$ over $(\mathcal{X}, \mathcal{Y})$. For each prediction $\hat{y} \in \mathcal{Y}$ and its true label $y \in \mathcal{Y}$, there is an associated misclassification loss $\ell(\hat{y}, y) = \mathbb{1}(\hat{y} \ne y)$. For simplicity, we overload the notation and write
$$\ell(\pi(x), y) = (1 - \pi(x)) \cdot y + \pi(x) \cdot (1 - y) = \mathbb{E}_{h \sim \pi}[\ell(h(x), y)].$$
3.1 Individual Fairness and Auditor
We want our deployed policy $\pi$ to behave fairly in some manner, and we use the individual fairness definition from Dwork et al. (2012) that asserts that "similar individuals should be treated similarly." We assume that there is some distance function $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ over the instance space $\mathcal{X}$ which captures the distance between individuals in $\mathcal{X}$, although $d$ does not have to satisfy the triangle inequality. The only requirement on $d$ is that it is always non-negative and symmetric: $d(x, x') = d(x', x)$. Definition 3.1 (($\alpha$, $\beta$)-fairness). Assume $\alpha, \beta > 0$. A policy $\pi \in \Delta\mathcal{H}$ is said to be $\alpha$-fair on pair $(x, x')$ if $|\pi(x) - \pi(x')| \le d(x, x') + \alpha$. We say policy $\pi$'s $\alpha$-fairness violation on pair $(x, x')$ is
$$v_\alpha(\pi, (x, x')) = \max(0,\ |\pi(x) - \pi(x')| - d(x, x') - \alpha).$$
A policy $\pi$ is said to be ($\alpha$, $\beta$)-fair on distribution $\mathcal{D}$ if
$$\Pr_{(x, x') \sim \mathcal{D}|_{\mathcal{X}} \times \mathcal{D}|_{\mathcal{X}}}\left[|\pi(x) - \pi(x')| > d(x, x') + \alpha\right] \le \beta.$$
A policy $\pi$ is said to be $\alpha$-fair on a set $S \subseteq \mathcal{X}$ if it is $\alpha$-fair on all pairs $(x, x') \in S^2$.
Although individual fairness is intuitively sound, the individual fairness notion requires knowledge of the distance function $d$, which is often hard to specify. Therefore, we rely on an auditor $\mathcal{J}$ that can detect instances of $\alpha$-unfairness. Definition 3.2 (Auditor $\mathcal{J}$). An auditor $\mathcal{J}_\alpha$, which can have its own internal state, takes in a reference set $S \subseteq \mathcal{X}$ and a policy $\pi$. Then, it outputs $\rho$, which is either null or a pair of indices from the provided reference set, denoting that there is some positive $\alpha$-fairness violation for that pair. For some $S = (x_1, \dots, x_n)$,
$$\mathcal{J}_\alpha(S, \pi) = \begin{cases} \rho = (\rho_1, \rho_2) & \text{if } \exists\, \rho_1, \rho_2 \in [n] \text{ such that } \pi(x_{\rho_1}) - \pi(x_{\rho_2}) - d(x_{\rho_1}, x_{\rho_2}) - \alpha > 0 \\ \text{null} & \text{otherwise} \end{cases}$$
If there exist multiple pairs with some $\alpha$-violation, the auditor can choose one arbitrarily. Remark 3.3. Our assumptions on the auditor are much more relaxed than those of Gillen et al. (2018), which require that the auditor output whether the policy is 0-fair (i.e. with no slack) on all pairs $S^2$ exactly. Furthermore, the auditor in Gillen et al. (2018) can only handle Mahalanobis distances. In our setting, because of the internal state of the auditor, the auditor does not have to be a fixed function but rather can change adaptively in each round. Finally, we never rely on the fact that the distance function $d$ stays the same throughout rounds, meaning all our results extend to the case where the distance function governing the fairness constraints changes every round.
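A minimal sketch of such an auditor (our own illustration; the helper names are ours): it scans the reference set and returns one violating ordered pair, or None, with the distance function $d$ remaining the auditor's private knowledge:

```python
from itertools import combinations

def audit(xs, pi, d, alpha):
    """One alpha-fairness violation on the reference set xs, or None.

    xs: list of individuals; pi: maps an individual to a score in [0, 1];
    d: symmetric distance function (known only to the auditor).
    """
    for i, j in combinations(range(len(xs)), 2):
        gap = pi(xs[i]) - pi(xs[j])
        if abs(gap) > d(xs[i], xs[j]) + alpha:
            # return the ordered pair with the positive gap, as in Definition 3.2
            return (i, j) if gap > 0 else (j, i)
    return None  # no pair exceeds the tolerance
```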
3.2 Online Batch Classification
We now describe our online batch classification setting. In each round $t = 1, \dots, T$, the learner deploys some model $\pi^t \in \Delta\mathcal{H}$. Upon seeing the deployed policy $\pi^t$, the environment chooses a batch of $k$ individuals, $(x^t_\tau, y^t_\tau)_{\tau=1}^k$, and possibly a pair of individuals from that round for which $\pi^t$ will be responsible for any $\alpha$-fairness violation. For simplicity, we write $\bar{x}^t = (x^t_\tau)_{\tau=1}^k$ and $\bar{y}^t = (y^t_\tau)_{\tau=1}^k$. The strategy $z^t_{\text{FAIR-BATCH}} \in \mathcal{Z}_{\text{FAIR-BATCH}}$ that the environment chooses can be described by $z^t_{\text{FAIR-BATCH}} = (\bar{x}^t, \bar{y}^t) \times \rho^t$, where $\rho^t \in [k]^2 \cup \{\text{null}\}$. Often, we will omit the subscript and simply write $z^t$. If $\rho^t = (\rho^t_1, \rho^t_2)$, then $\pi^t$ will be responsible for the $\alpha$-fairness violation on the pair $(x^t_{\rho^t_1}, x^t_{\rho^t_2})$. There are two types of losses that we are interested in: misclassification and fairness loss. Definition 3.4 (Misclassification Loss). The (batch) misclassification loss $\mathrm{Err}$² is
$$\mathrm{Err}(\pi, z^t) = \sum_{\tau=1}^k \ell(\pi(x^t_\tau), y^t_\tau).$$
²We will overload the notation for this loss; regardless of what $\mathcal{Z}$ is, we will assume $\mathrm{Err}(\pi, z^t)$ is well-defined as long as $z^t$ includes $(\bar{x}^t, \bar{y}^t)$.
Algorithm 1: Online Fair Batch Classification FAIR-BATCH
for t = 1, . . . , T do
    Learner deploys $\pi^t$
    Environment chooses $(\bar{x}^t, \bar{y}^t)$
    Environment chooses the pair $\rho^t$
    $z^t = (\bar{x}^t, \bar{y}^t) \times \rho^t$
    Learner incurs misclassification loss $\mathrm{Err}(\pi^t, z^t)$
    Learner incurs fairness loss $\mathrm{Unfair}(\pi^t, z^t)$
end

Algorithm 2: Online Batch Classification BATCH
for t = 1, . . . , T do
    Learner deploys $\pi^t$
    Environment chooses $z^t = (\bar{x}^t, \bar{y}^t)$
    Learner incurs misclassification loss $\mathrm{Err}(\pi^t, z^t)$
end
Figure 1: Comparison between Online Fair Batch Classification and Online Batch Classification. Each is summarized by the interaction between the learner and the environment: (∆H × Z_FAIR-BATCH)^T and (∆H × Z_BATCH)^T, where Z_FAIR-BATCH = X^k × Y^k × ([k]² ∪ {null}) and Z_BATCH = X^k × Y^k.
Definition 3.5 (Fairness Loss). The α-fairness loss Unfair_α is

Unfair_α(π, z^t) = 1( π(x^t_{ρ^t_1}) − π(x^t_{ρ^t_2}) − d(x^t_{ρ^t_1}, x^t_{ρ^t_2}) − α > 0 ) if ρ^t = (ρ^t_1, ρ^t_2), and 0 otherwise.
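As a sketch of how these two per-round losses would be computed (again with the illustrative pi, d, and alpha from above; the helper names are ours):

```python
def misclassification_loss(pi, xs, ys):
    """Batch misclassification loss Err: expected 0/1 loss of the soft predictor pi."""
    return sum((1 - pi(x)) * y + pi(x) * (1 - y) for x, y in zip(xs, ys))

def fairness_loss(pi, xs, rho, d, alpha):
    """Alpha-fairness loss Unfair: 1 iff the reported pair is a genuine alpha-violation."""
    if rho is None:
        return 0
    i, j = rho
    return int(pi(xs[i]) - pi(xs[j]) - d(xs[i], xs[j]) - alpha > 0)
```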
We want the total misclassification and fairness loss over T rounds to be as small as that of any π∗ ∈ Q for some competitor set Q, which we describe now. As said above, each round’s reference set, i.e., the set of pairs for which the deployed policy may be held responsible in terms of α-fairness, will be defined in terms of the instances that arrive within that round, x̄^t. The baseline Qα that we compete against will be all policies that are α-fair on x̄^t for all t ∈ [T]:

Qα = {π ∈ ∆H : π is α-fair on x̄^t for all t ∈ [T]}
Note that because H contains a constant hypothesis, which must be 0-fair on all instances, Qα cannot be empty. The difference in total loss between our algorithm and a fixed π∗ is called regret, which we formally define below.

Definition 3.6 (Algorithm A). An algorithm A : (∆H × Z)^* → ∆H takes in its past history (π^τ, z^τ)_{τ=1}^{t−1} and deploys a policy π^t ∈ ∆H at every round t ∈ [T].

Definition 3.7 (Regret). For some Q ⊆ ∆H, the regret of algorithm A with respect to some loss L : ∆H × Z → R is denoted Regret^L(A, Q, T) if, for any (z^t)_{t=1}^T,

Σ_{t=1}^T L(π^t, z^t) − inf_{π∗∈Q} Σ_{t=1}^T L(π∗, z^t) = Regret^L(A, Q, T),

where π^t = A((π^j, z^j)_{j=1}^{t−1}). When it is not clear from context, we will use a subscript to denote the setting, e.g. Regret^L_FAIR-BATCH.
We wish to develop an algorithm such that both the misclassification and the fairness loss regret are sublinear, which is often called no-regret. Note that because any π∗ ∈ Qα is α-fair on x̄^t for all t ∈ [T], we have Unfair_α(π∗, z^t) = 0 for all t ∈ [T]. Hence, achieving Regret^{Unfair_α}_FAIR-BATCH(A, Q, T) = o(T) is equivalent to ensuring that the total number of rounds with any α-fairness violation is sublinear. Therefore, our goal is equivalent to developing an algorithm A so that for any (z^t)_{t=1}^T,

Regret^Err_FAIR-BATCH(A, Q, T) = o(T) and Σ_{t=1}^T Unfair_α(π^t, z^t) = o(T).
To achieve the result above, we will reduce our setting to a setting with no fairness constraint, which we call the online batch classification problem. As in the online fair batch classification setting, in each round t the learner deploys a policy π^t, but the environment chooses only a batch of instances (x^t_τ, y^t_τ)_{τ=1}^k. In online batch classification, we denote the strategy set of the environment by Z_BATCH = X^k × Y^k. We compare the two settings in Figure 1.
4 Achieving No Regret Simultaneously
Here, we define a round-based Lagrangian loss and show that the regret with respect to this Lagrangian loss also bounds the misclassification and fairness regret. Then, we show that using an auditor that can detect any fairness violation beyond a certain threshold, we can still hope to achieve no-regret against an adaptive adversary.

Finally, we show how to achieve no regret with respect to the Lagrangian loss by reducing the problem to online batch classification, where there is no fairness constraint. We show that a Follow-the-Perturbed-Leader style approach (CONTEXT-FTPL from Syrgkanis et al. (2016)) can achieve sublinear regret in the online batch classification setting, which allows us to achieve sublinear regret with respect to both the misclassification and the fairness loss in the online fair batch classification setting.
4.1 Lagrangian Formulation
Here we present a hybrid loss, which we call the Lagrangian loss, that combines the misclassification loss with the magnitude of the fairness violation in round t.
Definition 4.1 (Lagrangian Loss). The (C,α)-Lagrangian loss of π is
L_{C,α}(π, ((x̄^t, ȳ^t), ρ^t)) = Σ_{τ=1}^k ℓ(π(x^t_τ), y^t_τ) + C·(π(x^t_{ρ_1}) − π(x^t_{ρ_2}) − α) if ρ^t = (ρ_1, ρ_2), and Σ_{τ=1}^k ℓ(π(x^t_τ), y^t_τ) if ρ^t = null.
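A sketch of this hybrid loss, reusing the illustrative helpers above; note that the penalty term uses the raw prediction gap minus α and does not require d at all, which is what lets the learner optimize it without knowing the similarity measure:

```python
def lagrangian_loss(pi, xs, ys, rho, C, alpha):
    """(C, alpha)-Lagrangian loss: misclassification plus a scaled fairness penalty."""
    loss = misclassification_loss(pi, xs, ys)
    if rho is not None:
        i, j = rho
        loss += C * (pi(xs[i]) - pi(xs[j]) - alpha)  # signed gap, no distance needed
    return loss
```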
Given an auditor J_α that can detect any α-fairness violation, we can simulate the online fair batch classification setting with the auditor by setting the pair ρ^t_J = J_α(x̄^t, π^t); the subscript J is placed on this pair to distinguish it from the pair chosen by the environment.³
Definition 4.2 (Lagrangian Regret). Algorithm A’s (C, α, J_{α′})-Lagrangian regret against Q is Regret^{C,α,J_{α′}}(A, Q, T) if, for any (x̄^t, ȳ^t)_{t=1}^T, we have

Σ_{t=1}^T L_{C,α}(π^t, (x̄^t, ȳ^t), ρ^t_J) − min_{π∗∈Q} Σ_{t=1}^T L_{C,α}(π∗, (x̄^t, ȳ^t), ρ^t_J) ≤ Regret^{C,α,J_{α′}}(A, Q, T),

where ρ^t_J = J_{α′}(x̄^t, π^t).

Remark 4.3. From here on, we assume the auditor has a given sensitivity denoted by α′ = α + ε, where ε is a parameter we will fix in order to define our desired benchmark Qα.
Now, we show that the Lagrangian regret upper bounds the α-fairness loss regret with some slack, provided C is set appropriately large. Also, we show that the (C, α, J_{α+ε})-Lagrangian regret serves as the misclassification loss regret, too. The proofs are given in Appendix A.1.
Theorem 4.4. Fix some small constant ε > 0 and C ≥ (k+1)/ε. For any sequence of the environment’s strategies (z^t)_{t=1}^T ∈ Z^T_FAIR-BATCH, Σ_{t=1}^T Unfair_{α+ε}(π^t, z^t) ≤ Regret^{C,α,J_{α+ε}}(A, Qα, T).
Theorem 4.5. Fix some small constant ε > 0. For any sequence of (z^t)_{t=1}^T ∈ Z^T_FAIR-BATCH and π∗ ∈ Qα,

Σ_{t=1}^T Σ_{τ=1}^k ℓ(π^t(x^t_τ), y^t_τ) − Σ_{t=1}^T Σ_{τ=1}^k ℓ(π∗(x^t_τ), y^t_τ) ≤ Regret^{C,α,J_{α+ε}}(A, Qα, T),

where C ≥ (k+1)/ε. In other words, Regret^Err_FAIR-BATCH(A, Qα, T) ≤ Regret^{C,α,J_{α+ε}}(A, Qα, T).

³Although we are simulating the adaptive environment’s strategy ρ^t with ρ^t_J, note that the fairness loss with ρ^t_J will always be at least the fairness loss with ρ^t, because the auditor will always indicate if there is a fairness violation. This distinction between the pair chosen by the environment and the pair chosen by the auditor is necessary for technical reasons: we need to ensure that the pair used to charge the Lagrangian loss incurs constant instantaneous regret in the rounds where there is actually some fairness violation, as the pair chosen by the environment can possibly have no fairness violation and hence negative instantaneous regret. This will be made more clear in the proof of Theorem 4.4.
4.2 Reduction to Online Batch Classification
In this subsection, we will first discuss a computationally inefficient way to achieve no regret with respect to the Lagrangian loss. Then, we will show an efficient reduction to online batch classification and discuss an example of an oracle-efficient algorithm A_BATCH that achieves no-regret. It is well known that for linear losses, exponential weights with an appropriately tuned learning rate γ achieves no regret (Freund and Schapire, 1997; Cesa-Bianchi et al., 1997; Arora et al., 2012). Note that our Lagrangian loss

L^t_{C,α}(π) = L_{C,α}(π, z^t) = Σ_{τ=1}^k [(1 − π(x^t_τ))·y^t_τ + π(x^t_τ)·(1 − y^t_τ)] + { C(π(x^t_{ρ_1}) − π(x^t_{ρ_2}) − α) if ρ^t = (ρ_1, ρ_2); 0 if ρ^t = null }
is linear in π for any z^t, and its range is [0, C + k]. Therefore, running exponential weights with learning rate γ = √(ln(|H|)/T), we achieve the following regret with respect to the Lagrangian loss:

Corollary 4.6. Running exponential weights with γ = √(ln(|H|)/T) and C ≥ (k+1)/ε, we achieve

Regret^Err_FAIR-BATCH(A, Qα, T) ≤ (C+k)√(ln(|H|)T), Σ_{t=1}^T Unfair_{α+ε}(π^t, z^t) ≤ (C+k)√(ln(|H|)T).
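The inefficient route is direct: treat every h ∈ H as an expert and run exponential weights on the Lagrangian loss. A minimal sketch, assuming a small finite hypothesis class and the illustrative helpers above (the per-round interface is our own framing):

```python
import math

def exponential_weights(hypotheses, rounds, gamma, C, alpha):
    """Exponential weights over a finite class H with the Lagrangian loss.

    `rounds` yields (xs, ys, audit_fn), where audit_fn(xs, pi) returns one
    alpha-violating pair of indices for the deployed mixture pi, or None.
    """
    weights = [1.0] * len(hypotheses)
    for xs, ys, audit_fn in rounds:
        total = sum(weights)
        pi = lambda x: sum(w * h(x) for w, h in zip(weights, hypotheses)) / total
        rho = audit_fn(xs, pi)  # auditor reacts to the deployed policy
        # Every hypothesis is charged its own Lagrangian loss on the reported pair.
        losses = [lagrangian_loss(h, xs, ys, rho, C, alpha) for h in hypotheses]
        weights = [w * math.exp(-gamma * l) for w, l in zip(weights, losses)]
```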
Nevertheless, running exponential weights is not efficient, as it needs to calculate the loss for each h ∈ H in every round t. To design an oracle-efficient algorithm, we reduce the online fair batch classification problem to the online batch classification problem in an efficient manner and use any online batch algorithm A_BATCH((π^j, (x̄′^j, ȳ′^j))_{j=1}^t) as a black box. At a high level, our reduction carefully transforms our online fair batch classification history up to round t, (π^j, (x̄^j, ȳ^j, ρ^j))_{j=1}^t ∈ (∆H × Z_FAIR-BATCH)^t, into some fake online batch classification history (π^j, (x̄′^j, ȳ′^j))_{j=1}^t ∈ (∆H × Z_BATCH)^t, and then feeds the artificially created history to A_BATCH.
Without loss of generality, we assume that C ≥ (k+1)/ε is an integer; if it is not, take the ceiling. Now, we describe how the transformation of the history works. For each round t, whenever ρ^t = (ρ^t_1, ρ^t_2), we add C copies of each of (x^t_{ρ^t_1}, 0) and (x^t_{ρ^t_2}, 1) to the original batch to form x̄′^t and ȳ′^t. To keep the batch size the same across rounds, even if ρ^t = null, we add C copies of each of (v, 0) and (v, 1), where v is some arbitrary instance in X. We describe this process in more detail in Algorithm 3. This reduction essentially preserves the regret.

Theorem 4.7. For any sequence of (z^t)_{t=1}^T ∈ Z^T_FAIR-BATCH, Q ⊆ ∆H, and π∗ ∈ Q,

Σ_{t=1}^T L_{C,α}(π^t, z^t) − Σ_{t=1}^T L_{C,α}(π∗, z^t) ≤ Regret^Err_BATCH(A, Q, T),

where π^t = A_BATCH((π^j, x̄′^j, ȳ′^j)_{j=1}^{t−1}). Therefore, Regret^{C,α,J_{α+ε}}(A, Qα, T) ≤ Regret^Err_BATCH(A, Q, T).
One example of A_BATCH that achieves sublinear regret in online batch classification is CONTEXT-FTPL from Syrgkanis et al. (2016). We defer the details to Appendix A.3 and present the regret guarantee here. We focus only on their small separator set setting (i.e., there exists a small set of points which serves as a witness to distinguish any two different hypotheses), although their transductive setting (i.e., the contexts {x^t}_{t=1}^T are known in advance) naturally follows as well.

Theorem 4.8. If the separator set S for H is of size s, then CONTEXT-FTPL achieves the following misclassification and fairness regret in the online fair batch classification setting:

Regret^Err_FAIR-BATCH(A, Qα, T) ≤ O((sk/ε)^{3/4} √(T log(|H|))), Σ_{t=1}^T Unfair_{α+ε}(π^t, z^t) ≤ O((sk/ε)^{3/4} √(T log(|H|))).
Algorithm 3: Reduction from Online Fair Batch Classification to Online Batch Classification
Parameters: inflation constant C, original round size k
Initialize: k′ = k + 2C
for t = 1, . . . , T do
    Learner deploys π^t
    Environment chooses (x̄^t, ȳ^t) and the pair ρ^t
    if ρ^t = (ρ^t_1, ρ^t_2) then
        for i = 1, . . . , C do
            x^t_{k+i} = x^t_{ρ^t_1} and y^t_{k+i} = 0
            x^t_{k+C+i} = x^t_{ρ^t_2} and y^t_{k+C+i} = 1
        end
    else
        for i = 1, . . . , C do
            x^t_{k+i} = v and y^t_{k+i} = 0
            x^t_{k+C+i} = v and y^t_{k+C+i} = 1
        end
    end
    x̄′^t = (x^t_τ)_{τ=1}^{k′} and ȳ′^t = (y^t_τ)_{τ=1}^{k′}
    π^{t+1} = A_BATCH((π^j, x̄′^j, ȳ′^j)_{j=1}^t)
end
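A minimal runnable sketch of the per-round inflation step of Algorithm 3 (the black-box batch learner’s interface and the helper name are our own assumptions):

```python
def inflate_round(xs, ys, rho, C, dummy):
    """Build the fake batch (x', y') of size k + 2C fed to the batch learner.

    The reported pair is duplicated C times with opposite labels, which pushes
    any low-error batch policy to shrink its prediction gap on that pair;
    `dummy` is an arbitrary padding instance used when no pair is reported.
    """
    a, b = (xs[rho[0]], xs[rho[1]]) if rho is not None else (dummy, dummy)
    return list(xs) + [a] * C + [b] * C, list(ys) + [0] * C + [1] * C
```

When ρ^t = null, the 2C padded copies of v carry both labels, so every policy pays exactly C extra loss on them; this constant offset leaves the regret comparison unchanged.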
5 Generalization
We observe that until this point, all of our results apply to the more general setting where individuals arrive in an arbitrary, adversarial fashion. In order to argue about generalization, in this section we will assume the existence of an (unknown) data distribution from which individual arrivals are drawn: {{(x^t_τ, y^t_τ)}_{τ=1}^k}_{t=1}^T ∼_{i.i.d.} D^{Tk}. Although the data are drawn i.i.d., there are two technical challenges in establishing a generalization guarantee: (1) the auditor’s fairness feedback at each round is limited to a single fairness violation with regard to the policy deployed in that round, and (2) both the deployed policies and the auditor are adaptive over rounds. To overcome these challenges, we will draw a connection between the regret guarantees established in Section 4 and the learner’s distributional accuracy and fairness guarantees. In particular, we will analyze the generalization bounds for the average policy over rounds.

Definition 5.1 (Average Policy). Let π^t be the policy deployed by the algorithm at round t. The average policy πavg is defined by πavg(x) = (1/T) Σ_{t=1}^T π^t(x) for all x.
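For concreteness, a one-line sketch of this object, assuming the per-round policies are stored as callables:

```python
def average_policy(policies):
    """Uniform mixture of the deployed policies: pi_avg(x) = (1/T) * sum_t pi^t(x)."""
    return lambda x: sum(pi(x) for pi in policies) / len(policies)
```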
In order to be consistent with Section 4, we denote α′ = α + ε in this section.
Here, we state the main results of this section:

Theorem 5.2 (Accuracy Generalization). With probability 1 − δ, the misclassification loss of πavg is upper bounded by

E_{(x,y)∼D}[ℓ(πavg(x), y)] ≤ inf_{π∈Qα} E_{(x,y)∼D}[ℓ(π(x), y)] + (1/(kT))·Regret^{C,α,J_{α+ε}}(A, Qα, T) + √(8 ln(4/δ)/T).
Theorem 5.3 (Fairness Generalization). Assume that for all t, π^t is (α′, β_t)-fair (0 ≤ β_t ≤ 1). With probability 1 − δ, for any integer q ≤ T, πavg is (α′ + q/T, β∗)-fair, where

β∗ = (1/q)·( Regret^{C,α,J_{α+ε}}(A, Qα, T) + √(2T ln(2/δ)) ).
Corollary 5.4. Using CONTEXT-FTPL from Syrgkanis et al. (2016) with a separator set of size s, with probability 1 − δ, the average policy πavg has the following guarantees:

1. Accuracy:

E_{(x,y)∼D}[ℓ(πavg(x), y)] ≤ inf_{π∈Qα} E_{(x,y)∼D}[ℓ(π(x), y)] + O( (1/k^{1/4})·(s/ε)^{3/4}·√((ln(|H|) + ln(1/δ))/T) ).

2. Fairness: πavg is (α′ + λ, λ)-fair, where λ = O( (sk/ε)^{3/4}·((ln(|H|) + ln(1/δ))/T)^{1/4} ).
Remark 5.5. Recall that the sensitivity of the auditor α′ is fixed, and the learner chooses the parameter ε ∈ (0, α′), which in turn determines α = α′ − ε and the set of policies Qα the learner is competing against. In the case where α′ = Ω(1), the learner can choose ε on the order of Ω(1) and guarantee that πavg is (α′ + λ, λ)-fair with λ = Õ(T^{−1/4}). In this regime, Corollary 5.4 implies that the policy πavg has a non-trivial accuracy guarantee and a fairness generalization bound that qualitatively matches the uniform convergence bound in Theorem 1.4 of Rothblum and Yona (2018).
The accuracy generalization bound of Theorem 5.2 is obtained by applying Azuma’s inequality to the left-hand side of the inequality in Theorem 4.5 and then leveraging the fact that our classification loss function is linear with respect to the base classifiers over which it is defined. The full proof is given in Appendix B.
As for the more challenging task of providing a fairness generalization guarantee (Theorem 5.3), we show how a careful interpolation between α and β may be used to provide a meaningful bound. Here, we state the key lemma required for Theorem 5.3 and briefly describe the proof technique.

Lemma 5.6. Assume that for all t, π^t is (α′, β_t)-fair (0 ≤ β_t ≤ 1). For any integer q ≤ T, πavg is (α′ + q/T, (1/q)·Σ_{t=1}^T β_t)-fair.
High-Level Proof Idea for Lemma 5.6. Setting α′′ = α′ + q/T has the following implication: for any pair of individuals (x, x′), in order for πavg to have an α′′-fairness violation on (x, x′), at least q of the policies in {π^1, . . . , π^T} must have an α′-fairness violation on (x, x′). We then say a subset A ⊆ X × X is α′-covered by a policy π if π has an α′-violation on every element of A, and we denote by A^{α′}_q ⊆ X × X the subset of pairs of elements from X that are α′-covered by at least q policies in {π^1, . . . , π^T}. Next, consider the probability space D|X × D|X over pairs of individuals. The lemma then follows from a counting argument: each pair in A^{α′}_q is α′-covered at least q times across the T rounds, so Pr(A^{α′}_q) ≤ (1/q)·Σ_{t=1}^T β_t, which upper bounds the probability of an α′′-fairness violation by the stated bound.
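A compact rendering of this counting step (our own write-up, not verbatim from the appendix):

```latex
\[
q \cdot \Pr_{(x,x')}\!\big[A^{\alpha'}_{q}\big]
\;\le\; \mathbb{E}_{(x,x')}\!\Big[\sum_{t=1}^{T} \mathbf{1}\big\{v_{\alpha'}\!\big(\pi^{t},(x,x')\big) > 0\big\}\Big]
\;=\; \sum_{t=1}^{T} \Pr_{(x,x')}\!\big[v_{\alpha'}\!\big(\pi^{t},(x,x')\big) > 0\big]
\;\le\; \sum_{t=1}^{T} \beta_{t},
\]
```

The first inequality holds because every pair in A^{α′}_q contributes at least q to the inner count, and the last because each π^t is (α′, β_t)-fair.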
In Appendix B, we provide the full proof of Theorem 5.3, which features the covering argument presented in Lemma 5.6, in addition to a concentration argument linking the probability of the algorithm deploying unfair policies throughout its run to the regret guarantees proven in Section 4. We also illustrate why an (α, β) interpolation is required in order to achieve a non-vacuous guarantee.
6 Conclusion
In this paper, we answered an open question by Gillen et al. (2018), proving that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure. We were further able to prove what we consider a very surprising generalization result, matching the state-of-the-art bounds for individual fairness given by Rothblum and Yona (2018), while eliminating or significantly relaxing all of their rather stringent assumptions. Contrary to previous work, which provided individual fairness generalization bounds using standard uniform convergence arguments (Agarwal et al. (2018); Rothblum and Yona (2018)), we have presented a novel proof technique based on a composition covering argument (Lemma 5.6), which we also believe is of independent interest.
Broader Impact
As the authors of this work believe that bridging the gap between theoretical research in algorithmic fairness and practical use is of the essence, one of the main focuses of this work has been removing
the rather stringent assumptions made in previous research in individual fairness, and replacing these with more realistic ones (if any). As such, the contributions offered in the paper allow taking a step closer to incorporating the long sought-after notion of individual fairness into real life systems. The introduction of a fairness auditor gives a simple, elegant solution to the hurdle posed by the classic similarity metric assumption. The notion of individual fairness pursued in this work offers a strong guarantee on the individual’s level (which is not given, for example, by the various more popular yet weaker notions of group fairness). We believe this combination between practicality of use and a strong fairness guarantee has the power to significantly impact our ability to ensure fairness and non-discrimination in machine learning based algorithms.
Acknowledgments and Disclosure of Funding
We thank Sampath Kannan, Akshay Krishnamurthy, Katrina Ligett, and Aaron Roth for helpful conversations at an early stage of this work. Part of this work was done while YB, CJ, and ZSW were visiting the Simons Institute for the Theory of Computing. YB is supported in part by Israel Science Foundation (ISF) grant #1044/16, the United States Air Force and DARPA under contracts FA8750-16-C-0022 and FA8750-19-2-0222, and the Federmann Cyber Security Center in conjunction with the Israel national cyber directorate. CJ is supported in part by NSF grant AF-1763307. ZSW is supported in part by the NSF FAI Award #1939606, an Amazon Research Award, a Google Faculty Research Award, a J.P. Morgan Faculty Award, a Facebook Research Award, and a Mozilla Research Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA. | 1. What is the focus and contribution of the paper regarding online batch classification?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle individual fairness constraints and remove restrictive assumptions?
3. What are the weaknesses of the paper, especially regarding the dependence on log(|H|) and the lack of description of the reduction process and underlying assumptions?
4. Do you have any concerns about the practical applicability of the proposed method given its limitations? | Summary and Contributions
This paper discusses the online batch classification problem with individual fairness constraints. It follows a metric-free approach where an auditor detects fairness violations. The authors use a Lagrangian formulation to unify the constraint and the classification loss. They reduce the problem to the "vanilla" online batch classification problem and prove a sublinear regret bound using the CONTEXT-FTPL algorithm. Following this, they generalize the analysis to the stochastic setting.
Strengths
1. The paper solves an interesting problem which can have practical impact.
2. The paper proposes a clean reduction technique to formulate the problem and deploy online algorithms in it.
3. It removes constraining assumptions in the existing literature, such as the Mahalanobis metric, while achieving \sqrt{T} regret in loss.
4. The algorithm also achieves T^{-1/4} fairness violations with high probability, which matches the PACF uniform convergence sample complexity.
Weaknesses
1. The bounds have a log(|H|) dependency. The size of the hypothesis space can be unbounded or quite large, which can be a big issue for using this work in practice.
2. The reduction of the problem is not described in enough detail in the main paper.
3. The assumptions on separator sets and CONTEXT-FTPL are described only in passing, yet the final results depend significantly on them.
4. The paper should be more specific when referring to previous results, e.g., which theorem or bound is needed.
1. What is the focus and contribution of the paper on online learning for individual fairness?
2. What are the strengths of the proposed approach, particularly in terms of its weakened assumptions and novel proof techniques?
3. What are the weaknesses of the paper regarding its structure and presentation? | Summary and Contributions
The paper introduces a new online learning model for studying individual fairness. It provides a learning algorithm that achieves sublinear regret both for accuracy and fairness. In the case when examples are sampled from a probability distribution, the paper also gives strong generalization bounds. These results are obtained using significantly weaker assumptions than previous related papers.
Strengths
This paper addresses an important problem, and obtains significantly stronger results than prior papers. This paper's online fair learning model has much weaker assumptions and feedback format than previous similar models. The proof techniques are nontrivial. Overall, this paper provides novel and significant contributions to the field of ML fairness.
Weaknesses
The structure of the paper and the presentation of the results could be significantly improved (see details below). |
NIPS | Title
Metric-Free Individual Fairness in Online Learning
Abstract
We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly. Unlike prior work on individual fairness, we do not assume the similarity measure among individuals is known, nor do we assume that such measure takes a certain parametric form. Instead, we leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure. In each round, the auditor examines the learner’s decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Rothblum and Yona (2018)), despite only having access to a very restricted form of fairness feedback. Our fairness generalization bound qualitatively matches the uniform convergence bound of Rothblum and Yona (2018), while also providing a meaningful accuracy generalization guarantee. Our results resolve an open question by Gillen et al. (2018) by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure.
1 Introduction
As machine learning increasingly permeates many critical aspects of society, including education, healthcare, criminal justice, and lending, there is by now a vast literature that studies how to make machine learning algorithms fair (see, e.g., Chouldechova and Roth (2018); Podesta et al. (2014); Corbett-Davies and Goel (2018)). Most of the work in this literature tackles the problem by taking the statistical group fairness approach that first fixes a small collection of high-level groups defined by protected attributes (e.g., race or gender) and then asks for approximate parity of some statistic of the predictor, such as positive classification rate or false positive rate, across these groups (see, e.g., Hardt et al. (2016); Chouldechova (2017); Kleinberg et al. (2017); Agarwal et al. (2018)). While notions of group fairness are easy to operationalize, they are aggregate in nature without fairness guarantees for finer subgroups or individuals (Dwork et al., 2012; Hébert-Johnson et al., 2018; Kearns et al., 2018).
In contrast, the individual fairness approach aims to address this limitation by asking for explicit fairness criteria at an individual level. In particular, the compelling notion of individual fairness proposed in the seminal work of Dwork et al. (2012) requires that similar people are treated similarly. The original formulation of individual fairness assumes that the algorithm designer has access to
a task-specific fairness metric that captures how similar two individuals are in the context of the specific classification task at hand. In practice, however, such a fairness metric is rarely specified, and the lack of metrics has been a major obstacle for the wide adoption of individual fairness. There has been recent work on learning the fairness metric based on different forms of human feedback. For example, Ilvento (2019) provides an algorithm for learning the metric by presenting human arbiters with queries concerning the distance between individuals, and Gillen et al. (2018) provide an online learning algorithm that can eventually learn a Mahalanobis metric based on identified fairness violations. While these results are encouraging, they are still bound by several limitations. In particular, it might be difficult for humans to enunciate a precise quantitative similarity measure between individuals. Moreover, their similarity measure across individuals may not be consistent with any metric (e.g., it may not satisfy the triangle inequality) and is unlikely to be given by a simple parametric function (e.g., the Mahalanobis metric function).
To tackle these issues, this paper studies metric-free online learning algorithms for individual fairness that rely on a weaker form of interactive human feedback and minimal assumptions on the similarity measure across individuals. Similar to the prior work of Gillen et al. (2018), we do not assume a prespecified metric, but instead assume access to an auditor, who observes the learner’s decisions over a group of individuals that show up in each round and attempts to identify a fairness violation—a pair of individuals in the group that should have been treated more similarly by the learner. Since the auditor only needs to identify such unfairly treated pairs, there is no need for them to enunciate a quantitative measure – to specify the distance between the identified pairs. Moreover, we do not impose any parametric assumption on the underlying similarity measure, nor do we assume that it is actually a metric since we do not require that similarity measure to satisfy the triangle inequality. Under this model, we provide a general reduction framework that can take any online classification algorithm (without fairness constraint) as a black-box and obtain a learning algorithm that can simultaneously minimize cumulative classification loss and the number of fairness violations. Our results in particular remove many strong assumptions in Gillen et al. (2018), including their parametric assumptions on linear rewards and Mahalanobis distances, and thus answer several questions left open in their work.
1.1 Overview of Model and Results
We study an online classification problem: over rounds $t = 1, \ldots, T$, a learner observes a small set of $k$ individuals with their feature vectors $(x_\tau^t)_{\tau=1}^k$ in space $X$. The learner tries to predict the label $y_\tau^t \in \{0, 1\}$ of each individual with a "soft" predictor $\pi^t$ that predicts $\pi^t(x_\tau^t) \in [0, 1]$ on each $x_\tau^t$ and incurs classification loss $|\pi^t(x_\tau^t) - y_\tau^t|$. Then an auditor will investigate if the learner has violated the individual fairness constraint on any pair of individuals within this round, that is, if there exists $(\tau_1, \tau_2) \in [k]^2$ such that $|\pi^t(x_{\tau_1}^t) - \pi^t(x_{\tau_2}^t)| > d(x_{\tau_1}^t, x_{\tau_2}^t) + \alpha$, where $d : X \times X \to \mathbb{R}_+$ is an unknown distance function and $\alpha$ denotes the auditor's tolerance. If this violation has occurred on any number of pairs, the auditor will identify one such pair and incur a fairness loss of 1; otherwise, the fairness loss is 0. Then the learner will update the predictive policy based on the observed labels and the received fairness feedback. Under this model, our results include:
A Reduction from Fair Online Classification to Standard Online Classification. Our reduction-based algorithm can take any no-regret online (batch) classification learner as a black box and achieve sub-linear cumulative fairness loss and sub-linear regret on misclassification loss compared to the most accurate policy that is fair on every round. In particular, our framework can leverage the generic exponential weights method (Freund and Schapire, 1997; Cesa-Bianchi et al., 1997; Arora et al., 2012) and also oracle-efficient methods, including variants of Follow-the-Perturbed-Leader (FTPL) (e.g., Syrgkanis et al. (2016); Suggala and Netrapalli (2019)), that further reduce online learning to standard supervised learning or optimization problems. We instantiate our framework using two online learning algorithms (exponential weights and CONTEXT-FTPL), both of which obtain a $\tilde{O}(\sqrt{T})$ bound on misclassification regret and cumulative fairness loss.
Fairness and Accuracy Generalization Guarantees. While our algorithmic results hold under adversarial arrivals of the individuals, in the stochastic-arrivals setting we show that the uniform average policy over time is probably approximately correct and fair (PACF) (Rothblum and Yona, 2018): the policy is approximately fair on almost all random pairs drawn from the distribution and nearly matches the accuracy guarantee of the best fair policy. In particular, we show that the average policy $\pi_{\mathrm{avg}}$ with high probability satisfies $\Pr_{x,x'}[|\pi_{\mathrm{avg}}(x) - \pi_{\mathrm{avg}}(x')| > \alpha + 1/T^{1/4}] \le O(1/T^{1/4})$, which qualitatively achieves a similar PACF uniform-convergence sample complexity as Rothblum and Yona (2018).¹ However, we establish our generalization guarantee through fundamentally different techniques. While their work assumes a fully specified metric and i.i.d. data, the learner in our setting can only access the similarity measure through an auditor's limited fairness-violation feedback. The main challenge we need to overcome is that the fairness feedback is inherently adaptive: the auditor only provides feedback for the sequence of deployed policies, which are updated adaptively over rounds. In comparison, a fully known metric allows the learner to evaluate the fairness guarantee of all policies simultaneously. As a result, we cannot rely on their uniform convergence result to bound the fairness generalization error; instead, we leverage a probabilistic argument that relates the learner's regret to the distributional fairness guarantee.
2 Related Work
Solving open problems in Gillen et al. (2018). The most related work to ours is Gillen et al. (2018), which studies the linear contextual bandit problem subject to individual fairness with an unknown Mahalanobis metric. Similar to our work, they also assume an auditor who can identify fairness violations in each round and provide an online learning algorithm with sublinear regret and a bounded number of fairness violations. Our results resolve two main questions left open in their work. First, we assume a weaker auditor who only identifies a single fairness violation (as opposed to all of the fairness violations in their setting). Second, we remove the strong parametric assumption on the Mahalanobis metric and work with a broad class of similarity functions that need not be metric.
Starting with Joseph et al. (2016), there is a different line of work that studies online learning for individual fairness, but subject to a different notion called meritocratic fairness (Jabbari et al., 2017; Joseph et al., 2018; Kannan et al., 2017). These results present algorithms that are "fair" within each round but again rely on strong realizability assumptions: their fairness guarantee depends on the assumption that the outcome variable of each individual is given by a linear function. Gupta and Kamble (2019) also study online learning subject to individual fairness but with a known metric. They formulate a one-sided fairness constraint across time, called fairness in hindsight, and provide an algorithm with regret $O(T^{M/(M+1)})$ for some distribution-dependent constant $M$.
Our work is related to several others that aim to enforce individual fairness without a known metric. Ilvento (2019) studies the problem of metric learning by asking human arbiters distance queries. Unlike Ilvento (2019), our algorithm does not explicitly learn the underlying similarity measure and does not require asking auditors numeric queries. The PAC-style fairness generalization bound in our work falls under the framework of probably approximately metric-fairness due to Rothblum and Yona (2018). However, their work assumes a pre-specified fairness metric and i.i.d. data from the distribution, while we establish our generalization through a sequence of adaptive fairness violations feedback over time. Kim et al. (2018) study a group-fairness relaxation of individual fairness, which requires that similar subpopulations are treated similarly. They do not assume a pre-specified metric for their offline learning problem, but they do assume a metric oracle that returns numeric distance values on random pairs of individuals. Jung et al. (2019) study an offline learning problem with subjective individual fairness, in which the algorithm tries to elicit subjective fairness feedback from human judges by asking them questions of the form “should this pair of individuals be treated similarly or not?" Their fairness generalization takes a different form, which involves taking averages over both the individuals and human judges. We aim to provide a fairness generalization guarantee that holds for almost all individuals from the population.
3 Model and Preliminaries
We define the instance space to be $X$ and its label space to be $Y$. Throughout this paper, we will restrict our attention to binary labels, that is, $Y = \{0, 1\}$. We write $H \subseteq Y^X$ to denote the hypothesis class of functions $X \to Y$ and assume that $H$ contains a constant hypothesis, i.e., there exists $h$ such that $h(x) = 0$ for all $x \in X$. Also, we allow for convex combinations of hypotheses for the purpose of randomizing the prediction and denote the simplex of hypotheses by $\Delta H$; we call a randomized hypothesis a policy.
¹Rothblum and Yona (2018) show (Theorem 1.4 in their work) that if a policy $\pi$ is $\alpha$-fair on all pairs in an i.i.d. dataset of size $m$, then $\pi$ satisfies $\Pr_{x,x'}[|\pi(x) - \pi(x')| > \alpha + \epsilon] \le \epsilon$, as long as $m \ge \tilde{\Omega}(1/\epsilon^4)$.
Sometimes, we assume the existence of an underlying (but unknown) distribution $D$ over $(X, Y)$. For each prediction $\hat{y} \in Y$ and its true label $y \in Y$, there is an associated misclassification loss $\ell(\hat{y}, y) = \mathbf{1}(\hat{y} \neq y)$. For simplicity, we overload the notation and write
$$\ell(\pi(x), y) = (1 - \pi(x)) \cdot y + \pi(x) \cdot (1 - y) = \mathbb{E}_{h \sim \pi}[\ell(h(x), y)].$$
3.1 Individual Fairness and Auditor
We want our deployed policy $\pi$ to behave fairly in some manner, and we use the individual fairness definition from Dwork et al. (2012), which asserts that "similar individuals should be treated similarly." We assume that there is some distance function $d : X \times X \to \mathbb{R}_+$ over the instance space $X$ which captures the distance between individuals in $X$, although $d$ does not have to satisfy the triangle inequality. The only requirement on $d$ is that it is always non-negative and symmetric: $d(x, x') = d(x', x)$.
Definition 3.1 ($(\alpha, \beta)$-fairness). Assume $\alpha, \beta > 0$. A policy $\pi \in \Delta H$ is said to be $\alpha$-fair on a pair $(x, x')$ if $|\pi(x) - \pi(x')| \le d(x, x') + \alpha$. We say policy $\pi$'s $\alpha$-fairness violation on the pair $(x, x')$ is
$$v_\alpha(\pi, (x, x')) = \max(0, |\pi(x) - \pi(x')| - d(x, x') - \alpha).$$
A policy $\pi$ is said to be $(\alpha, \beta)$-fair on a distribution $D$ if
$$\Pr_{(x, x') \sim D|_X \times D|_X}[|\pi(x) - \pi(x')| > d(x, x') + \alpha] \le \beta.$$
A policy $\pi$ is said to be $\alpha$-fair on a set $S \subseteq X$ if it is $\alpha$-fair on every pair $(x, x') \in S^2$.
Although individual fairness is intuitively sound, the notion requires knowledge of the distance function $d$, which is often hard to specify. Therefore, we rely on an auditor $J$ that can detect instances of $\alpha$-unfairness.
Definition 3.2 (Auditor $J$). An auditor $J_\alpha$, which can have its own internal state, takes in a reference set $S \subseteq X$ and a policy $\pi$. It then outputs $\rho$, which is either null or a pair of indices from the provided reference set, indicating that there is some positive $\alpha$-fairness violation for that pair. For some $S = (x_1, \ldots, x_n)$,
$$J_\alpha(S, \pi) = \begin{cases} \rho = (\rho_1, \rho_2) & \text{if } \exists \rho_1, \rho_2 \in [n] \text{ such that } \pi(x_{\rho_1}) - \pi(x_{\rho_2}) - d(x_{\rho_1}, x_{\rho_2}) - \alpha > 0 \\ \text{null} & \text{otherwise.} \end{cases}$$
If there exist multiple pairs with some $\alpha$-violation, the auditor can choose one arbitrarily.
Remark 3.3. Our assumptions on the auditor are much more relaxed than those of Gillen et al. (2018), which require that the auditor outputs whether the policy is 0-fair (i.e., with no slack) on all pairs in $S^2$ exactly. Furthermore, the auditor in Gillen et al. (2018) can only handle Mahalanobis distances. In our setting, because of the internal state of the auditor, the auditor does not have to be a fixed function but rather can change adaptively in each round. Finally, we never rely on the fact that the distance function $d$ stays the same across rounds, meaning all our results extend to the case where the distance function governing the fairness constraints changes every round.
3.2 Online Batch Classification
We now describe our online batch classification setting. In each round $t = 1, \ldots, T$, the learner deploys some model $\pi^t \in \Delta H$. Upon seeing the deployed policy $\pi^t$, the environment chooses a batch of $k$ individuals $(x_\tau^t, y_\tau^t)_{\tau=1}^k$ and, possibly, a pair of individuals from that round on which $\pi^t$ will be responsible for any $\alpha$-fairness violation. For simplicity, we write $\bar{x}^t = (x_\tau^t)_{\tau=1}^k$ and $\bar{y}^t = (y_\tau^t)_{\tau=1}^k$. The strategy $z^t_{\text{FAIR-BATCH}} \in Z_{\text{FAIR-BATCH}}$ that the environment chooses can be described by $z^t_{\text{FAIR-BATCH}} = (\bar{x}^t, \bar{y}^t) \times \rho^t$, where $\rho^t \in [k]^2 \cup \{\text{null}\}$. Often, we will omit the subscript and simply write $z^t$. If $\rho^t = (\rho_1^t, \rho_2^t)$, then $\pi^t$ will be responsible for the $\alpha$-fairness violation on the pair $(x_{\rho_1^t}^t, x_{\rho_2^t}^t)$. There are two types of losses that we are interested in: misclassification loss and fairness loss.
Definition 3.4 (Misclassification Loss). The (batch) misclassification loss Err² is
$$\mathrm{Err}(\pi, z^t) = \sum_{\tau=1}^k \ell(\pi(x_\tau^t), y_\tau^t).$$
²We will overload the notation for this loss; regardless of what $Z$ is, we will assume $\mathrm{Err}(\pi, z^t)$ is well-defined as long as $z^t$ includes $(\bar{x}^t, \bar{y}^t)$.
Algorithm 1: Online Fair Batch Classification FAIR-BATCH
for t = 1, . . . , T do
    Learner deploys $\pi^t$
    Environment chooses $(\bar{x}^t, \bar{y}^t)$
    Environment chooses the pair $\rho^t$
    $z^t = (\bar{x}^t, \bar{y}^t) \times \rho^t$
    Learner incurs misclassification loss $\mathrm{Err}(\pi^t, z^t)$
    Learner incurs fairness loss $\mathrm{Unfair}(\pi^t, z^t)$
end

Algorithm 2: Online Batch Classification BATCH
for t = 1, . . . , T do
    Learner deploys $\pi^t$
    Environment chooses $z^t = (\bar{x}^t, \bar{y}^t)$
    Learner incurs misclassification loss $\mathrm{Err}(\pi^t, z^t)$
end

Figure 1: Comparison between Online Fair Batch Classification and Online Batch Classification: each is summarized by the interaction between the learner and the environment, $(\Delta H \times Z_{\text{FAIR-BATCH}})^T$ and $(\Delta H \times Z_{\text{BATCH}})^T$, where $Z_{\text{FAIR-BATCH}} = X^k \times Y^k \times ([k]^2 \cup \{\text{null}\})$ and $Z_{\text{BATCH}} = X^k \times Y^k$.
Definition 3.5 (Fairness Loss). The $\alpha$-fairness loss $\mathrm{Unfair}_\alpha$ is
$$\mathrm{Unfair}_\alpha(\pi, z^t) = \begin{cases} \mathbf{1}\left(\pi(x_{\rho_1^t}^t) - \pi(x_{\rho_2^t}^t) - d(x_{\rho_1^t}^t, x_{\rho_2^t}^t) - \alpha > 0\right) & \text{if } \rho^t = (\rho_1^t, \rho_2^t) \\ 0 & \text{otherwise.} \end{cases}$$
We want the total misclassification and fairness loss over $T$ rounds to be as small as that of any $\pi^* \in Q$ for some competitor set $Q$, which we describe now. As said above, each round's reference set, a set of pairs for which the deployed policy will possibly be responsible in terms of $\alpha$-fairness, will be defined in terms of the instances $\bar{x}^t$ that arrive within that round. The baseline $Q_\alpha$ that we compete against will be all policies that are $\alpha$-fair on $\bar{x}^t$ for all $t \in [T]$:
$$Q_\alpha = \{\pi \in \Delta H : \pi \text{ is } \alpha\text{-fair on } \bar{x}^t \text{ for all } t \in [T]\}.$$
Note that because $H$ contains a constant hypothesis, which must be 0-fair on all instances, $Q_\alpha$ cannot be empty. The difference in total loss between our algorithm and a fixed $\pi^*$ is called regret, which we formally define below.
Definition 3.6 (Algorithm $A$). An algorithm $A : (\Delta H \times Z)^* \to \Delta H$ takes in its past history $(\pi^\tau, z^\tau)_{\tau=1}^{t-1}$ and deploys a policy $\pi^t \in \Delta H$ at every round $t \in [T]$.
Definition 3.7 (Regret). For some $Q \subseteq \Delta H$, the regret of algorithm $A$ with respect to some loss $L : \Delta H \times Z \to \mathbb{R}$ is denoted as $\mathrm{Regret}^L(A, Q, T)$ if for any $(z^t)_{t=1}^T$,
$$\sum_{t=1}^T L(\pi^t, z^t) - \inf_{\pi^* \in Q} \sum_{t=1}^T L(\pi^*, z^t) = \mathrm{Regret}^L(A, Q, T),$$
where $\pi^t = A((\pi^j, z^j)_{j=1}^{t-1})$. When it is not clear from the context, we will use a subscript to denote the setting, e.g., $\mathrm{Regret}^L_{\text{FAIR-BATCH}}$.
We wish to develop an algorithm such that both the misclassification and fairness loss regret are sublinear, which is often called no-regret. Note that because $\pi^* \in Q_\alpha$ is $\alpha$-fair on $\bar{x}^t$ for all $t \in [T]$, we have $\mathrm{Unfair}_\alpha(\pi^*, z^t) = 0$ for all $t \in [T]$. Hence, achieving $\mathrm{Regret}^{\mathrm{Unfair}_\alpha}_{\text{FAIR-BATCH}}(A, Q, T) = o(T)$ is equivalent to ensuring that the total number of rounds with any $\alpha$-fairness violation is sublinear. Therefore, our goal is equivalent to developing an algorithm $A$ so that for any $(z^t)_{t=1}^T$,
$$\mathrm{Regret}^{\mathrm{Err}}_{\text{FAIR-BATCH}}(A, Q, T) = o(T) \quad \text{and} \quad \sum_{t=1}^T \mathrm{Unfair}_\alpha(\pi^t, z^t) = o(T).$$
To achieve the result above, we will reduce our setting to a setting with no fairness constraint, which we call the online batch classification problem. Similar to the online fair batch classification setting, in each round $t$ the learner deploys a policy $\pi^t$, but the environment chooses only a batch of instances $(x_\tau^t, y_\tau^t)_{\tau=1}^k$. In online batch classification, we denote the strategy space of the environment by $Z_{\text{BATCH}} = X^k \times Y^k$. We compare the two settings in Figure 1.
4 Achieving No Regret Simultaneously
Here, we define a round-based Lagrangian loss and show that the regret with respect to this Lagrangian loss also bounds both the misclassification regret and the fairness regret. Then, we show that using an auditor that can detect any fairness violation beyond a certain threshold, we can still hope to achieve no regret against an adaptive adversary.
Finally, we show how to achieve no regret with respect to the Lagrangian loss by reducing the problem to online batch classification, where there is no fairness constraint. We show that a Follow-the-Perturbed-Leader style approach (CONTEXT-FTPL from Syrgkanis et al. (2016)) can achieve sublinear regret in the online batch classification setting, which allows us to achieve sublinear regret with respect to both the misclassification and fairness loss in the online fair batch classification setting.
4.1 Lagrangian Formulation
Here we present a hybrid loss, which we call the Lagrangian loss, that combines the misclassification loss with the magnitude of the fairness violation in round $t$.
Definition 4.1 (Lagrangian Loss). The $(C, \alpha)$-Lagrangian loss of $\pi$ is
$$L_{C,\alpha}\left(\pi, \left((\bar{x}^t, \bar{y}^t), \rho^t\right)\right) = \sum_{\tau=1}^k \ell\left(\pi(x_\tau^t), y_\tau^t\right) + \begin{cases} C\left(\pi(x_{\rho_1}^t) - \pi(x_{\rho_2}^t) - \alpha\right) & \rho^t = (\rho_1, \rho_2) \\ 0 & \rho^t = \text{null.} \end{cases}$$
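Definition 4.1 translates directly into code; the sketch below is our own rendering, with `pi` a callable soft policy and `rho` the flagged pair (or `None` for null).

```python
def lagrangian_loss(pi, xs, ys, rho, C, alpha):
    """(C, alpha)-Lagrangian loss of Definition 4.1.

    xs, ys: the round's batch (x_1..x_k, y_1..y_k)
    rho:    None, or a pair (rho1, rho2) of indices into xs
    """
    # Misclassification part: expected 0/1 loss of the soft policy.
    loss = sum((1 - pi(x)) * y + pi(x) * (1 - y) for x, y in zip(xs, ys))
    # Fairness part: unclipped violation magnitude on the flagged pair.
    # When rho comes from the auditor, this term is positive by construction.
    if rho is not None:
        r1, r2 = rho
        loss += C * (pi(xs[r1]) - pi(xs[r2]) - alpha)
    return loss
```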
Given an auditor $J_\alpha$ that can detect any $\alpha$-fairness violation, we can simulate the online fair batch classification setting with an auditor $J_\alpha$ by setting the pair $\rho_J^t = J_\alpha(\bar{x}^t, \pi^t)$; the subscript $J$ is placed on this pair to distinguish it from the pair chosen by the environment.³
Definition 4.2 (Lagrangian Regret). Algorithm $A$'s $(C, \alpha, J_{\alpha'})$-Lagrangian regret against $Q$ is $\mathrm{Regret}^{C,\alpha,J_{\alpha'}}(A, Q, T)$ if for any $(\bar{x}^t, \bar{y}^t)_{t=1}^T$, we have
$$\sum_{t=1}^T L_{C,\alpha}(\pi^t, (\bar{x}^t, \bar{y}^t), \rho_J^t) - \min_{\pi^* \in Q} \sum_{t=1}^T L_{C,\alpha}(\pi^*, (\bar{x}^t, \bar{y}^t), \rho_J^t) \le \mathrm{Regret}^{C,\alpha,J_{\alpha'}}(A, Q, T),$$
where $\rho_J^t = J_{\alpha'}(\bar{x}^t, \pi^t)$.
Remark 4.3. From here on, we assume the auditor has a given sensitivity denoted by $\alpha' = \alpha + \epsilon$, where $\epsilon$ is a parameter we will fix in order to define our desired benchmark $Q_\alpha$.
Now, we show that the Lagrangian regret upper bounds the $\alpha$-fairness loss regret with some slack, by setting $C$ to be appropriately large. Also, we show that the $(C, \alpha, J_{\alpha+\epsilon})$-Lagrangian regret serves as the misclassification loss regret, too. The proofs are given in Appendix A.1.
Theorem 4.4. Fix some small constant $\epsilon > 0$ and $C \ge \frac{k+1}{\epsilon}$. For any sequence of environment strategies $(z^t)_{t=1}^T \in Z_{\text{FAIR-BATCH}}^T$, $\sum_{t=1}^T \mathrm{Unfair}_{\alpha+\epsilon}(\pi^t, z^t) \le \mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T)$.
Theorem 4.5. Fix some small constant $\epsilon > 0$. For any sequence of $(z^t)_{t=1}^T \in Z_{\text{FAIR-BATCH}}^T$ and $\pi^* \in Q_\alpha$,
$$\sum_{t=1}^T \sum_{\tau=1}^k \ell\left(\pi^t(x_\tau^t), y_\tau^t\right) - \sum_{t=1}^T \sum_{\tau=1}^k \ell(\pi^*(x_\tau^t), y_\tau^t) \le \mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T),$$
where $C \ge \frac{k+1}{\epsilon}$. In other words, $\mathrm{Regret}^{\mathrm{Err}}_{\text{FAIR-BATCH}}(A, Q_\alpha, T) \le \mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T)$.
³Although we are simulating the adaptive environment's strategy $\rho^t$ with $\rho_J^t$, note that the fairness loss with $\rho_J^t$ will always be at least the fairness loss with $\rho^t$, because the auditor will always indicate if there is a fairness violation. This distinction between the pair chosen by the environment and the pair chosen by the auditor is necessary for technical reasons: we need to ensure that the pair used to charge the Lagrangian loss incurs constant instantaneous regret in the rounds where there is actually some fairness violation, as the pair chosen by the environment can possibly have no fairness violation and hence negative instantaneous regret. This will be made more clear in the proof of Theorem 4.4.
4.2 Reduction to Online Batch Classification
In this subsection, we will first discuss a computationally inefficient way to achieve no regret with respect to the Lagrangian loss. Then, we will show an efficient reduction to online batch classification and discuss an example of an oracle-efficient algorithm $A_{\text{BATCH}}$ that achieves no regret. It is well known that for linear losses, exponential weights with an appropriately tuned learning rate $\gamma$ achieves no regret (Freund and Schapire, 1997; Cesa-Bianchi et al., 1997; Arora et al., 2012). Note that our Lagrangian loss
$$L^t_{C,\alpha}(\pi) = L_{C,\alpha}(\pi, z^t) = \sum_{\tau=1}^k (1 - \pi(x_\tau^t)) \cdot y_\tau^t + \pi(x_\tau^t) \cdot (1 - y_\tau^t) + \begin{cases} C\left(\pi(x_{\rho_1}^t) - \pi(x_{\rho_2}^t) - \alpha\right) & \rho^t = (\rho_1, \rho_2) \\ 0 & \rho^t = \text{null} \end{cases}$$
is linear in $\pi$ for any $z^t$, and its range is $[0, C + k]$. Therefore, running exponential weights with learning rate $\gamma = \sqrt{\ln(|H|)/T}$, we achieve the following regret with respect to the Lagrangian loss:
Corollary 4.6. Running exponential weights with $\gamma = \sqrt{\ln(|H|)/T}$ and $C \ge \frac{k+1}{\epsilon}$, we achieve
$$\mathrm{Regret}^{\mathrm{Err}}_{\text{FAIR-BATCH}}(A, Q_\alpha, T) \le (C + k)\sqrt{\ln(|H|)\,T}, \qquad \sum_{t=1}^T \mathrm{Unfair}_{\alpha+\epsilon}(\pi^t, z^t) \le (C + k)\sqrt{\ln(|H|)\,T}.$$
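For intuition, the inefficient baseline looks as follows: a minimal exponential-weights sketch over a finite class $H$, reusing `lagrangian_loss` from the sketch above and rescaling losses by their range $C + k$. The interface and names are our own illustrative assumptions.

```python
import math

def exponential_weights(H, rounds, gamma, C, alpha, k):
    """Minimal exponential-weights learner over a finite class H.

    H:      list of hypotheses h: x -> {0, 1}
    rounds: iterable of (xs, ys, rho) tuples, one per round, where
            rho is the pair reported by the auditor (or None)
    gamma:  learning rate, e.g. sqrt(ln(len(H)) / T)
    """
    w = [1.0] * len(H)
    for xs, ys, rho in rounds:
        total = sum(w)
        # Deployed policy pi^t is the weight mixture over H; in a real
        # run, pi is deployed *before* (xs, ys, rho) is revealed.
        pi = lambda x: sum(wi * h(x) for wi, h in zip(w, H)) / total
        # Multiplicative update on each hypothesis's Lagrangian loss,
        # rescaled to [0, 1] by the loss range C + k.
        for i, h in enumerate(H):
            ell = lagrangian_loss(h, xs, ys, rho, C, alpha)
            w[i] *= math.exp(-gamma * ell / (C + k))
    return w
```

The inner loop over every $h \in H$ is exactly the inefficiency the next subsection removes.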
Nevertheless, running exponential weights is not efficient, as it needs to calculate the loss for each $h \in H$ in every round $t$. To design an oracle-efficient algorithm, we reduce the online fair batch classification problem to the online batch classification problem in an efficient manner and use any online batch algorithm $A_{\text{BATCH}}((\pi^j, (\bar{x}'^j, \bar{y}'^j))_{j=1}^t)$ as a black box. At a high level, our reduction carefully transforms the online fair batch classification history up to $t$, $(\pi^j, (\bar{x}^j, \bar{y}^j, \rho^j))_{j=1}^t \in (\Delta H \times Z_{\text{FAIR-BATCH}})^t$, into a fake online batch classification history $(\pi^j, (\bar{x}'^j, \bar{y}'^j))_{j=1}^t \in (\Delta H \times Z_{\text{BATCH}})^t$ and then feeds the artificially created history to $A_{\text{BATCH}}$.
Without loss of generality, we assume that $C \ge \frac{k+1}{\epsilon}$ is an integer; if it is not, take the ceiling. Now, we describe how the transformation of the history works. For each round $t$, whenever $\rho^t = (\rho_1^t, \rho_2^t)$, we add $C$ copies of each of $(x_{\rho_1^t}^t, 0)$ and $(x_{\rho_2^t}^t, 1)$ to the original pairs to form $\bar{x}'^t$ and $\bar{y}'^t$. To keep the batch size the same across rounds, even if $\rho^t = \text{null}$, we add $C$ copies of each of $(v, 0)$ and $(v, 1)$, where $v$ is some arbitrary instance in $X$. We describe this process in more detail in Algorithm 3. This reduction essentially preserves the regret.
Theorem 4.7. For any sequence of $(z^t)_{t=1}^T \in Z_{\text{FAIR-BATCH}}^T$, $Q \subseteq \Delta H$, and $\pi^* \in Q$,
$$\sum_{t=1}^T L_{C,\alpha}(\pi^t, z^t) - \sum_{t=1}^T L_{C,\alpha}(\pi^*, z^t) \le \mathrm{Regret}^{\mathrm{Err}}_{\text{BATCH}}(A, Q, T),$$
where $\pi^t = A_{\text{BATCH}}\left((\pi^j, \bar{x}'^j, \bar{y}'^j)_{j=1}^{t-1}\right)$. Therefore, $\mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T) \le \mathrm{Regret}^{\mathrm{Err}}_{\text{BATCH}}(A, Q, T)$.
One example of $A_{\text{BATCH}}$ that achieves sublinear regret in online batch classification is CONTEXT-FTPL from Syrgkanis et al. (2016). We defer the details to Appendix A.3 and present the regret guarantee here. We focus only on their small-separator-set setting (i.e., there exists a small set of points that serves as a witness to distinguish any two different hypotheses), although their transductive setting (i.e., the contexts $\{x^t\}_{t=1}^T$ are known in advance) naturally follows as well.
Theorem 4.8. If the separator set $S$ for $H$ is of size $s$, then CONTEXT-FTPL achieves the following misclassification and fairness regret in the online fair batch classification setting:
$$\mathrm{Regret}^{\mathrm{Err}}_{\text{FAIR-BATCH}}(A, Q_\alpha, T) \le O\left(\left(\frac{sk}{\epsilon}\right)^{3/4}\sqrt{T \log(|H|)}\right), \qquad \sum_{t=1}^T \mathrm{Unfair}_{\alpha+\epsilon}(\pi^t, z^t) \le O\left(\left(\frac{sk}{\epsilon}\right)^{3/4}\sqrt{T \log(|H|)}\right).$$
Algorithm 3: Reduction from Online Fair Batch Classification to Online Batch Classification
Parameters: inflation constant $C$, original round size $k$
Initialize: $k' = k + 2C$
for t = 1, . . . , T do
    Learner deploys $\pi^t$
    Environment chooses $(\bar{x}^t, \bar{y}^t)$ and the pair $\rho^t$
    if $\rho^t = (\rho_1^t, \rho_2^t)$ then
        for i = 1, . . . , C do
            $x_{k+i}^t = x_{\rho_1^t}^t$ and $y_{k+i}^t = 0$
            $x_{k+C+i}^t = x_{\rho_2^t}^t$ and $y_{k+C+i}^t = 1$
        end
    else
        for i = 1, . . . , C do
            $x_{k+i}^t = v$ and $y_{k+i}^t = 0$
            $x_{k+C+i}^t = v$ and $y_{k+C+i}^t = 1$
        end
    end
    $\bar{x}'^t = (x_\tau^t)_{\tau=1}^{k'}$ and $\bar{y}'^t = (y_\tau^t)_{\tau=1}^{k'}$
    $\pi^{t+1} = A_{\text{BATCH}}\left((\pi^j, \bar{x}'^j, \bar{y}'^j)_{j=1}^t\right)$
end
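The per-round inflation step of Algorithm 3 is mechanical; the following sketch (our own naming, with `v` an arbitrary fixed element of $X$) produces the fake batch of size $k' = k + 2C$.

```python
def inflate_round(xs, ys, rho, C, v):
    """One round of Algorithm 3: turn a fair-batch round into a
    plain batch round of size k' = k + 2C.

    If the auditor flagged rho = (r1, r2), append C copies of
    (x_{r1}, 0) and C copies of (x_{r2}, 1); otherwise pad with a
    fixed dummy instance v so every round has the same size.
    """
    xs2, ys2 = list(xs), list(ys)
    a, b = (xs[rho[0]], xs[rho[1]]) if rho is not None else (v, v)
    xs2 += [a] * C + [b] * C
    ys2 += [0] * C + [1] * C
    return xs2, ys2
```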
5 Generalization
We observe that until this point, all of our results apply to the more general setting where individuals arrive in an adversarial fashion. In order to argue about generalization, in this section we assume the existence of an (unknown) data distribution from which individual arrivals are drawn: $\{\{(x_\tau^t, y_\tau^t)\}_{\tau=1}^k\}_{t=1}^T \sim_{\text{i.i.d.}} D^{Tk}$. Although the data are drawn i.i.d., there are two technical challenges in establishing a generalization guarantee: (1) the auditor's fairness feedback at each round is limited to a single fairness violation with regard to the policy deployed in that round, and (2) both the deployed policies and the auditor are adaptive over rounds. To overcome these challenges, we will draw a connection between the regret guarantees established in Section 4 and the learner's distributional accuracy and fairness guarantees. In particular, we will analyze the generalization bounds for the average policy over rounds.
Definition 5.1 (Average Policy). Let $\pi^t$ be the policy deployed by the algorithm at round $t$. The average policy $\pi_{\mathrm{avg}}$ is defined by $\forall x : \pi_{\mathrm{avg}}(x) = \frac{1}{T}\sum_{t=1}^T \pi^t(x)$.
In order to be consistent with Section 4, we denote $\alpha' = \alpha + \epsilon$ in this section.
Here, we state the main results of this section:
Theorem 5.2 (Accuracy Generalization). With probability $1 - \delta$, the misclassification loss of $\pi_{\mathrm{avg}}$ is upper bounded by
$$\mathbb{E}_{(x,y) \sim D}[\ell(\pi_{\mathrm{avg}}(x), y)] \le \inf_{\pi \in Q_\alpha} \mathbb{E}_{(x,y) \sim D}[\ell(\pi(x), y)] + \frac{1}{kT}\,\mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T) + \sqrt{\frac{8\ln\left(\frac{4}{\delta}\right)}{T}}.$$
Theorem 5.3 (Fairness Generalization). Assume that for all $t$, $\pi^t$ is $(\alpha', \beta_t)$-fair ($0 \le \beta_t \le 1$). With probability $1 - \delta$, for any integer $q \le T$, $\pi_{\mathrm{avg}}$ is $(\alpha' + \frac{q}{T}, \beta^*)$-fair, where
$$\beta^* = \frac{1}{q}\left(\mathrm{Regret}^{C,\alpha,J_{\alpha+\epsilon}}(A, Q_\alpha, T) + \sqrt{2T\ln\left(\frac{2}{\delta}\right)}\right).$$
Corollary 5.4. Using CONTEXT-FTPL from Syrgkanis et al. (2016) with a separator set of size $s$, with probability $1 - \delta$, the average policy $\pi_{\mathrm{avg}}$ has the following guarantees:
1. Accuracy:
$$\mathbb{E}_{(x,y) \sim D}[\ell(\pi_{\mathrm{avg}}(x), y)] \le \inf_{\pi \in Q_\alpha} \mathbb{E}_{(x,y) \sim D}[\ell(\pi(x), y)] + O\left(\frac{1}{k^{1/4}}\left(\frac{s}{\epsilon}\right)^{3/4}\sqrt{\frac{\ln(|H|) + \ln\left(\frac{1}{\delta}\right)}{T}}\right).$$
2. Fairness: $\pi_{\mathrm{avg}}$ is $(\alpha' + \lambda, \lambda)$-fair, where $\lambda = O\left(\left(\frac{sk}{\epsilon}\right)^{3/4}\left(\frac{\ln(|H|) + \ln\left(\frac{1}{\delta}\right)}{T}\right)^{1/4}\right)$.
Remark 5.5. Recall that the sensitivity of the auditor $\alpha'$ is fixed, and the learner chooses the parameter $\epsilon \in (0, \alpha')$, which in turn determines $\alpha = \alpha' - \epsilon$ and the set of policies $Q_\alpha$ the learner is competing against. In the case where $\alpha' = \Omega(1)$, the learner can choose $\epsilon$ in the order of $\Omega(1)$ and guarantee that $\pi_{\mathrm{avg}}$ is $(\alpha' + \lambda, \lambda)$-fair with $\lambda = \tilde{O}(T^{-1/4})$. In this regime, Corollary 5.4 implies that the policy $\pi_{\mathrm{avg}}$ has a non-trivial accuracy guarantee and a fairness generalization bound that qualitatively matches the uniform convergence bound in Theorem 1.4 of Rothblum and Yona (2018).
The accuracy generalization bound of Theorem 5.2 is attained by applying Azuma’s inequality on the left hand side of the inequality in Theorem 4.5 and then leveraging the fact that our classification loss function is linear with respect to the base classifiers over which it is defined. The full proof is given in Appendix B.
As for the more challenging task of providing a fairness generalization guarantee (Theorem 5.3), we show how a careful interpolation between $\alpha$ and $\beta$ may be used to provide a meaningful bound. Here, we state the key lemma required for Theorem 5.3 and briefly describe the proof technique.
Lemma 5.6. Assume that for all $t$, $\pi^t$ is $(\alpha', \beta_t)$-fair ($0 \le \beta_t \le 1$). For any integer $q \le T$, $\pi_{\mathrm{avg}}$ is $\left(\alpha' + \frac{q}{T}, \frac{1}{q}\sum_{t=1}^T \beta_t\right)$-fair.
High-Level Proof Idea for Lemma 5.6. Setting $\alpha'' = \alpha' + \frac{q}{T}$ has the following implication: for any pair of individuals $(x, x')$, in order for $\pi_{\mathrm{avg}}$ to have an $\alpha''$-fairness violation on $(x, x')$, at least $q$ of the policies in $\{\pi^1, \ldots, \pi^T\}$ must have an $\alpha'$-fairness violation on $(x, x')$. We then say a subset $A \subseteq X \times X$ is $\alpha'$-covered by a policy $\pi$ if $\pi$ has an $\alpha'$-violation on every element of $A$. We denote by $A^{\alpha'}_q \subseteq X \times X$ the subset of pairs of elements from $X$ that are $\alpha'$-covered by at least $q$ policies in $\{\pi^1, \ldots, \pi^T\}$. Next, consider the probability space $D|_X \times D|_X$ over pairs of individuals. The lemma then follows from observing that for any $q \le T$, $q \cdot \Pr(A^{\alpha'}_q) \le \sum_{t=1}^T \beta_t$, as this allows us to upper bound the probability of an $\alpha''$-fairness violation by the stated bound.
In Appendix B, we provide the full proof of Theorem 5.3, which features the covering argument presented in Lemma 5.6, in addition to a concentration argument linking the probability of the algorithm deploying unfair policies throughout its run to the regret guarantees proven in Section 4. We also illustrate why an $\alpha$, $\beta$ interpolation is required in order to achieve a non-vacuous guarantee.
6 Conclusion
In this paper, we were able to answer an open question by Gillen et al. (2018), proving that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure. We were further able to prove what we consider a very surprising generalization result, matching the state-of-the-art bounds for individual fairness given by Rothblum and Yona (2018), while eliminating or significantly relaxing all of their rather stringent assumptions. Contrary to previous work, which provided individual fairness generalization bounds utilizing standard uniform convergence arguments (Agarwal et al. (2018); Rothblum and Yona (2018)), we have presented a novel proof technique with the use of a composition covering argument (Lemma 5.6), which we also believe is of separate interest.
Broader Impact
As the authors of this work believe that bridging the gap between theoretical research in algorithmic fairness and practical use is of the essence, one of the main focuses of this work has been removing
the rather stringent assumptions made in previous research in individual fairness, and replacing these with more realistic ones (if any). As such, the contributions offered in the paper allow taking a step closer to incorporating the long sought-after notion of individual fairness into real life systems. The introduction of a fairness auditor gives a simple, elegant solution to the hurdle posed by the classic similarity metric assumption. The notion of individual fairness pursued in this work offers a strong guarantee on the individual’s level (which is not given, for example, by the various more popular yet weaker notions of group fairness). We believe this combination between practicality of use and a strong fairness guarantee has the power to significantly impact our ability to ensure fairness and non-discrimination in machine learning based algorithms.
Acknowledgments and Disclosure of Funding
We thank Sampath Kannan, Akshay Krishnamurthy, Katrina Ligett, and Aaron Roth for helpful conversations at an early stage of this work. Part of this work was done while YB, CJ, and ZSW were visiting the Simons Institute for the Theory of Computing. YB is supported in part by Israel Science Foundation (ISF) grant #1044/16, the United States Air Force and DARPA under contracts FA8750-16-C-0022 and FA8750-19-2-0222, and the Federmann Cyber Security Center in conjunction with the Israel national cyber directorate. CJ is supported in part by NSF grant AF-1763307. ZSW is supported in part by the NSF FAI Award #1939606, an Amazon Research Award, a Google Faculty Research Award, a J.P. Morgan Faculty Award, a Facebook Research Award, and a Mozilla Research Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA. | 1. What is the main contribution of the paper regarding online learning with individual fairness constraints?
2. What are the strengths of the proposed approach, particularly in its ability to remove previous assumptions?
3. Are there any weaknesses or areas for improvement in the paper's methodology or results?
4. How does the reviewer assess the significance and practicality of the paper's findings in terms of achieving individual fairness in online learning? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper studies online learning with individual fairness as constraints. In particular, the paper assumes the existence of an auditor that detects fairness violations rather than assuming the known similarity metric among individuals. Different from a closely related work [9] "Online learning with unknown fairness metric" by Gillen et al. NeurIPS 2018, in each round the auditor only needs to return one pair of individuals identified as fairness violation. Under this setting, the paper establishes PAC-style fairness and accuracy generalization guarantees. The main contribution is to answer the question raised in [9] and the results show that online learning under an unknown individual fairness constraint is possible even without assuming a parametric of form of the underling similarity measure.
Strengths
The presented general reduction framework, which takes any online learning algorithm as a black-box and obtains a learning algorithm that minimizes the cumulative classification error and the number of fairness violations, is sound. The removal of the previous assumptions, linear rewards and Mahalanobis distance, is also significant. It is also a nice result to see the use of the Follow-the-Perturbed-Leader approach can achieve sublinear regret with respect to both misclassification and fairness violations in the online fair batch learning setting.
Weaknesses
I do not identify any clear weakness of this work. That said, I would like to see how the results would differ if the auditor still returned the set of all pairs of individuals with fairness violations. Similar to the work [9], the paper focuses on the fairness constraint that binds between individuals at each round, while enforcing the fairness constraint across rounds is difficult. In practice, users may want to see to what extent between-round individual fairness can be achieved by the proposed approaches.
NIPS | Title
Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
Abstract
Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper we provide sample complexity bounds for cut-selection in branch-and-cut (B&C). Given a training set of integer programs sampled from an application-specific input distribution and a family of cut selection policies, these guarantees bound the number of samples sufficient to ensure that using any policy in the family, the size of the tree B&C builds on average over the training set is close to the expected size of the tree B&C builds. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree.
1 Introduction
Integer programming is one of the most broadly-applicable tools in computer science, used to formulate problems from operations research (such as routing, scheduling, and pricing), machine learning (such as adversarially-robust learning, MAP estimation, and clustering), and beyond. Branch-and-cut (B&C) is the most widely-used algorithm for solving integer programs (IPs). B&C is highly configurable, and with a deft configuration, it can be used to solve computationally challenging problems. Finding a good configuration, however, is a notoriously difficult problem.
We study machine learning approaches to configuring policies for selecting cutting planes, which have an enormous impact on B&C’s performance. At a high level, B&C works by recursively partitioning the IP’s feasible region, searching for the locally optimal solution within each set of the partition,
until it can verify that it has found the globally optimal solution. An IP’s feasible region is defined by a set of linear inequalities Ax ≤ b and integer constraints x ∈ Zn, where n is the number of variables. By dropping the integrality constraints, we obtain the linear programming (LP) relaxation of the IP, which can be solved efficiently. A cutting plane is a carefully-chosen linear inequality αTx ≤ β which refines the LP relaxation’s feasible region without separating any integral point. Intuitively, a well-chosen cutting plane will remove a large portion of the LP relaxation’s feasible region, speeding up the time it takes B&C to find the optimal solution to the original IP. Cutting plane selection is a crucial task, yet it is challenging because many cutting planes and cut-selection policies have tunable parameters, and the best configuration depends intimately on the application domain.
We provide the first provable guarantees for learning high-performing cutting planes and cut-selection policies, tailored to the application at hand. We model the application domain via an unknown, application-specific distribution over IPs, as is standard in the literature on using machine learning for integer programming [e.g., 21, 23, 31, 36, 43]. For example, this could be a distribution over the routing IPs that a shipping company must solve day after day. The learning algorithm’s input is a training set sampled from this distribution. The goal is to use this training set to learn cutting planes and cut-selection policies with strong future performance on problems from the same application but which are not already in the training set—or more formally, strong expected performance.
1.1 Summary of main contributions and overview of techniques
As our first main contribution, we provide sample complexity bounds of the following form: fixing a family of cutting planes, we bound the number of samples sufficient to ensure that for any sequence of cutting planes from the family, the average size of the B&C tree is close to the expected size of the B&C tree. We measure performance in terms of the size of the search tree B&C builds. Our guarantees apply to the parameterized family of Chvátal-Gomory (CG) cuts [10, 17], one of the most widely-used families of cutting planes.
The overriding challenge is that to provide guarantees, we must analyze how the tree size changes as a function of the cut parameters. This is a sensitive function—slightly shifting the parameters can cause the tree size to shift from constant to exponential in the number of variables. Our key technical insight is that as the parameters vary, the entries of the cut (i.e., the vector α and offset β of the cut αTx ≤ β) are multivariate polynomials of bounded degree. The number of terms defining the polynomials is exponential in the number of parameters, but we show that the polynomials can be embedded in a space with dimension sublinear in the number of parameters. This insight allows us to better understand tree size as a function of the parameters. We then leverage results by Balcan et al. [8] that show how to use structure exhibited by dual functions (measuring an algorithm’s performance, such as its tree size, as a function of its parameters) to derive sample complexity bounds.
Our second main contribution is a sample complexity bound for learning cut-selection policies, which allow B&C to adaptively select cuts as it solves the input IP. These cut-selection policies assign a number of real-valued scores to a set of cutting planes and then apply the cut that has the maximum weighted sum of scores. Tree size is a volatile function of these weights, though we prove that it is piecewise constant, as illustrated in Figure 1, which allows us to prove our sample complexity bound.
Finally, as our third main contribution, we provide guarantees for tuning weighted combinations of scoring rules for other aspects of tree search beyond cut selection, including node and variable selection. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any configuration from a single region, it will take the same sequence of actions. This structure allows us to prove our sample complexity bound. This is the first paper to provide guarantees for tree search configuration that apply simultaneously to multiple different aspects of the algorithm—prior research was specific to variable selection [5].
Sample complexity bounds are important because if the parameterized class of cuts or cut-selection policies that we optimize over is highly complex and the training set is too small, the learned cut or cut-selection policy might have great average empirical performance over the training set but terrible future performance. In other words, the parameter configuration procedure may overfit to the training set. The sample complexity bounds we provide are uniform-convergence: we prove that given enough samples, uniformly across all parameter settings, the difference between average and empirical performance is small. In other words, these bounds hold for any procedure one might use to optimize over the training set: manual or automated, optimal or suboptimal. No matter what
parameter setting the configuration procedure comes up with, the user can be guaranteed that so long as that parameter setting has good average empirical performance over the training set, it will also have strong future performance.
1.2 Related work
Applied research on tree search configuration. Over the past decade, a substantial literature has developed on the use of machine learning for integer programming and tree search [e.g., 2, 7, 9, 13, 19, 23–25, 29, 31–33, 35, 36, 41–43]. This has included research that improves specific aspects of B&C such as variable selection [2, 13, 24, 29, 32, 41], node selection [19, 35, 44], and heuristic scheduling [25]. These papers are applied, whereas we focus on providing theoretical guarantees.
With respect to cutting plane selection, the focus of this paper, Sandholm [36] uses machine learning techniques to customize B&C for combinatorial auction winner determination, including cutting plane selection. Tang et al. [37] and Huang et al. [20] study machine learning approaches to cutting plane selection. The former work formulates this problem as a reinforcement learning problem and shows that their approach can outperform human-designed heuristics for a variety of tasks. The latter work studies cutting plane selection in the multiple-instance-learning framework and proposes a neural-network architecture for scoring and ranking cutting planes. Meanwhile, the focus of our paper is to provide the first provable guarantees for cutting plane selection via machine learning.
Ferber et al. [15] study a problem where the IP objective vector c is unknown, but an estimate ĉ can be obtained from data. Their goal is to optimize the quality of the solutions obtained by solving the IP defined by ĉ, with respect to the true vector c. They do so by formulating the IP as a differentiable layer in a neural network. The nonconvex nature of the IP does not allow for straightforward gradient computation for the backward pass, so they obtain a continuous surrogate using cutting planes.
Provable guarantees for algorithm configuration. Gupta and Roughgarden [18] initiated the study of sample complexity bounds for algorithm configuration. In research most related to ours, Balcan et al. [5] provide sample complexity bounds for learning tree search variable selection policies (VSPs). They prove their bounds by showing that for any IP, hyperplanes partition the VSP parameter space into regions where the B&C tree size is a constant function of the parameters. The analysis in this paper requires new techniques because although we prove that the B&C tree size is a piecewiseconstant function of the CG cutting plane parameters, the boundaries between pieces are far more complex than hyperplanes: they are hypersurfaces defined by multivariate polynomials.
Kleinberg et al. [26, 27] and Weisz et al. [38, 39] design configuration procedures for runtime minimization that come with theoretical guarantees. Their algorithms are designed for the case where there are a finitely-many parameter settings to choose from (although they are still able to provide guarantees for infinite parameter spaces by running their procedure on a finite sample of configurations; Balcan et al. [5, 6] analyze when discretization approaches can and cannot be gainfully employed). In contrast, our guarantees are designed for infinite parameter spaces.
2 Problem formulation
In this section we give a more detailed technical overview of branch-and-cut, as well as an overview of the tools from learning theory we use to prove sample complexity guarantees.
2.1 Branch-and-cut
We study integer programs (IPs) in canonical form, given by
$$\max\left\{c^T x : Ax \le b,\ x \ge 0,\ x \in \mathbb{Z}^n\right\}, \tag{1}$$
where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$, and $c \in \mathbb{R}^n$. Branch-and-cut (B&C) works by recursively partitioning the input IP's feasible region, searching for the locally optimal solution within each set of the partition until it can verify that it has found the globally optimal solution. It organizes this partition as a search tree, with the input IP stored at the root. It begins by solving the LP relaxation of the input IP; we denote the solution as $x^*_{LP} \in \mathbb{R}^n$. If $x^*_{LP}$ satisfies the IP's integrality constraints ($x^*_{LP} \in \mathbb{Z}^n$), then the procedure terminates: $x^*_{LP}$ is the globally optimal solution. Otherwise, it uses a variable selection policy to choose a variable $x[i]$. In the left child of the root, it stores the original IP with the additional constraint that $x[i] \le \lfloor x^*_{LP}[i] \rfloor$, and in the right child, with the additional constraint that $x[i] \ge \lceil x^*_{LP}[i] \rceil$. It then uses a node selection policy to select a leaf of the tree and repeats this procedure: solving the LP relaxation and branching on a variable. B&C can fathom a node, meaning that it will stop searching along that branch, if 1) the LP relaxation satisfies the IP's integrality constraints, 2) the LP relaxation is infeasible, or 3) the objective value of the LP relaxation's solution is no better than the best integral solution found thus far. We assume there is a bound $\kappa$ on the size of the tree we allow B&C to build before we terminate, as is common in prior research [5, 21, 26, 27].
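As a point of reference, a bare-bones version of the loop just described can be written with SciPy's LP solver; this is a minimal sketch (depth-first node selection, most-fractional branching, no cuts or heuristics), not the configurable B&C studied in the paper.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0, x integer."""
    best_val, best_x = -math.inf, None
    stack = [[]]  # a node = list of extra bound constraints (i, lo, hi)
    while stack:
        extra = stack.pop()                        # node selection (DFS)
        bounds = [[0.0, None] for _ in range(len(c))]
        for i, lo, hi in extra:                    # merge branching bounds
            bounds[i][0] = max(bounds[i][0], lo)
            if hi is not None:
                cur = bounds[i][1]
                bounds[i][1] = hi if cur is None else min(cur, hi)
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bounds)
        if not res.success or -res.fun <= best_val:
            continue                               # fathom the node
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:
            best_val, best_x = -res.fun, res.x     # integral: new incumbent
            continue
        i = max(frac, key=lambda j: abs(res.x[j] - round(res.x[j])))
        stack.append(extra + [(i, 0.0, float(math.floor(res.x[i])))])  # x[i] <= floor
        stack.append(extra + [(i, float(math.ceil(res.x[i])), None)])  # x[i] >= ceil
    return best_val, best_x
```

With cutting planes, each node's LP would first be tightened by valid inequalities such as the CG cuts introduced next.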
Cutting planes are a means of ensuring that at each iteration of B&C, the solution to the LP relaxation is as close to the optimal integral solution as possible. Formally, let $P = \{x \in \mathbb{R}^n : Ax \le b, x \ge 0\}$ denote the feasible region obtained by taking the LP relaxation of IP (1). Let $P_I = \mathrm{conv}(P \cap \mathbb{Z}^n)$ denote the integer hull of $P$. A valid cutting plane is any hyperplane $\alpha^T x \le \beta$ such that if $x$ is in the integer hull ($x \in P_I$), then $x$ satisfies the inequality $\alpha^T x \le \beta$. In other words, a valid cut does not remove any integral point from the LP relaxation's feasible region. A valid cutting plane separates $x \in P \setminus P_I$ if it does not satisfy the inequality, or in other words, $\alpha^T x > \beta$. At any node of the search tree, B&C can add valid cutting planes that separate the optimal solution to the node's LP relaxation, thus improving the solution estimates used to prune the search tree. However, adding too many cuts will increase the time it takes to solve the LP relaxation at each node. Therefore, solvers such as SCIP [16], the leading open-source solver, bound the number of cuts that will be applied.
A famous class of cutting planes is the family of Chvátal-Gomory (CG) cuts¹ [10, 17], which are parameterized by vectors $u \in \mathbb{R}^m$. The CG cut defined by $u \in \mathbb{R}^m$ is the hyperplane $\lfloor u^T A \rfloor x \le \lfloor u^T b \rfloor$, which is guaranteed to be valid. Throughout this paper we primarily restrict our attention to $u \in [0, 1)^m$. This is without loss of generality, since the facets of $P \cap \{x \in \mathbb{R}^n : \lfloor u^T A \rfloor x \le \lfloor u^T b \rfloor \ \forall u \in \mathbb{R}^m\}$ can be described by the finitely many $u \in [0, 1)^m$ such that $u^T A \in \mathbb{Z}^n$ (Lemma 5.13 of Conforti et al. [11]).
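Given multipliers $u$, computing the cut is a single rounding step; the helper below (our own naming) returns the coefficient vector and offset.

```python
import numpy as np

def cg_cut(u, A, b):
    """The Chvatal-Gomory cut defined by multipliers u.

    Returns (alpha, beta) such that alpha @ x <= beta is valid for
    the integer hull: alpha = floor(u^T A), beta = floor(u^T b).
    """
    return np.floor(u @ A), float(np.floor(u @ b))
```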
Some IP solvers such as SCIP use scoring rules to select among cutting planes; these rules are meant to measure the quality of a cut. Some commonly-used scoring rules include efficacy [4] ($\mathrm{score}_1$), objective parallelism [1] ($\mathrm{score}_2$), directed cutoff distance [16] ($\mathrm{score}_3$), and integral support [40] ($\mathrm{score}_4$) (defined in Appendix A). Efficacy measures the distance between the cut $\alpha^T x \le \beta$ and $x^*_{LP}$: $\mathrm{score}_1(\alpha^T x \le \beta) = (\alpha^T x^*_{LP} - \beta)/\|\alpha\|_2$, as illustrated in Figure 2a. Objective parallelism measures the angle between the objective $c$ and the cut's normal vector $\alpha$: $\mathrm{score}_2(\alpha^T x \le \beta) = |c^T \alpha|/(\|\alpha\|_2 \|c\|_2)$, as illustrated in Figures 2b and 2c. Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule. Specifically, let $\bar{x}$ be the incumbent solution, which is the best-known feasible solution to the input IP. The directed cutoff distance is the distance between the hyperplane $(\alpha, \beta)$ and the current LP solution $x^*_{LP}$ along the direction of the incumbent $\bar{x}$, as illustrated in Figures 2d and 2e: $\mathrm{score}_3(\alpha^T x \le \beta) = \|\bar{x} - x^*_{LP}\|_2 \cdot (\alpha^T x^*_{LP} - \beta)/|\alpha^T(\bar{x} - x^*_{LP})|$. SCIP uses the scoring rule $\frac{3}{5}\mathrm{score}_1 + \frac{1}{10}\mathrm{score}_2 + \frac{1}{2}\mathrm{score}_3 + \frac{1}{10}\mathrm{score}_4$ [16].
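The distance- and angle-based rules above have direct NumPy transcriptions; the weighted sum mirrors the SCIP default quoted above, with the integral-support term $\mathrm{score}_4$ passed in as a value since its definition is deferred to Appendix A. The function names are ours, and `directed_cutoff` assumes the incumbent direction is not parallel to the cut.

```python
import numpy as np

def efficacy(alpha, beta, x_lp):
    # score_1: distance from x_lp to the hyperplane alpha @ x = beta
    return (alpha @ x_lp - beta) / np.linalg.norm(alpha)

def objective_parallelism(alpha, c):
    # score_2: cosine of the angle between c and the cut normal
    return abs(c @ alpha) / (np.linalg.norm(alpha) * np.linalg.norm(c))

def directed_cutoff(alpha, beta, x_lp, x_inc):
    # score_3: distance from x_lp to the cut along the incumbent direction
    return (np.linalg.norm(x_inc - x_lp) * (alpha @ x_lp - beta)
            / abs(alpha @ (x_inc - x_lp)))

def scip_style_score(alpha, beta, x_lp, x_inc, c, score4=0.0):
    # 3/5 * score_1 + 1/10 * score_2 + 1/2 * score_3 + 1/10 * score_4
    return (0.6 * efficacy(alpha, beta, x_lp)
            + 0.1 * objective_parallelism(alpha, c)
            + 0.5 * directed_cutoff(alpha, beta, x_lp, x_inc)
            + 0.1 * score4)
```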
1The set of CG cuts is equivalent to the set of Gomory (fractional) cuts [12], another commonly studied family of cutting planes with a slightly different parameterization.
2.2 Learning theory background and notation
The goal of this paper is to learn cut-selection policies using samples in order to guarantee, with high probability, that B&C builds a small tree in expectation on unseen IPs. To this end, we rely on the notion of pseudo-dimension [34], a well-known measure of a function class's intrinsic complexity. The pseudo-dimension of a function class $F \subseteq \mathbb{R}^Y$, denoted $\mathrm{Pdim}(F)$, is the largest integer $N$ for which there exist $N$ inputs $y_1, \ldots, y_N \in Y$ and $N$ thresholds $r_1, \ldots, r_N \in \mathbb{R}$ such that for every $(\sigma_1, \ldots, \sigma_N) \in \{0, 1\}^N$, there exists $f \in F$ such that $f(y_i) \ge r_i$ if and only if $\sigma_i = 1$. Function classes with bounded pseudo-dimension satisfy the following uniform convergence guarantee [3, 34]. Let $[-\kappa, \kappa]$ be the range of the functions in $F$, let $N_F(\epsilon, \delta) = O\left(\frac{\kappa^2}{\epsilon^2}\left(\mathrm{Pdim}(F) + \ln\frac{1}{\delta}\right)\right)$, and let $N \ge N_F(\epsilon, \delta)$. For all distributions $D$ on $Y$, with probability $1 - \delta$ over the draw of $y_1, \ldots, y_N \sim D$, for every function $f \in F$, the average value of $f$ over the samples is within $\epsilon$ of its expected value: $\left|\frac{1}{N}\sum_{i=1}^N f(y_i) - \mathbb{E}_{y \sim D}[f(y)]\right| \le \epsilon$. The quantity $N_F(\epsilon, \delta)$ is the sample complexity of $F$.
We use the notation $\|A\|_{1,1}$ to denote the sum of the absolute values of all the entries of $A$.
3 Learning Chvátal-Gomory cuts
In this section we bound the sample complexity of learning CG cuts at the root node of the B&C search tree. In many IP settings, similar IPs are being solved and there can be good cuts that carry across instances—for example, in applications where the constraints stay the same or roughly the same across instances,2 and only the objective changes. One high-stakes example of this is the feasibility checking problem in the billion-dollar incentive auction for radio spectrum, where prices change but the radiowave interference constraints do not change.
We warm up by analyzing the case where a single CG cut is added at the root (Section 3.1), and then build on this analysis to handle w sequential waves of k simultaneous CG cuts (Section 3.3). This means that all k cuts in the first wave are added simultaneously, the new (larger) LP relaxation is solved, all k cuts in the second wave are added to the new problem simultaneously, and so on. B&C adds cuts in waves because otherwise, the angles between cuts would become obtuse, leading to numerical instability. Moreover, many commercial IP solvers only add cuts at the root because those cuts can be leveraged throughout the tree. However, in Section 5, we also provide guarantees for applying cuts throughout the tree. In this section, we assume that all aspects of B&C (such as node selection and variable selection) are fixed except for the cuts applied at the root of the search tree.
3.1 Learning a single cut
To provide sample complexity bounds, as per Section 2.2, we bound the pseudo-dimension of the set of functions fu for u ∈ [0, 1]^m, where fu(c, A, b) is the size of the tree B&C builds when it applies the CG cut defined by u at the root. To do so, we take advantage of structure exhibited by the class of dual functions, each of which is defined by a fixed IP (c, A, b) and measures tree size as a function of the parameters u. In other words, each dual function f∗c,A,b : [0, 1]^m → R is defined as f∗c,A,b(u) = fu(c, A, b). Our main result in this section is a proof that the dual functions are well-structured (Lemma 3.2), which then allows us to apply a result by Balcan et al. [8] to bound Pdim({fu : u ∈ [0, 1]^m}) (Theorem 3.3). Proving that the dual functions are well-structured is challenging because they are volatile: slightly perturbing u can cause the tree size to shift from constant to exponential in n, as we prove in the following theorem. The full proof is in Appendix C.

2We assume that constraints are generated in the same order across instances; see Appendix B for a discussion.
Theorem 3.1. For any integer n, there exists an integer program (c, A, b) with two constraints and n variables such that if 1/2 ≤ u[1] − u[2] < (n+1)/(2n), then applying the CG cut defined by u at the root causes B&C to terminate immediately. Meanwhile, if (n+1)/(2n) ≤ u[1] − u[2] < 1, then applying the CG cut defined by u at the root causes B&C to build a tree of size at least 2^{(n−1)/2}.
Proof sketch. Without loss of generality, assume that n is odd. Consider an IP with constraints 2(x[1] + · · · + x[n]) ≤ n, −2(x[1] + · · · + x[n]) ≤ −n, x ∈ {0, 1}^n, and any objective. This IP is infeasible because n is odd. Jeroslow [22] proved that without the use of cutting planes or heuristics, B&C will build a tree of size 2^{(n−1)/2} before it terminates. We prove that when 1/2 ≤ u[1] − u[2] < (n+1)/(2n), the CG cut halfspace defined by u = (u[1], u[2]) has an empty intersection with the feasible region of the IP, causing B&C to terminate immediately. On the other hand, we show that if (n+1)/(2n) ≤ u[1] − u[2] < 1, then the CG cut halfspace defined by u contains the feasible region of the IP, and thus leaves the feasible region unchanged. In this case, due to Jeroslow [22], applying this CG cut at the root will cause B&C to build a tree of size at least 2^{(n−1)/2} before it terminates.
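The dichotomy in this proof sketch is easy to verify numerically. The sketch below (an illustration under our own choices of n and u; it does not run B&C itself) computes the CG cut ⌊uᵀA⌋x ≤ ⌊uᵀb⌋ for the Jeroslow-style instance and checks whether it removes the LP-feasible slice {x : 1ᵀx = n/2}.

```python
import numpy as np

n = 11  # any odd n; chosen for illustration
A = np.vstack([2 * np.ones(n), -2 * np.ones(n)])  # 2*1^T x <= n and -2*1^T x <= -n
b = np.array([n, -n], dtype=float)

def cg_cut(u):
    # The Chvatal-Gomory cut floor(u^T A) x <= floor(u^T b).
    return np.floor(u @ A), np.floor(u @ b)

# (n+1)/(2n) = 6/11 ~ 0.5455 separates the two regimes of Theorem 3.1.
for diff in [0.5, 0.53125, 0.5625, 0.75]:  # exactly representable floats
    alpha, beta = cg_cut(np.array([diff, 0.0]))  # so that u[1] - u[2] = diff
    x = np.full(n, 0.5)  # every LP-feasible point satisfies sum(x) = n/2
    print(f"u[1]-u[2] = {diff:.5f}: cut is sum(x) <= {beta:.0f}; "
          f"empties LP region: {alpha @ x > beta}")
```

For the first two values the cut removes every LP-feasible point, so the root LP becomes infeasible and B&C stops immediately; for the last two the cut leaves the feasible region unchanged.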
This theorem shows that the dual tree-size functions can be extremely sensitive to perturbations in the CG cut parameters. However, we are able to prove that the dual functions are piecewise-constant.
Lemma 3.2. For any IP (c, A, b), there are O(‖A‖1,1 + ‖b‖1 + n) hyperplanes that partition [0, 1]^m into regions where in any one region R, the dual function f∗c,A,b(u) is constant for all u ∈ R.
Proof. Let a1, . . . , an ∈ Rm be the columns of A. Let Ai = ‖ai‖1 and B = ‖b‖1, so for any u ∈ [0, 1]^m, ⌊uTai⌋ ∈ [−Ai, Ai] and ⌊uT b⌋ ∈ [−B, B]. For each integer ki ∈ [−Ai, Ai], we have ⌊uTai⌋ = ki ⇐⇒ ki ≤ uTai < ki + 1. There are ∑_{i=1}^{n}(2Ai + 1) = O(‖A‖1,1 + n) such halfspaces, plus an additional 2B + 1 halfspaces of the form kn+1 ≤ uT b < kn+1 + 1 for each kn+1 ∈ {−B, . . . , B}. In any region R defined by the intersection of these halfspaces, the vector (⌊uTa1⌋, . . . , ⌊uTan⌋, ⌊uT b⌋) is constant for all u ∈ R, and thus so is the resulting cut.
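The proof's region structure is straightforward to observe empirically: the cut depends on u only through the integer signature (⌊uᵀa1⌋, . . . , ⌊uᵀan⌋, ⌊uᵀb⌋). A small sketch, with arbitrary illustrative data:

```python
import numpy as np

def cut_signature(u, A, b):
    # The cut floor(u^T A) x <= floor(u^T b) is determined by this integer tuple.
    return tuple(int(v) for v in np.floor(u @ A)) + (int(np.floor(u @ b)),)

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 4)).astype(float)  # m = 2 constraints, n = 4 variables
b = rng.integers(-3, 4, size=2).astype(float)

u = np.array([0.30, 0.70])
for eps in [0.0, 1e-4, 1e-3]:
    # Perturbations that do not cross a hyperplane u^T a_i = k_i (or
    # u^T b = k_{n+1}) leave the signature, and hence the cut, unchanged.
    print(cut_signature(u + eps, A, b))
```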
Combined with the main result of Balcan et al. [8], this lemma implies the following bound.
Theorem 3.3. Let Fα,β denote the set of all functions fu for u ∈ [0, 1]^m defined on the domain of IPs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(m log(m(α + β + n))).
This theorem implies that Õ(κ²m/ε²) samples are sufficient to ensure that with high probability, for every CG cut, the average size of the tree B&C builds upon applying the cutting plane is within ε of the expected size of the tree it builds (the Õ notation suppresses logarithmic terms).
3.2 Learning a sequence of cuts
We now determine the sample complexity of making w sequential CG cuts at the root. The first cut is defined by m parameters u1 ∈ [0, 1]^m, one for each of the m constraints. Its application leads to the addition of the row ⌊u1ᵀA⌋x ≤ ⌊u1ᵀb⌋ to the constraint matrix. The next cut is then defined by m + 1 parameters u2 ∈ [0, 1]^{m+1}, since there are now m + 1 constraints. Continuing in this fashion, the wth cut is defined by m + w − 1 parameters uw ∈ [0, 1]^{m+w−1}. Let fu1,...,uw(c, A, b) be the size of the tree B&C builds when it applies the CG cut defined by u1, then applies the CG cut defined by u2 to the new IP, and so on, all at the root of the search tree.
As in Section 3.1, we bound the pseudo-dimension of the functions fu1,...,uw by analyzing the structure of the dual functions f∗c,A,b, which measure tree size as a function of the parameters u1, . . . , uw. Specifically, f∗c,A,b : [0, 1]^m × · · · × [0, 1]^{m+w−1} → R, where f∗c,A,b(u1, . . . , uw) = fu1,...,uw(c, A, b). The analysis in this section is more complex because the sth cut (with s ∈ {2, . . . , w}) depends not only on the parameters us but also on u1, . . . , us−1. We prove that the dual functions are again piecewise-constant, but in this case, the boundaries between pieces are defined by multivariate polynomials rather than hyperplanes. The full proof is in Appendix C.

Lemma 3.4. For any IP (c, A, b), there are O(w2^w ‖A‖1,1 + 2^w ‖b‖1 + nw) multivariate polynomials in ≤ w² + mw variables of degree ≤ w that partition [0, 1]^m × · · · × [0, 1]^{m+w−1} into regions where in any one region R, f∗c,A,b(u1, . . . , uw) is constant for all (u1, . . . , uw) ∈ R.
Proof sketch. Let a1, . . . , an ∈ Rm be the columns of A. For u1 ∈ [0, 1]^m, . . . , uw ∈ [0, 1]^{m+w−1}, define ã_i^1 ∈ [0, 1]^m, . . . , ã_i^w ∈ [0, 1]^{m+w−1} for each i ∈ [n] such that ã_i^s is the ith column of the constraint matrix after applying cuts u1, . . . , us−1. Similarly, define b̃^s to be the constraint vector after applying the first s − 1 cuts. More precisely, we have the recurrence relation
$$\tilde{a}_i^1 = a_i, \qquad \tilde{b}^1 = b,$$
$$\tilde{a}_i^s = \begin{bmatrix} \tilde{a}_i^{s-1} \\ u_{s-1}^T \tilde{a}_i^{s-1} \end{bmatrix}, \qquad \tilde{b}^s = \begin{bmatrix} \tilde{b}^{s-1} \\ u_{s-1}^T \tilde{b}^{s-1} \end{bmatrix} \qquad \text{for } s = 2, \dots, w.$$
We prove that ⌊u_s^T ã_i^s⌋ ∈ [−2^{s−1}‖ai‖1, 2^{s−1}‖ai‖1]. For each integer ki in this interval, ⌊u_s^T ã_i^s⌋ = ki ⇐⇒ ki ≤ u_s^T ã_i^s < ki + 1. The boundaries of these surfaces are defined by polynomials over us in ≤ ms + s² variables with degree ≤ s. Counting the total number of such hypersurfaces yields the lemma statement.
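A direct, minimal implementation of this recurrence, assuming dense numpy arrays: applying the cuts sequentially amounts to repeatedly appending a floored row to the current constraint system.

```python
import numpy as np

def apply_sequential_cg_cuts(A, b, us):
    # us[s] must have length equal to the current number of rows of A,
    # i.e., m + s rows after s cuts have been applied.
    A, b = A.astype(float), b.astype(float)
    for u in us:
        A = np.vstack([A, np.floor(u @ A)])
        b = np.append(b, np.floor(u @ b))
    return A, b

# Example: m = 2 constraints, two sequential cuts (parameter lengths 2 and 3).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
A2, b2 = apply_sequential_cg_cuts(A, b, [np.array([0.5, 0.5]),
                                         np.array([0.3, 0.3, 0.4])])
print(A2)
print(b2)
```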
We now use this structure to provide a pseudo-dimension bound. The full proof is in Appendix C.

Theorem 3.5. Let Fα,β denote the set of all functions fu1,...,uw for u1 ∈ [0, 1]^m, . . . , uw ∈ [0, 1]^{m+w−1} defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mw² log(mw(α + β + n))).
Proof sketch. The space of 0/1 classifiers induced by the set of degree ≤ w multivariate polynomials in w² + mw variables has VC dimension O((w² + mw) log w) [3]. However, we more carefully examine the structure of the polynomials considered in Lemma 3.4 to give an improved VC dimension bound of 1 + mw. For each j = 1, . . . , m define ũ1[j], . . . , ũw[j] recursively as
$$\tilde{u}_1[j] = u_1[j], \qquad \tilde{u}_s[j] = u_s[j] + \sum_{\ell=1}^{s-1} u_s[m+\ell]\,\tilde{u}_\ell[j] \quad \text{for } s = 2, \dots, w.$$
The space of polynomials induced by the sth cut is contained in span{1, ũs[1], . . . , ũs[m]}. The intuition for this is as follows: consider the additional term added by the sth cut to the constraint matrix, that is, u_s^T ã_i^s. The first m coordinates (us[1], . . . , us[m]) interact only with ai—so us[j] collects a coefficient of ai[j]. Each subsequent coordinate us[m + ℓ] interacts with all coordinates of ã_i^s arising from the first ℓ cuts. The term that collects a coefficient of ai[j] is precisely us[m + ℓ] times the sum of all terms from the first ℓ cuts with a coefficient of ai[j]. Using standard facts about the VC dimension of vector spaces and their duals in conjunction with Lemma 3.4 and the framework of Balcan et al. [8] yields the theorem statement.
The sample complexity (defined in Section 2.2) of learning w sequential cuts is thus Õ(κ²mw²/ε²).
3.3 Learning waves of simultaneous cuts
We now determine the sample complexity of making w sequential waves of cuts at the root, each wave consisting of k simultaneous CG cuts. Given vectors u_1^1, . . . , u_1^k ∈ [0, 1]^m, u_2^1, . . . , u_2^k ∈ [0, 1]^{m+k}, . . . , u_w^1, . . . , u_w^k ∈ [0, 1]^{m+k(w−1)}, let f_{u_1^1,...,u_1^k,...,u_w^1,...,u_w^k}(c, A, b) be the size of the tree B&C builds when it applies the CG cuts defined by u_1^1, . . . , u_1^k, then applies the CG cuts defined by u_2^1, . . . , u_2^k to the new IP, and so on, all at the root of the search tree. The full proof of the following theorem is in Appendix C, and follows from the observation that w waves of k simultaneous cuts can be viewed as making kw sequential cuts with the restriction that cuts within each wave assign nonzero weight only to constraints from previous waves.
Theorem 3.6. Let Fα,β be the set of all functions f_{u_1^1,...,u_1^k,...,u_w^1,...,u_w^k} for u_1^1, . . . , u_1^k ∈ [0, 1]^m, . . . , u_w^1, . . . , u_w^k ∈ [0, 1]^{m+k(w−1)} defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mk²w² log(mkw(α + β + n))).
This result implies that the sample complexity of learning w waves of k cuts is Õ(κ²mk²w²/ε²).
3.4 Data-dependent guarantees
So far, our guarantees have depended on the maximum possible norms of the constraint matrix and vector in the domain of IPs under consideration. The uniform convergence result in Section 2.2 for Fα,β only holds for distributions over A and b with norms bounded by α and β, respectively. In Appendix C.1, we show how to convert these into more broadly applicable data-dependent guarantees that leverage properties of the distribution over IPs. These guarantees hold without assumptions on the distribution's support, and depend on E[max_i ‖A_i‖_{1,1}] and E[max_i ‖b_i‖_1] (where the expectation is over N samples), thus giving a sharper sample complexity guarantee that is tuned to the distribution.
4 Learning cut selection policies
In Section 3, we studied the sample complexity of learning waves of specific cut parameters. In this section, we bound the sample complexity of learning cut-selection policies at the root, that is, functions that take as input an IP and output a candidate cut. Using scoring rules is a more nuanced way of choosing cuts since it allows for the cut parameters to depend on the input IP.
Formally, let Im be the set of IPs with m constraints (the number of variables is always fixed at n) and let Hm be the set of all hyperplanes in Rm. A scoring rule is a function score : ∪m(Hm × Im) → R≥0. The real value score(αTx ≤ β, (c, A, b)) is a measure of the quality of the cutting plane αTx ≤ β for the IP (c, A, b). Examples include the scoring rules discussed in Section 2.1. Suppose score1, . . . , scored are d different scoring rules. We now bound the sample complexity of learning a combination of these scoring rules that guarantees a low expected tree size. Our high-level proof technique is the same as in the previous section: we establish that the dual tree-size functions are piecewise structured, and then apply the general framework of Balcan et al. [8] to obtain pseudo-dimension bounds.

Theorem 4.1. Let C be a set of cutting-plane parameters such that for every IP (c, A, b), there is a decomposition of C into ≤ r regions such that the cuts generated by any two vectors in the same region are the same. Let score1, . . . , scored be d scoring rules. For µ ∈ Rd, let fµ(c, A, b) be the size of the tree B&C builds when it chooses a cut from C to maximize µ[1]score1(·, (c, A, b)) + · · · + µ[d]scored(·, (c, A, b)). Then, Pdim({fµ : µ ∈ Rd}) = O(d log(rd)).
Proof. Fix an integer program (c, A, b). Let u1, . . . , ur ∈ C be representative cut parameters for each of the r regions. Consider the hyperplanes ∑_{i=1}^{d} µ[i]scorei(us) = ∑_{i=1}^{d} µ[i]scorei(ut) for each s ≠ t ∈ {1, . . . , r} (suppressing the dependence on c, A, b). These O(r²) hyperplanes partition Rd into regions such that as µ varies in a given region, the cut chosen from C is invariant. The desired pseudo-dimension bound follows from the main result of Balcan et al. [8].
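The policy analyzed here is simply a weighted argmax over the representative cuts; a minimal sketch (with placeholder score data) of both the selection step and the hyperplane condition from the proof:

```python
import numpy as np

def select_cut(scores, mu):
    # scores[s, i] = score_i of representative cut u_s; pick the cut
    # maximizing the mu-weighted sum of the d scoring rules.
    return int(np.argmax(scores @ mu))

rng = np.random.default_rng(1)
scores = rng.random((5, 3))  # r = 5 regions, d = 3 scoring rules

# As mu crosses one of the O(r^2) hyperplanes
# sum_i mu[i] * (score_i(u_s) - score_i(u_t)) = 0,
# the argmax (and hence the B&C tree) can change.
for mu in [np.array([1.0, 0.0, 0.0]), np.array([0.2, 0.3, 0.5])]:
    print(select_cut(scores, mu))
```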
Theorem 4.1 can be directly instantiated with the class of CG cuts. Combining Lemma 3.2 with the basic combinatorial fact that k hyperplanes partition R^m into at most k^m regions, we get that the pseudo-dimension of {fµ : µ ∈ Rd} defined on IPs with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β is O(dm log(d(α + β + n))). Instantiating Theorem 4.1 with the set of all sequences of w CG cuts requires the following extension of scoring rules to sequences of cutting planes. A sequential scoring rule is a function that takes as input an IP (c, A, b) and a sequence of cutting planes h1, . . . , hw, where each cut lives in one higher dimension than the previous. It measures the quality of this sequence of cutting planes when applied one after the other to the original IP. Every scoring rule score can be naturally extended to a sequential scoring rule $\overline{score}$ defined by $\overline{score}(h_1, \dots, h_w, (c^0, A^0, b^0)) = \sum_{i=0}^{w-1} score(h_{i+1}, (c^i, A^i, b^i))$, where (c^i, A^i, b^i) is the IP after applying cuts h1, . . . , hi.

Corollary 4.2. Let C = [0, 1]^m × · · · × [0, 1]^{m+w−1} denote the set of possible sequences of w Chvátal-Gomory cut parameters. Let score1, . . . , scored : C × Im × · · · × Im+w−1 → R be d sequential scoring rules and let fµ(c, A, b) be as in Theorem 4.1 for the class C. Then, Pdim({fµ : µ ∈ Rd}) = O(dmw² log(dw(α + β + n))).
Proof. In Lemma 3.4 and Theorem 3.5 we showed that there are O(w2^w α + 2^w β + nw) multivariate polynomials that belong to a family of polynomials G with VCdim(G∗) ≤ 1 + mw (G∗ denotes the dual of G) that partition C into regions such that the resulting sequence of cuts is invariant in each region. By Claim 3.5 of Balcan et al. [8], the number of regions is O(w2^w α + 2^w β + nw)^{VCdim(G∗)} ≤ O(w2^w α + 2^w β + nw)^{1+mw}. The corollary then follows from Theorem 4.1.
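The sequential extension of a scoring rule defined above is simple to write down generically. In this sketch, `score` and `apply_cut` are placeholders for any base scoring rule and any routine that appends a cut to an IP (for CG cuts, e.g., something like the `apply_sequential_cg_cuts` sketch from Section 3.2):

```python
def sequential_score(cuts, ip, score, apply_cut):
    # Sum the base score of each cut against the IP obtained after applying
    # the preceding cuts: (c^0, A^0, b^0) -> (c^1, A^1, b^1) -> ...
    total = 0.0
    for cut in cuts:
        total += score(cut, ip)
        ip = apply_cut(cut, ip)
    return total
```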
These results bound the sample complexity of learning cut-selection policies based on scoring rules, which allow the cuts that B&C selects to depend on the input IP.
5 Sample complexity of generic tree search
In this section, we study the sample complexity of selecting high-performing parameters for generic tree-based algorithms, which are a generalization of B&C. This abstraction allows us to provide guarantees for simultaneously optimizing key aspects of tree search beyond cut selection, including node selection and branching variable selection. We also generalize the previous sections by allowing actions (such as cut selection) to be taken at any stage of the tree search—not just at the root.
Tree search algorithms take place over a series of κ rounds (analogous to the B&B tree-size cap κ in the previous sections). There is a sequence of t steps that the algorithm takes on each round. For example, in B&C, these steps include node selection, cut selection, and variable selection. The specific action the algorithm takes during each step (for example, which node to select, which cut to include, or which variable to branch on) typically depends on a scoring rule. This scoring rule weights each possible action and the algorithm performs the action with the highest weight. These actions (deterministically) transition the algorithm from one state to another. This high-level description of tree search is summarized by Algorithm 1. For each step j ∈ [t], the number of possible actions is Tj ∈ N. There is a scoring rule scorej , where scorej(k, s) ∈ R is the weight associated with the action k ∈ [Tj ] when the algorithm is in the state s.
Algorithm 1 Tree search
Input: Problem instance, t scoring rules score1, . . . , scoret, number of rounds κ.
1: s1,1 ← Initial state of algorithm
2: for each round i ∈ [κ] do
3:     for each step j ∈ [t] do
4:         Perform the action k ∈ [Tj] that maximizes scorej(si,j, k)
5:         si,j+1 ← New state of algorithm
6:     si+1,1 ← si,t+1    ▷ State at beginning of next round equals state at end of this round
Output: Incumbent solution in state sκ,t+1, if one exists.
There are often several scoring rules one could use, and it is not clear which to use in which scenarios. As in Section 4, we provide guarantees for learning combinations of these scoring rules for the particular application at hand. More formally, for each step j ∈ [t], rather than just a single scoring rule scorej as in Step 4, there are dj scoring rules scorej,1, . . . , scorej,dj. Given parameters µj = (µj[1], . . . , µj[dj]) ∈ R^{dj}, the algorithm takes the action k ∈ [Tj] that maximizes ∑_{i=1}^{dj} µj[i]scorej,i(k, s). There is a distribution D over inputs x to Algorithm 1. For example, when this framework is instantiated for branch-and-cut, x is an integer program (c, A, b). There is a utility function fµ(x) ∈ [−H, H] that measures the utility of the algorithm parameterized by µ = (µ1, . . . , µt) on input x. For example, this utility function might measure the size of the search tree that the algorithm builds. We assume that this utility function is final-state-constant:

Definition 5.1. Let µ = (µ1, . . . , µt) and µ′ = (µ′1, . . . , µ′t) be two parameter vectors. Suppose that we run Algorithm 1 on input x once using the scoring rule scorej = ∑_{i=1}^{dj} µj[i]scorej,i and once using the scoring rule scorej = ∑_{i=1}^{dj} µ′j[i]scorej,i. Suppose that on each run, we obtain the same final state sκ,t+1. The utility function is final-state-constant if fµ(x) = fµ′(x).
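Putting Algorithm 1 together with the weighted scoring rules just described, here is a runnable Python skeleton; `actions`, `base_scores`, and `transition` are assumed callbacks standing in for a concrete solver, not part of the paper's formalism:

```python
def tree_search(state, kappa, steps, mu):
    # steps[j] = (actions, base_scores, transition), where
    #   actions(s)        -> iterable of available actions k in [T_j],
    #   base_scores(k, s) -> list of the d_j base scores of action k in state s,
    #   transition(s, k)  -> new state after performing action k.
    for _ in range(kappa):  # rounds
        for j, (actions, base_scores, transition) in enumerate(steps):
            # Take the action maximizing the mu_j-weighted score combination.
            k = max(actions(state),
                    key=lambda a: sum(w * v for w, v in
                                      zip(mu[j], base_scores(a, state))))
            state = transition(state, k)
    return state  # final state s_{kappa,t+1}; the incumbent is read off from it
```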
We provide a sample complexity bound for learning the parameters µ. The full proof is in Appendix D.

Theorem 5.2. Let d = ∑_{j=1}^{t} dj denote the total number of tunable parameters of tree search. Then, Pdim({fµ : µ ∈ Rd}) = O(dκ ∑_{j=1}^{t} log Tj + d log d).
Proof sketch. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any parameter setting from a single region, it will always take the same sequence of actions (including node, variable, and cut selection). The main subtlety is an induction argument to count these hyperplanes that depends on the current step of the tree-search algorithm.
In the context of integer programming, Theorem 5.2 not only recovers the main result of Balcan et al. [5] for learning variable selection policies, but also yields a more general bound that simultaneously incorporates cutting plane selection, variable selection, and node selection. In B&C, the first action of each round is to select a node. Since there are at most 2^{n+1} − 1 nodes, T1 ≤ 2^{n+1} − 1. The second action is to choose a cutting plane. As in Theorem 4.1, let C be a family of cutting planes such that for every IP (c, A, b), there is a decomposition of the parameter space into ≤ r regions such that the cuts generated by any two parameters in the same region are the same. So T2 ≤ r. The last action is to choose a variable to branch on at that node, so T3 = n. Applying Theorem 5.2, Pdim({fµ : µ ∈ Rd}) = O(dκn + dκ log r + d log d). Ignoring T1 and T2, thereby only learning the variable selection policy, recovers the O(dκ log n + d log d) bound of Balcan et al. [5].
6 Conclusions and future research
We provided the first provable guarantees for using machine learning to configure cutting planes and cut-selection policies. We analyzed the sample complexity of learning cutting planes from the popular family of Chvátal-Gomory (CG) cuts. We then provided sample complexity guarantees for learning parameterized cut-selection policies, which allow the branch-and-cut algorithm to adaptively apply cuts as it builds the search tree. We showed that this analysis can be generalized to simultaneously capture various key aspects of tree search beyond cut selection, such as node and variable selection.
This paper opens up a variety of questions for future research. For example, which other cut families can we learn over with low sample complexity? Section 3 focused on learning within the family of CG cuts (Sections 4 and 5 applied more generally). There are many other families, such as Gomory mixed-integer cuts and lift-and-project cuts, and a sample complexity analysis of these is an interesting direction for future research (and would call for new techniques). In addition, can we use machine learning to design improved scoring rules and heuristics for cut selection? The bounds we provide in Section 4 apply to any choice of scoring rules, no matter how simple or complex. Is it possible to obtain even better bounds by taking into account the complexity of the scoring rules? Finally, the bounds in this paper are worst case, but a great direction for future research is to develop data-dependent bounds that improve based on the structure of the input distribution.
Acknowledgements
This material is based on work supported by the National Science Foundation under grants IIS-1618714, IIS-1718457, IIS-1901403, CCF-1733556, CCF-1535967, CCF-1910321, SES-1919453, the ARO under award W911NF2010081, DARPA under cooperative agreement HR00112020003, an AWS Machine Learning Research Award, an Amazon Research Award, a Bloomberg Research Grant, and a Microsoft Research Faculty Fellowship.

1. What is the focus of the paper regarding linear integer programming?
2. What are the strengths and weaknesses of the paper's contributions, particularly in bounding sample complexity?
3. How does the paper approach the choice of cuts and branching in integer programming?
4. What are some open questions or areas for further research related to the paper's topics?
5. How might machine learning algorithms be used to improve the efficiency of integer programming solvers?

Summary Of The Paper
Linear integer programming is an incredibly important algorithmic tool that is used widely in practice. In general the problem is NP-hard, and so we have to resort to heuristics. These heuristics often use the branch-and-cut framework: we branch on "guesses" on variables and we cut by adding valid inequalities that tighten the linear relaxation of the integer program.
When dealing with such heuristics, an obvious question arises: which cut should I add and on which variable should I branch? The goal here is to limit the size of the search tree and thereby obtain a good running time. There has been a growing amount of experimental work on using ML algorithms to guide this choice.
The paper under submission considers this problem from a more theoretical perspective. The main results bound the sample complexity of finding "good" cuts, complementing and generalizing prior work that studied this question for variable selection.
Review
The paper studies an important problem and gives original new contributions. The paper is also written extremely well and is a pleasure to read. In what follows, I detail some of the contributions of the paper and comment on their strengths and weaknesses.
The first question that the authors study is the following. Suppose you have an (unknown) distribution over m-constraint IPs. You would like to find the best Chvatal-Gomory (CG) cut u in the following sense: if I add u to the IP, then I would like to have the smallest search tree in expectation over the randomly chosen IP (from the distribution). More specifically, the authors study the sample complexity of learning u and show that it is near linear in m under reasonable norm bounds on the coefficients of the IP instances. The proof of this is not hard and uses a connection to pseudo-dimension used in prior work. The key insight is the observation that for a fixed IP the number of Chvatal-Gomory cuts of interest is exponential. The authors also generalize this to when you add several cuts to the root of the search tree.
The strength of the first line of results is that it gives new insights into a classic and very natural family of cuts (Chvatal-Gomory cuts). The weaknesses are that (1) the results do not tell us much about when we can find a good cut efficiently; and, as the authors point out, (2) the current cut does not depend on the actual IP, only on the distribution, which is quite unnatural. It would be good if the authors further explained when this setting is natural and also commented on the efficiency issue.
Indeed, many solvers use different scoring rules (that depend on the IP) to select the cuts to add. This is the second part of the paper, where the authors analyze the sample complexity of learning such scores. The statement of this result is interesting. The techniques to get it are not striking: basically they show that a good combination of d scoring rules is learnable. Assuming the coefficients in front of these scoring rules don't have huge bit complexity, this follows since then the number of options is, say, poly(n)^d (or even exp(n)^d if we allow polynomial bit complexity in our coefficients). Now the authors have to work a little more since they don't make such an assumption.
The authors also generalize this to a more general tree search model in which they are able to recover and generalize prior work that only considered the sample complexity of variable selection.
Overall, I see this as a nice paper that is interesting. It touches on a very interesting problem where we can expect ML to lead to huge speedups. It is unclear how the sample complexity of the problems considered here will inform such improvements, and it would be great if the authors would comment more on this. Overall, I'd recommend the paper to be accepted.
Tx ≤ β) = (αTx∗LP − β)/ ‖α‖2 , as illustrated in Figure 2a. Objective parallelism measures the angle between the objective c and the cut’s normal vector α: score2(αTx ≤ β) =∣∣cTα∣∣ /(‖α‖2 ‖c‖2), as illustrated in Figures 2b and 2c. Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule. Specifically, let x be the incumbent solution, which is the best-known feasible solution to the input IP. The directed cutoff distance is the distance between the hyperplane (α, β) and the current LP solution x∗LP along the direction of the incumbent x, as illustrated in Figures 2d and 2e: score3(αTx ≤ β) = ‖x− x∗LP‖2 · (α Tx∗LP − β)/ ∣∣αT (x− x∗LP)∣∣ . SCIP uses the scoring rule 3 5score1 + 1 10score2 + 1 2score3 + 1 10score4 [16].
1The set of CG cuts is equivalent to the set of Gomory (fractional) cuts [12], another commonly studied family of cutting planes with a slightly different parameterization.
2.2 Learning theory background and notation
The goal of this paper is to learn cut-selection policies using samples in order to guarantee, with high probability, that B&C builds a small tree in expectation on unseen IPs. To this end, we rely on the notion of pseudo-dimension [34], a well-known measure of a function class’s intrinsic complexity. The pseudo-dimension of a function class F ⊆ RY , denoted Pdim(F), is the largest integer N for which there exist N inputs y1, . . . , yN ∈ Y and N thresholds r1, . . . , rN ∈ R such that for every (σ1, . . . , σN ) ∈ {0, 1}N , there exists f ∈ F such that f(yi) ≥ ri if and only if σi = 1. Function classes with bounded pseudo-dimension satisfy the following uniform convergence guarantee [3, 34]. Let [−κ, κ] be the range of the functions in F , let NF (ε, δ) = O(κ 2 ε2 (Pdim(F) + ln( 1 δ ))), and let N ≥ NF (ε, δ). For all distributionsD on Y , with probability 1−δ over the draw of y1, . . . , yN ∼ D, for every function f ∈ F , the average value of f over the samples is within ε of its expected value: | 1N ∑N i=1 f(yi)− Ey∼D[f(y)]| ≤ ε. The quantity NF (ε, δ) is the sample complexity of F .
We use the notation ‖A‖1,1 to denote the sum of the absolute values of all the entries in A.
3 Learning Chvátal-Gomory cuts
In this section we bound the sample complexity of learning CG cuts at the root node of the B&C search tree. In many IP settings, similar IPs are being solved and there can be good cuts that carry across instances—for example, in applications where the constraints stay the same or roughly the same across instances,2 and only the objective changes. One high-stakes example of this is the feasibility checking problem in the billion-dollar incentive auction for radio spectrum, where prices change but the radiowave interference constraints do not change.
We warm up by analyzing the case where a single CG cut is added at the root (Section 3.1), and then build on this analysis to handle w sequential waves of k simultaneous CG cuts (Section 3.3). This means that all k cuts in the first wave are added simultaneously, the new (larger) LP relaxation is solved, all k cuts in the second wave are added to the new problem simultaneously, and so on. B&C adds cuts in waves because otherwise, the angles between cuts would become obtuse, leading to numerical instability. Moreover, many commercial IP solvers only add cuts at the root because those cuts can be leveraged throughout the tree. However, in Section 5, we also provide guarantees for applying cuts throughout the tree. In this section, we assume that all aspects of B&C (such as node selection and variable selection) are fixed except for the cuts applied at the root of the search tree.
3.1 Learning a single cut
To provide sample complexity bounds, as per Section 2.2, we bound the pseudo-dimension of the set of functions fu for u ∈ [0, 1]m, where fu(c, A, b) is the size of the tree B&C builds when it applies the CG cut defined by u at the root. To do so, we take advantage of structure exhibited by the class of dual functions, each of which is defined by a fixed IP (c, A, b) and measures tree size as
2We assume that constraints are generated in the same order across instances; see Appendix B for a discussion.
a function of the parameters u. In other words, each dual function f∗c,A,b : [0, 1] m → R is defined as f∗c,A,b(u) = fu(c, A, b). Our main result in this section is a proof that the dual functions are well-structured (Lemma 3.2), which then allows us to apply a result by Balcan et al. [8] to bound Pdim({fu : u ∈ [0, 1]m}) (Theorem 3.3). Proving that the dual functions are well-structured is challenging because they are volatile: slightly perturbing u can cause the tree size to shift from constant to exponential in n, as we prove in the following theorem. The full proof is in Appendix C.
Theorem 3.1. For any integer n, there exists an integer program (c, A, b) with two constraints and n variables such that if 12 ≤ u[1]− u[2] < n+1 2n , then applying the CG cut defined by u at the root causes B&C to terminate immediately. Meanwhile, if n+12n ≤ u[1]− u[2] < 1, then applying the CG cut defined by u at the root causes B&C to build a tree of size at least 2(n−1)/2.
Proof sketch. Without loss of generality, assume that n is odd. Consider an IP with constraints 2(x[1] + · · · + x[n]) ≤ n, −2(x[1] + · · · + x[n]) ≤ −n, x ∈ {0, 1}n, and any objective. This IP is infeasible because n is odd. Jeroslow [22] proved that without the use of cutting planes or heuristics, B&C will build a tree of size 2(n−1)/2 before it terminates. We prove that when 1 2 ≤ u[1]− u[2] < n+1 2n , the CG cut halfspace defined by u = (u[1], u[2]) has an empty intersection with the feasible region of the IP, causing B&C to terminate immediately. On the other hand, we show that if n+12n ≤ u[1]− u[2] < 1, then the CG cut halfspace defined by u contains the feasible region of the IP, and thus leaves the feasible region unchanged. In this case, due to Jeroslow [22], applying this CG cut at the root will cause B&C to build a tree of size at least 2(n−1)/2 before it terminates.
This theorem shows that the dual tree-size functions can be extremely sensitive to perturbations in the CG cut parameters. However, we are able to prove that the dual functions are piecewise-constant.
Lemma 3.2. For any IP (c, A, b), there areO(‖A‖1,1 +‖b‖1 +n) hyperplanes that partition [0, 1]m into regions where in any one region R, the dual function f∗c,A,b(u) is constant for all u ∈ R.
Proof. Let a1, . . . ,an ∈ Rm be the columns of A. Let Ai = ‖ai‖1 and B = ‖b‖1, so for any u ∈ [0, 1]m, ⌊ uTai ⌋ ∈ [−Ai, Ai] and ⌊ uT b ⌋ ∈ [−B,B]. For each integer ki ∈ [−Ai, Ai], we
have ⌊ uTai ⌋ = ki ⇐⇒ ki ≤ uTai < ki + 1. There are ∑n i=1 2Ai + 1 = O(‖A‖1,1 + n) such halfspaces, plus an additional 2B + 1 halfspaces of the form kn+1 ≤ uT b < kn+1 + 1 for each kn+1 ∈ {−B, . . . , B}. In any region R defined by the intersection of these halfspaces, the vector (buTa1c, . . . , buTanc, buT bc) is constant for all u ∈ R, and thus so is the resulting cut.
Combined with the main result of Balcan et al. [8], this lemma implies the following bound.
Theorem 3.3. Let Fα,β denote the set of all functions fu for u ∈ [0, 1]m defined on the domain of IPs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(m log(m(α+ β + n))).
This theorem implies that Õ(κ2m/ε2) samples are sufficient to ensure that with high probability, for every CG cut, the average size of the tree B&C builds upon applying the cutting plane is within of the expected size of the tree it builds (the Õ notation suppresses logarithmic terms).
3.2 Learning a sequence of cuts
We now determine the sample complexity of making w sequential CG cuts at the root. The first cut is defined by m parameters u1 ∈ [0, 1]m for each of the m constraints. Its application leads to the addition of the row buT1 Acx ≤ buT1 bc to the constraint matrix. The next cut is then be defined by m+ 1 parameters u2 ∈ [0, 1]m+1 since there are now m+ 1 constraints. Continuing in this fashion, the wth cut is be defined by m+w− 1 parameters uw ∈ [0, 1]m+w−1. Let fu1,...,uw(c, A, b) be the size of the tree B&C builds when it applies the CG cut defined by u1, then applies the CG cut defined by u2 to the new IP, and so on, all at the root of the search tree.
As in Section 3.1, we bound the pseudo-dimension of the functions fu1,...,uw by analyzing the structure of the dual functions f∗c,A,b, which measure tree size as a function of the parameters u1, . . . ,uw. Specifically, f∗c,A,b : [0, 1]
m × · · · × [0, 1]m+w−1 → R, where f∗c,A,b(u1, . . . ,uw) = fu1,...,uw(c, A, b). The analysis in this section is more complex because the s th cut (with s ∈
{2, . . . ,W}) depends not only on the parameters us but also on u1, . . . ,us−1. We prove that the dual functions are again piecewise-constant, but in this case, the boundaries between pieces are defined by multivariate polynomials rather than hyperplanes. The full proof is in Appendix C. Lemma 3.4. For any IP (c, A, b), there are O(w2w ‖A‖1,1 + 2w ‖b‖1 + nw) multivariate polynomials in ≤ w2 +mw variables of degree ≤ w that partition [0, 1]m× · · · × [0, 1]m+w−1 into regions where in any one region R, f∗c,A,b(u1, . . . ,uw) is constant for all (u1, . . . ,uw) ∈ R.
Proof sketch. Let a1, . . . ,an ∈ Rm be the columns of A. For u1 ∈ [0, 1]m, . . . ,uw ∈ [0, 1]m+w−1, define ã1i ∈ [0, 1]m, . . . , ãwi ∈ [0, 1]m+w−1 for each i ∈ [n] such that ãsi is the ith column of the constraint matrix after applying cuts u1, . . . ,us−1. Similarly, define b̃s to be the constraint vector after applying the first s− 1 cuts. More precisely, we have the recurrence relation
ã1i = ai b̃ 1 = b
ãsi =
[ ãs−1i
uTs−1ã s−1 i
] b̃s = [ b̃s−1
uTs−1b̃ s−1 ] for s = 2, . . . ,W . We prove that ⌊ uTs ã s i ⌋ ∈ [−2s−1 ‖ai‖1 , 2s−1 ‖ai‖1]. For each integer ki in this
interval, ⌊ uTs ã s i ⌋ = ki ⇐⇒ ki ≤ uTs ãsi < ki + 1. The boundaries of these surfaces are defined by polynomials over us in ≤ ms+ s2 variables with degree ≤ s. Counting the total number of such hypersurfaces yields the lemma statement.
We now use this structure to provide a pseudo-dimension bound. The full proof is in Appendix C. Theorem 3.5. Let Fα,β denote the set of all functions fu1,...,uw for u1 ∈ [0, 1]m, . . . ,uw ∈ [0, 1]m+w−1 defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mw2 log(mw(α+ β + n))).
Proof sketch. The space of 0/1 classifiers induced by the set of degree ≤ w multivariate polynomials in w2 + mw variables has VC dimension O((w2 + mw) logw) [3]. However, we more carefully examine the structure of the polynomials considered in Lemma 3.4 to give an improved VC dimension bound of 1 +mw. For each j = 1, . . . ,m define ũ1[j], . . . , ũw[j] recursively as
ũ1[j] = u1[j]
ũs[j] = us[j] + s−1∑ `=1 us[m+ `]ũ`[j] for s = 2, . . . , w
The space of polynomials induced by the sth cut is contained in span{1, ũs[1], . . . , ũs[m]}. The intuition for this is as follows: consider the additional term added by the sth cut to the constraint matrix, that is, uTs ã s i . The first m coordinates (us[1], . . . ,us[m]) interact only with ai—so us[j] collects a coefficient of ai[j]. Each subsequent coordinate us[m+ `] interacts with all coordinates of ãsi arising from the first ` cuts. The term that collects a coefficient of ai[j] is precisely us[m+ `] times the sum of all terms from the first ` cuts with a coefficient of ai[j]. Using standard facts about the VC dimension of vector spaces and their duals in conjunction with Lemma 3.4 and the framework of Balcan et al. [8] yields the theorem statement.
The sample complexity (defined in Section 2.2) of learning W sequential cuts is thus Õ(κ2mw2/ 2).
3.3 Learning waves of simultaneous cuts
We now determine the sample complexity of making w sequential waves of cuts at the root, each wave consisting of k simultaneous CG cuts. Given vectors u11, . . . ,u k 1 ∈ [0, 1]m,u12, . . . ,uk2 ∈ [0, 1]m+k, . . . ,u1w, . . . ,u k w ∈ [0, 1]m+k(w−1), let fu11,...,uk1 ,...,u1w,...,ukw(c, A, b) be the size of the tree B&C builds when it applies the CG cuts defined by u11, . . . ,u k 1 , then applies the CG cuts defined by u12, . . . ,u k 2 to the new IP, and so on, all at the root of the search tree. The full proof of the following theorem is in Appendix C, and follows from the observation that w waves of k simultaneous cuts can be viewed as making kw sequential cuts with the restriction that cuts within each wave assign nonzero weight only to constraints from previous waves.
Theorem 3.6. Let Fα,β be the set of all functions fu11,...,uk1 ,...,u1w,...,ukw for u 1 1, . . . ,u k 1 ∈ [0, 1]m, . . . ,u1w, . . . ,u k w ∈ [0, 1]m+k(w−1) defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mk2w2 log(mkw(α+ β + n))).
This result implies that the sample complexity of learning W waves of k cuts is Õ(κ2mk2w2/ 2).
3.4 Data-dependent guarantees
So far, our guarantees have depended on the maximum possible norms of the constraint matrix and vector in the domain of IPs under consideration. The uniform convergence result in Section 2.2 for Fα,β only holds for distributions over A and b with norms bounded by α and β, respectively. In Appendix C.1, we show how to convert these into more broadly applicable data-dependent guarantees that leverage properties of the distribution over IPs. These guarantees hold without assumptions on the distribution’s support, and depend on E[maxi ‖Ai‖1,1] and E[maxi ‖bi‖1] (where the expectation is over N samples), thus giving a sharper sample complexity guarantee that is tuned to the distribution.
4 Learning cut selection policies
In Section 3, we studied the sample complexity of learning waves of specific cut parameters. In this section, we bound the sample complexity of learning cut-selection policies at the root, that is, functions that take as input an IP and output a candidate cut. Using scoring rules is a more nuanced way of choosing cuts since it allows for the cut parameters to depend on the input IP.
Formally, let Im be the set of IPs withm constraints (the number of variables is always fixed at n) and letHm be the set of all hyperplanes in Rm. A scoring rule is a function score : ∪m(Hm × Im)→ R≥0. The real value score(αTx ≤ β, (c, A, b)) is a measure of the quality of the cutting plane αTx ≤ β for the IP (c, A, b). Examples include the scoring rules discussed in Section 2.1. Suppose score1, . . . , scored are d different scoring rules. We now bound the sample complexity of learning a combination of these scoring rules that guarantee a low expected tree size. Our highlevel proof technique is the same as in the previous section: we establish that the dual tree-size functions are piecewise structured, and then apply the general framework of Balcan et al. [8] to obtain pseudo-dimension bounds. Theorem 4.1. Let C be a set of cutting-plane parameters such that for every IP (c, A, b), there is a decomposition of C into ≤ r regions such that the cuts generated by any two vectors in the same region are the same. Let score1, . . . , scored be d scoring rules. For µ ∈ Rd, let fµ(c, A, b) be the size of the tree B&C builds when it chooses a cut from C to maximize µ[1]score1(·, (c, A, b)) + · · ·+ µ[d]scored(·, (c, A, b)). Then, Pdim({fµ : µ ∈ Rd}) = O(d log(rd)).
Proof. Fix an integer program (c, A, b). Let u1, . . . ,ur ∈ C be representative cut parameters for each of the r regions. Consider the hyperplanes ∑d i=1 µ[i]scorei(us) = ∑d i=1 µ[i]scorei(ut) for each s 6= t ∈ {1, . . . , r} (suppressing the dependence on c, A, b). These O(r2) hyperplanes partition Rd into regions such that as µ varies in a given region, the cut chosen from C is invariant. The desired pseudo-dimension bound follows from the main result of Balcan et al. [8].
Theorem 4.1 can be directly instantiated with the class of CG cuts. Combining Lemma 3.2 with the basic combinatorial fact that k hyperplanes partition Rm into at most km regions, we get that the pseudo-dimension of {fµ : µ ∈ Rd} defined on IPs with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β is O(dm log(d(α + β + n))). Instantiating Theorem 4.1 with the set of all sequences of w CG cuts requires the following extension of scoring rules to sequences of cutting planes. A sequential scoring rule is a function that takes as input an IP (c, A, b) and a sequence of cutting planes h1, . . . , hw, where each cut lives in one higher dimension than the previous. It measures the quality of this sequence of cutting planes when applied one after the other to the original IP. Every scoring rule score can be naturally extended to a sequential scoring rule score defined by score(h1, . . . , hw, (c0, A0, b0)) =∑w−1 i=0 score(hi+1, (c i, Ai, bi)), where (ci, Ai, bi) is the IP after applying cuts h1, . . . , hi−1. Corollary 4.2. Let C = [0, 1]m × · · · × [0, 1]m+w−1 denote the set of possible sequences of w Chvátal-Gomory cut parameters. Let score1, . . . , scored : C × Im × · · · × Im+w−1 → R
be d sequential scoring rules and let fµ(c, A, b) be as in Theorem 4.1 for the class C. Then, Pdim({fwµ : µ ∈ Rd}) = O(dmw2 log(dw(α+ β + n))).
Proof. In Lemma 3.4 and Theorem 3.5 we showed that there are O(w2^w α + 2^w β + nw) multivariate polynomials that belong to a family of polynomials G with VCdim(G∗) ≤ 1 + mw (G∗ denotes the dual of G) that partition C into regions such that the resulting sequence of cuts is invariant in each region. By Claim 3.5 of Balcan et al. [8], the number of regions is O(w2^w α + 2^w β + nw)^{VCdim(G∗)} ≤ O(w2^w α + 2^w β + nw)^{1+mw}. The corollary then follows from Theorem 4.1.
These results bound the sample complexity of learning cut-selection policies based on scoring rules, which allow the cuts that B&C selects to depend on the input IP.
5 Sample complexity of generic tree search
In this section, we study the sample complexity of selecting high-performing parameters for generic tree-based algorithms, which are a generalization of B&C. This abstraction allows us to provide guarantees for simultaneously optimizing key aspects of tree search beyond cut selection, including node selection and branching variable selection. We also generalize the previous sections by allowing actions (such as cut selection) to be taken at any stage of the tree search—not just at the root.
Tree search algorithms take place over a series of κ rounds (analogous to the B&B tree-size cap κ in the previous sections). There is a sequence of t steps that the algorithm takes on each round. For example, in B&C, these steps include node selection, cut selection, and variable selection. The specific action the algorithm takes during each step (for example, which node to select, which cut to include, or which variable to branch on) typically depends on a scoring rule. This scoring rule weights each possible action and the algorithm performs the action with the highest weight. These actions (deterministically) transition the algorithm from one state to another. This high-level description of tree search is summarized by Algorithm 1. For each step j ∈ [t], the number of possible actions is Tj ∈ N. There is a scoring rule scorej , where scorej(k, s) ∈ R is the weight associated with the action k ∈ [Tj ] when the algorithm is in the state s.
Algorithm 1 Tree search
Input: Problem instance, t scoring rules score1, . . . , scoret, number of rounds κ.
1: s1,1 ← Initial state of algorithm
2: for each round i ∈ [κ] do
3:     for each step j ∈ [t] do
4:         Perform the action k ∈ [Tj] that maximizes scorej(si,j, k)
5:         si,j+1 ← New state of algorithm
6:     si+1,1 ← si,t+1    ▷ State at beginning of next round equals state at end of this round
Output: Incumbent solution in state sκ,t+1, if one exists.
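In code, Algorithm 1 is a pair of nested loops. The sketch below is a direct transcription; the problem-specific pieces — the available actions, the scoring rule, and the deterministic transition for each step — are hypothetical caller-supplied callables, not components taken from any particular solver.

def tree_search(initial_state, steps, kappa):
    """steps[j] = (actions_fn, score_fn, transition_fn), where actions_fn(state)
    lists the feasible actions at step j, score_fn(state, k) weights action k,
    and transition_fn(state, k) deterministically returns the new state."""
    state = initial_state                                   # s_{1,1}
    for _ in range(kappa):                                  # rounds i = 1, ..., kappa
        for actions_fn, score_fn, transition_fn in steps:   # steps j = 1, ..., t
            best = max(actions_fn(state), key=lambda k: score_fn(state, k))
            state = transition_fn(state, best)              # Steps 4-5 of Algorithm 1
    return state                                            # s_{kappa, t+1}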
There are often several scoring rules one could use, and it is not clear which to use in which scenarios. As in Section 4, we provide guarantees for learning combinations of these scoring rules for the particular application at hand. More formally, for each step j ∈ [t], rather than just a single scoring rule scorej as in Step 4, there are dj scoring rules scorej,1, . . . , scorej,dj. Given parameters µj = (µj[1], . . . , µj[dj]) ∈ Rdj, the algorithm takes the action k ∈ [Tj] that maximizes ∑_{i=1}^{dj} µj[i]scorej,i(k, s). There is a distribution D over inputs x to Algorithm 1. For example, when this framework is instantiated for branch-and-cut, x is an integer program (c, A, b). There is a utility function fµ(x) ∈ [−H,H] that measures the utility of the algorithm parameterized by µ = (µ1, . . . , µt) on input x. For example, this utility function might measure the size of the search tree that the algorithm builds. We assume that this utility function is final-state-constant:

Definition 5.1. Let µ = (µ1, . . . , µt) and µ′ = (µ′1, . . . , µ′t) be two parameter vectors. Suppose that we run Algorithm 1 on input x once using the scoring rule scorej = ∑_{i=1}^{dj} µj[i]scorej,i and once using the scoring rule scorej = ∑_{i=1}^{dj} µ′j[i]scorej,i. Suppose that on each run, we obtain the same final state sκ,t+1. The utility function is final-state-constant if fµ(x) = fµ′(x).
We provide a sample complexity bound for learning the parameters µ. The full proof is in Appendix D.

Theorem 5.2. Let d = ∑_{j=1}^t dj denote the total number of tunable parameters of tree search. Then, Pdim({fµ : µ ∈ Rd}) = O(dκ ∑_{j=1}^t log Tj + d log d).
Proof sketch. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any parameter setting from a single region, it will always take the same sequence of actions (including node, variable, and cut selection). The main subtlety is an induction argument to count these hyperplanes that depends on the current step of the tree-search algorithm.
In the context of integer programming, Theorem 5.2 not only recovers the main result of Balcan et al. [5] for learning variable selection policies, but also yields a more general bound that simultaneously incorporates cutting plane selection, variable selection, and node selection. In B&C, the first action of each round is to select a node. Since there are at most 2^{n+1} − 1 nodes, T1 ≤ 2^{n+1} − 1. The second action is to choose a cutting plane. As in Theorem 4.1, let C be a family of cutting planes such that for every IP (c, A, b), there is a decomposition of the parameter space into ≤ r regions such that the cuts generated by any two parameters in the same region are the same. So T2 ≤ r. The last action is to choose a variable to branch on at that node, so T3 = n. Applying Theorem 5.2, Pdim({fµ : µ ∈ Rd}) = O(dκn + dκ log r + d log d). Ignoring T1 and T2, thereby only learning the variable selection policy, recovers the O(dκ log n + d log d) bound of Balcan et al. [5].
6 Conclusions and future research
We provided the first provable guarantees for using machine learning to configure cutting planes and cut-selection policies. We analyzed the sample complexity of learning cutting planes from the popular family of Chvátal-Gomory (CG) cuts. We then provided sample complexity guarantees for learning parameterized cut-selection policies, which allow the branch-and-cut algorithm to adaptively apply cuts as it builds the search tree. We showed that this analysis can be generalized to simultaneously capture various key aspects of tree search beyond cut selection, such as node and variable selection.
This paper opens up a variety of questions for future research. For example, which other cut families can we learn over with low sample complexity? Section 3 focused on learning within the family of CG cuts (Sections 4 and 5 applied more generally). There are many other families, such as Gomory mixed-integer cuts and lift-and-project cuts, and a sample complexity analysis of these is an interesting direction for future research (and would call for new techniques). In addition, can we use machine learning to design improved scoring rules and heuristics for cut selection? The bounds we provide in Section 4 apply to any choice of scoring rules, no matter how simple or complex. Is it possible to obtain even better bounds by taking into account the complexity of the scoring rules? Finally, the bounds in this paper are worst case, but a great direction for future research is to develop data-dependent bounds that improve based on the structure of the input distribution.
Acknowledgements
This material is based on work supported by the National Science Foundation under grants IIS-1618714, IIS-1718457, IIS-1901403, CCF-1733556, CCF-1535967, CCF-1910321, SES-1919453, the ARO under award W911NF2010081, DARPA under cooperative agreement HR00112020003, an AWS Machine Learning Research Award, an Amazon Research Award, a Bloomberg Research Grant, and a Microsoft Research Faculty Fellowship.

1. What is the main contribution of the paper regarding branch-and-cut algorithms?
2. How does the paper relate to the broader topic of combinatorial optimization solvers?
3. Can you explain the significance of the results in terms of their potential impact on practical applications?
4. Do you have any concerns or suggestions regarding the presentation and organization of the paper?
5. How does the paper handle the relationship between the complexity of scoring functions and tree sizes?
6. Are there any relevant papers missing from the literature review section?
7. Do you think the tone of the abstract and paper should be adjusted to better reflect the actual contributions of the paper?
8. Could the paper do a better job of explaining the potential uses of the results for cutting plane algorithms?
9. Is there a distinction between "waves of cuts" and "rounds of cuts"? If so, how do they differ?

Summary Of The Paper
Combinatorial optimization solvers often add cutting planes during solving, which are additional constraints in the problem that accelerate solving. These cutting planes have a major impact on the size of the final branch-and-bound tree, and therefore on the overall solving speed, but their impact is poorly understood theoretically. At minimum, it is empirically understood that the size of the branch-and-bound tree is a complicated and sensitive function of the parameters of the cutting planes that were added during solving: a slightly different cutting plane might have been much more, or much less, effective.
In this paper the authors look at the problem of trying to learn these functions that relate the size of the branch-and-bound tree (a measure of solving performance) to the instance data (A, b, c) when cuts with given parameters u_1, ..., u_k are added to the problem. They compute the order of magnitude of the number of samples required to learn this function well, whether for a single cut, a sequence of cuts, or rounds of sequences of cuts (as are used in practice). They also compute the number of samples required for the generalized problem of predicting tree size as a function of the instance data (A, b, c) for an algorithm that would pick the cut parameters to maximize a convex combination of scores, for given algorithm hyperparameters (the convex weights). Finally, they do the same for more general algorithms that would pick an action, during solving, by maximizing a convex combination of scores, such as when doing node selection or variable selection.
Review
I think the paper is overall interesting. Although branch-and-cut is the main algorithm used by combinatorial optimization solvers, very little theory has been developed for it, mostly because it is devilishly difficult. The current paper develops some theory related to a more tangential aspect (how hard is it to learn the function mapping cuts, or scoring weights, to the tree size?), but which might be useful for cut selection. In some sense only the initial results of the paper (Theorem 3.1, Lemma 3.2, Lemma 3.4) really seem to have to do with cutting planes per se; the rest is more a consequence of generic results about learning discrete functions, like the tree size, as a function of continuous parameters. Nonetheless the results are not trivial and are certainly original.
On the criticisms side, I think there could be ways for the presentation in Section 4 to be improved.
I think the authors should really split the paragraph starting at line 298 in two, at line 301. I found the lack of break confusing: the text switches from discussing Theorem 4.1 to moving on to the next result without a breath in between.
I think the authors should explain better in Theorem 4.1 what is the main result of Balcan et al. (2021) (Theorem 3.3?) and how it gets invoked in the proof, since the setup and notation of that second paper is quite different from this one. Right now I find the current proof a bit too hand-wavey for my taste, for what is ultimately (with Theorem 5.2) the most interesting result of the paper.
My third question is perhaps a bit more philosophical, but is there a good reason why the bound of Theorem 4.1 seems to have no relationship at all with the scoring functions? One might have thought that more complex scoring functions would have led to tree sizes that are more sensitive to their input, and therefore needing more samples to learn, but your bound seems independent of their complexity. Perhaps a little high level explanation would be welcome in the text.
Otherwise, I think the literature review section could be improved. There are a lot of applied papers missing, mostly from more recent years - the list is far from up to date, both for variable selection, node selection and cut selection. For cut selection there is the recent Huang et al. (2021); for node selection there is Yilmaz and Yorke-Smith (2021). For variable selection (the most studied problem) there is quite a bit missing, probably at least 10 papers or so since 2017-2018. Conversely, I am not sure I understand the relevance of the paragraph discussing Ferber et al. (2020), except that the method uses cutting planes - this does not really have much to do with branch-and-cut?
I also think the tone is sometimes a bit too grandiose in the abstract and the paper. "In this paper we prove the first guarantees for learning high-performing cut-selection policies tailored to the instance distribution at hand using samples" makes it sound like the authors have a procedure with guarantees to produce high-quality cuts - this is not the case. The same goes for sentences like "we bound the sample complexity of learning high-performing cutting planes" - I don't think they bound the sample complexity of learning the (optimal) planes at all; they bound the sample complexity of learning a metric, the tree size. This is different in my mind. Now, I do agree that the results of the paper might be useful for deriving such procedures-with-guarantees, for example by optimizing the empirical mean performance over instances (empirical risk minimization), but this is not discussed in the paper, and there would be more work needed to get there. In fact, I think the paper does an underwhelming job in that respect of explaining why these results are interesting; better explaining how they could be useful for cutting plane algorithms would help.
Another more minor comment: in Section 3 (and elsewhere in the paper), the authors discuss "waves of cuts". I am personally much more used to hear about "rounds" of cuts, which I think (perhaps incorrectly) is the more standard term in the combinatorial optimization community. To help readers, perhaps it would be valuable to switch to this terminology? Unless there is a distinction I did not quite grasp between the two concepts.
Overall, my recommendation would be acceptance of the paper, with the criticisms I described above having been addressed. |
NIPS

Title
Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
Abstract
Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper we provide sample complexity bounds for cut-selection in branch-and-cut (B&C). Given a training set of integer programs sampled from an application-specific input distribution and a family of cut selection policies, these guarantees bound the number of samples sufficient to ensure that using any policy in the family, the size of the tree B&C builds on average over the training set is close to the expected size of the tree B&C builds. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine-tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree.
1 Introduction
Integer programming is one of the most broadly-applicable tools in computer science, used to formulate problems from operations research (such as routing, scheduling, and pricing), machine learning (such as adversarially-robust learning, MAP estimation, and clustering), and beyond. Branch-and-cut (B&C) is the most widely-used algorithm for solving integer programs (IPs). B&C is highly configurable, and with a deft configuration, it can be used to solve computationally challenging problems. Finding a good configuration, however, is a notoriously difficult problem.
We study machine learning approaches to configuring policies for selecting cutting planes, which have an enormous impact on B&C’s performance. At a high level, B&C works by recursively partitioning the IP’s feasible region, searching for the locally optimal solution within each set of the partition,
until it can verify that it has found the globally optimal solution. An IP’s feasible region is defined by a set of linear inequalities Ax ≤ b and integer constraints x ∈ Zn, where n is the number of variables. By dropping the integrality constraints, we obtain the linear programming (LP) relaxation of the IP, which can be solved efficiently. A cutting plane is a carefully-chosen linear inequality αTx ≤ β which refines the LP relaxation’s feasible region without separating any integral point. Intuitively, a well-chosen cutting plane will remove a large portion of the LP relaxation’s feasible region, speeding up the time it takes B&C to find the optimal solution to the original IP. Cutting plane selection is a crucial task, yet it is challenging because many cutting planes and cut-selection policies have tunable parameters, and the best configuration depends intimately on the application domain.
We provide the first provable guarantees for learning high-performing cutting planes and cut-selection policies, tailored to the application at hand. We model the application domain via an unknown, application-specific distribution over IPs, as is standard in the literature on using machine learning for integer programming [e.g., 21, 23, 31, 36, 43]. For example, this could be a distribution over the routing IPs that a shipping company must solve day after day. The learning algorithm’s input is a training set sampled from this distribution. The goal is to use this training set to learn cutting planes and cut-selection policies with strong future performance on problems from the same application but which are not already in the training set—or more formally, strong expected performance.
1.1 Summary of main contributions and overview of techniques
As our first main contribution, we provide sample complexity bounds of the following form: fixing a family of cutting planes, we bound the number of samples sufficient to ensure that for any sequence of cutting planes from the family, the average size of the B&C tree is close to the expected size of the B&C tree. We measure performance in terms of the size of the search tree B&C builds. Our guarantees apply to the parameterized family of Chvátal-Gomory (CG) cuts [10, 17], one of the most widely-used families of cutting planes.
The overriding challenge is that to provide guarantees, we must analyze how the tree size changes as a function of the cut parameters. This is a sensitive function—slightly shifting the parameters can cause the tree size to shift from constant to exponential in the number of variables. Our key technical insight is that as the parameters vary, the entries of the cut (i.e., the vector α and offset β of the cut αTx ≤ β) are multivariate polynomials of bounded degree. The number of terms defining the polynomials is exponential in the number of parameters, but we show that the polynomials can be embedded in a space with dimension sublinear in the number of parameters. This insight allows us to better understand tree size as a function of the parameters. We then leverage results by Balcan et al. [8] that show how to use structure exhibited by dual functions (measuring an algorithm’s performance, such as its tree size, as a function of its parameters) to derive sample complexity bounds.
Our second main contribution is a sample complexity bound for learning cut-selection policies, which allow B&C to adaptively select cuts as it solves the input IP. These cut-selection policies assign a number of real-valued scores to a set of cutting planes and then apply the cut that has the maximum weighted sum of scores. Tree size is a volatile function of these weights, though we prove that it is piecewise constant, as illustrated in Figure 1, which allows us to prove our sample complexity bound.
Finally, as our third main contribution, we provide guarantees for tuning weighted combinations of scoring rules for other aspects of tree search beyond cut selection, including node and variable selection. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any configuration from a single region, it will take the same sequence of actions. This structure allows us to prove our sample complexity bound. This is the first paper to provide guarantees for tree search configuration that apply simultaneously to multiple different aspects of the algorithm—prior research was specific to variable selection [5].
Sample complexity bounds are important because if the parameterized class of cuts or cut-selection policies that we optimize over is highly complex and the training set is too small, the learned cut or cut-selection policy might have great average empirical performance over the training set but terrible future performance. In other words, the parameter configuration procedure may overfit to the training set. The sample complexity bounds we provide are uniform-convergence: we prove that given enough samples, uniformly across all parameter settings, the difference between average and empirical performance is small. In other words, these bounds hold for any procedure one might use to optimize over the training set: manual or automated, optimal or suboptimal. No matter what
parameter setting the configuration procedure comes up with, the user can be guaranteed that so long as that parameter setting has good average empirical performance over the training set, it will also have strong future performance.
1.2 Related work
Applied research on tree search configuration. Over the past decade, a substantial literature has developed on the use of machine learning for integer programming and tree search [e.g., 2, 7, 9, 13, 19, 23–25, 29, 31–33, 35, 36, 41–43]. This has included research that improves specific aspects of B&C such as variable selection [2, 13, 24, 29, 32, 41], node selection [19, 35, 44], and heuristic scheduling [25]. These papers are applied, whereas we focus on providing theoretical guarantees.
With respect to cutting plane selection, the focus of this paper, Sandholm [36] uses machine learning techniques to customize B&C for combinatorial auction winner determination, including cutting plane selection. Tang et al. [37] and Huang et al. [20] study machine learning approaches to cutting plane selection. The former work formulates this problem as a reinforcement learning problem and shows that their approach can outperform human-designed heuristics for a variety of tasks. The latter work studies cutting plane selection in the multiple-instance-learning framework and proposes a neural-network architecture for scoring and ranking cutting planes. Meanwhile, the focus of our paper is to provide the first provable guarantees for cutting plane selection via machine learning.
Ferber et al. [15] study a problem where the IP objective vector c is unknown, but an estimate ĉ can be obtained from data. Their goal is to optimize the quality of the solutions obtained by solving the IP defined by ĉ, with respect to the true vector c. They do so by formulating the IP as a differentiable layer in a neural network. The nonconvex nature of the IP does not allow for straightforward gradient computation for the backward pass, so they obtain a continuous surrogate using cutting planes.
Provable guarantees for algorithm configuration. Gupta and Roughgarden [18] initiated the study of sample complexity bounds for algorithm configuration. In research most related to ours, Balcan et al. [5] provide sample complexity bounds for learning tree search variable selection policies (VSPs). They prove their bounds by showing that for any IP, hyperplanes partition the VSP parameter space into regions where the B&C tree size is a constant function of the parameters. The analysis in this paper requires new techniques because although we prove that the B&C tree size is a piecewise-constant function of the CG cutting plane parameters, the boundaries between pieces are far more complex than hyperplanes: they are hypersurfaces defined by multivariate polynomials.
Kleinberg et al. [26, 27] and Weisz et al. [38, 39] design configuration procedures for runtime minimization that come with theoretical guarantees. Their algorithms are designed for the case where there are a finitely-many parameter settings to choose from (although they are still able to provide guarantees for infinite parameter spaces by running their procedure on a finite sample of configurations; Balcan et al. [5, 6] analyze when discretization approaches can and cannot be gainfully employed). In contrast, our guarantees are designed for infinite parameter spaces.
2 Problem formulation
In this section we give a more detailed technical overview of branch-and-cut, as well as an overview of the tools from learning theory we use to prove sample complexity guarantees.
2.1 Branch-and-cut
We study integer programs (IPs) in canonical form, given by

    max { cTx : Ax ≤ b, x ≥ 0, x ∈ Zn },    (1)
where A ∈ Zm×n, b ∈ Zm, and c ∈ Rn. Branch-and-cut (B&C) works by recursively partitioning the input IP's feasible region, searching for the locally optimal solution within each set of the partition until it can verify that it has found the globally optimal solution. It organizes this partition as a search tree, with the input IP stored at the root. It begins by solving the LP relaxation of the input IP; we denote the solution as x∗LP ∈ Rn. If x∗LP satisfies the IP's integrality constraints (x∗LP ∈ Zn), then the procedure terminates—x∗LP is the globally optimal solution. Otherwise, it uses a variable selection policy to choose a variable x[i]. In the left child of the root, it stores the original IP with the additional constraint that x[i] ≤ ⌊x∗LP[i]⌋, and in the right child, with the additional constraint that x[i] ≥ ⌈x∗LP[i]⌉. It then uses a node selection policy to select a leaf of the tree and repeats this procedure—solving the LP relaxation and branching on a variable. B&C can fathom a node, meaning that it will stop searching along that branch, if 1) the LP relaxation satisfies the IP's integrality constraints, 2) the LP relaxation is infeasible, or 3) the objective value of the LP relaxation's solution is no better than the best integral solution found thus far. We assume there is a bound κ on the size of the tree we allow B&C to build before we terminate, as is common in prior research [5, 21, 26, 27].
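For concreteness, here is a minimal branch-and-bound skeleton for the canonical form (1). This is an illustrative sketch only, not the solver logic studied in the paper: it assumes SciPy's HiGHS LP backend and NumPy-array inputs, a bounded feasible region, depth-first node selection, most-fractional branching, and it omits cutting planes and the tree-size cap κ.

import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, tol=1e-6):
    """Maximize c @ x subject to A x <= b, x >= 0, x integral."""
    best_val, best_x = -np.inf, None
    stack = [(A.astype(float), b.astype(float))]
    while stack:
        A_cur, b_cur = stack.pop()                 # depth-first node selection
        res = linprog(-c, A_ub=A_cur, b_ub=b_cur,
                      bounds=[(0, None)] * len(c), method="highs")
        if res.status != 0 or -res.fun <= best_val:
            continue                               # fathom: LP infeasible, or bound no better
        x = res.x
        frac = np.abs(x - np.round(x))
        if frac.max() < tol:                       # integral: new incumbent
            best_val, best_x = -res.fun, np.round(x)
            continue
        i = int(frac.argmax())                     # branch on a fractional variable
        e = np.zeros(len(c))
        e[i] = 1.0
        stack.append((np.vstack([A_cur, e]), np.append(b_cur, np.floor(x[i]))))   # x[i] <= floor
        stack.append((np.vstack([A_cur, -e]), np.append(b_cur, -np.ceil(x[i]))))  # x[i] >= ceil
    return best_val, best_x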
Cutting planes are a means of ensuring that at each iteration of B&C, the solution to the LP relaxation is as close to the optimal integral solution as possible. Formally, let P = {x ∈ Rn : Ax ≤ b,x ≥ 0} denote the feasible region obtained by taking the LP relaxation of IP (1). Let PI = conv(P ∩ Zn) denote the integer hull of P . A valid cutting plane is any hyperplane αTx ≤ β such that if x is in the integer hull (x ∈ PI), then x satisfies the inequality αTx ≤ β. In other words, a valid cut does not remove any integral point from the LP relaxation’s feasible region. A valid cutting plane separates x ∈ P \ PI if it does not satisfy the inequality, or in other words, αTx > β. At any node of the search tree, B&C can add valid cutting planes that separate the optimal solution to the node’s LP relaxation, thus improving the solution estimates used to prune the search tree. However, adding too many cuts will increase the time it takes to solve the LP relaxation at each node. Therefore, solvers such as SCIP [16], the leading open-source solver, bound the number of cuts that will be applied.
A famous class of cutting planes is the family of Chvátal-Gomory (CG) cuts¹ [10, 17], which are parameterized by vectors u ∈ Rm. The CG cut defined by u ∈ Rm is the hyperplane ⌊uTA⌋x ≤ ⌊uT b⌋, which is guaranteed to be valid. Throughout this paper we primarily restrict our attention to u ∈ [0, 1)m. This is without loss of generality, since the facets of P ∩ {x ∈ Rn : ⌊uTA⌋x ≤ ⌊uT b⌋ ∀u ∈ Rm} can be described by the finitely many u ∈ [0, 1)m such that uTA ∈ Zn (Lemma 5.13 of Conforti et al. [11]).
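The map from multipliers to cuts is one line of linear algebra. The sketch below generates the CG cut ⌊uTA⌋x ≤ ⌊uT b⌋ for a hypothetical two-constraint instance; the numbers are made up for illustration.

import numpy as np

def cg_cut(A, b, u):
    """Return (alpha, beta) for the Chvátal-Gomory cut floor(u^T A) x <= floor(u^T b)."""
    alpha = np.floor(u @ A)   # componentwise floor of u^T A
    beta = np.floor(u @ b)    # floor of u^T b
    return alpha, beta

A = np.array([[2.0, 1.0, 3.0],
              [1.0, 2.0, 1.0]])
b = np.array([5.0, 4.0])
u = np.array([0.5, 0.25])
alpha, beta = cg_cut(A, b, u)
print(alpha, beta)  # [1. 1. 1.] 3.0, i.e., the cut x1 + x2 + x3 <= 3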
Some IP solvers such as SCIP use scoring rules to select among cutting planes, which are meant to measure the quality of a cut. Some commonly-used scoring rules include efficacy [4] (score1), objective parallelism [1] (score2), directed cutoff distance [16] (score3), and integral support [40] (score4) (defined in Appendix A). Efficacy measures the distance between the cut αTx ≤ β and x∗LP: score1(αTx ≤ β) = (αTx∗LP − β)/‖α‖2, as illustrated in Figure 2a. Objective parallelism measures the angle between the objective c and the cut's normal vector α: score2(αTx ≤ β) = |cTα|/(‖α‖2‖c‖2), as illustrated in Figures 2b and 2c. Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule. Specifically, let x̄ be the incumbent solution, which is the best-known feasible solution to the input IP. The directed cutoff distance is the distance between the hyperplane (α, β) and the current LP solution x∗LP along the direction of the incumbent x̄, as illustrated in Figures 2d and 2e: score3(αTx ≤ β) = ‖x̄ − x∗LP‖2 · (αTx∗LP − β)/|αT(x̄ − x∗LP)|. SCIP uses the scoring rule (3/5)score1 + (1/10)score2 + (1/2)score3 + (1/10)score4 [16].
¹The set of CG cuts is equivalent to the set of Gomory (fractional) cuts [12], another commonly studied family of cutting planes with a slightly different parameterization.
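As an illustration, the first three scoring rules above translate directly into code. This sketch assumes the cut (α, β), the LP optimum, the incumbent, and the objective are given as NumPy arrays; integral support (score4) is stubbed out, since its definition is deferred to Appendix A.

import numpy as np

def efficacy(alpha, beta, x_lp):
    # Distance between the cut and the LP optimum.
    return (alpha @ x_lp - beta) / np.linalg.norm(alpha)

def objective_parallelism(alpha, c):
    # Cosine of the angle between the objective and the cut's normal vector.
    return abs(c @ alpha) / (np.linalg.norm(alpha) * np.linalg.norm(c))

def directed_cutoff(alpha, beta, x_lp, x_inc):
    # Distance from x_lp to the cut along the direction of the incumbent x_inc.
    d = x_inc - x_lp
    return np.linalg.norm(d) * (alpha @ x_lp - beta) / abs(alpha @ d)

def scip_style_score(alpha, beta, x_lp, x_inc, c, integral_support=0.0):
    # The weighted combination reported for SCIP: (3/5, 1/10, 1/2, 1/10);
    # integral_support is a placeholder argument here.
    return (0.6 * efficacy(alpha, beta, x_lp)
            + 0.1 * objective_parallelism(alpha, c)
            + 0.5 * directed_cutoff(alpha, beta, x_lp, x_inc)
            + 0.1 * integral_support)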
2.2 Learning theory background and notation
The goal of this paper is to learn cut-selection policies using samples in order to guarantee, with high probability, that B&C builds a small tree in expectation on unseen IPs. To this end, we rely on the notion of pseudo-dimension [34], a well-known measure of a function class's intrinsic complexity. The pseudo-dimension of a function class F ⊆ RY, denoted Pdim(F), is the largest integer N for which there exist N inputs y1, . . . , yN ∈ Y and N thresholds r1, . . . , rN ∈ R such that for every (σ1, . . . , σN) ∈ {0, 1}N, there exists f ∈ F such that f(yi) ≥ ri if and only if σi = 1. Function classes with bounded pseudo-dimension satisfy the following uniform convergence guarantee [3, 34]. Let [−κ, κ] be the range of the functions in F, let NF(ε, δ) = O((κ²/ε²)(Pdim(F) + ln(1/δ))), and let N ≥ NF(ε, δ). For all distributions D on Y, with probability 1 − δ over the draw of y1, . . . , yN ∼ D, for every function f ∈ F, the average value of f over the samples is within ε of its expected value: |(1/N)∑_{i=1}^N f(yi) − E_{y∼D}[f(y)]| ≤ ε. The quantity NF(ε, δ) is the sample complexity of F.
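To get a feel for the guarantee, the sample complexity can be evaluated numerically once a pseudo-dimension bound is known. Since the constant hidden by the O(·) is not specified, the sketch below treats it as an explicit placeholder, and the numbers plugged in are arbitrary illustrative choices.

import math

def sample_complexity(pdim, kappa, eps, delta, hidden_const=1.0):
    # N_F(eps, delta) = O((kappa^2 / eps^2) * (Pdim(F) + ln(1/delta))); the
    # constant inside the O(.) is unspecified, hence the placeholder.
    return math.ceil(hidden_const * (kappa / eps) ** 2
                     * (pdim + math.log(1.0 / delta)))

# Hypothetical numbers: tree-size cap kappa = 1000, accuracy eps = 50 nodes.
print(sample_complexity(pdim=100, kappa=1000, eps=50, delta=0.01))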
We use the notation ‖A‖1,1 to denote the sum of the absolute values of all the entries in A.
3 Learning Chvátal-Gomory cuts
In this section we bound the sample complexity of learning CG cuts at the root node of the B&C search tree. In many IP settings, similar IPs are being solved and there can be good cuts that carry across instances—for example, in applications where the constraints stay the same or roughly the same across instances,² and only the objective changes. One high-stakes example of this is the feasibility checking problem in the billion-dollar incentive auction for radio spectrum, where prices change but the radiowave interference constraints do not change.
We warm up by analyzing the case where a single CG cut is added at the root (Section 3.1), and then build on this analysis to handle w sequential waves of k simultaneous CG cuts (Section 3.3). This means that all k cuts in the first wave are added simultaneously, the new (larger) LP relaxation is solved, all k cuts in the second wave are added to the new problem simultaneously, and so on. B&C adds cuts in waves because otherwise, the angles between cuts would become obtuse, leading to numerical instability. Moreover, many commercial IP solvers only add cuts at the root because those cuts can be leveraged throughout the tree. However, in Section 5, we also provide guarantees for applying cuts throughout the tree. In this section, we assume that all aspects of B&C (such as node selection and variable selection) are fixed except for the cuts applied at the root of the search tree.
3.1 Learning a single cut
To provide sample complexity bounds, as per Section 2.2, we bound the pseudo-dimension of the set of functions fu for u ∈ [0, 1]m, where fu(c, A, b) is the size of the tree B&C builds when it applies the CG cut defined by u at the root. To do so, we take advantage of structure exhibited by the class of dual functions, each of which is defined by a fixed IP (c, A, b) and measures tree size as a function of the parameters u. In other words, each dual function f∗c,A,b : [0, 1]m → R is defined as f∗c,A,b(u) = fu(c, A, b). Our main result in this section is a proof that the dual functions are well-structured (Lemma 3.2), which then allows us to apply a result by Balcan et al. [8] to bound Pdim({fu : u ∈ [0, 1]m}) (Theorem 3.3). Proving that the dual functions are well-structured is challenging because they are volatile: slightly perturbing u can cause the tree size to shift from constant to exponential in n, as we prove in the following theorem. The full proof is in Appendix C.

²We assume that constraints are generated in the same order across instances; see Appendix B for a discussion.
Theorem 3.1. For any integer n, there exists an integer program (c, A, b) with two constraints and n variables such that if 1/2 ≤ u[1] − u[2] < (n+1)/(2n), then applying the CG cut defined by u at the root causes B&C to terminate immediately. Meanwhile, if (n+1)/(2n) ≤ u[1] − u[2] < 1, then applying the CG cut defined by u at the root causes B&C to build a tree of size at least 2^{(n−1)/2}.
Proof sketch. Without loss of generality, assume that n is odd. Consider an IP with constraints 2(x[1] + · · · + x[n]) ≤ n, −2(x[1] + · · · + x[n]) ≤ −n, x ∈ {0, 1}n, and any objective. This IP is infeasible because n is odd. Jeroslow [22] proved that without the use of cutting planes or heuristics, B&C will build a tree of size 2^{(n−1)/2} before it terminates. We prove that when 1/2 ≤ u[1] − u[2] < (n+1)/(2n), the CG cut halfspace defined by u = (u[1], u[2]) has an empty intersection with the feasible region of the IP, causing B&C to terminate immediately. On the other hand, we show that if (n+1)/(2n) ≤ u[1] − u[2] < 1, then the CG cut halfspace defined by u contains the feasible region of the IP, and thus leaves the feasible region unchanged. In this case, due to Jeroslow [22], applying this CG cut at the root will cause B&C to build a tree of size at least 2^{(n−1)/2} before it terminates.
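The two regimes can be checked numerically. In the sketch below (n and the gap values are arbitrary illustrative choices), the LP relaxation of the instance forces ∑ x[i] = n/2, and the CG cut comes out as ∑ x[i] ≤ β, so the cut empties the region exactly when β < n/2.

import numpy as np

def jeroslow_cg_cut(n, u):
    """CG cut for the instance 2*sum(x) <= n, -2*sum(x) <= -n (n odd, infeasible)."""
    A = np.vstack([2.0 * np.ones(n), -2.0 * np.ones(n)])
    b = np.array([float(n), float(-n)])
    return np.floor(u @ A), np.floor(u @ b)

n = 7                                    # (n + 1) / (2n) = 4/7 here
for gap in (0.52, 0.60):                 # u[1] - u[2] in each regime of Theorem 3.1
    alpha, beta = jeroslow_cg_cut(n, np.array([gap, 0.0]))  # alpha is all ones
    verdict = "empties the feasible region" if beta < n / 2 else "leaves it unchanged"
    print(f"u[1]-u[2] = {gap}: cut sum(x) <= {beta:g} {verdict}")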
This theorem shows that the dual tree-size functions can be extremely sensitive to perturbations in the CG cut parameters. However, we are able to prove that the dual functions are piecewise-constant.
Lemma 3.2. For any IP (c, A, b), there are O(‖A‖1,1 + ‖b‖1 + n) hyperplanes that partition [0, 1]m into regions where in any one region R, the dual function f∗c,A,b(u) is constant for all u ∈ R.
Proof. Let a1, . . . , an ∈ Rm be the columns of A. Let Ai = ‖ai‖1 and B = ‖b‖1, so for any u ∈ [0, 1]m, ⌊uTai⌋ ∈ [−Ai, Ai] and ⌊uT b⌋ ∈ [−B, B]. For each integer ki ∈ [−Ai, Ai], we have ⌊uTai⌋ = ki ⇐⇒ ki ≤ uTai < ki + 1. There are ∑_{i=1}^n (2Ai + 1) = O(‖A‖1,1 + n) such halfspaces, plus an additional 2B + 1 halfspaces of the form kn+1 ≤ uT b < kn+1 + 1 for each kn+1 ∈ {−B, . . . , B}. In any region R defined by the intersection of these halfspaces, the vector (⌊uTa1⌋, . . . , ⌊uTan⌋, ⌊uT b⌋) is constant for all u ∈ R, and thus so is the resulting cut.
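The invariance is easy to see in code: the cut depends on u only through the integer vector of floors from the proof. The instance below is hypothetical; the two nearby multipliers land in the same region and hence yield the same cut.

import numpy as np

def region_signature(A, b, u):
    # (floor(u^T a_1), ..., floor(u^T a_n), floor(u^T b)): the CG cut generated
    # by u is a function of this vector alone.
    return tuple(np.floor(u @ A).astype(int)) + (int(np.floor(u @ b)),)

A = np.array([[3.0, -1.0], [2.0, 4.0]])
b = np.array([6.0, 5.0])
print(region_signature(A, b, np.array([0.30, 0.20])))  # (1, 0, 2)
print(region_signature(A, b, np.array([0.31, 0.20])))  # (1, 0, 2): same region, same cut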
Combined with the main result of Balcan et al. [8], this lemma implies the following bound.
Theorem 3.3. Let Fα,β denote the set of all functions fu for u ∈ [0, 1]m defined on the domain of IPs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(m log(m(α+ β + n))).
This theorem implies that Õ(κ²m/ε²) samples are sufficient to ensure that with high probability, for every CG cut, the average size of the tree B&C builds upon applying the cutting plane is within ε of the expected size of the tree it builds (the Õ notation suppresses logarithmic terms).
3.2 Learning a sequence of cuts
We now determine the sample complexity of making w sequential CG cuts at the root. The first cut is defined by m parameters u1 ∈ [0, 1]m for each of the m constraints. Its application leads to the addition of the row ⌊uT1 A⌋x ≤ ⌊uT1 b⌋ to the constraint matrix. The next cut is then defined by m + 1 parameters u2 ∈ [0, 1]m+1 since there are now m + 1 constraints. Continuing in this fashion, the wth cut is defined by m + w − 1 parameters uw ∈ [0, 1]m+w−1. Let fu1,...,uw(c, A, b) be the size of the tree B&C builds when it applies the CG cut defined by u1, then applies the CG cut defined by u2 to the new IP, and so on, all at the root of the search tree.
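The growth of the parameter space is easiest to see by applying the cuts. This sketch (with made-up data) adds w sequential CG cuts, each defined on one more constraint than the last.

import numpy as np

def apply_sequential_cg_cuts(A, b, parameter_seq):
    """parameter_seq[s] must have length m + s, matching the growing constraint set."""
    A, b = A.astype(float), b.astype(float)
    for u in parameter_seq:
        assert len(u) == A.shape[0], "u_s needs one entry per current constraint"
        A = np.vstack([A, np.floor(u @ A)])      # new row: floor(u^T A)
        b = np.append(b, np.floor(u @ b))        # new right-hand side: floor(u^T b)
    return A, b

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
A2, b2 = apply_sequential_cg_cuts(A, b, [np.array([0.5, 0.5]),        # u_1: m = 2 parameters
                                         np.array([0.2, 0.1, 0.9])])  # u_2: m + 1 = 3 parameters
print(A2, b2)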
As in Section 3.1, we bound the pseudo-dimension of the functions fu1,...,uw by analyzing the structure of the dual functions f∗c,A,b, which measure tree size as a function of the parameters u1, . . . , uw. Specifically, f∗c,A,b : [0, 1]m × · · · × [0, 1]m+w−1 → R, where f∗c,A,b(u1, . . . , uw) = fu1,...,uw(c, A, b). The analysis in this section is more complex because the sth cut (with s ∈ {2, . . . , w}) depends not only on the parameters us but also on u1, . . . , us−1. We prove that the dual functions are again piecewise-constant, but in this case, the boundaries between pieces are defined by multivariate polynomials rather than hyperplanes. The full proof is in Appendix C.

Lemma 3.4. For any IP (c, A, b), there are O(w2^w‖A‖1,1 + 2^w‖b‖1 + nw) multivariate polynomials in ≤ w² + mw variables of degree ≤ w that partition [0, 1]m × · · · × [0, 1]m+w−1 into regions where in any one region R, f∗c,A,b(u1, . . . , uw) is constant for all (u1, . . . , uw) ∈ R.
Proof sketch. Let a_1, . . . , a_n ∈ Rm be the columns of A. For u_1 ∈ [0, 1]m, . . . , u_w ∈ [0, 1]m+w−1, define ã_i^1 ∈ Rm, . . . , ã_i^w ∈ Rm+w−1 for each i ∈ [n] such that ã_i^s is the ith column of the constraint matrix after applying cuts u_1, . . . , u_{s−1}. Similarly, define b̃^s to be the constraint vector after applying the first s − 1 cuts. More precisely, we have the recurrence relation

    ã_i^1 = a_i,    b̃^1 = b,
    ã_i^s = [ ã_i^{s−1} ; u_{s−1}^T ã_i^{s−1} ],    b̃^s = [ b̃^{s−1} ; u_{s−1}^T b̃^{s−1} ]    for s = 2, . . . , w,

where [v ; t] denotes the vector v with the scalar t appended. We prove that ⌊u_s^T ã_i^s⌋ ∈ [−2^{s−1}‖a_i‖_1, 2^{s−1}‖a_i‖_1]. For each integer k_i in this interval, ⌊u_s^T ã_i^s⌋ = k_i ⇐⇒ k_i ≤ u_s^T ã_i^s < k_i + 1. The boundaries of these surfaces are defined by polynomials over u_s in ≤ ms + s² variables with degree ≤ s. Counting the total number of such hypersurfaces yields the lemma statement.
We now use this structure to provide a pseudo-dimension bound. The full proof is in Appendix C.

Theorem 3.5. Let Fα,β denote the set of all functions fu1,...,uw for u1 ∈ [0, 1]m, . . . , uw ∈ [0, 1]m+w−1 defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mw² log(mw(α + β + n))).
Proof sketch. The space of 0/1 classifiers induced by the set of degree ≤ w multivariate polynomials in w² + mw variables has VC dimension O((w² + mw) log w) [3]. However, we more carefully examine the structure of the polynomials considered in Lemma 3.4 to give an improved VC dimension bound of 1 + mw. For each j = 1, . . . , m, define ũ_1[j], . . . , ũ_w[j] recursively as

    ũ_1[j] = u_1[j],
    ũ_s[j] = u_s[j] + ∑_{ℓ=1}^{s−1} u_s[m + ℓ] ũ_ℓ[j]    for s = 2, . . . , w.

The space of polynomials induced by the sth cut is contained in span{1, ũ_s[1], . . . , ũ_s[m]}. The intuition for this is as follows: consider the additional term added by the sth cut to the constraint matrix, that is, u_s^T ã_i^s. The first m coordinates (u_s[1], . . . , u_s[m]) interact only with a_i—so u_s[j] collects a coefficient of a_i[j]. Each subsequent coordinate u_s[m + ℓ] interacts with all coordinates of ã_i^s arising from the first ℓ cuts. The term that collects a coefficient of a_i[j] is precisely u_s[m + ℓ] times the sum of all terms from the first ℓ cuts with a coefficient of a_i[j]. Using standard facts about the VC dimension of vector spaces and their duals in conjunction with Lemma 3.4 and the framework of Balcan et al. [8] yields the theorem statement.
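The recursion for ũ_s is a few lines of code. The sketch below (with arbitrary inputs) uses the convention that the sth parameter vector has length m + s − 1, so u_s[m + ℓ] sits at 0-based index m + ℓ − 1.

import numpy as np

def u_tilde(parameter_seq, m):
    """Return [u~_1, ..., u~_w], each a length-m vector with
    u~_s[j] = u_s[j] + sum_{l=1}^{s-1} u_s[m + l] * u~_l[j]."""
    tildes = []
    for s, u in enumerate(parameter_seq, start=1):
        t = u[:m].astype(float).copy()
        for ell in range(1, s):
            t += u[m + ell - 1] * tildes[ell - 1]
        tildes.append(t)
    return tildes

print(u_tilde([np.array([0.5, 0.5]), np.array([0.2, 0.1, 0.9])], m=2))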
The sample complexity (defined in Section 2.2) of learning w sequential cuts is thus Õ(κ²mw²/ε²).
3.3 Learning waves of simultaneous cuts
We now determine the sample complexity of making w sequential waves of cuts at the root, each wave consisting of k simultaneous CG cuts. Given vectors u_1^1, . . . , u_1^k ∈ [0, 1]m, u_2^1, . . . , u_2^k ∈ [0, 1]m+k, . . . , u_w^1, . . . , u_w^k ∈ [0, 1]m+k(w−1), let f_{u_1^1,...,u_1^k,...,u_w^1,...,u_w^k}(c, A, b) be the size of the tree B&C builds when it applies the CG cuts defined by u_1^1, . . . , u_1^k, then applies the CG cuts defined by u_2^1, . . . , u_2^k to the new IP, and so on, all at the root of the search tree. The full proof of the following theorem is in Appendix C, and follows from the observation that w waves of k simultaneous cuts can be viewed as making kw sequential cuts with the restriction that cuts within each wave assign nonzero weight only to constraints from previous waves.
Theorem 3.6. Let Fα,β be the set of all functions f_{u_1^1,...,u_1^k,...,u_w^1,...,u_w^k} for u_1^1, . . . , u_1^k ∈ [0, 1]m, . . . , u_w^1, . . . , u_w^k ∈ [0, 1]m+k(w−1) defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mk²w² log(mkw(α + β + n))).
This result implies that the sample complexity of learning w waves of k cuts is Õ(κ²mk²w²/ε²).
3.4 Data-dependent guarantees
So far, our guarantees have depended on the maximum possible norms of the constraint matrix and vector in the domain of IPs under consideration. The uniform convergence result in Section 2.2 for Fα,β only holds for distributions over A and b with norms bounded by α and β, respectively. In Appendix C.1, we show how to convert these into more broadly applicable data-dependent guarantees that leverage properties of the distribution over IPs. These guarantees hold without assumptions on the distribution’s support, and depend on E[maxi ‖Ai‖1,1] and E[maxi ‖bi‖1] (where the expectation is over N samples), thus giving a sharper sample complexity guarantee that is tuned to the distribution.
4 Learning cut selection policies
In Section 3, we studied the sample complexity of learning waves of specific cut parameters. In this section, we bound the sample complexity of learning cut-selection policies at the root, that is, functions that take as input an IP and output a candidate cut. Using scoring rules is a more nuanced way of choosing cuts since it allows for the cut parameters to depend on the input IP.
Formally, let Im be the set of IPs with m constraints (the number of variables is always fixed at n) and let Hm be the set of all hyperplanes in Rm. A scoring rule is a function score : ∪m(Hm × Im) → R≥0. The real value score(αTx ≤ β, (c, A, b)) is a measure of the quality of the cutting plane αTx ≤ β for the IP (c, A, b). Examples include the scoring rules discussed in Section 2.1. Suppose score1, . . . , scored are d different scoring rules. We now bound the sample complexity of learning a combination of these scoring rules that guarantees a low expected tree size. Our high-level proof technique is the same as in the previous section: we establish that the dual tree-size functions are piecewise structured, and then apply the general framework of Balcan et al. [8] to obtain pseudo-dimension bounds.

Theorem 4.1. Let C be a set of cutting-plane parameters such that for every IP (c, A, b), there is a decomposition of C into ≤ r regions such that the cuts generated by any two vectors in the same region are the same. Let score1, . . . , scored be d scoring rules. For µ ∈ Rd, let fµ(c, A, b) be the size of the tree B&C builds when it chooses a cut from C to maximize µ[1]score1(·, (c, A, b)) + · · · + µ[d]scored(·, (c, A, b)). Then, Pdim({fµ : µ ∈ Rd}) = O(d log(rd)).
Proof. Fix an integer program (c, A, b). Let u1, . . . , ur ∈ C be representative cut parameters for each of the r regions. Consider the hyperplanes ∑_{i=1}^d µ[i]scorei(us) = ∑_{i=1}^d µ[i]scorei(ut) for each s ≠ t ∈ {1, . . . , r} (suppressing the dependence on c, A, b). These O(r²) hyperplanes partition Rd into regions such that as µ varies in a given region, the cut chosen from C is invariant. The desired pseudo-dimension bound follows from the main result of Balcan et al. [8].
Theorem 4.1 can be directly instantiated with the class of CG cuts. Combining Lemma 3.2 with the basic combinatorial fact that k hyperplanes partition Rm into at most k^m regions, we get that the pseudo-dimension of {fµ : µ ∈ Rd} defined on IPs with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β is O(dm log(d(α + β + n))). Instantiating Theorem 4.1 with the set of all sequences of w CG cuts requires the following extension of scoring rules to sequences of cutting planes. A sequential scoring rule is a function that takes as input an IP (c, A, b) and a sequence of cutting planes h1, . . . , hw, where each cut lives in one higher dimension than the previous. It measures the quality of this sequence of cutting planes when applied one after the other to the original IP. Every scoring rule score can be naturally extended to a sequential scoring rule defined by score(h1, . . . , hw, (c^0, A^0, b^0)) = ∑_{i=0}^{w−1} score(h_{i+1}, (c^i, A^i, b^i)), where (c^i, A^i, b^i) is the IP after applying cuts h1, . . . , hi.

Corollary 4.2. Let C = [0, 1]^m × · · · × [0, 1]^{m+w−1} denote the set of possible sequences of w Chvátal-Gomory cut parameters. Let score1, . . . , scored : C × Im × · · · × I_{m+w−1} → R be d sequential scoring rules and let fµ(c, A, b) be as in Theorem 4.1 for the class C. Then, Pdim({fµ : µ ∈ Rd}) = O(dmw² log(dw(α + β + n))).
Proof. In Lemma 3.4 and Theorem 3.5 we showed that there are O(w2^w α + 2^w β + nw) multivariate polynomials that belong to a family of polynomials G with VCdim(G∗) ≤ 1 + mw (G∗ denotes the dual of G) that partition C into regions such that the resulting sequence of cuts is invariant in each region. By Claim 3.5 of Balcan et al. [8], the number of regions is O(w2^w α + 2^w β + nw)^{VCdim(G∗)} ≤ O(w2^w α + 2^w β + nw)^{1+mw}. The corollary then follows from Theorem 4.1.
These results bound the sample complexity of learning cut-selection policies based on scoring rules, which allow the cuts that B&C selects to depend on the input IP.
5 Sample complexity of generic tree search
In this section, we study the sample complexity of selecting high-performing parameters for generic tree-based algorithms, which are a generalization of B&C. This abstraction allows us to provide guarantees for simultaneously optimizing key aspects of tree search beyond cut selection, including node selection and branching variable selection. We also generalize the previous sections by allowing actions (such as cut selection) to be taken at any stage of the tree search—not just at the root.
Tree search algorithms take place over a series of κ rounds (analogous to the B&B tree-size cap κ in the previous sections). There is a sequence of t steps that the algorithm takes on each round. For example, in B&C, these steps include node selection, cut selection, and variable selection. The specific action the algorithm takes during each step (for example, which node to select, which cut to include, or which variable to branch on) typically depends on a scoring rule. This scoring rule weights each possible action and the algorithm performs the action with the highest weight. These actions (deterministically) transition the algorithm from one state to another. This high-level description of tree search is summarized by Algorithm 1. For each step j ∈ [t], the number of possible actions is Tj ∈ N. There is a scoring rule scorej , where scorej(k, s) ∈ R is the weight associated with the action k ∈ [Tj ] when the algorithm is in the state s.
Algorithm 1 Tree search
Input: Problem instance, t scoring rules score1, . . . , scoret, number of rounds κ.
1: s1,1 ← Initial state of algorithm
2: for each round i ∈ [κ] do
3:     for each step j ∈ [t] do
4:         Perform the action k ∈ [Tj] that maximizes scorej(si,j, k)
5:         si,j+1 ← New state of algorithm
6:     si+1,1 ← si,t+1    ▷ State at beginning of next round equals state at end of this round
Output: Incumbent solution in state sκ,t+1, if one exists.
There are often several scoring rules one could use, and it is not clear which to use in which scenarios. As in Section 4, we provide guarantees for learning combinations of these scoring rules for the particular application at hand. More formally, for each step j ∈ [t], rather than just a single scoring rule scorej as in Step 4, there are dj scoring rules scorej,1, . . . , scorej,dj. Given parameters µj = (µj[1], . . . , µj[dj]) ∈ Rdj, the algorithm takes the action k ∈ [Tj] that maximizes ∑_{i=1}^{dj} µj[i]scorej,i(k, s). There is a distribution D over inputs x to Algorithm 1. For example, when this framework is instantiated for branch-and-cut, x is an integer program (c, A, b). There is a utility function fµ(x) ∈ [−H,H] that measures the utility of the algorithm parameterized by µ = (µ1, . . . , µt) on input x. For example, this utility function might measure the size of the search tree that the algorithm builds. We assume that this utility function is final-state-constant:

Definition 5.1. Let µ = (µ1, . . . , µt) and µ′ = (µ′1, . . . , µ′t) be two parameter vectors. Suppose that we run Algorithm 1 on input x once using the scoring rule scorej = ∑_{i=1}^{dj} µj[i]scorej,i and once using the scoring rule scorej = ∑_{i=1}^{dj} µ′j[i]scorej,i. Suppose that on each run, we obtain the same final state sκ,t+1. The utility function is final-state-constant if fµ(x) = fµ′(x).
We provide a sample complexity bound for learning the parameters µ. The full proof is in Appendix D.

Theorem 5.2. Let d = ∑_{j=1}^t dj denote the total number of tunable parameters of tree search. Then, Pdim({fµ : µ ∈ Rd}) = O(dκ ∑_{j=1}^t log Tj + d log d).
Proof sketch. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any parameter setting from a single region, it will always take the same sequence of actions (including node, variable, and cut selection). The main subtlety is an induction argument to count these hyperplanes that depends on the current step of the tree-search algorithm.
In the context of integer programming, Theorem 5.2 not only recovers the main result of Balcan et al. [5] for learning variable selection policies, but also yields a more general bound that simultaneously incorporates cutting plane selection, variable selection, and node selection. In B&C, the first action of each round is to select a node. Since there are at most 2^{n+1} − 1 nodes, T1 ≤ 2^{n+1} − 1. The second action is to choose a cutting plane. As in Theorem 4.1, let C be a family of cutting planes such that for every IP (c, A, b), there is a decomposition of the parameter space into ≤ r regions such that the cuts generated by any two parameters in the same region are the same. So T2 ≤ r. The last action is to choose a variable to branch on at that node, so T3 = n. Applying Theorem 5.2, Pdim({fµ : µ ∈ Rd}) = O(dκn + dκ log r + d log d). Ignoring T1 and T2, thereby only learning the variable selection policy, recovers the O(dκ log n + d log d) bound of Balcan et al. [5].
6 Conclusions and future research
We provided the first provable guarantees for using machine learning to configure cutting planes and cut-selection policies. We analyzed the sample complexity of learning cutting planes from the popular family of Chvátal-Gomory (CG) cuts. We then provided sample complexity guarantees for learning parameterized cut-selection policies, which allow the branch-and-cut algorithm to adaptively apply cuts as it builds the search tree. We showed that this analysis can be generalized to simultaneously capture various key aspects of tree search beyond cut selection, such as node and variable selection.
This paper opens up a variety of questions for future research. For example, which other cut families can we learn over with low sample complexity? Section 3 focused on learning within the family of CG cuts (Sections 4 and 5 applied more generally). There are many other families, such as Gomory mixed-integer cuts and lift-and-project cuts, and a sample complexity analysis of these is an interesting direction for future research (and would call for new techniques). In addition, can we use machine learning to design improved scoring rules and heuristics for cut selection? The bounds we provide in Section 4 apply to any choice of scoring rules, no matter how simple or complex. Is it possible to obtain even better bounds by taking into account the complexity of the scoring rules? Finally, the bounds in this paper are worst case, but a great direction for future research is to develop data-dependent bounds that improve based on the structure of the input distribution.
Acknowledgements
This material is based on work supported by the National Science Foundation under grants IIS-1618714, IIS-1718457, IIS-1901403, CCF-1733556, CCF-1535967, CCF-1910321, SES-1919453, the ARO under award W911NF2010081, DARPA under cooperative agreement HR00112020003, an AWS Machine Learning Research Award, an Amazon Research Award, a Bloomberg Research Grant, and a Microsoft Research Faculty Fellowship.

1. What is the main contribution of the paper regarding sample complexity?
2. How does the paper structure the cut generation process, and what are the boundaries determined?
3. Can you explain the sample complexity bounds derived for generic tree search?
4. What is the significance of the paper's results, particularly in practical scenarios?
5. Do you have any concerns or potential mistakes in the paper, especially regarding the missing factor of n in the sample complexity bound?
6. How does the paper relate to previous works, such as Balcan et al. [8] and Baltean-Lugojan et al.?
7. What are your thoughts on the practicality of the theory presented in the paper, specifically concerning empirical risk minimization procedures?

Summary Of The Paper
The paper focuses on the sample complexity of learning to select Chvatal-Gomory cuts for integer linear programming. We assume that there is an unknown distribution that generates ILP instances. CG cuts are parametrized by a set of weights, one per constraint. How large should the set of training instances be for one to accurately estimate the "goodness" of a given parametrization? This is the main question tackled here.
Using the data-driven algorithm design framework of Balcan et al. [8], this paper shows that three flavors of the learning problem can be analyzed effectively. The main contribution is to show that there is structure to the cut generation process as its parameters vary; the space of possible cuts can be partitioned, the form of the boundaries that determine the partition is identified, and the behavior of the cut generation is constant within each region. These can be plugged into a very general PAC learning bound from Balcan et al. [8].
Additionally, the sample complexity of generic tree search is analyzed. It is shown that variable, node, and cut selection can be parameterized simultaneously, each with its own additive scoring function, and sample complexity bounds can be derived accordingly. This result generalizes a previous branching-only bound from Balcan et al. [5].
Review
Clarity: The paper is very well-written and easy to read. Proofs are either concise or sketched in the main text, which I appreciate.
Originality: The sample complexity of learning to generate CG cuts is a new problem as far as I know. Deriving the partitions of the parameter spaces do involve some careful analysis.
Quality: The paper is expertly executed. I have checked the proofs to the extent that I am capable, and identified one potential mistake which is likely inconsequential:
In line 203, you say ∑_{i=1}^n 2A_i ∈ O(‖A‖_{1,1}). I suspect this should be O(n‖A‖_{1,1}): consider the case where all column norms are equal, then summing them up gives 2n‖A‖_{1,1}. If you ignore logarithmic factors in the sample complexity bound of line 210, this missing factor of n is ignored anyways, so even if I were right about this, it is maybe not a big issue. I'd like the authors to comment on this.
Significance: There's a basic issue with the set of results in section 3. The values of the parameter vector depend on the index of the constraint, i.e., u[1] corresponds to the first constraint, etc. Now consider two identical ILP instances with the constraints permuted. The same parameter vector can potentially produce a different cut. A practical scenario for this is a graph optimization problem, say Max. Independent Set (MIS), where each linear constraint expresses independence along an edge of the graph; there is no intrinsic ordering to the edges and thus to the constraints. This is to say that unlike the scoring rules in section 4, the parameters in section 3 make little sense unless you make strong assumptions on the distribution of instances, e.g., if all instances have the exact same constraints but a random objective function, which makes the ordering argument irrelevant.
I am not 100% sure of it, but I think section 4 does not suffer from this issue, as the scoring rules take pre-generated cuts as input.
More broadly though, while the technical work involved in deriving the sample complexity bounds in this paper is solid, I don't see what the theory tells us about the practice: the sample complexity bounds often require a large number of instances (e.g., line 210, quadratic in the maximum search tree size), and there is no clear path to a practical empirical risk minimization procedure that exploits the partition of the parameter space. I would be interested in hearing what the authors think about this.
Minor:
Line 318: "which allow the cuts B&C that selects" --> "which allow the cuts that B&C selects"
This paper might be relevant to your literature review: Baltean-Lugojan, Radu, et al. Scoring positive semidefinite cutting planes for quadratic optimization via trained neural networks. Working paper, 2019, http://www.optimization-online.org/DB_HTML/2018/11/6943.html. |
NIPS | Title
Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond
Abstract
Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper we provide sample complexity bounds for cut-selection in branch-and-cut (B&C). Given a training set of integer programs sampled from an application-specific input distribution and a family of cut selection policies, these guarantees bound the number of samples sufficient to ensure that using any policy in the family, the size of the tree B&C builds on average over the training set is close to the expected size of the tree B&C builds. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree.
1 Introduction
Integer programming is one of the most broadly-applicable tools in computer science, used to formulate problems from operations research (such as routing, scheduling, and pricing), machine learning (such as adversarially-robust learning, MAP estimation, and clustering), and beyond. Branch-and-cut (B&C) is the most widely-used algorithm for solving integer programs (IPs). B&C is highly configurable, and with a deft configuration, it can be used to solve computationally challenging problems. Finding a good configuration, however, is a notoriously difficult problem.
We study machine learning approaches to configuring policies for selecting cutting planes, which have an enormous impact on B&C’s performance. At a high level, B&C works by recursively partitioning the IP’s feasible region, searching for the locally optimal solution within each set of the partition,
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
until it can verify that it has found the globally optimal solution. An IP’s feasible region is defined by a set of linear inequalities Ax ≤ b and integer constraints x ∈ Zn, where n is the number of variables. By dropping the integrality constraints, we obtain the linear programming (LP) relaxation of the IP, which can be solved efficiently. A cutting plane is a carefully-chosen linear inequality αTx ≤ β which refines the LP relaxation’s feasible region without separating any integral point. Intuitively, a well-chosen cutting plane will remove a large portion of the LP relaxation’s feasible region, speeding up the time it takes B&C to find the optimal solution to the original IP. Cutting plane selection is a crucial task, yet it is challenging because many cutting planes and cut-selection policies have tunable parameters, and the best configuration depends intimately on the application domain.
We provide the first provable guarantees for learning high-performing cutting planes and cut-selection policies, tailored to the application at hand. We model the application domain via an unknown, application-specific distribution over IPs, as is standard in the literature on using machine learning for integer programming [e.g., 21, 23, 31, 36, 43]. For example, this could be a distribution over the routing IPs that a shipping company must solve day after day. The learning algorithm’s input is a training set sampled from this distribution. The goal is to use this training set to learn cutting planes and cut-selection policies with strong future performance on problems from the same application but which are not already in the training set—or more formally, strong expected performance.
1.1 Summary of main contributions and overview of techniques
As our first main contribution, we provide sample complexity bounds of the following form: fixing a family of cutting planes, we bound the number of samples sufficient to ensure that for any sequence of cutting planes from the family, the average size of the B&C tree is close to the expected size of the B&C tree. We measure performance in terms of the size of the search tree B&C builds. Our guarantees apply to the parameterized family of Chvátal-Gomory (CG) cuts [10, 17], one of the most widely-used families of cutting planes.
The overriding challenge is that to provide guarantees, we must analyze how the tree size changes as a function of the cut parameters. This is a sensitive function—slightly shifting the parameters can cause the tree size to shift from constant to exponential in the number of variables. Our key technical insight is that as the parameters vary, the entries of the cut (i.e., the vector α and offset β of the cut αTx ≤ β) are multivariate polynomials of bounded degree. The number of terms defining the polynomials is exponential in the number of parameters, but we show that the polynomials can be embedded in a space with dimension sublinear in the number of parameters. This insight allows us to better understand tree size as a function of the parameters. We then leverage results by Balcan et al. [8] that show how to use structure exhibited by dual functions (measuring an algorithm’s performance, such as its tree size, as a function of its parameters) to derive sample complexity bounds.
Our second main contribution is a sample complexity bound for learning cut-selection policies, which allow B&C to adaptively select cuts as it solves the input IP. These cut-selection policies assign a number of real-valued scores to a set of cutting planes and then apply the cut that has the maximum weighted sum of scores. Tree size is a volatile function of these weights, though we prove that it is piecewise constant, as illustrated in Figure 1, which allows us to prove our sample complexity bound.
Finally, as our third main contribution, we provide guarantees for tuning weighted combinations of scoring rules for other aspects of tree search beyond cut selection, including node and variable selection. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any configuration from a single region, it will take the same sequence of actions. This structure allows us to prove our sample complexity bound. This is the first paper to provide guarantees for tree search configuration that apply simultaneously to multiple different aspects of the algorithm—prior research was specific to variable selection [5].
Sample complexity bounds are important because if the parameterized class of cuts or cut-selection policies that we optimize over is highly complex and the training set is too small, the learned cut or cut-selection policy might have great average empirical performance over the training set but terrible future performance. In other words, the parameter configuration procedure may overfit to the training set. The sample complexity bounds we provide are uniform-convergence: we prove that given enough samples, uniformly across all parameter settings, the difference between average and empirical performance is small. In other words, these bounds hold for any procedure one might use to optimize over the training set: manual or automated, optimal or suboptimal. No matter what
parameter setting the configuration procedure comes up with, the user can be guaranteed that so long as that parameter setting has good average empirical performance over the training set, it will also have strong future performance.
1.2 Related work
Applied research on tree search configuration. Over the past decade, a substantial literature has developed on the use of machine learning for integer programming and tree search [e.g., 2, 7, 9, 13, 19, 23–25, 29, 31–33, 35, 36, 41–43]. This has included research that improves specific aspects of B&C such as variable selection [2, 13, 24, 29, 32, 41], node selection [19, 35, 44], and heuristic scheduling [25]. These papers are applied, whereas we focus on providing theoretical guarantees.
With respect to cutting plane selection, the focus of this paper, Sandholm [36] uses machine learning techniques to customize B&C for combinatorial auction winner determination, including cutting plane selection. Tang et al. [37] and Huang et al. [20] study machine learning approaches to cutting plane selection. The former work formulates this problem as a reinforcement learning problem and shows that their approach can outperform human-designed heuristics for a variety of tasks. The latter work studies cutting plane selection in the multiple-instance-learning framework and proposes a neural-network architecture for scoring and ranking cutting planes. Meanwhile, the focus of our paper is to provide the first provable guarantees for cutting plane selection via machine learning.
Ferber et al. [15] study a problem where the IP objective vector c is unknown, but an estimate ĉ can be obtained from data. Their goal is to optimize the quality of the solutions obtained by solving the IP defined by ĉ, with respect to the true vector c. They do so by formulating the IP as a differentiable layer in a neural network. The nonconvex nature of the IP does not allow for straightforward gradient computation for the backward pass, so they obtain a continuous surrogate using cutting planes.
Provable guarantees for algorithm configuration. Gupta and Roughgarden [18] initiated the study of sample complexity bounds for algorithm configuration. In research most related to ours, Balcan et al. [5] provide sample complexity bounds for learning tree search variable selection policies (VSPs). They prove their bounds by showing that for any IP, hyperplanes partition the VSP parameter space into regions where the B&C tree size is a constant function of the parameters. The analysis in this paper requires new techniques because although we prove that the B&C tree size is a piecewiseconstant function of the CG cutting plane parameters, the boundaries between pieces are far more complex than hyperplanes: they are hypersurfaces defined by multivariate polynomials.
Kleinberg et al. [26, 27] and Weisz et al. [38, 39] design configuration procedures for runtime minimization that come with theoretical guarantees. Their algorithms are designed for the case where there are finitely many parameter settings to choose from (although they are still able to provide guarantees for infinite parameter spaces by running their procedure on a finite sample of configurations; Balcan et al. [5, 6] analyze when discretization approaches can and cannot be gainfully employed). In contrast, our guarantees are designed for infinite parameter spaces.
2 Problem formulation
In this section we give a more detailed technical overview of branch-and-cut, as well as an overview of the tools from learning theory we use to prove sample complexity guarantees.
2.1 Branch-and-cut
We study integer programs (IPs) in canonical form given by max{cTx : Ax ≤ b, x ≥ 0, x ∈ Zn}, (1)
where A ∈ Zm×n, b ∈ Zm, and c ∈ Rn. Branch-and-cut (B&C) works by recursively partitioning the input IP’s feasible region, searching for the locally optimal solution within each set of the partition until it can verify that it has found the globally optimal solution. It organizes this partition as a search tree, with the input IP stored at the root. It begins by solving the LP relaxation of the input IP; we denote the solution as x∗LP ∈ Rn. If x∗LP satisfies the IP’s integrality constraints (x∗LP ∈ Zn), then the procedure terminates—x∗LP is the globally optimal solution. Otherwise, it uses a variable selection policy to choose a variable x[i]. In the left child of the root, it stores the original IP with the additional constraint that x[i] ≤ ⌊x∗LP[i]⌋, and in the right child, with the additional constraint that x[i] ≥ ⌈x∗LP[i]⌉. It then uses a node selection policy to select a leaf of the tree and repeats this procedure—solving the LP relaxation and branching on a variable. B&C can fathom a node, meaning that it will stop searching along that branch, if 1) the LP relaxation satisfies the IP’s integrality constraints, 2) the LP relaxation is infeasible, or 3) the objective value of the LP relaxation’s solution is no better than the best integral solution found thus far. We assume there is a bound κ on the size of the tree we allow B&C to build before we terminate, as is common in prior research [5, 21, 26, 27].
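To make the branching step concrete, here is a minimal sketch of solving an LP relaxation and generating the two children; it uses scipy.optimize.linprog (which minimizes, so the objective is negated) and is an illustration under these assumptions, not any solver's actual implementation.

```python
import numpy as np
from scipy.optimize import linprog

def lp_relaxation(c, A_ub, b_ub):
    """Solve max c^T x s.t. A_ub x <= b_ub, x >= 0 with integrality dropped.
    scipy's linprog minimizes, so the objective is negated."""
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
    return res.x if res.status == 0 else None  # None signals an infeasible node

def branch_on(A_ub, b_ub, x_lp, i):
    """Children from branching on variable i: x[i] <= floor(x_lp[i]) and x[i] >= ceil(x_lp[i])."""
    row = np.zeros(len(x_lp))
    row[i] = 1.0
    left = (np.vstack([A_ub, row]), np.append(b_ub, np.floor(x_lp[i])))
    right = (np.vstack([A_ub, -row]), np.append(b_ub, -np.ceil(x_lp[i])))  # -x[i] <= -ceil(...)
    return left, right
```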
Cutting planes are a means of ensuring that at each iteration of B&C, the solution to the LP relaxation is as close to the optimal integral solution as possible. Formally, let P = {x ∈ Rn : Ax ≤ b,x ≥ 0} denote the feasible region obtained by taking the LP relaxation of IP (1). Let PI = conv(P ∩ Zn) denote the integer hull of P . A valid cutting plane is any hyperplane αTx ≤ β such that if x is in the integer hull (x ∈ PI), then x satisfies the inequality αTx ≤ β. In other words, a valid cut does not remove any integral point from the LP relaxation’s feasible region. A valid cutting plane separates x ∈ P \ PI if it does not satisfy the inequality, or in other words, αTx > β. At any node of the search tree, B&C can add valid cutting planes that separate the optimal solution to the node’s LP relaxation, thus improving the solution estimates used to prune the search tree. However, adding too many cuts will increase the time it takes to solve the LP relaxation at each node. Therefore, solvers such as SCIP [16], the leading open-source solver, bound the number of cuts that will be applied.
A famous class of cutting planes is the family of Chvátal-Gomory (CG) cuts1 [10, 17], which are parameterized by vectors u ∈ Rm. The CG cut defined by u ∈ Rm is the hyperplane ⌊uTA⌋x ≤ ⌊uT b⌋, which is guaranteed to be valid. Throughout this paper we primarily restrict our attention to u ∈ [0, 1)m. This is without loss of generality, since the facets of P ∩ {x ∈ Rn : ⌊uTA⌋x ≤ ⌊uT b⌋ ∀u ∈ Rm} can be described by the finitely many u ∈ [0, 1)m such that uTA ∈ Zn (Lemma 5.13 of Conforti et al. [11]).
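In code, generating a CG cut from a multiplier vector u is a two-line computation; the following numpy sketch is purely illustrative.

```python
import numpy as np

def chvatal_gomory_cut(A, b, u):
    """Return (alpha, beta) for the CG cut floor(u^T A) x <= floor(u^T b)."""
    alpha = np.floor(u @ A)  # componentwise floor of u^T A
    beta = np.floor(u @ b)   # floor of u^T b
    return alpha, beta

# Example with m = 2 constraints and n = 2 variables (values chosen arbitrarily).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 5.0])
alpha, beta = chvatal_gomory_cut(A, b, np.array([0.5, 0.5]))  # -> alpha = [1, 2], beta = 4
```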
Some IP solvers such as SCIP use scoring rules to select among cutting planes, which are meant to measure the quality of a cut. Some commonly-used scoring rules include efficacy [4] (score1), objective parallelism [1] (score2), directed cutoff distance [16] (score3), and integral support [40] (score4) (defined in Appendix A). Efficacy measures the distance between the cut αTx ≤ β and x∗LP: score1(αTx ≤ β) = (αTx∗LP − β)/‖α‖2, as illustrated in Figure 2a. Objective parallelism measures the angle between the objective c and the cut’s normal vector α: score2(αTx ≤ β) = |cTα|/(‖α‖2 ‖c‖2), as illustrated in Figures 2b and 2c. Directed cutoff distance measures the distance between the LP optimal solution and the cut in a more relevant direction than the efficacy scoring rule. Specifically, let x̄ be the incumbent solution, which is the best-known feasible solution to the input IP. The directed cutoff distance is the distance between the hyperplane (α, β) and the current LP solution x∗LP along the direction of the incumbent x̄, as illustrated in Figures 2d and 2e: score3(αTx ≤ β) = ‖x̄ − x∗LP‖2 · (αTx∗LP − β)/|αT(x̄ − x∗LP)|. SCIP uses the scoring rule (3/5)score1 + (1/10)score2 + (1/2)score3 + (1/10)score4 [16].
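These scoring rules translate directly into code. The sketch below implements them with numpy; the integral-support score is taken as a given input since it requires integrality information not defined here, and the function names are ours, not SCIP's.

```python
import numpy as np

def efficacy(alpha, beta, x_lp):
    # score1: distance from x_lp to the cut alpha^T x <= beta
    return (alpha @ x_lp - beta) / np.linalg.norm(alpha)

def objective_parallelism(alpha, c):
    # score2: cosine of the angle between the objective c and the cut normal alpha
    return abs(c @ alpha) / (np.linalg.norm(alpha) * np.linalg.norm(c))

def directed_cutoff_distance(alpha, beta, x_lp, x_incumbent):
    # score3: distance from x_lp to the cut along the direction of the incumbent
    d = x_incumbent - x_lp
    return np.linalg.norm(d) * (alpha @ x_lp - beta) / abs(alpha @ d)

def scip_style_score(alpha, beta, x_lp, x_incumbent, c, integral_support):
    # SCIP's weighted combination (3/5, 1/10, 1/2, 1/10), per the text above
    return (0.6 * efficacy(alpha, beta, x_lp)
            + 0.1 * objective_parallelism(alpha, c)
            + 0.5 * directed_cutoff_distance(alpha, beta, x_lp, x_incumbent)
            + 0.1 * integral_support)
```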
1The set of CG cuts is equivalent to the set of Gomory (fractional) cuts [12], another commonly studied family of cutting planes with a slightly different parameterization.
2.2 Learning theory background and notation
The goal of this paper is to learn cut-selection policies using samples in order to guarantee, with high probability, that B&C builds a small tree in expectation on unseen IPs. To this end, we rely on the notion of pseudo-dimension [34], a well-known measure of a function class’s intrinsic complexity. The pseudo-dimension of a function class F ⊆ RY, denoted Pdim(F), is the largest integer N for which there exist N inputs y1, . . . , yN ∈ Y and N thresholds r1, . . . , rN ∈ R such that for every (σ1, . . . , σN ) ∈ {0, 1}N, there exists f ∈ F such that f(yi) ≥ ri if and only if σi = 1. Function classes with bounded pseudo-dimension satisfy the following uniform convergence guarantee [3, 34]. Let [−κ, κ] be the range of the functions in F, let NF(ε, δ) = O((κ²/ε²)(Pdim(F) + ln(1/δ))), and let N ≥ NF(ε, δ). For all distributions D on Y, with probability 1 − δ over the draw of y1, . . . , yN ∼ D, for every function f ∈ F, the average value of f over the samples is within ε of its expected value: |(1/N)∑N i=1 f(yi) − Ey∼D[f(y)]| ≤ ε. The quantity NF(ε, δ) is the sample complexity of F.
We use the notation ‖A‖1,1 to denote the sum of the absolute values of all the entries in A.
3 Learning Chvátal-Gomory cuts
In this section we bound the sample complexity of learning CG cuts at the root node of the B&C search tree. In many IP settings, similar IPs are being solved and there can be good cuts that carry across instances—for example, in applications where the constraints stay the same or roughly the same across instances,2 and only the objective changes. One high-stakes example of this is the feasibility checking problem in the billion-dollar incentive auction for radio spectrum, where prices change but the radiowave interference constraints do not change.
We warm up by analyzing the case where a single CG cut is added at the root (Section 3.1), and then build on this analysis to handle w sequential waves of k simultaneous CG cuts (Section 3.3). This means that all k cuts in the first wave are added simultaneously, the new (larger) LP relaxation is solved, all k cuts in the second wave are added to the new problem simultaneously, and so on. B&C adds cuts in waves because otherwise, the angles between cuts would become obtuse, leading to numerical instability. Moreover, many commercial IP solvers only add cuts at the root because those cuts can be leveraged throughout the tree. However, in Section 5, we also provide guarantees for applying cuts throughout the tree. In this section, we assume that all aspects of B&C (such as node selection and variable selection) are fixed except for the cuts applied at the root of the search tree.
3.1 Learning a single cut
To provide sample complexity bounds, as per Section 2.2, we bound the pseudo-dimension of the set of functions fu for u ∈ [0, 1]m, where fu(c, A, b) is the size of the tree B&C builds when it applies the CG cut defined by u at the root. To do so, we take advantage of structure exhibited by the class of dual functions, each of which is defined by a fixed IP (c, A, b) and measures tree size as
2We assume that constraints are generated in the same order across instances; see Appendix B for a discussion.
a function of the parameters u. In other words, each dual function f∗c,A,b : [0, 1]m → R is defined as f∗c,A,b(u) = fu(c, A, b). Our main result in this section is a proof that the dual functions are well-structured (Lemma 3.2), which then allows us to apply a result by Balcan et al. [8] to bound Pdim({fu : u ∈ [0, 1]m}) (Theorem 3.3). Proving that the dual functions are well-structured is challenging because they are volatile: slightly perturbing u can cause the tree size to shift from constant to exponential in n, as we prove in the following theorem. The full proof is in Appendix C.
Theorem 3.1. For any integer n, there exists an integer program (c, A, b) with two constraints and n variables such that if 1/2 ≤ u[1] − u[2] < (n+1)/(2n), then applying the CG cut defined by u at the root causes B&C to terminate immediately. Meanwhile, if (n+1)/(2n) ≤ u[1] − u[2] < 1, then applying the CG cut defined by u at the root causes B&C to build a tree of size at least 2^((n−1)/2).
Proof sketch. Without loss of generality, assume that n is odd. Consider an IP with constraints 2(x[1] + · · · + x[n]) ≤ n, −2(x[1] + · · · + x[n]) ≤ −n, x ∈ {0, 1}n, and any objective. This IP is infeasible because n is odd. Jeroslow [22] proved that without the use of cutting planes or heuristics, B&C will build a tree of size 2^((n−1)/2) before it terminates. We prove that when 1/2 ≤ u[1] − u[2] < (n+1)/(2n), the CG cut halfspace defined by u = (u[1], u[2]) has an empty intersection with the feasible region of the IP, causing B&C to terminate immediately. On the other hand, we show that if (n+1)/(2n) ≤ u[1] − u[2] < 1, then the CG cut halfspace defined by u contains the feasible region of the IP, and thus leaves the feasible region unchanged. In this case, due to Jeroslow [22], applying this CG cut at the root will cause B&C to build a tree of size at least 2^((n−1)/2) before it terminates.
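The thresholds in Theorem 3.1 can be checked numerically. The sketch below instantiates Jeroslow's IP for odd n; since every column of A equals (2, −2), the cut coefficients depend only on the difference delta = u[1] − u[2].

```python
import math

def jeroslow_cg_cut(n, delta):
    """CG cut for the IP 2*sum(x) <= n, -2*sum(x) <= -n with delta = u[1]-u[2] in [1/2, 1).
    Every column of A is (2, -2), so the cut is floor(2*delta)*sum(x) <= floor(n*delta)."""
    coeff = math.floor(2 * delta)  # equals 1 for delta in [1/2, 1)
    rhs = math.floor(n * delta)
    return coeff, rhs

n = 7  # odd, so the IP is infeasible and its LP relaxation forces sum(x) = n/2
for delta in [0.50, 0.55, (n + 1) / (2 * n), 0.75, 0.99]:
    coeff, rhs = jeroslow_cg_cut(n, delta)
    # The cut sum(x) <= rhs empties the LP region exactly when rhs < n/2,
    # i.e., when 1/2 <= delta < (n+1)/(2n); otherwise it leaves the region unchanged.
    print(f"delta={delta:.3f}: cut sum(x) <= {rhs}, empties region: {rhs < n / 2}")
```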
This theorem shows that the dual tree-size functions can be extremely sensitive to perturbations in the CG cut parameters. However, we are able to prove that the dual functions are piecewise-constant.
Lemma 3.2. For any IP (c, A, b), there are O(‖A‖1,1 + ‖b‖1 + n) hyperplanes that partition [0, 1]m into regions where in any one region R, the dual function f∗c,A,b(u) is constant for all u ∈ R.
Proof. Let a1, . . . , an ∈ Rm be the columns of A. Let Ai = ‖ai‖1 and B = ‖b‖1, so for any u ∈ [0, 1]m, ⌊uTai⌋ ∈ [−Ai, Ai] and ⌊uT b⌋ ∈ [−B,B]. For each integer ki ∈ [−Ai, Ai], we have ⌊uTai⌋ = ki ⇐⇒ ki ≤ uTai < ki + 1. There are ∑n i=1(2Ai + 1) = O(‖A‖1,1 + n) such halfspaces, plus an additional 2B + 1 halfspaces of the form kn+1 ≤ uT b < kn+1 + 1 for each kn+1 ∈ {−B, . . . , B}. In any region R defined by the intersection of these halfspaces, the vector (⌊uTa1⌋, . . . , ⌊uTan⌋, ⌊uT b⌋) is constant for all u ∈ R, and thus so is the resulting cut.
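A quick empirical illustration of this partition: sample many u and record the integer signature that determines the cut. On a small hypothetical instance, the number of distinct signatures stays far below the number of samples, reflecting the piecewise-constant structure.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2, 1, 0], [1, 3, 2]])  # hypothetical instance with m = 2, n = 3
b = np.array([4, 5])

signatures = set()
for _ in range(100_000):
    u = rng.uniform(size=2)
    sig = tuple(np.floor(u @ A).astype(int).tolist()) + (int(np.floor(u @ b)),)
    signatures.add(sig)

# Each region of the hyperplane arrangement has a constant signature
# (floor(u^T a_1), ..., floor(u^T a_n), floor(u^T b)), hence a constant cut;
# the count below stays small, as Lemma 3.2 predicts.
print(f"distinct cut signatures: {len(signatures)}")
```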
Combined with the main result of Balcan et al. [8], this lemma implies the following bound.
Theorem 3.3. Let Fα,β denote the set of all functions fu for u ∈ [0, 1]m defined on the domain of IPs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(m log(m(α+ β + n))).
This theorem implies that Õ(κ²m/ε²) samples are sufficient to ensure that with high probability, for every CG cut, the average size of the tree B&C builds upon applying the cutting plane is within ε of the expected size of the tree it builds (the Õ notation suppresses logarithmic terms).
3.2 Learning a sequence of cuts
We now determine the sample complexity of making w sequential CG cuts at the root. The first cut is defined by m parameters u1 ∈ [0, 1]m, one for each of the m constraints. Its application leads to the addition of the row ⌊uT1 A⌋x ≤ ⌊uT1 b⌋ to the constraint matrix. The next cut is then defined by m + 1 parameters u2 ∈ [0, 1]m+1 since there are now m + 1 constraints. Continuing in this fashion, the wth cut is defined by m + w − 1 parameters uw ∈ [0, 1]m+w−1. Let fu1,...,uw(c, A, b) be the size of the tree B&C builds when it applies the CG cut defined by u1, then applies the CG cut defined by u2 to the new IP, and so on, all at the root of the search tree.
As in Section 3.1, we bound the pseudo-dimension of the functions fu1,...,uw by analyzing the structure of the dual functions f∗c,A,b, which measure tree size as a function of the parameters u1, . . . , uw. Specifically, f∗c,A,b : [0, 1]m × · · · × [0, 1]m+w−1 → R, where f∗c,A,b(u1, . . . , uw) = fu1,...,uw(c, A, b). The analysis in this section is more complex because the sth cut (with s ∈ {2, . . . , w}) depends not only on the parameters us but also on u1, . . . , us−1. We prove that the dual functions are again piecewise-constant, but in this case, the boundaries between pieces are defined by multivariate polynomials rather than hyperplanes. The full proof is in Appendix C.
Lemma 3.4. For any IP (c, A, b), there are O(w·2^w ‖A‖1,1 + 2^w ‖b‖1 + nw) multivariate polynomials in ≤ w² + mw variables of degree ≤ w that partition [0, 1]m × · · · × [0, 1]m+w−1 into regions where in any one region R, f∗c,A,b(u1, . . . , uw) is constant for all (u1, . . . , uw) ∈ R.
Proof sketch. Let a1, . . . , an ∈ Rm be the columns of A. For u1 ∈ [0, 1]m, . . . , uw ∈ [0, 1]m+w−1, define ã1i ∈ [0, 1]m, . . . , ãwi ∈ [0, 1]m+w−1 for each i ∈ [n] such that ãsi is the ith column of the constraint matrix after applying cuts u1, . . . , us−1. Similarly, define b̃s to be the constraint vector after applying the first s − 1 cuts. More precisely, we have the recurrence relation

ã1i = ai, b̃1 = b;
ãsi = (ãs−1i, uTs−1 ãs−1i), b̃s = (b̃s−1, uTs−1 b̃s−1) for s = 2, . . . , w,

i.e., each cut appends the new entry uTs−1 ãs−1i to the column and uTs−1 b̃s−1 to the constraint vector. We prove that ⌊uTs ãsi⌋ ∈ [−2^(s−1)‖ai‖1, 2^(s−1)‖ai‖1]. For each integer ki in this interval, ⌊uTs ãsi⌋ = ki ⇐⇒ ki ≤ uTs ãsi < ki + 1. The boundaries of these surfaces are defined by polynomials over us in ≤ ms + s² variables with degree ≤ s. Counting the total number of such hypersurfaces yields the lemma statement.
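The recurrence can be written as a short loop that appends one row per cut; the numpy sketch below is illustrative, with `us` a list of multiplier vectors of growing length.

```python
import numpy as np

def apply_sequential_cg_cuts(A, b, us):
    """Append one CG cut per multiplier vector; us[s] must have length m + s."""
    A_cur = A.astype(float)
    b_cur = b.astype(float)
    for u in us:
        assert len(u) == A_cur.shape[0], "u_s must weight all current constraints"
        A_cur = np.vstack([A_cur, np.floor(u @ A_cur)])  # new row: floor(u^T A)
        b_cur = np.append(b_cur, np.floor(u @ b_cur))    # new rhs: floor(u^T b)
    return A_cur, b_cur
```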
We now use this structure to provide a pseudo-dimension bound. The full proof is in Appendix C.
Theorem 3.5. Let Fα,β denote the set of all functions fu1,...,uw for u1 ∈ [0, 1]m, . . . , uw ∈ [0, 1]m+w−1 defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mw² log(mw(α + β + n))).
Proof sketch. The space of 0/1 classifiers induced by the set of degree ≤ w multivariate polynomials in w² + mw variables has VC dimension O((w² + mw) log w) [3]. However, we more carefully examine the structure of the polynomials considered in Lemma 3.4 to give an improved VC dimension bound of 1 + mw. For each j = 1, . . . , m define ũ1[j], . . . , ũw[j] recursively as

ũ1[j] = u1[j],
ũs[j] = us[j] + ∑s−1 ℓ=1 us[m + ℓ] ũℓ[j] for s = 2, . . . , w.

The space of polynomials induced by the sth cut is contained in span{1, ũs[1], . . . , ũs[m]}. The intuition for this is as follows: consider the additional term added by the sth cut to the constraint matrix, that is, uTs ãsi. The first m coordinates (us[1], . . . , us[m]) interact only with ai—so us[j] collects a coefficient of ai[j]. Each subsequent coordinate us[m + ℓ] interacts with all coordinates of ãsi arising from the first ℓ cuts. The term that collects a coefficient of ai[j] is precisely us[m + ℓ] times the sum of all terms from the first ℓ cuts with a coefficient of ai[j]. Using standard facts about the VC dimension of vector spaces and their duals in conjunction with Lemma 3.4 and the framework of Balcan et al. [8] yields the theorem statement.
The sample complexity (defined in Section 2.2) of learning w sequential cuts is thus Õ(κ²mw²/ε²).
3.3 Learning waves of simultaneous cuts
We now determine the sample complexity of making w sequential waves of cuts at the root, each wave consisting of k simultaneous CG cuts. Given vectors u11, . . . , uk1 ∈ [0, 1]m, u12, . . . , uk2 ∈ [0, 1]m+k, . . . , u1w, . . . , ukw ∈ [0, 1]m+k(w−1), let fu11,...,uk1,...,u1w,...,ukw(c, A, b) be the size of the tree B&C builds when it applies the CG cuts defined by u11, . . . , uk1, then applies the CG cuts defined by u12, . . . , uk2 to the new IP, and so on, all at the root of the search tree. The full proof of the following theorem is in Appendix C, and follows from the observation that w waves of k simultaneous cuts can be viewed as making kw sequential cuts with the restriction that cuts within each wave assign nonzero weight only to constraints from previous waves.
Theorem 3.6. Let Fα,β be the set of all functions fu11,...,uk1,...,u1w,...,ukw for u11, . . . , uk1 ∈ [0, 1]m, . . . , u1w, . . . , ukw ∈ [0, 1]m+k(w−1) defined on the domain of integer programs (c, A, b) with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β. Then, Pdim(Fα,β) = O(mk²w² log(mkw(α + β + n))).
This result implies that the sample complexity of learning w waves of k cuts is Õ(κ²mk²w²/ε²).
3.4 Data-dependent guarantees
So far, our guarantees have depended on the maximum possible norms of the constraint matrix and vector in the domain of IPs under consideration. The uniform convergence result in Section 2.2 for Fα,β only holds for distributions over A and b with norms bounded by α and β, respectively. In Appendix C.1, we show how to convert these into more broadly applicable data-dependent guarantees that leverage properties of the distribution over IPs. These guarantees hold without assumptions on the distribution’s support, and depend on E[maxi ‖Ai‖1,1] and E[maxi ‖bi‖1] (where the expectation is over N samples), thus giving a sharper sample complexity guarantee that is tuned to the distribution.
4 Learning cut selection policies
In Section 3, we studied the sample complexity of learning waves of specific cut parameters. In this section, we bound the sample complexity of learning cut-selection policies at the root, that is, functions that take as input an IP and output a candidate cut. Using scoring rules is a more nuanced way of choosing cuts since it allows for the cut parameters to depend on the input IP.
Formally, let Im be the set of IPs withm constraints (the number of variables is always fixed at n) and letHm be the set of all hyperplanes in Rm. A scoring rule is a function score : ∪m(Hm × Im)→ R≥0. The real value score(αTx ≤ β, (c, A, b)) is a measure of the quality of the cutting plane αTx ≤ β for the IP (c, A, b). Examples include the scoring rules discussed in Section 2.1. Suppose score1, . . . , scored are d different scoring rules. We now bound the sample complexity of learning a combination of these scoring rules that guarantee a low expected tree size. Our highlevel proof technique is the same as in the previous section: we establish that the dual tree-size functions are piecewise structured, and then apply the general framework of Balcan et al. [8] to obtain pseudo-dimension bounds. Theorem 4.1. Let C be a set of cutting-plane parameters such that for every IP (c, A, b), there is a decomposition of C into ≤ r regions such that the cuts generated by any two vectors in the same region are the same. Let score1, . . . , scored be d scoring rules. For µ ∈ Rd, let fµ(c, A, b) be the size of the tree B&C builds when it chooses a cut from C to maximize µ[1]score1(·, (c, A, b)) + · · ·+ µ[d]scored(·, (c, A, b)). Then, Pdim({fµ : µ ∈ Rd}) = O(d log(rd)).
Proof. Fix an integer program (c, A, b). Let u1, . . . , ur ∈ C be representative cut parameters for each of the r regions. Consider the hyperplanes ∑d i=1 µ[i]scorei(us) = ∑d i=1 µ[i]scorei(ut) for each s ≠ t ∈ {1, . . . , r} (suppressing the dependence on c, A, b). These O(r²) hyperplanes partition Rd into regions such that as µ varies in a given region, the cut chosen from C is invariant. The desired pseudo-dimension bound follows from the main result of Balcan et al. [8].
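Operationally, the parameterized policy in Theorem 4.1 is just an argmax over candidate cuts; a minimal sketch:

```python
def select_cut(candidate_cuts, scoring_rules, mu):
    """Pick the candidate cut maximizing the mu-weighted sum of the d scoring rules.
    As mu varies within one region of the hyperplane arrangement from the proof,
    this argmax (and hence the chosen cut) is unchanged."""
    return max(candidate_cuts,
               key=lambda cut: sum(m * rule(cut) for m, rule in zip(mu, scoring_rules)))
```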
Theorem 4.1 can be directly instantiated with the class of CG cuts. Combining Lemma 3.2 with the basic combinatorial fact that k hyperplanes partition Rm into at most k^m regions, we get that the pseudo-dimension of {fµ : µ ∈ Rd} defined on IPs with ‖A‖1,1 ≤ α and ‖b‖1 ≤ β is O(dm log(d(α + β + n))). Instantiating Theorem 4.1 with the set of all sequences of w CG cuts requires the following extension of scoring rules to sequences of cutting planes. A sequential scoring rule is a function that takes as input an IP (c, A, b) and a sequence of cutting planes h1, . . . , hw, where each cut lives in one higher dimension than the previous. It measures the quality of this sequence of cutting planes when applied one after the other to the original IP. Every scoring rule score can be naturally extended to a sequential scoring rule score defined by score(h1, . . . , hw, (c0, A0, b0)) = ∑w−1 i=0 score(hi+1, (ci, Ai, bi)), where (ci, Ai, bi) is the IP after applying cuts h1, . . . , hi−1.
Corollary 4.2. Let C = [0, 1]m × · · · × [0, 1]m+w−1 denote the set of possible sequences of w Chvátal-Gomory cut parameters. Let score1, . . . , scored : C × Im × · · · × Im+w−1 → R be d sequential scoring rules and let fµ(c, A, b) be as in Theorem 4.1 for the class C. Then, Pdim({fµ : µ ∈ Rd}) = O(dmw² log(dw(α + β + n))).
Proof. In Lemma 3.4 and Theorem 3.5 we showed that there are O(w·2^w α + 2^w β + nw) multivariate polynomials that belong to a family of polynomials G with VCdim(G∗) ≤ 1 + mw (G∗ denotes the dual of G) that partition C into regions such that the resulting sequence of cuts is invariant in each region. By Claim 3.5 by Balcan et al. [8], the number of regions is O(w·2^w α + 2^w β + nw)^VCdim(G∗) ≤ O(w·2^w α + 2^w β + nw)^(1+mw). The corollary then follows from Theorem 4.1.
These results bound the sample complexity of learning cut-selection policies based on scoring rules, which allow the cuts that B&C selects to depend on the input IP.
5 Sample complexity of generic tree search
In this section, we study the sample complexity of selecting high-performing parameters for generic tree-based algorithms, which are a generalization of B&C. This abstraction allows us to provide guarantees for simultaneously optimizing key aspects of tree search beyond cut selection, including node selection and branching variable selection. We also generalize the previous sections by allowing actions (such as cut selection) to be taken at any stage of the tree search—not just at the root.
Tree search algorithms take place over a series of κ rounds (analogous to the B&B tree-size cap κ in the previous sections). There is a sequence of t steps that the algorithm takes on each round. For example, in B&C, these steps include node selection, cut selection, and variable selection. The specific action the algorithm takes during each step (for example, which node to select, which cut to include, or which variable to branch on) typically depends on a scoring rule. This scoring rule weights each possible action and the algorithm performs the action with the highest weight. These actions (deterministically) transition the algorithm from one state to another. This high-level description of tree search is summarized by Algorithm 1. For each step j ∈ [t], the number of possible actions is Tj ∈ N. There is a scoring rule scorej , where scorej(k, s) ∈ R is the weight associated with the action k ∈ [Tj ] when the algorithm is in the state s.
Algorithm 1 Tree search
Input: Problem instance, t scoring rules score1, . . . , scoret, number of rounds κ.
1: s1,1 ← Initial state of algorithm
2: for each round i ∈ [κ] do
3:     for each step j ∈ [t] do
4:         Perform the action k ∈ [Tj ] that maximizes scorej(si,j , k)
5:         si,j+1 ← New state of algorithm
6:     si+1,1 ← si,t+1 ▷ State at beginning of next round equals state at end of this round
Output: Incumbent solution in state sκ,t+1, if one exists.
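A direct transliteration of Algorithm 1 into Python, with the action sets and state transitions supplied as callables; this is a sketch of the abstraction, not of any particular solver.

```python
def tree_search(initial_state, actions, scoring_rules, transition, num_rounds):
    """Transliteration of Algorithm 1.

    actions[j](state) lists the available actions at step j,
    scoring_rules[j](state, k) scores action k, and
    transition(state, j, k) returns the next state.
    """
    state = initial_state  # s_{1,1}
    for _ in range(num_rounds):  # rounds i = 1, ..., kappa
        for j, score_j in enumerate(scoring_rules):  # steps j = 1, ..., t
            k = max(actions[j](state), key=lambda a: score_j(state, a))
            state = transition(state, j, k)
    return state  # s_{kappa, t+1}; contains the incumbent solution, if one exists
```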
There are often several scoring rules one could use, and it is not clear which to use in which scenarios. As in Section 4, we provide guarantees for learning combinations of these scoring rules for the particular application at hand. More formally, for each step j ∈ [t], rather than just a single scoring rule scorej as in Step 4, there are dj scoring rules scorej,1, . . . , scorej,dj. Given parameters µj = (µj [1], . . . , µj [dj ]) ∈ Rdj, the algorithm takes the action k ∈ [Tj ] that maximizes ∑dj i=1 µj [i]scorej,i(k, s). There is a distribution D over inputs x to Algorithm 1. For example, when this framework is instantiated for branch-and-cut, x is an integer program (c, A, b). There is a utility function fµ(x) ∈ [−H,H] that measures the utility of the algorithm parameterized by µ = (µ1, . . . , µt) on input x. For example, this utility function might measure the size of the search tree that the algorithm builds. We assume that this utility function is final-state-constant:
Definition 5.1. Let µ = (µ1, . . . , µt) and µ′ = (µ′1, . . . , µ′t) be two parameter vectors. Suppose that we run Algorithm 1 on input x once using the scoring rule scorej = ∑dj i=1 µj [i]scorej,i and once using the scoring rule scorej = ∑dj i=1 µ′j [i]scorej,i. Suppose that on each run, we obtain the same final state sκ,t+1. The utility function is final-state-constant if fµ(x) = fµ′(x).
We provide a sample complexity bound for learning the parameters µ. The full proof is in Appendix D.
Theorem 5.2. Let d = ∑t j=1 dj denote the total number of tunable parameters of tree search. Then, Pdim({fµ : µ ∈ Rd}) = O(dκ ∑t j=1 log Tj + d log d).
Proof sketch. We prove that there is a set of hyperplanes splitting the parameter space into regions such that if tree search uses any parameter setting from a single region, it will always take the same sequence of actions (including node, variable, and cut selection). The main subtlety is an induction argument to count these hyperplanes that depends on the current step of the tree-search algorithm.
In the context of integer programming, Theorem 5.2 not only recovers the main result of Balcan et al. [5] for learning variable selection policies, but also yields a more general bound that simultaneously incorporates cutting plane selection, variable selection, and node selection. In B&C, the first action of each round is to select a node. Since there are at most 2^(n+1) − 1 nodes, T1 ≤ 2^(n+1) − 1. The second action is to choose a cutting plane. As in Theorem 4.1, let C be a family of cutting planes such that for every IP (c, A, b), there is a decomposition of the parameter space into ≤ r regions such that the cuts generated by any two parameters in the same region are the same. So T2 ≤ r. The last action is to choose a variable to branch on at that node, so T3 = n. Applying Theorem 5.2, Pdim({fµ : µ ∈ Rd}) = O(dκn + dκ log r + d log d). Ignoring T1 and T2, thereby only learning the variable selection policy, recovers the O(dκ log n + d log d) bound of Balcan et al. [5].
6 Conclusions and future research
We provided the first provable guarantees for using machine learning to configure cutting planes and cut-selection policies. We analyzed the sample complexity of learning cutting planes from the popular family of Chvátal-Gomory (CG) cuts. We then provided sample complexity guarantees for learning parameterized cut-selection policies, which allow the branch-and-cut algorithm to adaptively apply cuts as it builds the search tree. We showed that this analysis can be generalized to simultaneously capture various key aspects of tree search beyond cut selection, such as node and variable selection.
This paper opens up a variety of questions for future research. For example, which other cut families can we learn over with low sample complexity? Section 3 focused on learning within the family of CG cuts (Sections 4 and 5 applied more generally). There are many other families, such as Gomory mixed-integer cuts and lift-and-project cuts, and a sample complexity analysis of these is an interesting direction for future research (and would call for new techniques). In addition, can we use machine learning to design improved scoring rules and heuristics for cut selection? The bounds we provide in Section 4 apply to any choice of scoring rules, no matter how simple or complex. Is it possible to obtain even better bounds by taking into account the complexity of the scoring rules? Finally, the bounds in this paper are worst case, but a great direction for future research is to develop data-dependent bounds that improve based on the structure of the input distribution.
Acknowledgements
This material is based on work supported by the National Science Foundation under grants IIS1618714, IIS-1718457, IIS-1901403, CCF-1733556, CCF-1535967, CCF-1910321, SES-1919453, the ARO under award W911NF2010081, DARPA under cooperative agreement HR00112020003, an AWS Machine Learning Research Award, an Amazon Research Award, a Bloomberg Research Grant, and a Microsoft Research Faculty Fellowship. | 1. What is the focus of the paper regarding branch-and-cut algorithms for integer programming problems?
2. What are the different settings studied by the authors to characterize the intrinsic difficulty of learning?
3. What is the contribution of the paper in terms of generalizing the results to bound the sample complexity of generic tree search policies?
4. What are the reviewer's concerns or questions regarding the paper, particularly on the assumptions made in the citations and the applicability to mixed-integer programming? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the sample complexity of cutting plane selection in branch-and-cut algorithms for integer programming problems. The authors study a handful of different settings, with the goal of characterizing the intrinsic difficulty of learning: (i) the tree size upon adding a single cut at the root, (ii) the tree size upon adding a sequence of cuts at the root, (iii) the tree size upon adding a series of waves of cuts at the root, and (iv) the score of cuts according to a set of predefined scoring rules. Finally, the authors generalize the results to bound the sample complexity of generic tree search policies with sequential operations that can be scored.
Review
The paper is nicely written, tackles an interesting problem, and covers a nice range of related, but still distinct, settings. I view it as a worthy addition to the literature, though I am not particularly well-suited to evaluate the learning aspects of the work (i.e. its novelty or deepness).
I have two questions/comments for the authors: 1.This is nitpicky, but: [10], as cited on L136, restricts attention to bounded polyhedron, whereas the authors do not make the same assumption. 2. How much of this machinery is specialized for integer programming as opposed to mixed-integer programming? I would be interested to hear the authors thoughts, though I do not feel that a lengthy discussion is needed in the paper itself. |
NIPS | Title
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets
Abstract
The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
1 Introduction
To make a machine learning model better, one can scale it up. But larger networks are more expensive as measured by inference time, memory, energy, etc, and these costs limit the application of large models: training is slow and expensive, and inference is often too slow to satisfy user requirements.
Many applications of machine learning in industry use tabular data, e.g., in finance, advertising and medicine. It was only recently that deep learning achieved parity with classical tree-based models in these domains [9, 11]. For vision, optimizing models for practical deployment often relies on Neural Architecture Search (NAS). Most NAS literature targets convolutional networks on vision benchmarks [14, 5, 10, 19]. Despite the practical importance of tabular data, however, NAS research on this topic is quite limited [8, 7]. (See Appendix A for a more comprehensive literature review.)
Weight-sharing reduces the cost of NAS by training a SuperNet that is the superset of all candidate architectures [2]. This trained SuperNet is then used to estimate the quality of each candidate architecture or child network by allowing activations in only a subset of the components of the SuperNet and evaluating the model. Reinforcement learning (RL) has been shown to efficiently find the most promising child networks [16, 5, 3] for vision problems.
In our experiments, we show that a direct application of approaches designed for vision to tabular data often fails. For example, the TuNAS [3] approach from vision struggles to find the optimal architectures for tabular datasets (see experiments). The failure is caused by the interaction of the search space and the factorized RL controller. To understand why, consider the following toy example
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
with 2 layers, illustrated in Figure 1. For each layer, we can choose a layer size of 2, 3, or 4, and the maximum number of parameters is set to 25. The optimal solution is to set the size of the first hidden layer to 4 and the second to 2. Finding this solution with RL is difficult with a cost penalty approach. The RL controller is initialized with uniform probabilities. As a result, it is quite likely that the RL controller will initially be penalized heavily when choosing option 4 for the first layer, since two thirds of the choices for the second layer will result in a model that is too expensive. As a result, option 4 for the first layer is quickly discarded by the RL controller and we get stuck in a local optimum.
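The arithmetic behind this toy example can be reproduced by enumerating the nine architectures; the input and output dimensions below (2 and 1, with biases) are assumptions chosen for illustration so that 4-2 costs exactly 25 parameters.

```python
import itertools

# Assumed dimensions: 2 inputs, 1 output, dense layers with biases.
INPUT_DIM, OUTPUT_DIM, LIMIT = 2, 1, 25

def num_params(h1, h2):
    return (INPUT_DIM * h1 + h1) + (h1 * h2 + h2) + (h2 * OUTPUT_DIM + OUTPUT_DIM)

for h1, h2 in itertools.product([2, 3, 4], repeat=2):
    p = num_params(h1, h2)
    print(f"{h1}-{h2}: {p} params, {'feasible' if p <= LIMIT else 'infeasible'}")
# With the first layer set to 4, two of the three second-layer choices (4-3, 4-4)
# exceed the limit, matching the two-thirds failure rate described above.
```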
This co-adaptation problem is caused by the fact that existing NAS methods for computer vision often use factorized RL controllers, which force all choices to be made independently. While factorized controllers can be optimized easily and are parameter-efficient, they cannot capture all of the nuances in the loss landscape. A solution to this could be to use a more complex model such as an LSTM (e.g., [16, 4]). However, LSTMs are often much slower to train and are far more difficult to tune.
Our proposed method, TabNAS, uses a solution inspired by rejection sampling. It updates the RL controller only when the sampled model satisfies the cost constraint. The RL controller is then discouraged from sampling poor models within the cost constraint and encouraged to sample the high quality models. Rather than penalizing models that violate the constraints, the controller silently discards them. This trick allows the RL controller to see the true constrained loss landscape, in which
having some large layers is beneficial, allowing TabNAS to efficiently find global (not just local) optima for tabular NAS problems. Our contributions can be summarized as follows:
• We identify failure cases of existing resource-aware NAS methods on tabular data and provide evidence this failure is due to the cost penalty in the reward together with the factorized space. • We propose and evaluate an alternative: a rejection sampling mechanism that ensures the RL
controller only selects architectures that satisfy the resource constraint. This extra rejection step allows the RL controller to explore parts of the search space that would otherwise be overlooked. • The rejection mechanism also introduces a systematic bias into the RL gradient updates, which
can skew the results. To compensate for this bias, we introduce a theoretically motivated and empirically effective correction into the gradient updates. This correction can be computed exactly for small search spaces and efficiently approximated by Monte-Carlo sampling otherwise. • We show the resulting method, TabNAS, automatically learns whether a bottleneck structure is
needed in an optimal architecture, and if needed, where to place the bottleneck in the network.
These contributions form TabNAS, our RL-based weight-sharing NAS with rejection-based reward. TabNAS robustly and efficiently finds a feasible architecture with optimal performance within the resource constraint. Figure 2 shows an example.
2 Notation and terminology
Math basics. We define [n] = {1, ··· , n} for a positive integer n. With a Boolean variable X , the indicator function 1(X ) equals 1 if X is true, and 0 otherwise. |S| denotes the cardinality of a set S; stop_grad(f) denotes the constant value (with gradient 0) corresponding to a differentiable quantity f , and is equivalent to tensorflow.stop_gradient(f) in TensorFlow [1] or f.detach() in PyTorch [15]. ⊆ and ⊂ denote subset and strict subset, respectively. ∇ denotes the gradient with respect to the variable in the context.
Weight, architecture, and hyperparameter. We use weights to refer to the parameters of the neural network. The architecture of a neural network is the structure of how nodes are connected; examples of architectural choices are hidden layer sizes and activation types. Hyperparameters are the non-architectural parameters that control the training process of either stand-alone training or RL, including learning rate, optimizer type, optimizer parameters, etc.
Neural architecture. A neural network with specified architecture and hyperparameters is called a model. We only consider fully-connected feedforward networks (FFNs) in this paper, since they can already achieve SOTA performance on tabular datasets [11]. The number of hidden nodes after each weight matrix and activation function is called a hidden layer size. We denote a single network in our search space with hyphen-connected choices. For example, when searching for hidden layer sizes, in the space of 3-hidden-layer ReLU networks, 32-144-24 denotes the candidate where the sizes of
the first, second and third hidden layers are 32, 144 and 24, respectively. We only search for ReLU networks; for brevity, we will not mention the activation function type in the sequel.
Loss-resource tradeoff and reference architectures. In the hidden layer size search space, the validation loss in general decreases with the increase of the number of parameters, giving the lossresource tradeoff (e.g., Figure 3). Here loss and number of parameters serve as two costs for NAS. Thus there are Pareto-optimal models that achieve the smallest loss among all models with a given bound on the number of parameters. With an architecture that outperforms others with a similar or fewer number of parameters, we do resource-constrained NAS with the number of parameters of this architecture as the resource target or constraint. We call this architecture the reference architecture (or reference) of NAS, and its performance the reference performance. We do NAS with the goal of matching (the size and performance of) the reference. Note that the RL controller only has knowledge of the number of parameters of the reference, and is not informed of its hidden layer sizes.
Search space. When searching L-layer networks, we use capital letters like X = X1- ··· -XL to denote the random variable of sampled architectures, in which Xi is the random variable for the size of the i-th layer. We use lowercase letters like x = x1- ··· -xL to denote an architecture sampled from the distribution over X , in which xi is an instance of the i-th layer size. When there are multiple samples drawn, we use a bracketed superscript to denote the index over samples: x(k) denotes the k-th sample. The search space S = {sij}i∈[L],j∈[Ci] has Ci choices for the i-th hidden layer, in which sij is the j-th choice for the size of the i-th hidden layer: for example, when searching for a one-hidden-layer network with size candidates {5, 10, 15}, we have s13 = 15.
Reinforcement learning. The RL algorithm learns the set of logits {ℓij}i∈[L],j∈[Ci], in which ℓij is the logit associated with the j-th choice for the i-th hidden layer. With a fully factorized distribution of layer sizes (we learn a separate distribution for each layer), the probability of sampling the j-th choice for the i-th layer pij is given by the SoftMax function: pij = exp(ℓij)/∑j′∈[Ci] exp(ℓij′). In each RL step, we sample an architecture y to compute the single-step RL objective J(y), and update the logits with ∇J(y): an unbiased estimate of the gradient of the RL value function. Resource metric and number of parameters. We use the number of parameters, which can be easily computed for neural networks, as a cost metric in this paper. However, our approach does not depend on the specific cost used, and can be easily adapted to other cost metrics.
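As a sketch of the factorized controller just described, the snippet below converts per-layer logits into SoftMax probabilities and samples one architecture; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_architecture(logits_per_layer):
    """Sample one size index per layer from the factorized SoftMax distributions."""
    arch = []
    for logits in logits_per_layer:
        probs = np.exp(logits - logits.max())  # numerically stable SoftMax
        probs /= probs.sum()
        arch.append(int(rng.choice(len(logits), p=probs)))
    return arch  # e.g., [j_1, ..., j_L], one choice index per layer
```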
3 Methodology
Our NAS methodology can be decomposed into three main components: weight-sharing with layer warmup, REINFORCE with one-shot search, and Monte Carlo (MC) sampling with rejection.
As an overview, our method starts with a SuperNet, which is a network that layer-wise has width equal to the largest choice within the search space. We first stochastically update the weights of the entire SuperNet to “warm up” over the first 25% of search epochs. Then we alternate between updating the shared model weights (which are used to estimate the quality of different child models) and the RL controller (which focuses the search on the most promising parts of the space). In each iteration, we first sample a child network from the current layer-wise probability distributions and update the corresponding weights within the SuperNet (weight update). We then sample another child network to update the layerwise logits that give the probability distributions (RL update). The latter RL update is only performed if the sampled network is feasible, in which case we use rejection with MC sampling to update the logits with a sampling probability conditional on the feasible set.
To avoid overfitting, we split the labelled portion of a dataset into training and validation splits. Weight updates are carried out on the training split; RL updates are performed on the validation split.
3.1 Weight sharing with layer warmup
The weight-sharing approach has shown success on various computer vision tasks and NAS benchmarks [16, 2, 5, 3]. To search for an FFN on tabular datasets, we build a SuperNet where the size of each hidden layer is the largest value in the search space. Figure 4 shows an example. When we sample a child network with a hidden layer size `i smaller than the SuperNet, we only use the first `i hidden nodes in that layer to compute the output in the forward pass and the gradients in the
backward pass. Similarly, in RL updates, only the weights of the child network are used to estimate the quality reward that is used to update logits.
In weight-sharing NAS, warmup helps to ensure that the SuperNet weights are sufficiently trained to properly guide the RL updates [3]. With probability p, we train all weights of the SuperNet, and with probability 1 − p we only train the weights of a random child model. When we run architecture searches for FFNs, we do warmup in the first 25% of epochs, during which the probability p linearly decays from 1 to 0 (Figure 5(a)). The RL controller is disabled during this period.
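A minimal sketch of this warmup schedule; the linear decay over the first 25% of steps mirrors Figure 5(a), and the helper names are ours.

```python
import random

def supernet_warmup_probability(step, total_steps, warmup_fraction=0.25):
    """Probability p of training all SuperNet weights; decays linearly from 1 to 0
    over the warmup phase, after which only sampled child models are trained."""
    warmup_steps = int(warmup_fraction * total_steps)
    if step >= warmup_steps:
        return 0.0
    return 1.0 - step / warmup_steps

def choose_training_target(step, total_steps, sample_child):
    if random.random() < supernet_warmup_probability(step, total_steps):
        return "full_supernet"  # train all weights
    return sample_child()       # train only a random child model
```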
3.2 One-shot training and REINFORCE
We do NAS on FFNs with a REINFORCE-based algorithm. Previous works have used this type of algorithm to search for convolutional networks on vision tasks [18, 5, 3]. When searching for L-layer FFNs, we learn a separate probability distribution over C_i size candidates for each layer. The distribution is given by C_i logits via the SoftMax function. Each layer has its own independent set of logits. With C_i choices for the i-th layer, where i = 1, 2, …, L, there are ∏_{i∈[L]} C_i candidate networks in the search space but only ∑_{i∈[L]} C_i logits to learn. This technique significantly reduces the difficulty of RL and makes the NAS problem practically tractable [5, 3].
The REINFORCE-based algorithm trains the SuperNet weights and learns the logits {ℓ_ij}_{i∈[L], j∈[C_i]} that give the sampling probabilities {p_ij}_{i∈[L], j∈[C_i]} over size candidates by alternating between weight and RL updates. In each iteration, we first sample a child network x from the SuperNet and compute its training loss in the forward pass. Then we update the weights in x with gradients of the training loss computed in the backward pass. This weight update step trains the weights of x. The weights in architectures with larger sampling probabilities are sampled, and thus trained, more often. We then update the logits for the RL controller by sampling a child network y, independent of the network x, from the same layerwise distributions, computing the quality reward Q(y) as 1 − loss(y) on the validation set, and then updating the logits with the gradient of J(y) = stop_grad(Q(y) − Q̄) · log P(y): the product of the advantage of y's reward over past rewards (usually an exponential moving average) and the log-probability of the current sample.
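A single RL update can then be written as follows — a sketch reusing `sample_architecture`, `logits`, and `choices` from the earlier snippet; `validation_loss_fn`, the EMA decay, and the learning rate are placeholders:

```python
q_bar = tf.Variable(0.0, trainable=False)      # moving-average baseline Q̄
optimizer = tf.keras.optimizers.SGD(learning_rate=0.005)
EMA_DECAY = 0.9                                # placeholder decay for Q̄

def rl_step(validation_loss_fn):
  y, _ = sample_architecture()
  q = 1.0 - validation_loss_fn(y)              # quality reward Q(y)
  with tf.GradientTape() as tape:
    # Recompute log P(y) inside the tape so it is differentiable w.r.t. the logits.
    log_p = tf.constant(0.0)
    for l, c, size in zip(logits, choices, y):
      log_p += tf.math.log(tf.nn.softmax(l)[c.index(size)])
    j_y = tf.stop_gradient(q - q_bar) * log_p  # J(y) = stop_grad(Q(y) − Q̄) · log P(y)
  grads = tape.gradient(j_y, logits)
  # REINFORCE does gradient *ascent* on J(y), so apply the negated gradients.
  optimizer.apply_gradients(zip([-g for g in grads], logits))
  q_bar.assign(EMA_DECAY * q_bar + (1.0 - EMA_DECAY) * q)
```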
The alternation creates a positive feedback loop that trains the weights and updates the logits of the large-probability child networks; thus the layer-wise sampling probabilities gradually converge to more deterministic distributions, under which one or several architectures are finally selected.
Details of the resource-oblivious version, which does not take a resource constraint into account, are given in Algorithm 1 in Appendix B. In Section 3.3, we present an algorithm that combines Monte-Carlo sampling with rejection sampling; it serves as a subroutine of Algorithm 1 by replacing the probability in J(y) with a conditional version.
3.3 Rejection-based reward with MC sampling
Only a subset of the architectures in the search space S will satisfy resource constraints; V denotes this set of feasible architectures. To find a feasible architecture, a resource target T0 is often used in an RL reward. Given an architecture y, a resource-aware reward combines its quality Q(y) and resource consumption T(y) into a single reward. MnasNet [18] proposes the rewards Q(y)(T(y)/T0)^β and Q(y) max{1, (T(y)/T0)^β}, while TuNAS [3] proposes the absolute value reward (or Abs Reward) Q(y) + β|T(y)/T0 − 1|. The idea behind these rewards is to encourage high-quality models close to the resource target. In these rewards, β is a hyperparameter that needs careful tuning.
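These rewards are direct to transcribe; β is left as an explicit argument below since the text stresses it needs careful tuning (the Abs Reward uses β < 0 so that deviation from T0 is penalized):

```python
def mnasnet_soft_reward(q, t, t0, beta):
  return q * (t / t0) ** beta

def mnasnet_hard_reward(q, t, t0, beta):
  return q * max(1.0, (t / t0) ** beta)

def abs_reward(q, t, t0, beta):        # TuNAS absolute value reward
  return q + beta * abs(t / t0 - 1.0)  # beta < 0 penalizes missing the target
```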
We find that on tabular data, RL controllers using the resource-aware rewards above can struggle to discover high-quality structures. Figure 1 shows a toy example in the search space of Figure 4, in which we know the validation losses of each child network and only train the RL controller for 500 steps. Among architectures with at most 25 parameters, the optimal network is 4-2, but the RL controller rarely chooses it. In Section 4.1, we show examples on real datasets.
This phenomenon reveals a gap between the true distribution we want to sample from and the distributions obtained by sampling from this factorized search space:
• We only want to sample from the set of feasible architectures V, whose distribution is {P(y | y ∈ V)}_{y∈V}. The resources (e.g., number of parameters) used by an architecture, and thus its feasibility, are determined jointly by the sizes of all layers.
Figure 5(b): P(V) and its estimate P̂(V) in a successful search, with 8,000 architectures in the search space and N = 1024 MC samples. Both probabilities are (nearly) constant during warmup before RL starts, then increase after RL starts because of rejection sampling.
• On the other hand, the factorized search space learns a separate (independent) probability distribution for the choices of each layer. While this distribution is efficient to learn, independence between layers discourages an RL controller with a resource-aware reward from choosing a bottleneck structure. A bottleneck requires the controller to select large sizes for some layers and small for others. But decisions for different layers are made independently, and both very large and very small layer sizes, considered independently, have poor expected rewards: small layers are estimated to perform poorly, while large layers easily exceed the resource constraints.
To bridge the gap and efficiently learn layerwise distributions that take into account the architecture feasibility, we propose a rejection-based RL reward for Algorithm 1. We next sketch the idea; detailed pseudocode is provided as Algorithm 2 in Appendix B.
REINFORCE optimizes a set of logits {ℓ_ij}_{i∈[L], j∈[C_i]} which define a probability distribution p over architectures. In the original algorithm, we sample a random architecture y from p and then estimate its quality Q(y). Updates to the logits ℓ_ij take the form ℓ_ij ← ℓ_ij + η ∂J(y)/∂ℓ_ij, where η is the learning rate, Q̄ is a moving average of recent rewards, and J(y) = stop_grad(Q(y) − Q̄) · log P(y). If y is better (worse) than average, then Q(y) − Q̄ will be positive (negative), so the REINFORCE update will increase (decrease) the probability of sampling the same architecture in the future.
In our new REINFORCE variant, motivated by rejection sampling, we do not update the logits when y is infeasible. When y is feasible, we replace the probability P(y) in the REINFORCE update equation with the conditional probability P(y | y ∈ V ) = P(y)/P(y ∈ V ). So J(y) becomes
J(y) = stop_grad(Q(y) − Q̄) · log [P(y)/P(y ∈ V)] .  (1)
We can compute the probability of sampling a feasible architecture P(V) := P(y ∈ V) exactly when the search space is small, but this computation is too expensive when the space is large. Instead, we replace the exact probability P(V) with a differentiable approximation P̂(V) obtained with Monte-Carlo (MC) sampling. In each RL step, we sample N architectures {z(k)}_{k∈[N]} within the search space with a proposal distribution q and estimate P(V) as
P̂(V) = (1/N) ∑_{k∈[N]} (p(k)/q(k)) · 1(z(k) ∈ V).  (2)
For each k ∈ [N ], p(k) is the probability of sampling z(k) with the factorized layerwise distributions and so is differentiable with respect to the logits. In contrast, q(k) is the probability of sampling z(k) with the proposal distribution, and is therefore non-differentiable.
P̂(V) is an unbiased and consistent estimate of P(V); ∇ log[P(y)/P̂(V)] is a consistent estimate of ∇ log[P(y | y ∈ V)] (Appendix J). A larger N gives better results (Appendix H); in experiments, an N much smaller than the size of the search space suffices for a faithful estimate (Figure 5(b), Appendix D and I), because neighboring RL steps can correct each other's estimates. We set q = stop_grad(p) in experiments for convenience: we use the current distribution over architectures for MC sampling. Other distributions that have larger support on V may be used to reduce sampling variance (Appendix J).
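Putting Eqs. (1) and (2) together with q = stop_grad(p) gives the following sketch of one rejection-based RL step. It reuses the helpers, `q_bar`, `optimizer`, and `EMA_DECAY` from the earlier snippets; `is_feasible` stands in for any constraint check, e.g. a parameter-count threshold.

```python
def log_prob(arch):
  """log P(arch) under the current factorized layerwise distributions."""
  lp = tf.constant(0.0)
  for l, c, size in zip(logits, choices, arch):
    lp += tf.math.log(tf.nn.softmax(l)[c.index(size)])
  return lp

def rejection_rl_step(validation_loss_fn, is_feasible, n_mc=1024):
  y, _ = sample_architecture()
  if not is_feasible(y):
    return                                    # infeasible sample: no logit update
  q = 1.0 - validation_loss_fn(y)
  with tf.GradientTape() as tape:
    # Eq. (2) with q = stop_grad(p): each feasible MC sample z(k) contributes
    # p(k)/stop_grad(p(k)), which has value 1 but gradient ∇p(k)/p(k).
    terms = []
    for _ in range(n_mc):
      z, _ = sample_architecture()
      if is_feasible(z):
        p_k = tf.exp(log_prob(z))
        terms.append(p_k / tf.stop_gradient(p_k))
    if not terms:
      return                                  # P̂(V) = 0 this step: skip
    p_hat_v = tf.add_n(terms) / n_mc
    # Eq. (1): J(y) = stop_grad(Q(y) − Q̄) · log[P(y)/P̂(V)].
    j_y = tf.stop_gradient(q - q_bar) * (log_prob(y) - tf.math.log(p_hat_v))
  grads = tape.gradient(j_y, logits)
  optimizer.apply_gradients(zip([-g for g in grads], logits))
  q_bar.assign(EMA_DECAY * q_bar + (1.0 - EMA_DECAY) * q)
```

With q = stop_grad(p), the value of P̂(V) reduces to the feasible fraction of the N samples, while its gradient is (1/N) ∑_k ∇ log p(k) over the feasible samples.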
At the end of NAS, we pick as our final architecture the layer sizes with largest sampling probabilities if the layerwise distributions are deterministic, or sample from the distributions m times and pick n feasible architectures with the largest number of parameters if not. Appendix B Algorithm 3 provides the full details. We find m = 500 and n ≤ 3 suffice to find an architecture that matches the reference (optimal) architecture in our experiments.
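A sketch of this selection procedure (the dimensions and budget passed in are placeholders):

```python
def pick_final_architecture(max_params, d_in, d_out, m=500, n=3):
  """Sample m architectures; return up to n feasible ones with the most parameters."""
  feasible = {}
  for _ in range(m):
    arch, _ = sample_architecture()
    cost = num_params(arch, d_in, d_out)
    if cost <= max_params:
      feasible[tuple(arch)] = cost
  best = sorted(feasible, key=feasible.get, reverse=True)[:n]
  return [list(a) for a in best]
```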
In practice, the distributions often (almost) converge after twice the number of epochs used to train a stand-alone child network. Indeed, the distributions are often already useful after training for the same number of epochs, in the sense that the architectures found by Algorithm 3 are competitive. Figure 1 shows TabNAS finds the best feasible architecture, 4-2, in our toy example, using P̂(V) estimated by MC sampling.
4 Experimental results
Our implementation can be found at https://github.com/google-research/tabnas. We ran all experiments using TensorFlow on a Cloud TPU v2 with 8 cores. We use a 1,027-dimensional input representation for the Criteo dataset and 180 features for Volkert1. The best architectures in our FFN search spaces already produce near-state-of-the-art results; details in Appendix C.2. More details of the experiment setup and results in other search spaces can be found in Appendix C and D. Appendix E tabulates the performance of all RL rewards on all tabular datasets in our experiments. Appendix F shows a comparison with Bayesian optimization and evolutionary search in similar settings; ablation studies in Appendix I show that the TabNAS components collectively deliver desirable results; and Appendix H shows that TabNAS has easy-to-tune hyperparameters.
4.1 When do previous RL rewards fail?
Section 3.3 discussed the resource-aware RL rewards and highlighted a potential failure case. In this section, we show several failure cases of three resource-aware rewards, Q(y)(T(y)/T0)^β, Q(y) max{1, (T(y)/T0)^β}, and the Abs Reward Q(y) + β|T(y)/T0 − 1|, on our tabular datasets.
4.1.1 Criteo – 3 layer search space
We use the 32-144-24 reference architecture (41,153 parameters). Figure 3 gives an overview of the costs and losses of all architectures in the search space. The search space requires us to choose one of 20 possible sizes for each hidden layer; details in Appendix D. The search has 1.7× the cost of a stand-alone training run.
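The stated parameter count can be reproduced with the num_params helper from Section 2, assuming a single output unit for Criteo's binary click label (an assumption on our part, though the arithmetic matches exactly):

```python
# (1027+1)*32 + (32+1)*144 + (144+1)*24 + (24+1)*1 = 41,153
assert num_params([32, 144, 24], d_in=1027, d_out=1) == 41153
```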
1Our paper takes these features as given. It is worth noting that methods proposed in feature engineering works like [12] and [13] are complementary to and can work together with TabNAS.
Failure of latency rewards. Figure 6 shows the sampling probabilities from the search when using the Abs Reward, and the retrain validation losses of the found architecture 32-64-96. In Figures 6(a) – 6(c), the sampling probabilities for the different choices are uniform during warmup and then converge quickly. The final selected model (32-64-96) is much worse than the reference model (32-144-24) even though the reference model is actually less expensive. We also observed similar failures for the MnasNet rewards. With the MnasNet rewards, the RL controller also struggles to find a model within ±5% of the constraint despite a grid search of the RL parameters (details in Appendix C). In both cases, almost all found models are worse than the reference architecture.
Figure 7: Left: 3-layer Criteo SuperNet calibration after 60 epochs (search space in Appendix C): Pearson correlation is 0.96. The one-shot loss is the validation loss of each child network with weights taken from a SuperNet trained with the same hyperparameters as in Figure 6 but with no RL in the first 60 epochs; the stand-alone loss of each child network is computed by training the same architecture with the same hyperparameters from scratch, and has std 0.0003. Right: change in probabilities in layer 2 after 60 epochs of SuperNet training and 40 epochs of RL. Note the rapid changes due to RL.
The RL controller is to blame. To verify that a low-quality SuperNet was not the culprit, we trained a SuperNet without updating the RL controller, and manually inspected the quality of the resulting SuperNet. The sampling probabilities for the RL controller remained uniform throughout the search; the rest of the training setup was kept the same. At the end of the training, we compare two sets of losses on each of the child networks: the validation loss from the SuperNet (one-shot loss), and the validation loss from training the child network from scratch. Figure 7(a) shows that there is a strong correlation between these losses; Figure 7(b) shows that RL starting from the sufficiently trained SuperNet weights in 7(a) still chooses the suboptimal choice 64. This suggests that the suboptimal search results on Criteo are likely due to issues with the RL controller, rather than issues with the one-shot model weights. In a 3-layer search space we can actually find good models without the RL controller, but in a 5-layer search space, we found that an RL controller whose training is interleaved with SuperNet training is important to achieve good results.
4.1.2 Volkert – 4 layer search space
We search for 4-layer and 9-layer networks on the Volkert dataset; details in Appendix D. For resource-aware RL rewards, we ran a grid search over the RL learning rate and the β hyperparameter. The reference architecture for the 4-layer search space is 48-160-32-144 with 27,882 parameters. Despite a hyperparameter grid search, it was difficult to find models with the right target cost reliably using the MnasNet rewards. Using the Abs Reward (Figure 8), searched models met the target cost but their quality was suboptimal, and the trend is similar to what has been shown in the toy example (Figure 1): a smaller |β| gives an infeasible architecture that exceeds the reference number of parameters, and a larger |β| gives an architecture that is feasible but suboptimal.
4.1.3 A common failure pattern
Apart from Sections 4.1.1 and 4.1.2, more examples in search spaces of deeper FFNs can be found in Appendix D. In the cases on Criteo and Volkert where the RL controller with soft constraints cannot match the quality of the reference architectures, the reference architecture often has a bottleneck structure. For example, with a 1,027-dimensional input representation, the 32-144-24 reference on Criteo has a bottleneck of size 32; with 180 features, the 48-160-32-144 reference on Volkert has bottlenecks of sizes 48 and 32. As the example in Section 3.3 shows, the wide hidden layers around the bottlenecks get penalized harder in the search, and it is thus more difficult for RL with the Abs Reward to find a model that can match the reference performance. Also, Appendix C.2.1 shows that the Pareto-optimal architectures at the tradeoff points in Figure 3 often have bottleneck structures, so resource-aware RL rewards in previous NAS practice may have more room for improvement than previously believed.
4.2 NAS with TabNAS reward
With proper hyperparameters (Appendix H), our RL controller with TabNAS reward finds the global optimum when RL with resource-aware rewards produces suboptimal results.
TabNAS does not introduce a resource-aware bias in the RL reward (Section 3.3). Instead, it uses conditional probabilities to update the logits from feasible architectures. We run TabNAS for 120 epochs with RL learning rate 0.005 and N = 3072 MC samples.2 The RL controller converges to two architectures, 32-160-16 (40,769 parameters, with loss 0.4457 ± 0.0002) and 32-144-24 (41,153 parameters, with loss 0.4455 ± 0.0003), after around 50 epochs of NAS, then oscillates between these two solutions (Figure 9). After 120 epochs, we sample from the layerwise distribution and pick the largest feasible architecture: the global optimum 32-144-24.
On the same hardware, the search takes 3× the runtime of stand-alone training. Hence, as can be seen in Figure 2, the proposed architecture search method is much faster than a random baseline.
4.3 TabNAS automatically determines whether bottlenecks are needed
Previous NAS works like MnasNet and TuNAS (mostly or exclusively on vision tasks) often include inverted bottleneck blocks [17] in their search spaces. However, the search spaces used there have a hardcoded requirement that certain layers must have bottlenecks. In contrast, our search spaces permit the controller to automatically determine whether to use bottleneck structures based on the task under consideration. TabNAS automatically finds high-quality architectures, both in cases where bottlenecks are needed and in cases where they are not. This is important because networks with bottlenecks do not always outperform others on all tasks. For example, the reference architecture 32-144-24 outperforms the TuNAS-found 32-64-96 on Criteo, but the reference 64-192-48-32 (64,568 parameters, 0.0662 ± 0.0011) is on par with the TuNAS-and-TabNAS-found 96-80-96-32 (64,024 parameters, 0.0669 ± 0.0013) on Aloi. TabNAS automatically finds an optimal (bottleneck) architecture for Criteo, and automatically finds an optimal architecture that does not necessarily have a bottleneck structure for Aloi. Previous reward-shaping approaches like the Abs Reward only succeed in the latter case.
4.4 Rejection-based reward outperforms Abs Reward in NATS-Bench size search space
Although we target resource-constrained NAS on tabular datasets in this paper, our proposed method is not specific to NAS on tabular datasets. In Appendix G, we show the rejection-based reward in TabNAS outperforms RL with the Abs Reward in the size search space of NATS-Bench [6], a NAS benchmark on vision tasks.
2The 3-layer search space has 20³ = 8,000 candidate architectures, which is small enough to compute P(V) exactly. However, MC can scale to larger spaces that are prohibitively expensive for exhaustive search (Appendix D).
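As the footnote notes, P(V) can be computed exactly for a space this small — e.g., by direct enumeration over the factorized distributions. A sketch, reusing the earlier helpers:

```python
import itertools

def exact_p_feasible(is_feasible):
  """Exact P(V): sum the factorized probability of every feasible architecture."""
  layer_probs = [tf.nn.softmax(l).numpy() for l in logits]
  p_v = 0.0
  for idx in itertools.product(*(range(len(c)) for c in choices)):
    arch = [c[j] for c, j in zip(choices, idx)]
    if is_feasible(arch):
      prob = 1.0
      for probs, j in zip(layer_probs, idx):
        prob *= probs[j]
      p_v += prob
  return p_v
```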
5 Conclusion
We investigate the failure of resource-aware RL rewards to discover optimal structures in tabular NAS and propose TabNAS for tabular NAS in a constrained search space. The TabNAS controller uses a rejection mechanism to compute the policy gradient updates from feasible architectures only, and uses Monte-Carlo sampling to reduce the cost of debiasing this rejection-sampling approach. Experiments show TabNAS finds better architectures than previously proposed RL methods with resource-aware rewards in resource-constrained searches.
Many questions remain open. For example: 1) Can the TabNAS strategy find better architectures on other types of tasks such as vision and language? 2) Can TabNAS improve RL results for more complex architectures? 3) Is TabNAS useful for resource-constrained RL problems more broadly?
Acknowledgments and Disclosure of Funding
This work was done when Madeleine Udell was a visiting researcher at Google. The authors thank Ruoxi Wang, Mike Van Ness, Ziteng Sun, Xuanyi Dong, Lijun Ding, Yanqi Zhou, Chen Liang, Zachary Frangella, Yi Su, and Ed H. Chi for helpful discussions, and thank several anonymous reviewers for useful comments. | 1. What is the main contribution of the paper, and how does it differ from previous works in the field?
2. How effective is the proposed method in addressing resource constraints compared to other approaches?
3. Is the method specific to tabular datasets, or can it be applied to other tasks?
4. What are the strengths and weaknesses of the paper regarding its literature review and citations?
5. Are there any open questions or areas for future research related to the paper's topic? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
TabNAS proposes a modification to Reinforce which instead of modifying the reward function to incorporate resource constraints, estimates the feasible set of architectures (e.g. architectures less than some threshold/reference) and updates the probabilities of the controller accordingly by redistributing the probability mass over the feasible set as more trial-and-error progresses.
Strengths And Weaknesses
Strengths: Paper is clear and well-written. Literature has been adequately cited and discussed.
Weaknesses: Please see my comments in the questions section.
Questions
I am quite confused by the contribution of this paper. The way I read it, the authors show that reward function modification as has been commonly done before is not as good as rejection sampling as proposed in this work in coming up with architectures that better meet resource constraints. I buy this and note that most Pareto-frontier aware algorithms which output the entire Pareto-frontier employ rejection sampling to constrain search to only those parts of the search space that the user wants (e.g. give me the Pareto-frontier estimate less than 100 milliseconds inference latency, or between 200-300 milliseconds inference latency); see https://arxiv.org/abs/2105.01015. But there seems to be nothing specific in the method to tabular datasets. I can see applying this method as-is to any task (e.g. vision and NLP). Is my understanding correct? If so, then it perhaps happened that the authors ran on tabular datasets to get the ball rolling? Note that one can use any of the excellent NAS benchmarks without using GPUs or large compute to validate this method. My suggestion will be NAS-Bench-Suite which has several different benchmarks under a single interface. https://arxiv.org/abs/2201.13396.
My second concern/comment is perhaps a bit unfair to this work: It is a bit baffling to me that a stateful RL algorithm like Reinforce has been widely applied (and continues to be used in the broader AutoML community) to a problem setting where one doesn't need to do long-horizon credit assignment. NAS is inherently stateless: one immediately knows the result of their action (the sampled architecture, or in HPO the sampled hyperparameter). So 1-step RL like contextual bandits or Bayesian optimization will be just as efficient, and probably more so. Can such rejection constraints be added to BOHB, HyperBand, BANANAS, NASBO etc.? If so, how will they be different from those in https://arxiv.org/abs/2105.01015?
Related to point 2 above: the appendix, lines 251-254, states that there are open questions on BO and ES for one-shot NAS. But is that necessary, or could one train the supernet using the style of training in Once-For-All https://arxiv.org/abs/1908.09791, or use all the insights for training supernets as detailed in https://openreview.net/forum?id=Esd7tGH3Spl, and then do search via any search technique? Note OFA uses evolutionary search, the earlier ENAS paper used Reinforce https://arxiv.org/abs/1802.03268, and indeed the actual search technique varies across papers. In this respect, it appears to me that the aim of the experiments in the appendix comparing across RL, BO and ES is unclear, since BO and ES are being run on the feasible set only. So, a comparison (on a tabular task) seems orthogonal to the aim of the paper, which is to show a modification of specifically Reinforce to take into account resource constraints better than others who modify the reward function?
Limitations
Yes. |
NIPS | Title
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets
Abstract
The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
1 Introduction
To make a machine learning model better, one can scale it up. But larger networks are more expensive as measured by inference time, memory, energy, etc, and these costs limit the application of large models: training is slow and expensive, and inference is often too slow to satisfy user requirements.
Many applications of machine learning in industry use tabular data, e.g., in finance, advertising and medicine. It was only recently that deep learning has achieved parity with classical tree-based models in these domains [9, 11]. For vision, optimizing models for practical deployment often relies on Neural Architecture Search (NAS). Most NAS literature targets convolutional networks on vision benchmarks [14, 5, 10, 19]. Despite the practical importance of tabular data, however, NAS research on this topic is quite limited [8, 7]. (See Appendix A for a more comprehensive literature review.)
Weight-sharing reduces the cost of NAS by training a SuperNet that is the superset of all candidate architectures [2]. This trained SuperNet is then used to estimate the quality of each candidate architecture or child network by allowing activations in only a subset of the components of the SuperNet and evaluating the model. Reinforcement learning (RL) has been shown to efficiently find the most promising child networks [16, 5, 3] for vision problems.
In our experiments, we show that a direct application of approaches designed for vision to tabular data often fails. For example, the TuNAS [3] approach from vision struggles to find the optimal architectures for tabular datasets (see experiments). The failure is caused by the interaction of the search space and the factorized RL controller. To understand why, consider the following toy example
with 2 layers, illustrated in Figure 1. For each layer, we can choose a layer size of 2, 3, or 4, and the maximum number of parameters is set to 25. The optimal solution is to set the size of the first hidden layer to 4 and the second to 2. Finding this solution with RL is difficult with a cost penalty approach. The RL controller is initialized with uniform probabilities. As a result, it is quite likely that the RL controller will initially be penalized heavily when choosing option 4 for the first layer, since two thirds of the choices for the second layer will result in a model that is too expensive. As a result, option 4 for the first layer is quickly discarded by the RL controller and we get stuck in a local optimum.
This co-adaptation problem is caused by the fact that existing NAS methods for computer vision often use factorized RL controllers, which force all choices to be made independently. While factorized controllers can be optimized easily and are parameter-efficient, they cannot capture all of the nuances in the loss landscape. A solution to this could be to use a more complex model such as an LSTM (e.g., [16, 4]). However, LSTMs are often much slower to train and are far more difficult to tune.
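The arithmetic of the toy example can be checked by enumeration. Figure 1's exact input/output dimensions are not stated in the text, so the self-contained sketch below assumes 2 inputs and 1 output, chosen so that 4-2 costs exactly 25 parameters:

```python
def n_params(h1, h2, d_in=2, d_out=1):  # fully-connected layers with biases
  return (d_in + 1) * h1 + (h1 + 1) * h2 + (h2 + 1) * d_out

for h1 in (2, 3, 4):
  for h2 in (2, 3, 4):
    cost = n_params(h1, h2)
    print(f"{h1}-{h2}: {cost} params ({'feasible' if cost <= 25 else 'infeasible'})")

# Under these dimensions, two of the three second-layer choices make first-layer
# size 4 infeasible (4-3 -> 31 params, 4-4 -> 37), matching the "two thirds"
# penalty described above; only 4-2 (25 params) stays within the budget.
```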
Our proposed method, TabNAS, uses a solution inspired by rejection sampling. It updates the RL controller only when the sampled model satisfies the cost constraint. The RL controller is then discouraged from sampling poor models within the cost constraint and encouraged to sample the high quality models. Rather than penalizing models that violate the constraints, the controller silently discards them. This trick allows the RL controller to see the true constrained loss landscape, in which
having some large layers is beneficial, allowing TabNAS to efficiently find global (not just local) optima for tabular NAS problems. Our contributions can be summarized as follows:
• We identify failure cases of existing resource-aware NAS methods on tabular data and provide evidence this failure is due to the cost penalty in the reward together with the factorized space.
• We propose and evaluate an alternative: a rejection sampling mechanism that ensures the RL controller only selects architectures that satisfy the resource constraint. This extra rejection step allows the RL controller to explore parts of the search space that would otherwise be overlooked.
• The rejection mechanism also introduces a systematic bias into the RL gradient updates, which can skew the results. To compensate for this bias, we introduce a theoretically motivated and empirically effective correction into the gradient updates. This correction can be computed exactly for small search spaces and efficiently approximated by Monte-Carlo sampling otherwise.
• We show the resulting method, TabNAS, automatically learns whether a bottleneck structure is needed in an optimal architecture, and if needed, where to place the bottleneck in the network.
These contributions form TabNAS, our RL-based weight-sharing NAS with rejection-based reward. TabNAS robustly and efficiently finds a feasible architecture with optimal performance within the resource constraint. Figure 2 shows an example.
2 Notation and terminology
Math basics. We define [n] = {1, ··· , n} for a positive integer n. With a Boolean variable X , the indicator function 1(X ) equals 1 if X is true, and 0 otherwise. |S| denotes the cardinality of a set S; stop_grad(f) denotes the constant value (with gradient 0) corresponding to a differentiable quantity f , and is equivalent to tensorflow.stop_gradient(f) in TensorFlow [1] or f.detach() in PyTorch [15]. ⊆ and ⊂ denote subset and strict subset, respectively. ∇ denotes the gradient with respect to the variable in the context.
Weight, architecture, and hyperparameter. We use weights to refer to the parameters of the neural network. The architecture of a neural network is the structure of how nodes are connected; examples of architectural choices are hidden layer sizes and activation types. Hyperparameters are the non-architectural parameters that control the training process of either stand-alone training or RL, including learning rate, optimizer type, optimizer parameters, etc.
Neural architecture. A neural network with specified architecture and hyperparameters is called a model. We only consider fully-connected feedforward networks (FFNs) in this paper, since they can already achieve SOTA performance on tabular datasets [11]. The number of hidden nodes after each weight matrix and activation function is called a hidden layer size. We denote a single network in our search space with hyphen-connected choices. For example, when searching for hidden layer sizes, in the space of 3-hidden-layer ReLU networks, 32-144-24 denotes the candidate where the sizes of
the first, second and third hidden layers are 32, 144 and 24, respectively. We only search for ReLU networks; for brevity, we will not mention the activation function type in the sequel.
Loss-resource tradeoff and reference architectures. In the hidden layer size search space, the validation loss in general decreases with the increase of the number of parameters, giving the lossresource tradeoff (e.g., Figure 3). Here loss and number of parameters serve as two costs for NAS. Thus there are Pareto-optimal models that achieve the smallest loss among all models with a given bound on the number of parameters. With an architecture that outperforms others with a similar or fewer number of parameters, we do resource-constrained NAS with the number of parameters of this architecture as the resource target or constraint. We call this architecture the reference architecture (or reference) of NAS, and its performance the reference performance. We do NAS with the goal of matching (the size and performance of) the reference. Note that the RL controller only has knowledge of the number of parameters of the reference, and is not informed of its hidden layer sizes.
Search space. When searching L-layer networks, we use capital letters like X = X1- ··· -XL to denote the random variable of sampled architectures, in which Xi is the random variable for the size of the i-th layer. We use lowercase letters like x = x1- ··· -xL to denote an architecture sampled from the distribution over X , in which xi is an instance of the i-th layer size. When there are multiple samples drawn, we use a bracketed superscript to denote the index over samples: x(k) denotes the k-th sample. The search space S = {sij}i∈[L],j∈[Ci] has Ci choices for the i-th hidden layer, in which sij is the j-th choice for the size of the i-th hidden layer: for example, when searching for a one-hidden-layer network with size candidates {5, 10, 15}, we have s13 = 15.
| 1. What is the main contribution of the paper regarding NAS?
2. What are the strengths and weaknesses of the proposed approach compared to other works?
3. How does the reviewer assess the effectiveness and efficiency of the algorithm in terms of scalability and computational cost?
4. Do you have any suggestions or recommendations for future research or improvements? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a multi-objective NAS algorithm using RL-controller. They use monte-carlo based rejection sampling to guide the controller towards feasible search space. The search space is divided into possible categorical options for each layer, such as [16, 22, 42] etc for layer1. The controller predicts logits corresponding to each possible value for each layer in the fully connected network (FCNN). So logits_{ij} refers to the value for
j
t
h
value in the
i
t
h
layer and corresponds to the probability that it would be sampled. As the choice of each layer is independent of the other, to generate a network adhering to the constraints, they use rejection based reward mechanism.
Strengths And Weaknesses
They showed that their technique is able to find well-performing networks.
However, they did not compare adequately against all the baselines. Tabulating the results rather than presenting them in a paragraph is preferred. Please see Questions for more details.
Questions
(a) Please tabulate the accuracies, number of parameters and time taken of networks obtained from MnasNet, TuNAS, TabNAS and for all the datasets.
(b) MoNas [1] is also an RL-based multi-objective algorithm which yields a reward only if the constraint < threshold. Resource-Efficient Neural Architect [2] uses a different penalty, as listed in eqn 3 of their paper. Please compare how these two objectives fare against yours.
(c) While you are focusing only on RL based multi-objective algorithms, it would be good to compare with evolutionary based algos, namely Lemonade and NSGA-NET[3] .
(d) It would also be good to compare it against TabNet. Based on the number of parameters of the best model found by Tabnet, you can modify your search to fit that parameter search space.
While rejection sampling is helping us focus the search on only the feasible area, the number of samples required to get a good estimate of the probability distribution seems to be very high. So it might not be scalable to larger search spaces. For ex, 5 × 10^6 samples for the 9-layer search space for the Volkert dataset is computationally expensive and cannot be used in the real world.
[1] MONAS: Multi-Objective Neural Architecture Search, Hsu et al.
[2] Resource-Efficient Neural Architect, Zhou et al.
[3] NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm, Lu et al.
Limitations
The monte carlo sampling is very expensive and this search won't scale for tasks requiring large search spaces. Please see Questions section for further details. |
NIPS | Title
TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets
Abstract
The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
1 Introduction
To make a machine learning model better, one can scale it up. But larger networks are more expensive as measured by inference time, memory, energy, etc, and these costs limit the application of large models: training is slow and expensive, and inference is often too slow to satisfy user requirements.
Many applications of machine learning in industry use tabular data, e.g., in finance, advertising and medicine. It was only recently that deep learning has achieved parity with classical tree-based models in these domains [9, 11]. For vision, optimizing models for practical deployment often relies on Neural Architecture Search (NAS). Most NAS literature targets convolutional networks on vision benchmarks [14, 5, 10, 19]. Despite the practical importance of tabular data, however, NAS research on this topic is quite limited [8, 7]. (See Appendix A for a more comprehensive literature review.)
Weight-sharing reduces the cost of NAS by training a SuperNet that is the superset of all candidate architectures [2]. This trained SuperNet is then used to estimate the quality of each candidate architecture or child network by allowing activations in only a subset of the components of the SuperNet and evaluating the model. Reinforcement learning (RL) has been shown to efficiently find the most promising child networks [16, 5, 3] for vision problems.
In our experiments, we show that a direct application of approaches designed for vision to tabular data often fails. For example, the TuNAS [3] approach from vision struggles to find the optimal architectures for tabular datasets (see experiments). The failure is caused by the interaction of the search space and the factorized RL controller. To understand why, consider the following toy example
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
with 2 layers, illustrated in Figure 1. For each layer, we can choose a layer size of 2, 3, or 4, and the maximum number of parameters is set to 25. The optimal solution is to set the size of the first hidden layer to 4 and the second to 2. Finding this solution with RL is difficult with a cost penalty approach. The RL controller is initialized with uniform probabilities. As a result, it is quite likely that the RL controller will initially be penalized heavily when choosing option 4 for the first layer, since two thirds of the choices for the second layer will result in a model that is too expensive. As a result, option 4 for the first layer is quickly discarded by the RL controller and we get stuck in a local optimum.
This co-adaptation problem is caused by the fact that existing NAS methods for computer vision often use factorized RL controllers, which force all choices to be made independently. While factorized controllers can be optimized easily and are parameter-efficient, they cannot capture all of the nuances in the loss landscape. A solution to this could be to use a more complex model such as an LSTM (e.g., [16, 4]). However, LSTMs are often much slower to train and are far more difficult to tune.
Our proposed method, TabNAS, uses a solution inspired by rejection sampling. It updates the RL controller only when the sampled model satisfies the cost constraint. The RL controller is then discouraged from sampling poor models within the cost constraint and encouraged to sample the high quality models. Rather than penalizing models that violate the constraints, the controller silently discards them. This trick allows the RL controller to see the true constrained loss landscape, in which
having some large layers is beneficial, allowing TabNAS to efficiently find global (not just local) optima for tabular NAS problems. Our contributions can be summarized as follows:
• We identify failure cases of existing resource-aware NAS methods on tabular data and provide evidence this failure is due to the cost penalty in the reward together with the factorized space.
• We propose and evaluate an alternative: a rejection sampling mechanism that ensures the RL controller only selects architectures that satisfy the resource constraint. This extra rejection step allows the RL controller to explore parts of the search space that would otherwise be overlooked.
• The rejection mechanism also introduces a systematic bias into the RL gradient updates, which can skew the results. To compensate for this bias, we introduce a theoretically motivated and empirically effective correction into the gradient updates. This correction can be computed exactly for small search spaces and efficiently approximated by Monte-Carlo sampling otherwise.
• We show the resulting method, TabNAS, automatically learns whether a bottleneck structure is needed in an optimal architecture, and if needed, where to place the bottleneck in the network.
These contributions form TabNAS, our RL-based weight-sharing NAS with rejection-based reward. TabNAS robustly and efficiently finds a feasible architecture with optimal performance within the resource constraint. Figure 2 shows an example.
2 Notation and terminology
Math basics. We define [n] = {1, ··· , n} for a positive integer n. With a Boolean variable X , the indicator function 1(X ) equals 1 if X is true, and 0 otherwise. |S| denotes the cardinality of a set S; stop_grad(f) denotes the constant value (with gradient 0) corresponding to a differentiable quantity f , and is equivalent to tensorflow.stop_gradient(f) in TensorFlow [1] or f.detach() in PyTorch [15]. ⊆ and ⊂ denote subset and strict subset, respectively. ∇ denotes the gradient with respect to the variable in the context.
Weight, architecture, and hyperparameter. We use weights to refer to the parameters of the neural network. The architecture of a neural network is the structure of how nodes are connected; examples of architectural choices are hidden layer sizes and activation types. Hyperparameters are the non-architectural parameters that control the training process of either stand-alone training or RL, including learning rate, optimizer type, optimizer parameters, etc.
Neural architecture. A neural network with specified architecture and hyperparameters is called a model. We only consider fully-connected feedforward networks (FFNs) in this paper, since they can already achieve SOTA performance on tabular datasets [11]. The number of hidden nodes after each weight matrix and activation function is called a hidden layer size. We denote a single network in our search space with hyphen-connected choices. For example, when searching for hidden layer sizes, in the space of 3-hidden-layer ReLU networks, 32-144-24 denotes the candidate where the sizes of
the first, second and third hidden layers are 32, 144 and 24, respectively. We only search for ReLU networks; for brevity, we will not mention the activation function type in the sequel.
Loss-resource tradeoff and reference architectures. In the hidden layer size search space, the validation loss in general decreases with the increase of the number of parameters, giving the loss-resource tradeoff (e.g., Figure 3). Here loss and number of parameters serve as two costs for NAS. Thus there are Pareto-optimal models that achieve the smallest loss among all models with a given bound on the number of parameters. With an architecture that outperforms others with a similar or fewer number of parameters, we do resource-constrained NAS with the number of parameters of this architecture as the resource target or constraint. We call this architecture the reference architecture (or reference) of NAS, and its performance the reference performance. We do NAS with the goal of matching (the size and performance of) the reference. Note that the RL controller only has knowledge of the number of parameters of the reference, and is not informed of its hidden layer sizes.
Search space. When searching L-layer networks, we use capital letters like X = X1- ··· -XL to denote the random variable of sampled architectures, in which Xi is the random variable for the size of the i-th layer. We use lowercase letters like x = x1- ··· -xL to denote an architecture sampled from the distribution over X , in which xi is an instance of the i-th layer size. When there are multiple samples drawn, we use a bracketed superscript to denote the index over samples: x(k) denotes the k-th sample. The search space S = {sij}i∈[L],j∈[Ci] has Ci choices for the i-th hidden layer, in which sij is the j-th choice for the size of the i-th hidden layer: for example, when searching for a one-hidden-layer network with size candidates {5, 10, 15}, we have s13 = 15.
Reinforcement learning. The RL algorithm learns the set of logits {ℓij}i∈[L],j∈[Ci], in which ℓij is the logit associated with the j-th choice for the i-th hidden layer. With a fully factorized distribution of layer sizes (we learn a separate distribution for each layer), the probability of sampling the j-th choice for the i-th layer pij is given by the SoftMax function: pij = exp(ℓij)/∑j∈[Ci] exp(ℓij). In each RL step, we sample an architecture y to compute the single-step RL objective J(y), and update the logits with ∇J(y): an unbiased estimate of the gradient of the RL value function.

Resource metric and number of parameters. We use the number of parameters, which can be easily computed for neural networks, as a cost metric in this paper. However, our approach does not depend on the specific cost used, and can be easily adapted to other cost metrics.
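To make the factorized sampling concrete, the following minimal sketch (ours, in plain NumPy rather than the TensorFlow used in our experiments; all names are illustrative) draws an architecture from per-layer SoftMax distributions and accumulates its log-probability, which factorizes over layers:

```python
import numpy as np

def sample_architecture(logits, rng):
    """Sample one architecture from a factorized controller.

    logits[i][j] is the logit l_ij for the j-th size choice of hidden
    layer i. Returns the sampled choice indices and the joint
    log-probability (layers are independent, so log-probs add).
    """
    choices, log_p = [], 0.0
    for layer_logits in logits:
        p = np.exp(layer_logits - layer_logits.max())  # stable SoftMax
        p /= p.sum()
        j = rng.choice(len(p), p=p)
        choices.append(j)
        log_p += np.log(p[j])
    return choices, log_p

rng = np.random.default_rng(0)
# Toy 2-layer space as in Figure 1: sizes {2, 3, 4} for each layer.
logits = [np.zeros(3), np.zeros(3)]  # uniform at initialization
print(sample_architecture(logits, rng))
```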
3 Methodology
Our NAS methodology can be decomposed into three main components: weight-sharing with layer warmup, REINFORCE with one-shot search, and Monte Carlo (MC) sampling with rejection.
As an overview, our method starts with a SuperNet, which is a network that layer-wise has width equal to the largest choice within the search space. We first stochastically update the weights of the entire SuperNet to “warm up” over the first 25% of search epochs. Then we alternate between updating the shared model weights (which are used to estimate the quality of different child models) and the RL controller (which focuses the search on the most promising parts of the space). In each iteration, we first sample a child network from the current layer-wise probability distributions and update the corresponding weights within the SuperNet (weight update). We then sample another child network to update the layerwise logits that give the probability distributions (RL update). The latter RL update is only performed if the sampled network is feasible, in which case we use rejection with MC sampling to update the logits with a sampling probability conditional on the feasible set.
To avoid overfitting, we split the labelled portion of a dataset into training and validation splits. Weight updates are carried out on the training split; RL updates are performed on the validation split.
3.1 Weight sharing with layer warmup
The weight-sharing approach has shown success on various computer vision tasks and NAS benchmarks [16, 2, 5, 3]. To search for an FFN on tabular datasets, we build a SuperNet where the size of each hidden layer is the largest value in the search space. Figure 4 shows an example. When we sample a child network with a hidden layer size ℓi smaller than the SuperNet, we only use the first ℓi hidden nodes in that layer to compute the output in the forward pass and the gradients in the
backward pass. Similarly, in RL updates, only the weights of the child network are used to estimate the quality reward that is used to update logits.
In weight-sharing NAS, warmup helps to ensure that the SuperNet weights are sufficiently trained to properly guide the RL updates [3]. With probability p, we train all weights of the SuperNet, and with probability 1− p we only train the weights of a random child model. When we run architecture searches for FFNs, we do warmup in the first 25% epochs, during which the probability p linearly decays from 1 to 0 (Figure 5(a)). The RL controller is disabled during this period.
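A minimal sketch of this warmup schedule (ours; function and variable names are illustrative):

```python
import numpy as np

def warmup_prob(epoch, total_epochs, warmup_frac=0.25):
    """Probability of training the full SuperNet at a given epoch:
    linearly decays from 1 to 0 over the first warmup_frac of epochs
    (25% in the paper), and stays at 0 afterwards."""
    warmup_epochs = warmup_frac * total_epochs
    return max(0.0, 1.0 - epoch / warmup_epochs)

rng = np.random.default_rng(0)
for epoch in range(0, 100, 10):
    p = warmup_prob(epoch, total_epochs=100)
    train_full = rng.random() < p  # otherwise: train a random child model
    print(epoch, round(p, 2), train_full)
```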
3.2 One-shot training and REINFORCE
We do NAS on FFNs with a REINFORCE-based algorithm. Previous works have used this type of algorithm to search for convolutional networks on vision tasks [18, 5, 3]. When searching for L-layer FFNs, we learn a separate probability distribution over Ci size candidates for each layer. The distribution is given by Ci logits via the SoftMax function. Each layer has its own independent set of logits. With Ci choices for the ith layer, where i = 1, 2, ··· , L, there are ∏ i∈[L] Ci candidate
networks in the search space but only ∑i∈[L] Ci logits to learn. This technique significantly reduces the difficulty of RL and makes the NAS problem practically tractable [5, 3].
The REINFORCE-based algorithm trains the SuperNet weights and learns the logits {ℓij}i∈[L],j∈[Ci] that give the sampling probabilities {pij}i∈[L],j∈[Ci] over size candidates by alternating between weight and RL updates. In each iteration, we first sample a child network x from the SuperNet and compute its training loss in the forward pass. Then we update the weights in x with gradients of the training loss computed in the backward pass. This weight update step trains the weights of x. The weights in architectures with larger sampling probabilities are sampled and thus trained more often. We then update the logits for the RL controller by sampling a child network y that is independent of the network x from the same layerwise distributions, computing the quality reward Q(y) as 1 − loss(y) on the validation set, and then updating the logits with the gradient of J(y) = stop_grad(Q(y) − Q̄) log P(y): the product of the advantage of y's reward over past rewards (usually an exponential moving average) and the log-probability of the current sample.
The alternation creates a positive feedback loop that trains the weights and updates the logits of the large-probability child networks; thus the layer-wise sampling probabilities gradually converge to more deterministic distributions, under which one or several architectures are finally selected.
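To spell out a single RL update, note that the score function of a SoftMax has the closed form ∂ log p_{j*}/∂ℓ_j = 1[j = j*] − p_j. The sketch below (ours; the learning rate and baseline decay are illustrative values, not our tuned hyperparameters) performs one such update with the moving-average baseline Q̄:

```python
import numpy as np

def softmax(l):
    p = np.exp(l - l.max())
    return p / p.sum()

def reinforce_update(logits, choices, quality, q_bar, lr=0.005, decay=0.9):
    """One REINFORCE step on factorized layer-wise logits.

    Uses d log p_{j*} / d l_j = 1[j == j*] - p_j, so the gradient of
    J = stop_grad(Q - Q_bar) * log P(y) is computed per layer in closed
    form. Returns the updated moving-average baseline Q_bar.
    """
    advantage = quality - q_bar
    for layer_logits, j_star in zip(logits, choices):
        grad_log_p = -softmax(layer_logits)
        grad_log_p[j_star] += 1.0
        layer_logits += lr * advantage * grad_log_p  # gradient ascent on J
    return decay * q_bar + (1 - decay) * quality

logits = [np.zeros(3), np.zeros(3)]
q_bar = reinforce_update(logits, choices=[2, 0], quality=0.9, q_bar=0.5)
```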
Details of a resource-oblivious version, which does not take a resource constraint into account, are given as Algorithm 1 in Appendix B. In Section 3.3, we show an algorithm that combines Monte-Carlo sampling with rejection sampling, which serves as a subroutine of Algorithm 1 by replacing the probability in J(y) with a conditional version.
3.3 Rejection-based reward with MC sampling
Only a subset of the architectures in the search space S will satisfy resource constraints; V denotes this set of feasible architectures. To find a feasible architecture, a resource target T0 is often used in an RL reward. Given an architecture y, a resource-aware reward combines its quality Q(y) and resource consumption T(y) into a single reward. MnasNet [18] proposes the rewards Q(y)(T(y)/T0)^β and Q(y) max{1, (T(y)/T0)^β}, while TuNAS [3] proposes the absolute value reward (or Abs Reward) Q(y) + β|T(y)/T0 − 1|. The idea behind these rewards is to encourage high-quality models that stay close to the resource target. In these rewards, β is a hyperparameter that needs careful tuning.
We find that on tabular data, RL controllers using these resource-aware rewards above can struggle to discover high quality structures. Figure 1 shows a toy example in the search space in Figure 4, in which we know the validation losses of each child network and only train the RL controller for 500 steps. The optimal network is 4-2 among architectures with number of parameters no more than 25, but the RL controller rarely chooses it. In Section 4.1, we show examples on real datasets.
This phenomenon reveals a gap between the true distribution we want to sample from and the distributions obtained by sampling from this factorized search space:
• We only want to sample from the set of feasible architectures V , whose distribution is {P(y |y ∈ V )}y∈V . The resources (e.g., number of parameters) used by an architecture, and thus its feasibility, is determined jointly by the sizes of all layers.
[Figure caption: P(V) and P̂(V) in a successful search, with 8,000 architectures in the search space and N = 1024 MC samples. Both probabilities are (nearly) constant during warmup before RL starts, then increase after RL starts because of rejection sampling.]
• On the other hand, the factorized search space learns a separate (independent) probability distribution for the choices of each layer. While this distribution is efficient to learn, independence between layers discourages an RL controller with a resource-aware reward from choosing a bottleneck structure. A bottleneck requires the controller to select large sizes for some layers and small for others. But decisions for different layers are made independently, and both very large and very small layer sizes, considered independently, have poor expected rewards: small layers are estimated to perform poorly, while large layers easily exceed the resource constraints.
To bridge the gap and efficiently learn layerwise distributions that take into account the architecture feasibility, we propose a rejection-based RL reward for Algorithm 1. We next sketch the idea; detailed pseudocode is provided as Algorithm 2 in Appendix B.
REINFORCE optimizes a set of logits {ℓij}i∈[L],j∈[Ci] which define a probability distribution p over architectures. In the original algorithm, we sample a random architecture y from p and then estimate its quality Q(y). Updates to the logits ℓij take the form ℓij ← ℓij + η ∂J(y)/∂ℓij, where η is the learning rate, Q̄ is a moving average of recent rewards, and J(y) = stop_grad(Q(y) − Q̄) · log P(y). If y is better (worse) than average, then Q(y) − Q̄ will be positive (negative), so the REINFORCE update will increase (decrease) the probability of sampling the same architecture in the future.
In our new REINFORCE variant, motivated by rejection sampling, we do not update the logits when y is infeasible. When y is feasible, we replace the probability P(y) in the REINFORCE update equation with the conditional probability P(y | y ∈ V ) = P(y)/P(y ∈ V ). So J(y) becomes
$$J(y) = \mathrm{stop\_grad}(Q(y) - \bar{Q}) \cdot \log\left[P(y)/P(y \in V)\right]. \qquad (1)$$
We can compute the probability of sampling a feasible architecture P(V) := P(y ∈ V) exactly when the search space is small, but this computation is too expensive when the space is large. Instead, we replace the exact probability P(V) with a differentiable approximation P̂(V) obtained with Monte-Carlo (MC) sampling. In each RL step, we sample N architectures {z(k)}k∈[N] within the search space with a proposal distribution q and estimate P(V) as
$$\hat{P}(V) = \frac{1}{N} \sum_{k\in[N]} \frac{p^{(k)}}{q^{(k)}} \cdot \mathbf{1}\left(z^{(k)} \in V\right). \qquad (2)$$
For each k ∈ [N ], p(k) is the probability of sampling z(k) with the factorized layerwise distributions and so is differentiable with respect to the logits. In contrast, q(k) is the probability of sampling z(k) with the proposal distribution, and is therefore non-differentiable.
P̂(V) is an unbiased and consistent estimate of P(V); ∇log[P(y)/P̂(V)] is a consistent estimate of ∇log[P(y | y ∈ V)] (Appendix J). A larger N gives better results (Appendix H); in experiments, we need an N smaller than the size of the sample space to get a faithful estimate (Figure 5(b), Appendix D and I), because neighboring RL steps can correct each other's estimates. We set q = stop_grad(p) in experiments for convenience: we use the current distribution over architectures for MC sampling. Other distributions that have larger support on V may be used to reduce sampling variance (Appendix J).
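Putting these pieces together, the following sketch (ours) performs one rejection-corrected RL step under the q = stop_grad(p) choice above, in which ∇ log P̂(V) reduces to the average score vector over the feasible MC samples; `is_feasible` stands for a constraint check (e.g., on the number of parameters) and is a placeholder:

```python
import numpy as np

def softmax(l):
    p = np.exp(l - l.max())
    return p / p.sum()

def score(logits, choices):
    """Closed-form gradient of log P(choices) w.r.t. each layer's logits."""
    grads = []
    for layer_logits, j in zip(logits, choices):
        g = -softmax(layer_logits)
        g[j] += 1.0
        grads.append(g)
    return grads

def rejection_update(logits, y, quality, q_bar, is_feasible,
                     n_mc=1024, lr=0.005, rng=None):
    """One TabNAS-style RL step (a sketch): skip infeasible y; otherwise
    update with the corrected gradient score(y) - mean_feasible(score(z))."""
    if not is_feasible(y):
        return  # rejection: infeasible samples never update the logits
    rng = rng or np.random.default_rng(0)
    feasible_scores = []
    for _ in range(n_mc):
        z = [rng.choice(len(l), p=softmax(l)) for l in logits]
        if is_feasible(z):
            feasible_scores.append(score(logits, z))
    if not feasible_scores:  # P-hat(V) = 0: no usable estimate this step
        return
    advantage = quality - q_bar
    g_y = score(logits, y)
    for i, layer_logits in enumerate(logits):
        correction = np.mean([g[i] for g in feasible_scores], axis=0)
        layer_logits += lr * advantage * (g_y[i] - correction)
```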
At the end of NAS, we pick as our final architecture the layer sizes with largest sampling probabilities if the layerwise distributions are deterministic, or sample from the distributions m times and pick n feasible architectures with the largest number of parameters if not. Appendix B Algorithm 3 provides the full details. We find m = 500 and n ≤ 3 suffice to find an architecture that matches the reference (optimal) architecture in our experiments.
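A sketch of this selection step (ours; `num_params` is a placeholder for a helper that maps an architecture to its parameter count):

```python
import numpy as np

def softmax(l):
    p = np.exp(l - l.max())
    return p / p.sum()

def pick_final_architecture(logits, is_feasible, num_params, m=500, n=3,
                            rng=None):
    """Sample m architectures from the learned layerwise distributions,
    keep the feasible ones, and return up to n distinct architectures
    with the largest parameter counts."""
    rng = rng or np.random.default_rng(0)
    feasible = set()
    for _ in range(m):
        z = tuple(rng.choice(len(l), p=softmax(l)) for l in logits)
        if is_feasible(z):
            feasible.add(z)
    return sorted(feasible, key=num_params, reverse=True)[:n]
```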
In practice, the distributions often (almost) converge after twice the number of epochs used to train a stand-alone child network. Indeed, the distributions are often already useful after training for the same number of epochs, in that the architectures found by Algorithm 3 are competitive. Figure 1 shows TabNAS finds the best feasible architecture, 4-2, in our toy example, using P̂(V) estimated by MC sampling.
4 Experimental results
Our implementation can be found at https://github.com/google-research/tabnas. We ran all experiments using TensorFlow on a Cloud TPU v2 with 8 cores. We use a 1,027-dimensional input representation for the Criteo dataset and 180 features for Volkert¹. The best architectures in our FFN search spaces already produce near-state-of-the-art results; details in Appendix C.2. More details of the experimental setup and results in other search spaces can be found in Appendix C and D. Appendix E tabulates the performance of all RL rewards on all tabular datasets in our experiments. Appendix F shows a comparison with Bayesian optimization and evolutionary search in similar settings; ablation studies in Appendix I show TabNAS components collectively deliver desirable results; Appendix H shows TabNAS has easy-to-tune hyperparameters.
4.1 When do previous RL rewards fail?
Section 3.3 discussed the resource-aware RL rewards and highlighted a potential failure case. In this section, we show several failure cases of three resource-aware rewards, Q(y)(T (y)/T0)β , Q(y) max{1, (T (y)/T0)β}, and the Abs Reward Q(y) + β|T (y)/T0 − 1|, on our tabular datasets.
4.1.1 Criteo – 3 layer search space
We use the 32-144-24 reference architecture (41,153 parameters). Figure 3 gives an overview of the costs and losses of all architectures in the search space. The search space requires us to choose one of 20 possible sizes for each hidden layer; details in Appendix D. The search has 1.7× the cost of a stand-alone training run.
1Our paper takes these features as given. It is worth noting that methods proposed in feature engineering works like [12] and [13] are complementary to and can work together with TabNAS.
Failure of latency rewards. Figure 6 shows the sampling probabilities from the search when using the Abs Reward, and the retrain validation losses of the found architecture 32-64-96. In Figures 6(a) – 6(c), the sampling probabilities for the different choices are uniform during warmup and then converge quickly. The final selected model (32-64-96) is much worse than the reference model (32-144-24) even though the reference model is actually less expensive. We also observed similar failures for the MnasNet rewards. With the MnasNet rewards, the RL controller also struggles to find a model within ±5% of the constraint despite a grid search of the RL parameters (details in Appendix C). In both cases, almost all found models are worse than the reference architecture.
Figure 7: Left: 3-layer Criteo SuperNet calibration after 60 epochs (search space in Appendix C): Pearson correlation is 0.96. The one-shot loss is validation loss of each child network with weights taken from a SuperNet trained with the same hyperparameters as in Figure 6 but with no RL in the first 60 epochs; the stand-alone loss of each child network is computed by training the same architecture with the same hyperparameters from scratch, and has std 0.0003. Right: change in probabilities in layer 2 after 60 epochs of SuperNet training and 40 of RL. Note the rapid changes due to RL.
The RL controller is to blame. To verify that a low quality SuperNet was not the culprit, we trained a SuperNet without updating the RL controller, and manually inspected the quality of the resulting SuperNet. The sampling probabilities for the RL controller remained uniform throughout the search; the rest of the training setup was kept the same. At the end of the training, we compare two sets of losses on each of the child networks: the validation loss from the SuperNet (one-shot loss), and the validation loss from training the child network from scratch. Figure 7(a) shows that there is a strong correlation between these accuracies; Figure 7(b) shows that RL starting from the sufficiently trained SuperNet weights in 7(a) still chooses the suboptimal choice 64. This suggests that the suboptimal search results on Criteo are likely due to issues with the RL controller, rather than issues with the one-shot model weights. In a 3-layer search space we can actually find good models without the RL controller, but in a 5-layer search space, we found that an RL controller whose training is interleaved with SuperNet training is important for achieving good results.
4.1.2 Volkert – 4 layer search space
We search for 4-layer and 9-layer networks on the Volkert dataset; details in Appendix D. For resource-aware RL rewards, we ran a grid search over the RL learning rate and β hyperparameter. The reference architecture for the 4 layer search space is 48-160-32-144 with 27,882 parameters. Despite a hyperparameter grid search, it was difficult to find models with the right target cost reliably using the MnasNet rewards. Using the Abs Reward (Figure 8), searched models met the target cost but their quality was suboptimal, and the trend
is similar to what has been shown in the toy example (Figure 1): a smaller |β| gives an infeasible architecture that is beyond the reference number of parameters, and a larger |β| gives an architecture that is feasible but suboptimal.
4.1.3 A common failure pattern
Apart from Section 4.1.1 and 4.1.2, more examples in search spaces of deeper FFNs can be found in Appendix D. In cases on Criteo and Volkert where the RL controller with soft constraints cannot match the quality of the reference architectures, the reference architecture often has a bottleneck structure. For example, with a 1,027-dimensional input representation, the 32-144-24 reference on Criteo has bottleneck 32; with 180 features, the 48-160-32-144 reference on Volkert has bottlenecks 48 and 32. As the example in Section 3.3 shows, the wide hidden layers around the bottlenecks get penalized harder in the search, and it is thus more difficult for RL with the Abs Reward to find a
model that can match the reference performance. Also, Appendix C.2.1 shows the Pareto-optimal architectures in the tradeoff points in Figure 3 often have bottleneck structures, so resource-aware RL rewards in previous NAS practice may have more room for improvement than previously believed.
4.2 NAS with TabNAS reward
With proper hyperparameters (Appendix H), our RL controller with TabNAS reward finds the global optimum when RL with resource-aware rewards produces suboptimal results.
TabNAS does not introduce a resource-aware bias in the RL reward (Section 3.3). Instead, it uses conditional probabilities to update the logits in feasible architectures. We run TabNAS for 120 epochs with RL learning rate 0.005 and N = 3072 MC samples.² The RL controller converges to two architectures, 32-160-16 (40,769 parameters, with loss 0.4457 ± 0.0002) and 32-144-24 (41,153 parameters, with loss 0.4455 ± 0.0003), after around 50 epochs of NAS, then oscillates between these two solutions (Figure 9). After 120 epochs, we sample from the layerwise distribution and pick the largest feasible architecture: the global optimum 32-144-24.
On the same hardware, the search takes 3× the runtime of stand-alone training. Hence, as can be seen in Figure 2, the proposed architecture search method is much faster than a random baseline.
4.3 TabNAS automatically determines whether bottlenecks are needed
Previous NAS works like MnasNet and TuNAS (often or only on vision tasks) often have inverted bottleneck blocks [17] in their search spaces. However, the search spaces used there have a hardcoded requirement that certain layers must have bottlenecks. In contrast, our search spaces permit the controller to automatically determine whether to use bottleneck structures based on the task under consideration. TabNAS automatically finds high-quality architectures, both in cases where bottlenecks are needed and in cases where they are not. This is important because networks with bottlenecks do not always outperform others on all tasks. For example, the reference architecture 32-144-24 outperforms the TuNAS-found 32-64-96 on Criteo, but the reference 64-192-48-32 (64,568 parameters, 0.0662 ± 0.0011) is on par with the TuNAS-and-TabNAS-found 96-80-96-32 (64,024 parameters, 0.0669 ± 0.0013) on Aloi. TabNAS automatically finds an optimal (bottleneck) architecture for Criteo, and automatically finds an optimal architecture that does not necessarily have a bottleneck structure for Aloi. Previous reward-shaping rewards like the Abs Reward only succeed in the latter case.
4.4 Rejection-based reward outperforms Abs Reward in NATS-Bench size search space
Although we target resource-constrained NAS on tabular datasets in this paper, our proposed method is not specific to NAS on tabular datasets. In Appendix G, we show the rejection-based reward in TabNAS outperforms RL with the Abs Reward in the size search space of NATS-Bench [6], a NAS benchmark on vision tasks.
²The 3-layer search space has 20³ = 8,000 candidate architectures, which is small enough to compute P(V) exactly. However, MC can scale to larger spaces which are prohibitively expensive for exhaustive search (Appendix D).
5 Conclusion
We investigate the failure of resource-aware RL rewards to discover optimal structures in tabular NAS and propose TabNAS for tabular NAS in a constrained search space. The TabNAS controller uses a rejection mechanism to compute the policy gradient updates from feasible architectures only, and uses Monte-Carlo sampling to reduce the cost of debiasing this rejection-sampling approach. Experiments show TabNAS finds better architectures than previously proposed RL methods with resource-aware rewards in resource-constrained searches.
Many questions remain open. For example: 1) Can the TabNAS strategy find better architectures on other types of tasks such as vision and language? 2) Can TabNAS improve RL results for more complex architectures? 3) Is TabNAS useful for resource-constrained RL problems more broadly?
Acknowledgments and Disclosure of Funding
This work was done when Madeleine Udell was a visiting researcher at Google. The authors thank Ruoxi Wang, Mike Van Ness, Ziteng Sun, Xuanyi Dong, Lijun Ding, Yanqi Zhou, Chen Liang, Zachary Frangella, Yi Su, and Ed H. Chi for helpful discussions, and thank several anonymous reviewers for useful comments. | 1. What is the focus and contribution of the paper on neural architecture search?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and significance?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any concerns or limitations regarding the societal impact of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a neural architecture search algorithm designed for tabular datasets. Prior work uses an RL-based optimization procedure with penalties for feasibility and resource constraints, but the authors show that this does not work well. To address this issue, the authors add an additional rejection sampling phase to filter out those that do not satisfy constraints, and show that this allows the RL method to better explore the search space of architectures. Evaluation is done on the Criteo and Volkert datasets, where the found architectures consistently achieve lower losses.
Strengths And Weaknesses
Strengths:
I believe that the use of rejection sampling in NAS algorithms is novel. However, I am not familiar with neural architecture search so my knowledge of related work is relatively small.
The work is significant, in that it unlocks more NAS applications in a new domain area.
In general, the writing is clear and well-organized - the authors clearly present their method and reasoning, and I appreciate that the authors experimented with multiple baselines and multiple datasets and found consistent results between them.
Weaknesses:
The experimental results are presented on a domain the baseline methods were not developed for, so it is more difficult to put the results in context. It would be informative to see how well the proposed methods work well on image-based domains, even if it does not perform as well.
Questions
I think the results could be summarized in a more concise and clear way (e.g. a table) - in the paper the only results are shown in 2 small graphs where it's difficult to tell what the absolute numbers are and which baseline is being run.
Limitations
The authors did not discuss any potential negative societal impacts of their work. |
NIPS | Title
Compression-aware Training of Deep Networks
Abstract
In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deeper neural networks. Unfortunately, the huge number of units of these networks makes them expensive both computationally and memory-wise. To overcome this, exploiting the fact that deep networks are over-parametrized, several compression strategies have been proposed. These methods, however, typically start from a network that has been trained in a standard manner, without considering such a future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques.
1 Introduction
With the increasing availability of large-scale datasets, recent years have witnessed a resurgence of interest for Deep Learning techniques. Impressive progress has been made in a variety of application domains, such as speech, natural language and image processing, thanks to the development of new learning strategies [15, 53, 30, 45, 26, 3] and of new architectures [31, 44, 46, 23]. In particular, these architectures tend to become ever deeper, with hundreds of layers, each of which containing hundreds or even thousands of units.
While it has been shown that training such very deep architectures was typically easier than smaller ones [24], it is also well-known that they are highly over-parameterized. In essence, this means that equally good results could in principle be obtained with more compact networks. Automatically deriving such equivalent, compact models would be highly beneficial in runtime- and memorysensitive applications, e.g., to deploy deep networks on embedded systems with limited hardware resources. As a consequence, many methods have been proposed to compress existing architectures.
An early trend for such compression consisted of removing individual parameters [33, 22] or entire units [36, 29, 38] according to their influence on the output. Unfortunately, such an analysis of individual parameters or units quickly becomes intractable in the presence of very deep networks. Therefore, currently, one of the most popular compression approaches amounts to extracting low-rank approximations either of individual units [28] or of the parameter matrix/tensor of each layer [14]. This latter idea is particularly attractive, since, as opposed to the former one, it reduces the number of units in each layer. In essence, the above-mentioned techniques aim to compress a network that has been pre-trained. There is, however, no guarantee that the parameter matrices of such pre-trained networks truly have low-rank. Therefore, these methods typically truncate some of the relevant information, thus resulting in a loss of prediction accuracy, and, more importantly, do not necessarily achieve the best possible compression rates.
In this paper, we propose to explicitly account for compression while training the initial deep network. Specifically, we introduce a regularizer that encourages the parameter matrix of each layer to have
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
low rank in the training loss, and rely on a stochastic proximal gradient descent strategy to optimize the network parameters. In essence, and by contrast with methods that aim to learn uncorrelated units to prevent overfitting [5, 54, 40], we seek to learn correlated ones, which can then easily be pruned in a second phase. Our compression-aware training scheme therefore yields networks that are well adapted to the following post-processing stage. As a consequence, we achieve higher compression rates than the above-mentioned techniques at virtually no loss in prediction accuracy.
Our approach constitutes one of the very few attempts at explicitly training a compact network from scratch. In this context, the work of [4] has proposed to learn correlated units by making use of additional noise outputs. This strategy, however, is only guaranteed to have the desired effect for simple networks and has only been demonstrated on relatively shallow architectures. In the contemporary work [51], units are coordinated via a regularizer acting on all pairs of filters within a layer. While effective, exploiting all pairs can quickly become cumbersome in the presence of large numbers of units. Recently, group sparsity has also been employed to obtain compact networks [2, 50]. Such a regularizer, however, acts on individual units, without explicitly aiming to model their redundancies. Here, we show that accounting for interactions between the units within a layer allows us to obtain more compact networks. Furthermore, using such a group sparsity prior in conjunction with our compression-aware strategy lets us achieve even higher compression rates.
We demonstrate the benefits of our approach on several deep architectures, including the 8-layers DecomposeMe network of [1] and the 50-layers ResNet of [23]. Our experiments on ImageNet and ICDAR show that we can achieve compression rates of more than 90%, thus hugely reducing the number of required operations at inference time.
2 Related Work
It is well-known that deep neural networks are over-parametrized [13]. While, given sufficient training data, this seems to facilitate the training procedure, it also has two potential drawbacks. First, over-parametrized networks can easily suffer from overfitting. Second, even when they can be trained successfully, the resulting networks are expensive both computationally and memory-wise, thus making their deployment on platforms with limited hardware resources, such as embedded systems, challenging. Over the years, much effort has been made to overcome these two drawbacks.
In particular, much progress has been made to reduce overfitting, for example by devising new optimization strategies, such as DropOut [45] or MaxOut [16]. In this context, other works have advocated the use of different normalization strategies, such as Batch Normalization [26], Weight Normalization [42] and Layer Normalization [3]. Recently, there has also been a surge of methods aiming to regularize the network parameters by making the different units in each layer less correlated. This has been achieved by designing new activation functions [5], by explicitly considering the pairwise correlations of the units [54, 37, 40] or of the activations [9, 52], or by constraining the weight matrices of each layer to be orthonormal [21].
In this paper, we are more directly interested in addressing the second drawback, that is, the large memory and runtime required by very deep networks. To tackle this, most existing research has focused on pruning pre-trained networks. In this context, early works have proposed to analyze the saliency of individual parameters [33, 22] or units [36, 29, 38, 34], so as to measure their impact on the output. Such a local analysis, however, quickly becomes impractically expensive when dealing with networks with millions of parameters.
As a consequence, recent works have proposed to focus on more global methods, which analyze larger groups of parameters simultaneously. In this context, the most popular trend consists of extracting low-rank approximations of the network parameters. In particular, it has been shown that individual units can be replaced by rank 1 approximations, either via a post-processing step [28, 46] or directly during training [1, 25]. Furthermore, low-rank approximations of the complete parameter matrix/tensor of each layer were computed in [14], which has the benefit of reducing the number of units in each layer. The resulting low-rank representation can then be fine-tuned [32], or potentially even learned from scratch [47], given the rank of each layer in the network. With the exception of this last work, which assumes that the ranks are known, these methods, however, aim to approximate a given pre-trained model. In practice, however, the parameter matrices of this model might not have low rank. Therefore, the resulting approximations yield some loss of accuracy and, more importantly,
will typically not correspond to the most compact networks. Here, we propose to explicitly learn a low-rank network from scratch, but without having to manually define the rank of each layer a priori.
To this end, and in contrast with the above-mentioned methods that aim to minimize correlations, we rather seek to maximize correlations between the different units within each layer, such that many of these units can be removed in a post-processing stage. In [4], additional noise outputs were introduced in a network to similarly learn correlated filters. This strategy, however, is only justified for simple networks and was only demonstrated on relatively shallow architectures. The contemporary work [51] introduced a penalty during training to learn correlated units. This, however, was achieved by explicitly computing all pairwise correlations, which quickly becomes cumbersome in very deep networks with wide layers. By contrast, our approach makes use of a low-rank regularizer that can effectively be optimized by proximal stochastic gradient descent.
Our approach belongs to the relatively small group of methods that explicitly aim to learn a compact network during training, i.e., not as a post-processing step. Other methods have proposed to make use of sparsity-inducing techniques to cancel out individual parameters [49, 10, 20, 19, 35] or units [2, 50, 55]. These methods, however, act, at best, on individual units, without considering the relationships between multiple units in the same layer. Variational inference [17] has also been used to explicitly compress the network. However, the priors and posteriors used in these approaches will typically zero out individual weights. Our experiments demonstrate that accounting for the interactions between multiple units allows us to obtain more compact networks.
Another line of research aims to quantize the weights of deep networks [48, 12, 18]. Note that, in a sense, this research direction is orthogonal to ours, since one could still further quantize our compact networks. Furthermore, with the recent progress in efficient hardware handling floating-point operations, we believe that there is also high value in designing non-quantized compact networks.
3 Compression-aware Training of Deep Networks
In this section, we introduce our approach to explicitly encouraging compactness while training a deep neural network. To this end, we propose to make use of a low-rank regularizer on the parameter matrix in each layer, which inherently aims to maximize the compression rate when computing a low-rank approximation in a post-processing stage. In the following, we focus on convolutional neural networks, because the popular visual recognition models tend to rely less and less on fully-connected layers, and, more importantly, the inference time of such models is dominated by the convolutions in the first few layers. Note, however, that our approach still applies to fully-connected layers.
To introduce our approach, let us first consider the l-th layer of a convolutional network, and denote its parameters by $\theta_l \in \mathbb{R}^{K_l \times C_l \times d^H_l \times d^W_l}$, where $C_l$ and $K_l$ are the number of input and output channels, respectively, and $d^H_l$ and $d^W_l$ are the height and width of each convolutional kernel. Alternatively, these parameters can be represented by a matrix $\hat\theta_l \in \mathbb{R}^{K_l \times S_l}$ with $S_l = C_l d^H_l d^W_l$. Following [14], a network can be compacted via a post-processing step performing a singular value decomposition of $\hat\theta_l$ and truncating the 0, or small, singular values. In essence, after this step, the parameter matrix can be approximated as $\hat\theta_l \approx U_l M_l^T$, where $U_l$ is a $K_l \times r_l$ matrix representing the basis kernels, with $r_l \leq \min(K_l, S_l)$, and $M_l$ is an $S_l \times r_l$ matrix that mixes the activations of these basis kernels. By making use of a post-processing step on a network trained in the usual way, however, there is no guarantee that, during training, many singular values have become near-zero. Here, we aim to explicitly account for this post-processing step during training, by seeking to obtain a parameter matrix such that $r_l \ll \min(K_l, S_l)$. To this end, given N training input-output pairs $(x_i, y_i)$, we formulate learning as the regularized minimization problem
$$\min_\Theta \; \frac{1}{N} \sum_{i=1}^{N} \ell(y_i, f(x_i, \Theta)) + r(\Theta), \qquad (1)$$
where Θ encompasses all network parameters, $\ell(\cdot,\cdot)$ is a supervised loss, such as the cross-entropy, and $r(\cdot)$ is a regularizer encouraging the parameter matrix in each layer to have low rank. Since explicitly minimizing the rank of a matrix is NP-hard, following the matrix completion literature [7, 6], we make use of a convex relaxation in the form of the nuclear norm. This lets us write our regularizer as
$$r(\Theta) = \tau \sum_{l=1}^{L} \|\hat\theta_l\|_*, \qquad (2)$$
where τ is a hyper-parameter setting the influence of the regularizer, and the nuclear norm is defined as $\|\hat\theta_l\|_* = \sum_{j=1}^{\mathrm{rank}(\hat\theta_l)} \sigma_l^j$, with $\sigma_l^j$ the singular values of $\hat\theta_l$.
In practice, to minimize (1), we make use of proximal stochastic gradient descent. Specifically, this amounts to minimizing the supervised loss only for one epoch, with learning rate ρ, and then applying the proximity operator of our regularizer. In our case, this can be achieved independently for each layer. For layer l, this proximity operator corresponds to solving
$$\theta_l^* = \operatorname*{argmin}_{\bar\theta_l} \; \frac{1}{2\rho} \|\bar\theta_l - \hat\theta_l\|_F^2 + \tau \|\bar\theta_l\|_*, \qquad (3)$$
where $\hat\theta_l$ is the current estimate of the parameter matrix for layer l. As shown in [6], the solution to this problem can be obtained by soft-thresholding the singular values of $\hat\theta_l$, which can be written as
$$\theta_l^* = U_l \Sigma_l(\rho\tau) V_l^T, \quad \text{where } \Sigma_l(\rho\tau) = \mathrm{diag}\big([(\sigma_l^1 - \rho\tau)_+, \ldots, (\sigma_l^{\mathrm{rank}(\hat\theta_l)} - \rho\tau)_+]\big), \qquad (4)$$
$U_l$ and $V_l$ are the left- and right-singular vectors of $\hat\theta_l$, and $(\cdot)_+$ corresponds to taking the maximum between the argument and 0.
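This proximity operator is straightforward to implement. A minimal NumPy sketch (ours, not the code used in our experiments):

```python
import numpy as np

def nuclear_prox(theta_hat, rho, tau):
    """Proximity operator of tau * ||.||_* (Eq. (4)): soft-threshold the
    singular values of the layer's parameter matrix by rho * tau."""
    U, s, Vt = np.linalg.svd(theta_hat, full_matrices=False)
    s_shrunk = np.maximum(s - rho * tau, 0.0)
    return (U * s_shrunk) @ Vt  # U diag(s_shrunk) V^T

theta = np.random.default_rng(0).standard_normal((64, 90))
theta_prox = nuclear_prox(theta, rho=1.0, tau=5.0)  # threshold rho*tau = 5
print(np.linalg.matrix_rank(theta_prox))  # typically well below min(64, 90)
```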
3.1 Low-rank and Group-sparse Layers
While, as shown in our experiments, the low-rank solution discussed above significantly reduces the number of parameters in the network, it does not affect the original number of input and output channels Cl and Kl . By contrast, the group-sparsity based methods [2, 50] discussed in Section 2 cancel out entire units, thus reducing these numbers, but do not consider the interactions between multiple units in the same layer, and would therefore typically not benefit from a post-processing step such as the one of [14]. Here, we propose to make the best of both worlds to obtain low-rank parameter matrices, some of whose units have explicitly been removed.
To this end, we combine the sparse group Lasso regularizer used in [2] with the low-rank one described above. This lets us re-define the regularizer in (1) as
$$r(\Theta) = \sum_{l=1}^{L} \left( (1-\alpha) \lambda_l \sqrt{P_l} \sum_{n=1}^{K_l} \|\theta_l^n\|_2 + \alpha \lambda_l \|\theta_l\|_1 \right) + \tau \sum_{l=1}^{L} \|\hat\theta_l\|_*, \qquad (5)$$
where $K_l$ is the number of units in layer l, $\theta_l^n$ denotes the vector of parameters for unit n in layer l, $P_l$ is the size of this vector (the same for all units in a layer), α ∈ [0,1] balances the influence of sparsity terms on groups vs. individual parameters, and $\lambda_l$ is a layer-wise hyper-parameter. In practice, following [2], we use only two different values of $\lambda_l$: one for the first few layers and one for the remaining ones.
To learn our model with this new regularizer consisting of two main terms, we make use of the incremental proximal descent approach proposed in [39], which has the benefit of having a lower memory footprint than parallel proximal methods. The proximity operator for the sparse group Lasso regularizer also has a closed form solution derived in [43] and provided in [2].
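For reference, a sketch of the standard closed form of this proximity operator for one layer (our illustration of the operator from [43]; it applies elementwise soft-thresholding for the ℓ1 part, followed by blockwise shrinkage of each unit for the group part, which can zero out entire units):

```python
import numpy as np

def sparse_group_lasso_prox(theta, rho, lam, alpha):
    """Prox of the sparse group Lasso term for one layer; rows of theta
    are the per-unit parameter vectors theta_l^n (each of size P_l).

    Step 1: elementwise soft-thresholding (the l1 part).
    Step 2: blockwise shrinkage of each unit (the group part)."""
    P = theta.shape[1]
    v = np.sign(theta) * np.maximum(np.abs(theta) - rho * alpha * lam, 0.0)
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - rho * (1.0 - alpha) * lam * np.sqrt(P)
                       / np.maximum(norms, 1e-12))
    return v * scale
```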
3.2 Benefits at Inference
Once our model is trained, we can obtain a compact network for faster and more memory-efficient inference by making use of a post-processing step. In particular, to account for the low rank of the parameter matrix of each layer, we make use of the SVD-based approach of [14]. Specifically, for each layer l, we compute the SVD of the parameter matrix as $\hat\theta_l = \tilde U_l \tilde\Sigma_l \tilde V_l$ and only keep the $r_l$ singular values that are either non-zero, thus incurring no loss, or larger than a pre-defined threshold, at some potential loss. The parameter matrix can then be represented as $\hat\theta_l = U_l M_l$, with $U_l \in \mathbb{R}^{C_l d^H_l d^W_l \times r_l}$ and $M_l = \Sigma_l V_l \in \mathbb{R}^{r_l \times K_l}$. In essence, every layer is decomposed into two layers. This incurs significant memory and computational savings if $r_l (C_l d^H_l d^W_l + K_l) \ll C_l d^H_l d^W_l K_l$.
Furthermore, additional savings can be achieved when using the sparse group Lasso regularizer discussed in Section 3.1. Indeed, in this case, the zeroed-out units can explicitly be removed, thus yielding only $\hat K_l$ filters, with $\hat K_l < K_l$. Note that, except for the first layer, units have also been removed from the previous layer, thus reducing $C_l$ to a lower $\hat C_l$. Furthermore, thanks to our low-rank regularizer, the remaining, non-zero, units will form a parameter matrix that still has low rank, and can thus also be decomposed. This results in a total of $r_l (\hat C_l d^H_l d^W_l + \hat K_l)$ parameters.
In our experiments, we select the rank $r_l$ based on the percentage $e_l$ of the energy (i.e., the sum of singular values) that we seek to capture by our low-rank approximation. This percentage plays an important role in the trade-off between runtime/memory savings and drop of prediction accuracy. In our experiments, we use the same percentage for all layers.
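A sketch of this energy-based truncation and two-layer decomposition (ours; note that the sketch stores the layer as an $S_l \times K_l$ matrix, the transpose of the $K_l \times S_l$ convention used above):

```python
import numpy as np

def decompose_layer(theta_hat, energy=0.8):
    """Keep the smallest rank r whose singular values capture `energy`
    of the total, and split the layer into U (S x r) and M = Sigma V^T
    (r x K), i.e. two consecutive layers."""
    U, s, Vt = np.linalg.svd(theta_hat, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
    r = min(r, len(s))
    return U[:, :r], s[:r, None] * Vt[:r]

S, K = 3 * 3 * 64, 128  # e.g. C = 64 input channels, 3x3 kernels
theta = np.random.default_rng(0).standard_normal((S, K))
Umat, Mmat = decompose_layer(theta, energy=0.8)
r = Umat.shape[1]
print(r, r * (S + K) < S * K)  # the savings condition from Section 3.2
```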
4 Experimental Settings
Datasets: For our experiments, we used two image classification datasets: ImageNet [41] and ICDAR, the character recognition dataset introduced in [27]. ImageNet is a large-scale dataset comprising over 15 million labeled images split into 22,000 categories. We used the ILSVRC2012 [41] subset consisting of 1000 categories, with 1.2 million training images and 50,000 validation images. The ICDAR dataset consists of 185,639 training samples combining real and synthetic characters and 5,198 test samples coming from the ICDAR2003 training set after removing all non-alphanumeric characters. The images in ICDAR are split into 36 categories. The use of ICDAR here was motivated by the fact that it is fairly large-scale, but, in contrast with ImageNet, existing architectures haven’t been heavily tuned to this data. As such, one can expect our approach consisting of training a compact network from scratch to be even more effective on this dataset.
Network Architectures: In our experiments, we make use of architectures where each kernel in the convolutional layers has been decomposed into two 1D kernels [1], thus inherently having rank-1 kernels. Note that this is orthogonal to the purpose of our low-rank regularizer, since, here, we essentially aim at reducing the number of kernels, not the rank of individual kernels. The decomposed layers yield even more compact architectures that require a lower computational cost for training and testing while maintaining or even improving classification accuracy. In the following, a convolutional layer refers to a layer with 1D kernels, while a decomposed layer refers to a block of two convolutional layers using 1D vertical and horizontal kernels, respectively, with a non-linearity and batch normalization after each convolution.
Let us consider a decomposed layer consisting of C and K input and output channels, respectively. Let $\bar v$ and $\bar h^T$ be vectors of length $d_v$ and $d_h$, respectively, representing the kernel size of each 1D feature map. In this paper, we set $d_h = d_v \equiv d$. Furthermore, let $\varphi(\cdot)$ be a non-linearity, and $x_c$ denote the c-th input channel of the layer. In this setting, the activation of the i-th output channel $f_i$ can be written as
$$f_i = \varphi\Big(b^h_i + \sum_{l=1}^{L} \bar h^T_{il} * \Big[\varphi\Big(b^v_l + \sum_{c=1}^{C} \bar v_{lc} * x_c\Big)\Big]\Big), \qquad (6)$$
where L is the number of vertical filters, corresponding to the number of input channels for the horizontal filters, and bvl and b h l are biases.
We report results with two different models using such decomposed layers: DecomposeMe [1] and ResNets [23]. In all cases, we make use of batch-normalization after each convolutional layer.¹ We rely on rectified linear units (ReLU) [31] as non-linearities, although some initial experiments suggest that slightly better performance can be obtained with exponential linear units [8]. For DecomposeMe, we used two different Dec$_8$ architectures, whose specific numbers of units are provided in Table 1. For residual networks, we used a decomposed ResNet-50, and empirically verified that the use of 1D kernels instead of the standard ones had no significant impact on classification accuracy.
Implementation details: For the comparison to be fair, all models, including the baselines, were trained from scratch on the same computer using the same random seed and the same framework. More specifically, we used the torch-7 multi-gpu framework [11].
¹We empirically found the use of batch normalization after each convolutional layer to have more impact with our low-rank regularizer than with group sparsity or with no regularizer, in which cases the computational cost can be reduced by using a single batch normalization after each decomposed layer.
For ImageNet, training was done on a DGX-1 node using two P100 GPUs in parallel. We used stochastic gradient descent with a momentum of 0.9 and a batch size of 180 images. The models were trained using an initial learning rate of 0.1 multiplied by 0.1 every 20 iterations for the small models (Dec$_8^{256}$ in Table 1) and every 30 iterations for the larger models (Dec$_8^{512}$ in Table 1). For ICDAR, we trained each network on a single TitanX-Pascal GPU for a total of 55 epochs with a batch size of 256 and 1,000 iterations per epoch. We follow the same experimental setting as in [2]: The initial learning rate was set to an initial value of 0.1 and multiplied by 0.1. We used a momentum of 0.9.
For DecomposeMe networks, we only performed basic data augmentation consisting of using random crops and random horizontal flips with probability 0.5. At test time, we used a single central crop. For ResNets, we used the standard data augmentation advocated for in [23]. In practice, in all models, we also included weight decay with a penalty strength of 1e−4 in our loss function. We observed empirically that adding this weight decay prevents the weights to overly grow between every two computations of the proximity operator.
In terms of hyper-parameters, for our low-rank regularizer, we considered four values: $\tau \in \{0, 1, 5, 10\}$. For the sparse group Lasso term, we initially set the same $\lambda$ for every layer to analyze the effect of combining both types of regularization. Then, in a second experiment, we followed the experimental setup proposed in [2], where the first two decomposed layers have a lower penalty. In addition, we set $\alpha = 0.2$ to favor promoting sparsity at the group level rather than at the parameter level. The sparse group Lasso hyper-parameter values are summarized in Table 2.
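The proximity operator of the sparse group Lasso term also has a closed form (derived in [43]). The sketch below is our own illustrative rendering of the standard form, per-parameter soft-thresholding followed by group-wise shrinkage, applied unit by unit:

```python
import torch

def sparse_group_lasso_prox(unit_weights, rho, lam, alpha):
    """Closed-form prox of alpha*lam*||.||_1 + (1-alpha)*lam*sqrt(P)*||.||_2 per unit:
    soft-threshold each weight, then shrink (possibly zero out) the whole unit."""
    out = []
    for w in unit_weights:  # one parameter vector theta_l^n per unit
        P = w.numel()
        s = torch.sign(w) * torch.clamp(w.abs() - rho * alpha * lam, min=0.0)
        scale = torch.clamp(
            1.0 - rho * (1 - alpha) * lam * (P ** 0.5) / (s.norm() + 1e-12), min=0.0)
        out.append(scale * s)  # scale == 0 removes the entire unit
    return out
```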
Computational cost: While a convenient measure of computational cost is the forward time, this measure is highly hardware-dependent. Nowadays, hardware is heavily optimized for current architectures and does not necessarily reflect the concept of any-time computation. Therefore, we focus on analyzing the number of multiply-accumulate operations (MACs). Let a convolution be defined as $f_i = \varphi(b_i + \sum_{j=1}^{C} W_{ij} * x_j)$, where each $W_{ij}$ is a 2D kernel of dimensions $d_H \times d_W$ and $i \in \{1, \dots, K\}$. Considering a naive convolution algorithm, the number of MACs for a convolutional layer is equal to $PCKd_Hd_W$, where $P$ is the number of pixels in the output feature map. Therefore, it is important to reduce $CK$ whenever $P$ is large. That is, reducing the number of units in the first convolutional layers has more impact than in the later ones.
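To illustrate this accounting, here is a small helper (our own sketch, not from the paper) computing the naive MAC count of a layer and the count after the SVD-based decomposition of Section 3.2 with rank r:

```python
def conv_macs(P, C, K, d_h, d_w):
    """Naive MAC count of a C -> K convolution with d_h x d_w kernels over P output pixels."""
    return P * C * K * d_h * d_w

def decomposed_macs(P, C, K, d_h, d_w, r):
    """MACs after decomposing the layer into a C -> r convolution with the original
    kernel size followed by an r -> K mixing layer, i.e., P * r * (C*d_h*d_w + K)."""
    return P * r * (C * d_h * d_w + K)
```

The decomposition pays off exactly when $r(Cd_Hd_W + K) < Cd_Hd_WK$, the inequality referred to in Section 3.2.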
5 Experimental Results
Parameter sensitivity and comparison to other methods on ImageNet: We first analyze the effect of our low-rank regularizer, on its own and jointly with the sparse group Lasso one, on MACs and accuracy. To this end, we make use of the Dec$^{256}_8$ model on ImageNet, and measure the impact of varying both $\tau$ and $\lambda$ in Eq. 5. Note that using $\tau = \lambda = 0$ corresponds to the standard model, and $\tau = 0$ and $\lambda \neq 0$ to the method of [2]. Below, we report results obtained without and with the post-processing step described in Section 3.2. Note that applying such a post-processing on the standard model corresponds to the compression technique of [14]. Fig. 1 summarizes the results of this analysis.
In Fig. 1(a), we can observe that accuracy remains stable for a wide range of values of $\tau$ and $\lambda$. In fact, there are even small improvements in accuracy when a moderate regularization is applied.
Figs. 1(b,c) depict the MACs without and with applying the post-processing step discussed in Section 3.2. As expected, the MACs decrease as the weights of the regularizers increase. Importantly, however, Figs. 1(a,b) show that several models can achieve a high compression rate at virtually no loss in accuracy. In Fig. 1(c), we provide the curves after post-processing with two different energy percentages, $e_l \in \{100\%, 80\%\}$. Keeping all the energy tends to incur an increase in MACs, since the inequality defined in Section 3.2 is then no longer satisfied. Recall, however, that, without post-processing, the resulting models are still more compact than, and as accurate as, the baseline one. With $e_l = 80\%$, while a small drop in accuracy typically occurs, the gain in MACs is significantly larger. Altogether, these experiments show that, by providing more compact models, our regularizer lets us consistently reduce the computational cost over the baseline.
Interestingly, by looking at the case where $\lambda = 0$ in Fig. 1(b), we can see that we already significantly reduce the number of operations when using our low-rank regularizer only, even without post-processing. This is due to the fact that, even in this case, a significant number of units are automatically zeroed out. Empirically, we observed that, for moderate values of $\tau$, the number of zeroed-out singular values corresponds to complete units going to zero. This can be observed in Fig. 2(left), where we show the number of non-zero units for each layer. In Fig. 2(right), we further show the effective rank of each layer before and after post-processing.
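The effective ranks after post-processing result from the energy-based truncation of Section 3.2; a minimal sketch of that selection rule (our own illustration):

```python
import torch

def truncate_by_energy(theta, e_l):
    """Keep the smallest rank r whose singular values capture a fraction e_l of the
    energy (sum of singular values), returning the factors U_l and M_l of Section 3.2."""
    U, S, Vh = torch.linalg.svd(theta, full_matrices=False)
    cumulative = torch.cumsum(S, dim=0) / S.sum()
    r = int(torch.searchsorted(cumulative, torch.tensor(e_l)).item()) + 1
    return U[:, :r], torch.diag(S[:r]) @ Vh[:r, :]  # theta ~= U_l @ M_l
```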
Comparison to other approaches on ICDAR: We now compare our results with existing approaches on the ICDAR dataset. As a baseline, we consider a Dec$^{512}_3$ trained using SGD and L2 regularization for 75 epochs. For comparison, we consider the post-processing approach of [14] with $e_l = 90\%$, the group-sparsity regularization approach proposed in [2], and three different instances of our model: first, using $\tau = 15$, no group sparsity, and $e_l = 90\%$; then, two instances combining our low-rank regularizer with group sparsity (Section 3.1), with $e_l = 90\%$ and $e_l = 100\%$. In this case, the models are trained for 55 epochs and then reloaded and fine-tuned for 20 more epochs. Table 3 summarizes these results. The comparison with [14] clearly evidences the benefits of our compression-aware training strategy. Furthermore, these results show the benefits of further combining our low-rank regularizer with the group-sparsity one of [2].
In addition, we also compare our approach with L1 and L2 regularizers on the same dataset and with the same experimental setup. Pruning the weights of the baseline models with a threshold of 1e−4 resulted in 1.5M zeroed-out parameters for the L2 regularizer and 2.8M zeroed-out parameters for the L1 regularizer. However, these zeroed-out weights are sparsely located within units (neurons). Applying our post-processing step (low-rank approximation with $e_l = 100\%$) to these results yielded models with 3.6M and 3.2M parameters for the L2 and L1 regularizers, respectively. The top-1 accuracy for these two models after post-processing was 87% and 89%, respectively. Using a stronger L1 regularizer resulted in lower top-1 accuracy. By comparison, our approach yields a model with 3.4M zeroed-out parameters after post-processing and a top-1 accuracy of 90%. Empirically, we found the benefits of our approach to hold for varying regularizer weights.
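The pruning counts above follow from simple magnitude thresholding; the bookkeeping can be sketched as follows (our own illustrative helper, not the authors' code):

```python
import torch

def count_pruned(params, threshold=1e-4):
    """Zero out weights whose magnitude falls below the threshold and report how
    many parameters were pruned."""
    pruned = 0
    with torch.no_grad():
        for p in params:
            mask = p.abs() < threshold
            p[mask] = 0.0
            pruned += int(mask.sum())
    return pruned
```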
Results with larger models: In Table 4, we provide the accuracies and MACs for our approach and the baseline on ImageNet and ICDAR for Dec$^{512}_8$ models. Note that using our low-rank regularizer yields more compact networks than the baselines for similar or higher accuracies. In particular, for ImageNet, we achieve reductions in parameter number of more than 20% and more than 50% for $e_l = 100\%$ and $e_l = 80\%$, respectively. For ICDAR, these reductions are around 90% in both cases.
We now focus on our results with a ResNet-50 model on ImageNet. For post-processing, we used $e_l = 90\%$ in all these experiments, which resulted in virtually no loss of accuracy. The baseline corresponds to a top-1 accuracy of 74.7% and 18M parameters. Applying the post-processing step to this baseline resulted in a compression rate of 4%. By contrast, our approach with the low-rank regularizer yields a top-1 accuracy of 75.0% for a compression rate of 20.6%, and with group sparsity and low-rank jointly, a top-1 accuracy of 75.2% for a compression rate of 27%. By comparison, applying [2] to the same model yields an accuracy of 74.5% for a compression rate of 17%.
Inference time: While MACs represent the number of operations, we are also interested in the inference time of the resulting models. Table 5 summarizes several representative inference times for different instances of our experiments. Interestingly, there is a significant reduction in inference time when we only remove the zeroed-out neurons from the model. This is a direct consequence of the pruning effect, especially in the first layers. However, there is no significant reduction in inference time when post-processing our model via a low-rank decomposition. The main reason for this is that modern hardware is designed to compute convolutions with much fewer operations than a naive algorithm. Furthermore, the actual computational cost depends not only on the number of floating point operations but also on the memory bandwidth. In modern architectures, decomposing a convolutional layer into a convolution and a matrix multiplication involves (with current hardware) additional intermediate computations, as one cannot reuse convolutional kernels. Nevertheless, we believe that our approach remains beneficial for embedded systems using customized hardware, such as FPGAs.
Additional benefits at training time: So far, our experiments have demonstrated the effectiveness of our approach at test time. Empirically, we found that our approach is also beneficial for training: by pruning the network after only a few epochs (e.g., 15) and then reloading and training the pruned network, training becomes much more efficient. Specifically, Table 3 summarizes the effect of varying the reload epoch for a model relying on both low-rank and group-sparsity regularization. We were able to reduce the training time (with a batch size of 32 and training for 100 epochs) from 1.69 to 0.77 hours (a relative speedup of 54.5%). The accuracy also improved by 2%, and the number of parameters was reduced from 3.7M (baseline) to 210K (a relative 94.3% reduction). We found this behavior to be stable across a wide range of regularization parameters. If we seek to maintain accuracy compared to the baseline, we found that we could achieve a compression rate of 95.5% (up to 96% for an accuracy drop of 0.5%), which corresponds to a training time reduced by up to 60%.
6 Conclusion
In this paper, we have proposed to explicitly account for a post-processing compression stage when training deep networks. To this end, we have introduced a regularizer in the training loss to encourage the parameter matrix of each layer to have low rank. We have further studied the case where this regularizer is combined with a sparsity-inducing one to achieve even higher compression. Our experiments have demonstrated that our approach can achieve higher compression rates than state-of-the-art methods, thus evidencing the benefits of taking compression into account during training. The SVD-based technique that motivated our approach is only one specific choice of compression strategy. In the future, we will therefore study how regularizers corresponding to other such compression mechanisms can be incorporated in our framework. | 1. What is the focus of the paper regarding neural network compression?
2. What are the strengths and weaknesses of the proposed regularization method?
3. How does the reviewer assess the significance of the paper compared to prior works on explicit compression methods like variational inference and weight pruning?
4. What are some concerns regarding the interpretation of the results and comparisons with other approaches?
5. Are there any suggestions for additional experiments or analyses to enhance the clarity and impact of the paper? | Review | Review
The authors present a regularization term that encourages weight matrices to be low rank during network training, effectively increasing the compression of the network, and making it possible to explicitly reduce the rank during post-processing, reducing the number of operations required for inference. Overall the paper seems like a good and well written paper on an interesting topic. I have some caveats however: firstly the authors do not mention variational inference, which also explicitly compresses the network (in the literal sense of reducing the number of bits in its description length) and can be used to prune away many weights after training - see 'Practical Variational Inference for Neural Networks' for details. More generally, almost any regularizer provides a form of implicit 'compression-aware training' (that's why they work - simpler models generalize better) and can often be used to prune networks post-hoc. For example a network trained with l1 or l2 regularization will generally end up with many weights very close to 0, which can be removed without greatly altering network performance. I think it's important to clarify this, especially since the authors use an l2 term in addition to their own regularizer during training. They also don't seem to compare how well previous low-rank post-processing works with and without their regulariser, or with other regularisers used in previous work. All of these caveats could be answered by providing more baseline results in the experimental section, demonstrating that training with this particular regulariser does indeed lead to a better accuracy / compression tradeoff than other approaches.
In general I found the results a little hard to interpret, so may be missing something: the graph I wanted to see was a set of curves for accuracy vs compression ratio (either in terms of number of parameters or number of MACs) rather than accuracy against the strength of the regularisation term. On this graph it should be possible to explicitly compare your approach vs previous regularisers / compressors. |
1. What is the focus and contribution of the paper on deep model compression?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its relevance to previous works?
3. What are the limitations of the experimental results, and how could they be improved?
4. How does the reviewer assess the novelty and impact of the paper's content?
5. Are there any suggestions for additional comparisons or improvements to enhance the paper's contributions? | Review | Review
This paper proposes a low-rank regularizer for deep model compression during training. Overall, this paper is well written, and the motivation is clear. However, I have some comments, as follows.
1. The novelty is relatively limited, as the technical parts are closely related to previous works.
2. The experiments should be further improved.
(1) Parameter sensitivity: From Fig. 1, the performance of the proposed method ($\tau = 1$, $\lambda \neq 0$) is similar to [2] ($\tau = 0$, $\lambda \neq 0$). For other settings of $\tau$, the compression rate is improved while the accuracy is reduced.
(2) Results on larger models: the comparison with [2] should be performed to show the effectiveness. Furthermore, it would be interesting to compare with other state-of-the-art compression approaches, such as [18].
NIPS | Title
Compression-aware Training of Deep Networks
Abstract
In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deeper neural networks. Unfortunately, the huge number of units of these networks makes them expensive both computationally and memory-wise. To overcome this, exploiting the fact that deep networks are over-parametrized, several compression strategies have been proposed. These methods, however, typically start from a network that has been trained in a standard manner, without considering such a future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques.
1 Introduction
With the increasing availability of large-scale datasets, recent years have witnessed a resurgence of interest for Deep Learning techniques. Impressive progress has been made in a variety of application domains, such as speech, natural language and image processing, thanks to the development of new learning strategies [15, 53, 30, 45, 26, 3] and of new architectures [31, 44, 46, 23]. In particular, these architectures tend to become ever deeper, with hundreds of layers, each of which containing hundreds or even thousands of units.
While it has been shown that training such very deep architectures was typically easier than smaller ones [24], it is also well-known that they are highly over-parameterized. In essence, this means that equally good results could in principle be obtained with more compact networks. Automatically deriving such equivalent, compact models would be highly beneficial in runtime- and memorysensitive applications, e.g., to deploy deep networks on embedded systems with limited hardware resources. As a consequence, many methods have been proposed to compress existing architectures.
An early trend for such compression consisted of removing individual parameters [33, 22] or entire units [36, 29, 38] according to their influence on the output. Unfortunately, such an analysis of individual parameters or units quickly becomes intractable in the presence of very deep networks. Therefore, currently, one of the most popular compression approaches amounts to extracting low-rank approximations either of individual units [28] or of the parameter matrix/tensor of each layer [14]. This latter idea is particularly attractive, since, as opposed to the former one, it reduces the number of units in each layer. In essence, the above-mentioned techniques aim to compress a network that has been pre-trained. There is, however, no guarantee that the parameter matrices of such pre-trained networks truly have low-rank. Therefore, these methods typically truncate some of the relevant information, thus resulting in a loss of prediction accuracy, and, more importantly, do not necessarily achieve the best possible compression rates.
In this paper, we propose to explicitly account for compression while training the initial deep network. Specifically, we introduce a regularizer that encourages the parameter matrix of each layer to have
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
low rank in the training loss, and rely on a stochastic proximal gradient descent strategy to optimize the network parameters. In essence, and by contrast with methods that aim to learn uncorrelated units to prevent overfitting [5, 54, 40], we seek to learn correlated ones, which can then easily be pruned in a second phase. Our compression-aware training scheme therefore yields networks that are well adapted to the following post-processing stage. As a consequence, we achieve higher compression rates than the above-mentioned techniques at virtually no loss in prediction accuracy.
Our approach constitutes one of the very few attempts at explicitly training a compact network from scratch. In this context, the work of [4] has proposed to learn correlated units by making use of additional noise outputs. This strategy, however, is only guaranteed to have the desired effect for simple networks and has only been demonstrated on relatively shallow architectures. In the contemporary work [51], units are coordinated via a regularizer acting on all pairs of filters within a layer. While effective, exploiting all pairs can quickly become cumbersome in the presence of large numbers of units. Recently, group sparsity has also been employed to obtain compact networks [2, 50]. Such a regularizer, however, acts on individual units, without explicitly aiming to model their redundancies. Here, we show that accounting for interactions between the units within a layer allows us to obtain more compact networks. Furthermore, using such a group sparsity prior in conjunction with our compression-aware strategy lets us achieve even higher compression rates.
We demonstrate the benefits of our approach on several deep architectures, including the 8-layers DecomposeMe network of [1] and the 50-layers ResNet of [23]. Our experiments on ImageNet and ICDAR show that we can achieve compression rates of more than 90%, thus hugely reducing the number of required operations at inference time.
2 Related Work
It is well-known that deep neural networks are over-parametrized [13]. While, given sufficient training data, this seems to facilitate the training procedure, it also has two potential drawbacks. First, over-parametrized networks can easily suffer from overfitting. Second, even when they can be trained successfully, the resulting networks are expensive both computationally and memory-wise, thus making their deployment on platforms with limited hardware resources, such as embedded systems, challenging. Over the years, much effort has been made to overcome these two drawbacks.
In particular, much progress has been made to reduce overfitting, for example by devising new optimization strategies, such as DropOut [45] or MaxOut [16]. In this context, other works have advocated the use of different normalization strategies, such as Batch Normalization [26], Weight Normalization [42] and Layer Normalization [3]. Recently, there has also been a surge of methods aiming to regularize the network parameters by making the different units in each layer less correlated. This has been achieved by designing new activation functions [5], by explicitly considering the pairwise correlations of the units [54, 37, 40] or of the activations [9, 52], or by constraining the weight matrices of each layer to be orthonormal [21].
In this paper, we are more directly interested in addressing the second drawback, that is, the large memory and runtime required by very deep networks. To tackle this, most existing research has focused on pruning pre-trained networks. In this context, early works have proposed to analyze the saliency of individual parameters [33, 22] or units [36, 29, 38, 34], so as to measure their impact on the output. Such a local analysis, however, quickly becomes impractically expensive when dealing with networks with millions of parameters.
As a consequence, recent works have proposed to focus on more global methods, which analyze larger groups of parameters simultaneously. In this context, the most popular trend consists of extracting low-rank approximations of the network parameters. In particular, it has been shown that individual units can be replaced by rank 1 approximations, either via a post-processing step [28, 46] or directly during training [1, 25]. Furthermore, low-rank approximations of the complete parameter matrix/tensor of each layer were computed in [14], which has the benefit of reducing the number of units in each layer. The resulting low-rank representation can then be fine-tuned [32], or potentially even learned from scratch [47], given the rank of each layer in the network. With the exception of this last work, which assumes that the ranks are known, these methods, however, aim to approximate a given pre-trained model. In practice, however, the parameter matrices of this model might not have low rank. Therefore, the resulting approximations yield some loss of accuracy and, more importantly,
will typically not correspond to the most compact networks. Here, we propose to explicitly learn a low-rank network from scratch, but without having to manually define the rank of each layer a priori.
To this end, and in contrast with the above-mentioned methods that aim to minimize correlations, we rather seek to maximize correlations between the different units within each layer, such that many of these units can be removed in a post-processing stage. In [4], additional noise outputs were introduced in a network to similarly learn correlated filters. This strategy, however, is only justified for simple networks and was only demonstrated on relatively shallow architectures. The contemporary work [51] introduced a penalty during training to learn correlated units. This, however, was achieved by explicitly computing all pairwise correlations, which quickly becomes cumbersome in very deep networks with wide layers. By contrast, our approach makes use of a low-rank regularizer that can effectively be optimized by proximal stochastic gradient descent.
Our approach belongs to the relatively small group of methods that explicitly aim to learn a compact network during training, i.e., not as a post-processing step. Other methods have proposed to make use of sparsity-inducing techniques to cancel out individual parameters [49, 10, 20, 19, 35] or units [2, 50, 55]. These methods, however, act, at best, on individual units, without considering the relationships between multiple units in the same layer. Variational inference [17] has also been used to explicitly compress the network. However, the priors and posteriors used in these approaches will typically zero out individual weights. Our experiments demonstrate that accounting for the interactions between multiple units allows us to obtain more compact networks.
Another line of research aims to quantize the weights of deep networks [48, 12, 18]. Note that, in a sense, this research direction is orthogonal to ours, since one could still further quantize our compact networks. Furthermore, with the recent progress in efficient hardware handling floating-point operations, we believe that there is also high value in designing non-quantized compact networks.
3 Compression-aware Training of Deep Networks
In this section, we introduce our approach to explicitly encouraging compactness while training a deep neural network. To this end, we propose to make use of a low-rank regularizer on the parameter matrix in each layer, which inherently aims to maximize the compression rate when computing a low-rank approximation in a post-processing stage. In the following, we focus on convolutional neural networks, because the popular visual recognition models tend to rely less and less on fully-connected layers, and, more importantly, the inference time of such models is dominated by the convolutions in the first few layers. Note, however, that our approach still applies to fully-connected layers.
To introduce our approach, let us first consider the $l$-th layer of a convolutional network, and denote its parameters by $\theta_l \in \mathbb{R}^{K_l \times C_l \times d^H_l \times d^W_l}$, where $C_l$ and $K_l$ are the number of input and output channels, respectively, and $d^H_l$ and $d^W_l$ are the height and width of each convolutional kernel. Alternatively, these parameters can be represented by a matrix $\hat\theta_l \in \mathbb{R}^{K_l \times S_l}$ with $S_l = C_l d^H_l d^W_l$. Following [14], a network can be compacted via a post-processing step performing a singular value decomposition of $\hat\theta_l$ and truncating the zero, or small, singular values. In essence, after this step, the parameter matrix can be approximated as $\hat\theta_l \approx U_l M_l^\top$, where $U_l$ is a $K_l \times r_l$ matrix representing the basis kernels, with $r_l \leq \min(K_l, S_l)$, and $M_l$ is an $S_l \times r_l$ matrix that mixes the activations of these basis kernels. By applying such a post-processing step to a network trained in the usual way, however, there is no guarantee that many singular values have become near-zero during training. Here, we aim to explicitly account for this post-processing step during training, by seeking to obtain a parameter matrix such that $r_l \ll \min(K_l, S_l)$. To this end, given $N$ training input-output pairs $(x_i, y_i)$, we formulate learning as the regularized minimization problem
$$\min_{\Theta} \ \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(y_i, f(x_i, \Theta)\bigr) + r(\Theta), \tag{1}$$
where $\Theta$ encompasses all network parameters, $\ell(\cdot, \cdot)$ is a supervised loss, such as the cross-entropy, and $r(\cdot)$ is a regularizer encouraging the parameter matrix in each layer to have low rank. Since explicitly minimizing the rank of a matrix is NP-hard, following the matrix completion literature [7, 6], we make use of a convex relaxation in the form of the nuclear norm. This lets us
write our regularizer as
$$r(\Theta) = \tau \sum_{l=1}^{L} \|\hat\theta_l\|_{*}, \tag{2}$$
where $\tau$ is a hyper-parameter setting the influence of the regularizer, and the nuclear norm is defined as $\|\hat\theta_l\|_* = \sum_{j=1}^{\operatorname{rank}(\hat\theta_l)} \sigma^j_l$, with $\sigma^j_l$ the singular values of $\hat\theta_l$.
In practice, to minimize (1), we make use of proximal stochastic gradient descent. Specifically, this amounts to minimizing the supervised loss only for one epoch, with learning rate $\rho$, and then applying the proximity operator of our regularizer. In our case, this can be achieved independently for each layer. For layer $l$, this proximity operator corresponds to solving
$$\theta^*_l = \operatorname*{argmin}_{\bar\theta_l} \ \frac{1}{2\rho}\|\bar\theta_l - \hat\theta_l\|_F^2 + \tau \|\bar\theta_l\|_*, \tag{3}$$
where $\hat\theta_l$ is the current estimate of the parameter matrix for layer $l$. As shown in [6], the solution to this problem can be obtained by soft-thresholding the singular values of $\hat\theta_l$, which can be written as
$$\theta^*_l = U_l \Sigma_l(\rho\tau) V_l^\top, \quad \text{where } \Sigma_l(\rho\tau) = \operatorname{diag}\bigl((\sigma^1_l - \rho\tau)_+, \ldots, (\sigma^{\operatorname{rank}(\hat\theta_l)}_l - \rho\tau)_+\bigr), \tag{4}$$
$U_l$ and $V_l$ are the left- and right-singular vectors of $\hat\theta_l$, and $(\cdot)_+$ corresponds to taking the maximum between the argument and 0.
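As an illustration of this update, the following NumPy sketch applies one proximal step to a layer's parameter matrix; the function name and the toy shapes are ours, and it is meant as a minimal demonstration of Eqs. (3)-(4) rather than the authors' implementation.

```python
import numpy as np

def nuclear_prox(theta_hat, rho, tau):
    """One proximal step for the nuclear norm (Eqs. 3-4): soft-threshold
    the singular values of theta_hat by rho * tau."""
    U, s, Vt = np.linalg.svd(theta_hat, full_matrices=False)
    s_shrunk = np.maximum(s - rho * tau, 0.0)  # (sigma - rho*tau)_+
    return U @ np.diag(s_shrunk) @ Vt

# Toy usage on a random K_l x S_l parameter matrix.
theta = np.random.randn(64, 128)
theta_star = nuclear_prox(theta, rho=0.1, tau=5.0)
print(np.linalg.matrix_rank(theta_star))  # typically well below min(64, 128)
```

In a full training loop, this step would be interleaved with one epoch of SGD on the supervised loss, exactly as described above.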
3.1 Low-rank and Group-sparse Layers
While, as shown in our experiments, the low-rank solution discussed above significantly reduces the number of parameters in the network, it does not affect the original number of input and output channels Cl and Kl . By contrast, the group-sparsity based methods [2, 50] discussed in Section 2 cancel out entire units, thus reducing these numbers, but do not consider the interactions between multiple units in the same layer, and would therefore typically not benefit from a post-processing step such as the one of [14]. Here, we propose to make the best of both worlds to obtain low-rank parameter matrices, some of whose units have explicitly been removed.
To this end, we combine the sparse group Lasso regularizer used in [2] with the low-rank one described above. This lets us re-define the regularizer in (1) as
$$r(\Theta) = \sum_{l=1}^{L} \Bigl( (1-\alpha)\,\lambda_l \sqrt{P_l} \sum_{n=1}^{K_l} \|\theta^n_l\|_2 + \alpha\,\lambda_l \|\theta_l\|_1 \Bigr) + \tau \sum_{l=1}^{L} \|\hat\theta_l\|_*, \tag{5}$$
where $K_l$ is the number of units in layer $l$, $\theta^n_l$ denotes the vector of parameters for unit $n$ in layer $l$, $P_l$ is the size of this vector (the same for all units in a layer), $\alpha \in [0,1]$ balances the influence of the sparsity terms on groups vs. individual parameters, and $\lambda_l$ is a layer-wise hyper-parameter. In practice, following [2], we use only two different values of $\lambda_l$: one for the first few layers and one for the remaining ones.
To learn our model with this new regularizer consisting of two main terms, we make use of the incremental proximal descent approach proposed in [39], which has the benefit of a lower memory footprint than parallel proximal methods. The proximity operator for the sparse group Lasso regularizer also has a closed-form solution, derived in [43] and provided in [2].
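For concreteness, a minimal sketch of that closed-form proximity operator is given below, in the standard two-stage form (elementwise soft-thresholding for the $\ell_1$ part, followed by unit-wise group shrinkage); the weighting follows Eq. (5), but the function is an illustrative reimplementation, not the code of [2, 43].

```python
import numpy as np

def sparse_group_lasso_prox(theta, rho, lam, alpha):
    """Prox of rho * (alpha*lam*||theta||_1
    + (1-alpha)*lam*sqrt(P)*sum_n ||theta_n||_2) for one layer,
    where the rows of `theta` are the per-unit parameter vectors."""
    P = theta.shape[1]  # number of parameters per unit
    # Stage 1: elementwise soft-thresholding (l1 term).
    t = np.sign(theta) * np.maximum(np.abs(theta) - rho * alpha * lam, 0.0)
    # Stage 2: per-unit group shrinkage; rows driven to zero are pruned units.
    g = rho * (1.0 - alpha) * lam * np.sqrt(P)
    norms = np.linalg.norm(t, axis=1, keepdims=True)
    return t * np.maximum(1.0 - g / np.maximum(norms, 1e-12), 0.0)
```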
3.2 Benefits at Inference
Once our model is trained, we can obtain a compact network for faster and more memory-efficient inference by making use of a post-processing step. In particular, to account for the low rank of the parameter matrix of each layer, we make use of the SVD-based approach of [14]. Specifically, for each layer $l$, we compute the SVD of the parameter matrix as $\hat\theta_l = \tilde U_l \tilde\Sigma_l \tilde V_l^\top$ and only keep the $r_l$ singular values that are either non-zero, thus incurring no loss, or larger than a pre-defined threshold, at some potential loss. The parameter matrix can then be represented as $\hat\theta_l = U_l M_l$, with $U_l \in \mathbb{R}^{C_l d^H_l d^W_l \times r_l}$ and $M_l = \Sigma_l V_l \in \mathbb{R}^{r_l \times K_l}$. In essence, every layer is decomposed into two layers. This incurs significant memory and computational savings if $r_l(C_l d^H_l d^W_l + K_l) \ll C_l d^H_l d^W_l K_l$.
Furthermore, additional savings can be achieved when using the sparse group Lasso regularizer discussed in Section 3.1. Indeed, in this case, the zeroed-out units can explicitly be removed, thus yielding only $\hat K_l$ filters, with $\hat K_l < K_l$. Note that, except for the first layer, units have also been removed from the previous layer, thus reducing $C_l$ to a lower $\hat C_l$. Furthermore, thanks to our low-rank regularizer, the remaining, non-zero units will form a parameter matrix that still has low rank, and can thus also be decomposed. This results in a total of $r_l(\hat C_l d^H_l d^W_l + \hat K_l)$ parameters.
In our experiments, we select the rank $r_l$ based on the percentage $e_l$ of the energy (i.e., the sum of singular values) that we seek to capture by our low-rank approximation. This percentage plays an important role in the trade-off between runtime/memory savings and the drop in prediction accuracy. In our experiments, we use the same percentage for all layers.
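The post-processing step of Section 3.2 can be sketched as follows; the shapes and the energy criterion mirror the description above, and the factor orientation (first factor tall, second factor wide) is one consistent choice for $\hat\theta_l \approx U_l M_l$.

```python
import numpy as np

def decompose_layer(theta_hat, energy=0.9):
    """Replace one layer by two thinner ones (Sec. 3.2): keep the smallest
    rank r_l whose singular values capture a fraction `energy` (e_l) of the
    total sum of singular values."""
    U, s, Vt = np.linalg.svd(theta_hat, full_matrices=False)
    cum = np.cumsum(s) / np.sum(s)
    r = int(np.searchsorted(cum, energy) + 1)  # smallest rank reaching e_l
    U_l = U[:, :r]                             # first factor
    M_l = np.diag(s[:r]) @ Vt[:r, :]           # second factor
    return U_l, M_l

U_l, M_l = decompose_layer(np.random.randn(96, 256), energy=0.8)
print(U_l.shape, M_l.shape)  # two thin matrices replacing one 96 x 256 matrix
```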
4 Experimental Settings
Datasets: For our experiments, we used two image classification datasets: ImageNet [41] and ICDAR, the character recognition dataset introduced in [27]. ImageNet is a large-scale dataset comprising over 15 million labeled images split into 22,000 categories. We used the ILSVRC2012 [41] subset consisting of 1000 categories, with 1.2 million training images and 50,000 validation images. The ICDAR dataset consists of 185,639 training samples combining real and synthetic characters and 5,198 test samples coming from the ICDAR2003 training set after removing all non-alphanumeric characters. The images in ICDAR are split into 36 categories. The use of ICDAR here was motivated by the fact that it is fairly large-scale, but, in contrast with ImageNet, existing architectures haven’t been heavily tuned to this data. As such, one can expect our approach consisting of training a compact network from scratch to be even more effective on this dataset.
Network Architectures: In our experiments, we make use of architectures where each kernel in the convolutional layers has been decomposed into two 1D kernels [1], thus inherently having rank-1 kernels. Note that this is orthogonal to the purpose of our low-rank regularizer, since, here, we essentially aim at reducing the number of kernels, not the rank of individual kernels. The decomposed layers yield even more compact architectures that require a lower computational cost for training and testing while maintaining or even improving classification accuracy. In the following, a convolutional layer refers to a layer with 1D kernels, while a decomposed layer refers to a block of two convolutional layers using 1D vertical and horizontal kernels, respectively, with a non-linearity and batch normalization after each convolution.
Let us consider a decomposed layer consisting of $C$ and $K$ input and output channels, respectively. Let $\bar v$ and $\bar h^\top$ be vectors of length $d_v$ and $d_h$, respectively, representing the kernel size of each 1D feature map. In this paper, we set $d_h = d_v \equiv d$. Furthermore, let $\varphi(\cdot)$ be a non-linearity, and let $x_c$ denote the $c$-th input channel of the layer. In this setting, the activation of the $i$-th output channel $f_i$ can be written as
$$f_i = \varphi\Bigl(b^h_i + \sum_{l=1}^{L} \bar h^\top_{il} * \Bigl[\varphi\Bigl(b^v_l + \sum_{c=1}^{C} \bar v_{lc} * x_c\Bigr)\Bigr]\Bigr), \tag{6}$$
where $L$ is the number of vertical filters, corresponding to the number of input channels for the horizontal filters, and $b^v_l$ and $b^h_l$ are biases.
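A PyTorch-style sketch of such a decomposed layer is shown below, assuming the torch package; the channel counts are placeholders and the conv-BN-ReLU ordering is one reasonable reading of the description above, not necessarily the exact block used in [1].

```python
import torch.nn as nn

def decomposed_layer(in_ch, mid_ch, out_ch, d):
    """One decomposed layer (Eq. 6): a vertical d x 1 convolution with
    mid_ch filters (L in the text), then a horizontal 1 x d convolution,
    each followed by batch normalization and a ReLU non-linearity."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=(d, 1), padding=(d // 2, 0)),
        nn.BatchNorm2d(mid_ch),
        nn.ReLU(),
        nn.Conv2d(mid_ch, out_ch, kernel_size=(1, d), padding=(0, d // 2)),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )
```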
We report results with two different models using such decomposed layers: DecomposeMe [1] and ResNets [23]. In all cases, we make use of batch normalization after each convolutional layer.¹ We rely on rectified linear units (ReLU) [31] as non-linearities, although some initial experiments suggest that slightly better performance can be obtained with exponential linear units [8]. For DecomposeMe, we used two different Dec$_8$ architectures, whose specific numbers of units are provided in Table 1. For residual networks, we used a decomposed ResNet-50, and empirically verified that the use of 1D kernels instead of the standard ones had no significant impact on classification accuracy.
Implementation details: For the comparison to be fair, all models, including the baselines, were trained from scratch on the same computer, using the same random seed and the same framework. More specifically, we used the torch-7 multi-GPU framework [11].
¹We empirically found the use of batch normalization after each convolutional layer to have more impact with our low-rank regularizer than with group sparsity or with no regularizer, in which cases the computational cost can be reduced by using a single batch normalization after each decomposed layer.
For ImageNet, training was done on a DGX-1 node using two P100 GPUs in parallel. We used stochastic gradient descent with a momentum of 0.9 and a batch size of 180 images. The models were trained using an initial learning rate of 0.1, multiplied by 0.1 every 20 epochs for the small models (Dec$^{256}_8$ in Table 1) and every 30 epochs for the larger models (Dec$^{512}_8$ in Table 1). For ICDAR, we trained each network on a single TitanX-Pascal GPU for a total of 55 epochs, with a batch size of 256 and 1,000 iterations per epoch. We follow the same experimental setting as in [2]: the learning rate was set to an initial value of 0.1 and multiplied by 0.1. We used a momentum of 0.9.
For DecomposeMe networks, we only performed basic data augmentation, consisting of random crops and random horizontal flips with probability 0.5. At test time, we used a single central crop. For ResNets, we used the standard data augmentation advocated for in [23]. In practice, in all models, we also included weight decay with a penalty strength of $10^{-4}$ in our loss function. We observed empirically that adding this weight decay prevents the weights from growing excessively between every two computations of the proximity operator.
In terms of hyper-parameters, for our low-rank regularizer, we considered four values: $\tau \in \{0, 1, 5, 10\}$. For the sparse group Lasso term, we initially assigned the same $\lambda$ to every layer to analyze the effect of combining both types of regularization. Then, in a second experiment, we followed the experimental setup proposed in [2], where the first two decomposed layers have a lower penalty. In addition, we set $\alpha = 0.2$ to favor promoting sparsity at the group level rather than at the parameter level. The sparse group Lasso hyper-parameter values are summarized in Table 2.
Computational cost: While a convenient measure of computational cost is the forward time, this measure is highly hardware-dependent. Nowadays, hardware is heavily optimized for current architectures and does not necessarily reflect the concept of any-time computation. Therefore, we focus on analyzing the number of multiply-accumulate operations (MACs). Let a convolution be defined as $f_i = \varphi(b_i + \sum_{j=1}^{C} W_{ij} * x_j)$, where each $W_{ij}$ is a 2D kernel of dimensions $d_H \times d_W$ and $i \in [1, \ldots, K]$. Considering a naive convolution algorithm, the number of MACs for a convolutional layer is equal to $P C K d_H d_W$, where $P$ is the number of pixels in the output feature map. Therefore, it is important to reduce $CK$ whenever $P$ is large. That is, reducing the number of units in the first convolutional layers has more impact than in the later ones.
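A back-of-the-envelope MAC count, using only the quantities defined above, makes this point explicit:

```python
def conv_macs(P, C, K, dH, dW):
    """MACs of a naive convolution: P output pixels, C input channels,
    K filters of size dH x dW (Sec. 4)."""
    return P * C * K * dH * dW

# Same filter bank, different output resolutions (illustrative numbers):
print(conv_macs(P=112 * 112, C=64, K=64, dH=3, dW=1))  # early layer
print(conv_macs(P=14 * 14, C=64, K=64, dH=3, dW=1))    # late layer
```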
5 Experimental Results
Parameter sensitivity and comparison to other methods on ImageNet: We first analyze the effect of our low-rank regularizer, on its own and jointly with the sparse group Lasso one, on MACs and accuracy. To this end, we make use of the Dec$^{256}_8$ model on ImageNet, and measure the impact of varying both $\tau$ and $\lambda$ in Eq. 5. Note that using $\tau = \lambda = 0$ corresponds to the standard model, and $\tau = 0$ and $\lambda \neq 0$ to the method of [2]. Below, we report results obtained without and with the post-processing step described in Section 3.2. Note that applying such a post-processing step to the standard model corresponds to the compression technique of [14]. Fig. 1 summarizes the results of this analysis.
In Fig. 1(a), we can observe that accuracy remains stable for a wide range of values of τ and λ . In fact, there are even small improvements in accuracy when a moderate regularization is applied.
Figs. 1(b,c) depict the MACs without and with applying the post-processing step discussed in Section 3.2. As expected, the MACs decrease as the weights of the regularizers increase. Importantly, however, Figs. 1(a,b) show that several models can achieve a high compression rate at virtually no loss in accuracy. In Fig. 1(c), we provide the curves after post-processing with two different energy percentages, $e_l \in \{100\%, 80\%\}$. Keeping all the energy tends to incur an increase in MACs, since the inequality defined in Section 3.2 is then not satisfied anymore. Recall, however, that, without post-processing, the resulting models are still more compact than, and as accurate as, the baseline one. With $e_l = 80\%$, while a small drop in accuracy typically occurs, the gain in MACs is significantly larger. Altogether, these experiments show that, by providing more compact models, our regularizer lets us consistently reduce the computational cost over the baseline.
Interestingly, by looking at the case where $\lambda = 0$ in Fig. 1(b), we can see that we already significantly reduce the number of operations when using our low-rank regularizer only, even without post-processing. This is due to the fact that, even in this case, a significant number of units are automatically zeroed out. Empirically, we observed that, for moderate values of $\tau$, the number of zeroed-out singular values corresponds to complete units going to zero. This can be observed in Fig. 2(left), where we show the number of non-zero units for each layer. In Fig. 2(right), we further show the effective rank of each layer before and after post-processing.
Comparison to other approaches on ICDAR: We now compare our results with existing approaches on the ICDAR dataset. As a baseline, we consider the Dec$^{512}_3$ trained using SGD and L2 regularization for 75 epochs. For comparison, we consider the post-processing approach of [14] with $e_l = 90\%$, the group-sparsity regularization approach proposed in [2], and three different instances of our model: first, using $\tau = 15$, no group-sparsity, and $e_l = 90\%$; then, two instances combining our low-rank regularizer with group-sparsity (Section 3.1), with $e_l = 90\%$ and $e_l = 100\%$. In this case, the models are trained for 55 epochs and then reloaded and fine-tuned for 20 more epochs. Table 3 summarizes these results. The comparison with [14] clearly evidences the benefits of our compression-aware training strategy. Furthermore, these results show the benefits of further combining our low-rank regularizer with the group-sparsity one of [2].
In addition, we also compare our approach with L1 and L2 regularizers on the same dataset and with the same experimental setup. Pruning the weights of the baseline models with a threshold of 1e−4 resulted in 1.5M zeroed-out parameters for the L2 regularizer and 2.8M zeroed-out parameters for the L1 regularizer. However, these zeroed out weights are sparsely located within units (neurons). Applying our post-processing step (low-rank approximation with el = 100%) to these results yielded models with 3.6M and 3.2M parameters for L2 and L1 regularizers, respectively. The top-1 accuracy for these two models after post-processing was 87% and 89%, respectively. Using a stronger L1 regularizer resulted in lower top-1 accuracy. By comparison, our approach yields a model with 3.4M zeroed-out parameters after post-processing and a top-1 accuracy of 90%. Empirically, we found the benefits of our approach to hold for varying regularizer weights.
Results with larger models: In Table 4, we provide the accuracies and MACs for our approach and the baseline on ImageNet and ICDAR for Dec$^{512}_8$ models. Note that using our low-rank regularizer yields more compact networks than the baselines for similar or higher accuracies. In particular, for ImageNet, we achieve reductions in the number of parameters of more than 20% and more than 50% for $e_l = 100\%$ and $e_l = 80\%$, respectively. For ICDAR, these reductions are around 90% in both cases.
We now focus on our results with a ResNet-50 model on ImageNet. For post-processing we used el = 90% for all these experiments which resulted in virtually no loss of accuracy. The baseline corresponds to a top-1 accuracy of 74.7% and 18M parameters. Applying the post-processing step on this baseline resulted in a compression rate of 4%. By contrast, our approach with low-rank yields a top-1 accuracy of 75.0% for a compression rate of 20.6%, and with group sparsity and low-rank
jointly, a top-1 accuracy of 75.2% for a compression rate of 27%. By comparison, applying [2] to the same model yields an accuracy of 74.5% for a compression rate of 17%.
Inference time: While MACs represent the number of operations, we are also interested in the inference time of the resulting models. Table 5 summarizes several representative inference times for different instances of our experiments. Interestingly, there is a significant reduction in inference time when we only remove the zeroed-out neurons from the model. This is a direct consequence of the pruning effect, especially in the first layers. However, there is no significant reduction in inference time when post-processing our model via a low-rank decomposition. The main reason for this is that modern hardware is designed to compute convolutions with much fewer operations than a naive algorithm. Furthermore, the actual computational cost depends not only on the number of floating point operations but also on the memory bandwidth. In modern architectures, decomposing a convolutional layer into a convolution and a matrix multiplication involves (with current hardware) additional intermediate computations, as one cannot reuse convolutional kernels. Nevertheless, we believe that our approach remains beneficial for embedded systems using customized hardware, such as FPGAs.
Additional benefits at training time: So far, our experiments have demonstrated the effectiveness of our approach at test time. Empirically, we found that our approach is also beneficial for training, by pruning the network after only a few epochs (e.g., 15) and reloading and training the pruned network, which becomes much more efficient. Specifically, Table 3 summarizes the effect of varying the reload epoch for a model relying on both low-rank and group-sparsity. We were able to reduce the training time (with a batch size of 32 and training for 100 epochs) from 1.69 to 0.77 hours (relative speedup of 54.5%). The accuracy also improved by 2% and the number of parameters reduced from 3.7M (baseline) to 210K (relative 94.3% reduction). We found this behavior to be stable across a wide range of regularization parameters. If we seek to maintain accuracy compared to the baseline, we found that we could achieve a compression rate of 95.5% (up to 96% for an accuracy drop of 0.5%), which corresponds to a training time reduced by up to 60%.
6 Conclusion
In this paper, we have proposed to explicitly account for a post-processing compression stage when training deep networks. To this end, we have introduced a regularizer in the training loss to encourage the parameter matrix of each layer to have low rank. We have further studied the case where this regularizer is combined with a sparsity-inducing one to achieve even higher compression. Our experiments have demonstrated that our approach can achieve higher compression rates than state-of-the-art methods, thus evidencing the benefits of taking compression into account during training. The SVD-based technique that motivated our approach is only one specific choice of compression strategy. In the future, we will therefore study how regularizers corresponding to other such compression mechanisms can be incorporated in our framework. | 1. What is the novel approach proposed by the paper in training neural networks?
2. How does the proposed method differ from other compression techniques in terms of targeting the number of parameters?
3. What are the strengths and weaknesses of the paper's experimental section?
4. Are there any concerns or limitations regarding the proposed method's ability to compress neural networks effectively?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
The paper is interesting. They propose training neural networks with a cost that explicitly favors networks that are easier to compress by truncated SVD. They formulate this regularization as a cost on the nuclear norm of the weight matrices, which they enforce by soft-thresholding the singular values as the proximal operator after every epoch. I found the idea interesting, and I thought the experimental sections gave a nice breakdown of the results of their own experiments and the behavior of their proposed method, but I would have liked to see some more comparative results, e.g., the performance of their own network versus other compression techniques targeting the same number of parameters on the same datasets. Overall a good paper, interesting idea, good execution, but experiments somewhat lacking.
NIPS | Title
SDP Relaxation with Randomized Rounding for Energy Disaggregation
Abstract
We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations with randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results both in synthetic and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant saving in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households.
The bulk of the research in NILM has mostly concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in total power measurements. Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012, Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbors (k-NN) [Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], or ad-hoc heuristic methods [Dong et al., 2012] have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data [Zia et al., 2011, Kolter and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015], resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling time series generated from multiple independent sources, and are suitable both for modeling speech with multiple people simultaneously talking [Rennie et al., 2009] and for the energy monitoring problem we consider here [Kim et al., 2011]. Doing exact inference in FHMMs is NP-hard; therefore, computationally efficient approximate methods have been the subject of study. Classic approaches include sampling methods, such as MCMC or particle filtering [Koller and Friedman, 2009], and variational Bayes methods [Wainwright and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both types of methods are nontrivial to make work, and we are not aware of any works that have demonstrated good results in our application domain with the type of FHMMs we need to work with, at practical scales.
In this paper we follow the work of Kolter and Jaakkola [2012] to model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the outputs of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions is small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex-relaxation-based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations to the integer programming formulation of Kolter and Jaakkola [2012]. In particular, we replace the quadratic programming relaxation of Kolter and Jaakkola [2012] with a relaxation to a semidefinite program (SDP), which, based on the literature on relaxations, is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time [Malick et al., 2009], IP methods scale poorly with the size of the problem and are thus unsuitable for our large-scale problem, which may involve as many as a million variables. To address this, capitalizing on the structure of our relaxation coming from our FHMM model, we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization, and combine it with a version of randomized rounding that is inspired by the recent work of Park and Boyd [2015]. Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find applications in other FHMM inference problems, too.
1.1 Notation
Throughout the paper, we use the following notation: $\mathbb{R}$ denotes the set of real numbers, $\mathbb{S}^n_+$ denotes the set of $n \times n$ positive semidefinite matrices, $\mathbb{I}\{E\}$ denotes the indicator function of an event $E$ (that is, it is 1 if the event is true and zero otherwise), and $\mathbf{1}$ denotes a vector of appropriate dimension whose entries are all 1. For an integer $K$, $[K]$ denotes the set $\{1, 2, \ldots, K\}$. $\mathcal{N}(\mu, \Sigma)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. For a matrix $A$, $\operatorname{trace}(A)$ denotes its trace and $\operatorname{diag}(A)$ denotes the vector formed by the diagonal entries of $A$.
2 System Model
Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are $M$ appliances in a household. Each of them is modeled via an HMM: let $P_i \in \mathbb{R}^{K_i \times K_i}$ denote the transition-probability matrix of appliance $i \in [M]$, and assume that for each state $s \in [K_i]$, the energy consumption of the appliance is a constant $\mu_{i,s}$ ($\mu_i$ denotes the corresponding $K_i$-dimensional column vector $(\mu_{i,1}, \ldots, \mu_{i,K_i})^\top$). Denoting by $x_{t,i} \in \{0,1\}^{K_i}$ the indicator vector of the state $s_{t,i}$ of appliance $i$ at time $t$ (i.e., $x_{t,i,s} = \mathbb{I}\{s_{t,i}=s\}$), the total power consumption at time $t$ is $\sum_{i \in [M]} \mu_i^\top x_{t,i}$, which we assume is observed with some additive zero-mean Gaussian noise of variance $\sigma^2$: $y_t \sim \mathcal{N}\bigl(\sum_{i \in [M]} \mu_i^\top x_{t,i}, \sigma^2\bigr)$.¹
Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the negative log-posterior function
$$\begin{aligned}
\operatorname*{argmin}_{x_{t,i}} \quad & \sum_{t=1}^{T} \frac{\bigl(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\bigr)^2}{2\sigma^2} - \sum_{t=1}^{T-1}\sum_{i=1}^{M} x_{t,i}^\top (\log P_i)\, x_{t+1,i} \\
\text{subject to} \quad & x_{t,i} \in \{0,1\}^{K_i}, \ \mathbf{1}^\top x_{t,i} = 1, \quad i \in [M] \text{ and } t \in [T],
\end{aligned} \tag{1}$$
¹Alternatively, we can assume that the power consumption $y_{t,i}$ of each appliance is normally distributed with mean $\mu_i^\top x_{t,i}$ and variance $\sigma_i^2$, where $\sigma^2 = \sum_{i \in [M]} \sigma_i^2$, and $y_t = \sum_{i \in [M]} y_{t,i}$.
where $\log P_i$ denotes the matrix obtained from $P_i$ by taking the logarithm of each entry.
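To make the generative model concrete, the following NumPy sketch simulates observations from such an additive FHMM; the transition matrices and consumption levels are toy values of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_additive_fhmm(P_list, mu_list, T, sigma):
    """Simulate Section 2's model: appliance i follows a Markov chain with
    transition matrix P_list[i] and per-state consumption mu_list[i]; the
    meter observes the sum of consumptions plus N(0, sigma^2) noise."""
    M = len(P_list)
    states = np.zeros((T, M), dtype=int)
    y = np.zeros(T)
    for i in range(M):
        K = P_list[i].shape[0]
        s = rng.integers(K)                    # arbitrary initial state
        for t in range(T):
            states[t, i] = s
            y[t] += mu_list[i][s]
            s = rng.choice(K, p=P_list[i][s])  # Markov transition
    return states, y + sigma * rng.normal(size=T)

# Two toy on/off appliances drawing 200W and 60W when on.
P = [np.array([[0.95, 0.05], [0.10, 0.90]])] * 2
mu = [np.array([0.0, 200.0]), np.array([0.0, 60.0])]
states, y = sample_additive_fhmm(P, mu, T=500, sigma=5.0)
```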
In our particular application, in addition to the signal’s temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
Formally, let $\Delta y_t = y_{t+1} - y_t$, $\Delta\mu^{(i)}_{m,k} = \mu_{i,k} - \mu_{i,m}$, and define the matrices $E_{t,i} \in \mathbb{R}^{K_i \times K_i}$ by $(E_{t,i})_{m,k} = -(\Delta y_t - \Delta\mu^{(i)}_{m,k})^2 / (2\sigma_{\mathrm{diff}}^2)$, for some constant $\sigma_{\mathrm{diff}} > 0$. Intuitively, $-(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_t$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma_{\mathrm{diff}}^2$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term $-\sum_{t=1}^{T-1}\sum_{i=1}^{M} x_{t,i}^\top E_{t,i}\, x_{t+1,i}$ to the objective of (1), arriving at
$$\begin{aligned}
\operatorname*{argmin}_{x_{t,i}} \ f(x_1, \ldots, x_T) := \ & \sum_{t=1}^{T} \frac{\bigl(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\bigr)^2}{2\sigma^2} - \sum_{t=1}^{T-1}\sum_{i=1}^{M} x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i} \\
\text{subject to} \quad & x_{t,i} \in \{0,1\}^{K_i}, \ \mathbf{1}^\top x_{t,i} = 1, \quad i \in [M] \text{ and } t \in [T].
\end{aligned} \tag{2}$$
In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.
3 SDP Relaxation and Randomized Rounding
There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors $x_{t,i}$; and (ii) the objective function $f$, even when considering its extension to a convex domain, is in general non-convex (due to the second term). As a remedy, we will relax (2) to make it an integer quadratic programming problem, then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start by reviewing the latter methods.
3.1 Approximate Solutions for Integer Quadratic Programming
In this section we consider approximate solutions to the integer quadratic programming problem
$$\text{minimize} \quad f(x) = x^\top D x + 2 d^\top x \quad \text{subject to} \quad x \in \{0,1\}^n, \tag{3}$$
where $D \in \mathbb{S}^n_+$ is positive semidefinite and $d \in \mathbb{R}^n$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large-scale problems.
One way to avoid exponential running times is to replace (3) with a convex problem, with the hope that the solution of the convex problem can serve as a good starting point for finding high-quality solutions to (3). The standard approach is to linearize (3) by introducing a new variable $X \in \mathbb{S}^n_+$ tied to $x$ through $X = xx^\top$, so that $x^\top D x = \operatorname{trace}(DX)$, and then relax the nonconvex constraints $X = xx^\top$, $x \in \{0,1\}^n$ to $X \succeq xx^\top$, $\operatorname{diag}(X) = x$, $x \in [0,1]^n$. This leads to the relaxed SDP problem
$$\text{minimize} \quad \operatorname{trace}(D^\top X) + 2 d^\top x \quad \text{subject to} \quad \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0, \ \operatorname{diag}(X) = x, \ x \in [0,1]^n. \tag{4}$$
By introducing $\hat X = \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix}$, this can be written in the compact SDP form
$$\text{minimize} \quad \operatorname{trace}(\hat D^\top \hat X) \quad \text{subject to} \quad \hat X \succeq 0, \ \mathcal{A}\hat X = b, \tag{5}$$
where $\hat D = \begin{bmatrix} 0 & d^\top \\ d & D \end{bmatrix} \in \mathbb{S}^{n+1}_+$, $b \in \mathbb{R}^m$, and $\mathcal{A}: \mathbb{S}^{n+1}_+ \to \mathbb{R}^m$ is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both the running time and the required memory if either the number of variables or the number of optimization constraints is large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5.
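For small instances, the lifted relaxation (4)-(5) can be written down directly in a modeling language; the CVXPY sketch below is purely illustrative (CVXPY and its default solver are our choices, not the paper's solver, and a generic solver will not scale to the horizons we face).

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(D, d):
    """Solve the lifted relaxation (4): optimize over the (n+1) x (n+1)
    PSD matrix X_hat = [[1, x^T], [x, X]] with diag(X) = x, x in [0,1]^n."""
    n = D.shape[0]
    X_hat = cp.Variable((n + 1, n + 1), PSD=True)
    x, X = X_hat[1:, 0], X_hat[1:, 1:]
    constraints = [X_hat[0, 0] == 1, cp.diag(X) == x, x >= 0, x <= 1]
    prob = cp.Problem(cp.Minimize(cp.trace(D @ X) + 2 * d @ x), constraints)
    prob.solve()
    return x.value, X.value

x_star, X_star = sdp_relaxation(np.eye(3), np.array([-1.0, 0.5, -0.2]))
```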
Note that, by introducing the new variable $X$, the problem is projected into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991, Burer and Vandenbussche, 2006).
To obtain a feasible point of (3) from the solution of (5), we still need to convert the solution $x$ to a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: instead of letting $x \in [0,1]^n$, the integrality constraint $x \in \{0,1\}^n$ in (3) can be replaced by the inequalities $x_i(x_i - 1) \geq 0$ for all $i \in [n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem
$$\begin{aligned}
\text{minimize} \quad & \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\bigl[w^\top D w + 2 d^\top w\bigr] \\
\text{subject to} \quad & \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\bigl[w_i(w_i - 1)\bigr] \geq 0, \ i \in [n], \quad \mu \in \mathbb{R}^n, \ \Sigma \succeq 0
\end{aligned}$$
is equivalent to
$$\text{minimize} \quad \operatorname{trace}\bigl((\Sigma + \mu\mu^\top) D\bigr) + 2 d^\top \mu \quad \text{subject to} \quad \Sigma_{i,i} + \mu_i^2 - \mu_i \geq 0, \ i \in [n], \tag{6}$$
which is in the form of (4) with $X = \Sigma + \mu\mu^\top$ and $x = \mu$ (above, $\mathbb{E}_{x \sim P}[f(x)]$ stands for $\int f(x)\,dP(x)$). This leads to the following rounding procedure: starting from a solution $(x^*, X^*)$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^*, X^* - x^* x^{*\top})$, round each $w^{(j)}_i$ to 0 or 1 to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than naively rounding the coordinates of $x^*$.
4 An Efficient Algorithm for Inference in FHMMs
To arrive at our method, we apply the results of the previous subsection to (2). To do so, as mentioned at the beginning of the section, we need to change the problem to a convex one, since the elements of the second term in the objective of (2), $x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$, are not convex. To address this issue, we relax the problem by introducing new variables $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ and replacing the constraint $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ with two new ones: $Z_{t,i}\mathbf{1} = x_{t,i}$ and $Z_{t,i}^\top \mathbf{1} = x_{t+1,i}$. To simplify the presentation, we will assume that $K_i = K$ for all $i \in [M]$. Then problem (2) becomes
$$\begin{aligned}
\operatorname*{argmin}_{x_t,\, z_t} \quad & \sum_{t=1}^{T} \Bigl\{ \frac{1}{2\sigma^2}\bigl(y_t - x_t^\top \mu\bigr)^2 - p_t^\top z_t \Bigr\} \\
\text{subject to} \quad & x_t \in \{0,1\}^{MK}, \ t \in [T], \qquad z_t \in \{0,1\}^{MKK}, \ t \in [T-1], \\
& \mathbf{1}^\top x_{t,i} = 1, \ t \in [T] \text{ and } i \in [M], \\
& Z_{t,i}\mathbf{1} = x_{t,i}, \ Z_{t,i}^\top \mathbf{1} = x_{t+1,i}, \ t \in [T-1] \text{ and } i \in [M],
\end{aligned} \tag{7}$$
Algorithm 1 ADMM-RR: randomized rounding algorithm for a suboptimal solution to (2)
Given: number of iterations itermax, length of input data T
Solve the optimization problem (8): run Algorithm 2 to get $X^*_t$ and $z^*_t$
Set $x^{\text{best}}_t := z^*_t$ and $X^{\text{best}}_t := X^*_t$ for $t = 1, \ldots, T$
for $t = 2, \ldots, T-1$ do
  Set $x := [x^{\text{best}\,\top}_{t-1}, x^{\text{best}\,\top}_{t}, x^{\text{best}\,\top}_{t+1}]^\top$
  Set $X := \mathrm{block}(X^{\text{best}}_{t-1}, X^{\text{best}}_{t}, X^{\text{best}}_{t+1})$, where $\mathrm{block}(\cdot)$ constructs a block-diagonal matrix from its input arguments
  Set $f_{\text{best}} := \infty$
  Form the covariance matrix $\Sigma := X - xx^\top$ and find its Cholesky factorization $LL^\top = \Sigma$
  for $k = 1, 2, \ldots,$ itermax do
    Random sampling: $z_k := x + Lw$, where $w \sim \mathcal{N}(0, I)$
    Round $z_k$ to the nearest integer point $x_k$ that satisfies the constraints of (7)
    If $f_{\text{best}} > f_t(x_k)$, update $x^{\text{best}}_t$ and $X^{\text{best}}_t$ from the corresponding entries of $x_k$ and $x_k x_k^\top$, respectively
  end for
end for
where $x_t^\top = [x_{t,1}^\top, \ldots, x_{t,M}^\top]$, $\mu^\top = [\mu_1^\top, \ldots, \mu_M^\top]$, $z_t^\top = [\operatorname{vec}(Z_{t,1})^\top, \ldots, \operatorname{vec}(Z_{t,M})^\top]$ and $p_t^\top = [\operatorname{vec}(E_{t,1} + \log P_1)^\top, \ldots, \operatorname{vec}(E_{t,M} + \log P_M)^\top]$, with $\operatorname{vec}(A)$ denoting the column vector obtained by concatenating the columns of a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²
$$\begin{aligned}
\operatorname*{argmin}_{X_t,\, z_t} \quad & \sum_{t=1}^{T} \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t \\
\text{subject to} \quad & \mathcal{A} X_t = b, \quad \mathcal{B} X_t + \mathcal{C} z_t + \mathcal{E} X_{t+1} = g, \\
& X_t \succeq 0, \quad X_t, z_t \geq 0.
\end{aligned} \tag{8}$$
Here $\mathcal{A}: \mathbb{S}^{MK+1}_+ \to \mathbb{R}^m$, $\mathcal{B}, \mathcal{E}: \mathbb{S}^{MK+1}_+ \to \mathbb{R}^{m'}$, and $\mathcal{C} \in \mathbb{R}^{m' \times MKK}$ are all appropriate linear operators, and the integers $m$ and $m'$ are determined by the number of equality constraints, while
$$D_t = \frac{1}{2\sigma^2}\begin{bmatrix} 0 & -y_t \mu^\top \\ -y_t \mu & \mu\mu^\top \end{bmatrix} \quad \text{and} \quad d_t = -p_t.$$
Notice that (8) is a simple, though huge-dimensional, SDP problem in the form of (5) where $\hat D$ has a special block structure.
Next we apply the randomized rounding method from Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution $(z^*, X^*)$ of (8), and utilizing that we have an SDP problem for each time step $t$, we obtain Algorithm 1, which performs the rounding sequentially for $t = 1, 2, \ldots, T$. However, we run the randomized method over three consecutive time steps, since $X_t$ appears at both time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation 9). Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x_k$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
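The greedy search can be sketched as follows for a generic objective $f$ over state assignments; the data layout (a $T \times M$ array of state indices) is our illustrative choice.

```python
import numpy as np

def greedy_improve(x, f, n_states):
    """Repeatedly switch the state of a single appliance at a single time
    step whenever doing so lowers the objective f; stop when no single
    switch improves the value."""
    best = f(x)
    improved = True
    while improved:
        improved = False
        for t in range(x.shape[0]):
            for i in range(x.shape[1]):
                old = x[t, i]
                for s in range(n_states):
                    if s == old:
                        continue
                    x[t, i] = s
                    val = f(x)
                    if val < best:
                        best, old, improved = val, s, True
                x[t, i] = old  # keep the best state found for (t, i)
    return x, best
```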
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems
Given the relaxation and randomized rounding presented in the previous section, all that remains is to find $X^*_t, z^*_t$ to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5}\log(1/\epsilon)$ [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case, since the number of variables scales linearly with the time horizon $T$.
As an alternative solution, first-order methods can be used for large scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem where the objective function is separable, ADMM is a promising candidate to find a near-optimal solution. To apply ADMM, we use the Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we
²The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
Algorithm 2 ADMM for sparse SDPs of the form (8)
Given: length of input data T, number of iterations itermax.
Set the initial values to zero: $W^0_t, P^0_t, S^0_t = 0$, $\lambda^0_t = 0$, $\nu^0_t = 0$, and $r^0_t, h^0_t = 0$
Set $\mu = 0.001$ {default step-size value}
for $k = 0, 1, \ldots,$ itermax do
  for $t = 1, 2, \ldots, T$ do
    Update $P^k_t$, $W^k_t$, $\lambda^k_t$, $S^k_t$, $r^k_t$, $h^k_t$, and $\nu^k_t$, respectively, according to (11) (Appendix A).
  end for
end for
consider. When implementing ADMM over the variables $(X_t, z_t)_t$, the sparse structure of our constraints allows us to consider the SDP problems for each time step $t$ sequentially:
$$\begin{aligned}
\operatorname*{argmin}_{X_t,\, z_t} \quad & \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t \\
\text{subject to} \quad & \mathcal{A} X_t = b, \\
& \mathcal{B} X_t + \mathcal{C} z_t + \mathcal{E} X_{t+1} = g, \\
& \mathcal{B} X_{t-1} + \mathcal{C} z_{t-1} + \mathcal{E} X_t = g, \\
& X_t \succeq 0, \quad X_t, z_t \geq 0.
\end{aligned} \tag{9}$$
The regularized Lagrangian function for (9) is³
$$\begin{aligned}
L_\mu = {} & \operatorname{trace}(D^\top X) + d^\top z + \frac{1}{2\mu}\|X - S\|_F^2 + \frac{1}{2\mu}\|z - r\|_2^2 + \lambda^\top(b - \mathcal{A}X) \\
& + \nu^\top(g - \mathcal{B}X - \mathcal{C}z - \mathcal{E}X_+) + \nu_-^\top(g - \mathcal{B}X_- - \mathcal{C}z_- - \mathcal{E}X) \\
& - \operatorname{trace}(W^\top X) - \operatorname{trace}(P^\top X) - h^\top z,
\end{aligned} \tag{10}$$
where $\lambda$, $\nu$, $W \geq 0$, $P \succeq 0$, and $h \geq 0$ are dual variables, and $\mu > 0$ is a constant. By taking the derivatives of $L_\mu$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A.
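We do not reproduce the updates (11) here, but one representative ingredient of such solvers is the Euclidean projection onto the positive semidefinite cone, sketched below; the remaining updates are closed-form expressions of the same flavor.

```python
import numpy as np

def project_psd(A):
    """Projection onto the PSD cone: symmetrize, then clip negative
    eigenvalues at zero (a standard building block of SDP-ADMM updates)."""
    A = 0.5 * (A + A.T)
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T
```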
Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2) and thus also to the inference problem of additive FHMMs.
6 Learning the Model
The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrix, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters.
However, when it comes to the specific application of NILM, the problem of unknown, time-varying bias also needs to be addressed, which appears due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a “generic model” whose contribution to the objective function is downweighted. Surprisingly, incorporating this idea in the FHMM inference creates some unexpected challenges.4
Therefore, in this work we come up with a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change $\Delta y_t$ in the power usage (using some ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K-1$ clusters, and an HMM model is built where each cluster is regarded as power usage coming from a single state of the unregistered appliances. We also allow an "off" state with power usage 0.
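A sketch of this heuristic is given below; the text does not prescribe a particular clustering algorithm, so k-means (via scikit-learn) and the two thresholds are our illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def generic_model_levels(delta_y, known_changes, threshold, K, tol):
    """Keep large jumps that match no known appliance level change,
    cluster them into K-1 levels, and add an off state at 0 (Sec. 6)."""
    big = delta_y[np.abs(delta_y) > threshold]
    dist = np.abs(big[:, None] - np.asarray(known_changes)[None, :])
    unexplained = big[dist.min(axis=1) > tol]
    centers = KMeans(n_clusters=K - 1, n_init=10).fit(
        unexplained.reshape(-1, 1)).cluster_centers_.ravel()
    return np.concatenate(([0.0], centers))  # states of the generic HMM
```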
³We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
⁴For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion of this.
7 Experimental Results
We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, while we use the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012], and that of Zhong et al. [2014]; we shall refer to the last two algorithms as KJ and ZGS, respectively.
7.1 Experimental Results: Synthetic Data
The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error, as suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014]. This measures the reconstruction error for each individual appliance. Given the true output $y_{t,i}$ and the estimated output $\hat y_{t,i}$ (i.e., $\hat y_{t,i} = \mu_i^\top \hat x_{t,i}$), the error measure is defined as
$$\mathrm{NDE} = \sqrt{\textstyle\sum_{t,i} (y_{t,i} - \hat y_{t,i})^2 \,\big/\, \sum_{t,i} (y_{t,i})^2}\,.$$
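In code, the measure is a one-liner (assuming $T \times M$ arrays of true and estimated per-appliance outputs):

```python
import numpy as np

def nde(y_true, y_hat):
    """Normalized disaggregation error over all appliances i and times t."""
    y_true, y_hat = np.asarray(y_true), np.asarray(y_hat)
    return np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))
```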
Figures 1 and 2 show the performance of the algorithms as the number of HMMs ($M$) (resp., the number of states, $K$) is varied. Each plot reports results for $T = 1000$ steps, averaged over 100 random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of every 250 iterations, and chooses the result that has the maximum likelihood. ADMM is the algorithm which applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtained better results than its competitors, with KJ coming second and ZGS third.
7.2 Experimental Results: Real Data
In this section, we also compared the 3 best methods on the real dataset REDD [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e.,
5Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.
appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main panel data. In this set of experiments we monitor appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, the local search is run at the end of every 250 iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need some prior information about the usage of each appliance; the authors' suggestion is to use national energy surveys, but in the absence of this information (also about the number of residents, type of houses, etc.) we used the training data to extract this prior knowledge, which is expected to help this method.
Detailed results about the precision and recall of estimating which appliances are "on" at any given time are given in Table 1. In Appendix D we also report the error of the total power usage assigned to different appliances (Table 2), as well as the amount of power assigned to each appliance as a percentage of total power (Figure 3). As a summary, we can see that our method consistently outperformed the others, achieving an average precision and recall of 60.97% and 78.56%, with about 50% better precision than KJ at essentially the same recall (38.68/75.02%), while significantly improving upon ZGS (17.97/36.22%). Considering the error in assigning the power consumption to different appliances, our method achieved about 30-35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors.
In our real-data experiments, there are about 1 million decision variables: $M = 7$ or 6 appliances (for phase A and phase B power, respectively), with $K = 4$ states each, over about $T = 30{,}000$ time steps for one day (1 sample every 6 seconds). KJ and ZGS solve quadratic programs, increasing their memory usage (14GB vs 6GB in our case). On the other hand, our implementation of their method, using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5 minutes, while our algorithm, which is purely Matlab-based, takes 5 hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up compared to our current implementation.
8 Conclusion
FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard inference FHMM algorithms infeasible even for only a handful of appliances. In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments. A crucial component of our solution is a scalable ADMM method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large scale integer quadratic programming.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.
2. What are the strengths of the paper's solution, particularly in its complexity and composition?
3. Are there any concerns or questions regarding the estimation of the number of appliances?
4. How sensitive is the choice of the number of appliances during empirical evaluations? | Review | Review
The paper investigates the problem of estimating the power consumed by each appliance in a household from time series measurements of the total energy consumed. The authors solve the problem with an additive factorial HMM model and infer the estimates using a few clever tricks, including casting it as a convex semidefinite relaxation, randomized rounding, and an efficient and scalable alternating direction method of multipliers (ADMM). They demonstrate its efficacy on a simulated data set. This paper tackles an intriguing problem and is well-written. Their solution, which is complex and contains several components, is explained in stages where each stage naturally follows from the previous one. Even for a reader not familiar with the problem domain, the paper is self-contained. After reading the paper, the one question that remained unclear is how the number of appliances is estimated. Perhaps the authors could elaborate on this in Section 5. It would also be useful to know how sensitive this choice is during the empirical evaluation.
NIPS | Title
SDP Relaxation with Randomized Rounding for Energy Disaggregation
Abstract
We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results both in synthetic and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant saving in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households.
The bulk of the research in NILM has mostly concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in total power measurements. Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012, Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbor (k-NN) [Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], or ad-hoc heuristic methods [Dong et al., 2012] have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data[Zia et al., 2011, Kolter and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015], resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling times series generated from multiple independent sources, and are great for modeling speech with multiple people simultaneously talking [Rennie et al., 2009], or energy monitoring which we consider here [Kim et al., 2011]. Doing exact inference in FHMMs is NP hard; therefore, computationally efficient approximate methods have been the subject of study. Classic approaches include sampling methods, such as MCMC or particle filtering [Koller and Friedman, 2009] and variational Bayes methods [Wainwright and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both methods are nontrivial to make work and we are not aware of any works that would have demonstrated good results in our application domain with the type of FHMMs we need to work and at practical scales.
In this paper we follow the work of Kolter and Jaakkola [2012] to model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the output of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions are small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex relaxation based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations to the integer programming formulation of Kolter and Jaakkola [2012]. In particular, we replace the quadratic programming relaxation of Kolter and Jaakkola, 2012 with a relaxation to an semi-definite program (SDP), which, based on the literature of relaxations is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time [Malick et al., 2009], IP scales poorly with the size of the problem and is thus unsuitable to our large scale problem which may involve as many a million variables. To address this problem, capitalizing on the structure of our relaxation coming from our FHMM model, we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization and combine it with a version of randomized rounding that is inspired by the the recent work of Park and Boyd [2015]. Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find its applications in other FHMM inference problems, too.
1.1 Notation
Throughout the paper, we use the following notation: R denotes the set of real numbers, Sn+ denotes the set of n ⇥ n positive semidefinite matrices, I{E} denotes the indicator function of an event E (that is, it is 1 if the event is true and zero otherwise), 1 denotes a vector of appropriate dimension whose entries are all 1. For an integer K, [K] denotes the set {1, 2, . . . ,K}. N (µ,⌃) denotes the Gaussian distribution with mean µ and covariance matrix ⌃. For a matrix A, trace(A) denotes its trace and diag(A) denotes the vector formed by the diagonal entries of A.
2 System Model
Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are $M$ appliances in a household. Each of them is modeled via an HMM: let $P_i \in \mathbb{R}^{K_i \times K_i}$ denote the transition-probability matrix of appliance $i \in [M]$, and assume that for each state $s \in [K_i]$, the energy consumption of the appliance is a constant $\mu_{i,s}$ ($\mu_i$ denotes the corresponding $K_i$-dimensional column vector $(\mu_{i,1}, \ldots, \mu_{i,K_i})^\top$). Denoting by $x_{t,i} \in \{0,1\}^{K_i}$ the indicator vector of the state $s_{t,i}$ of appliance $i$ at time $t$ (i.e., $x_{t,i,s} = \mathbb{I}\{s_{t,i}=s\}$), the total power consumption at time $t$ is $\sum_{i\in[M]} \mu_i^\top x_{t,i}$, which we assume is observed with some additive zero-mean Gaussian noise of variance $\sigma^2$: $y_t \sim \mathcal{N}\big(\sum_{i\in[M]} \mu_i^\top x_{t,i},\, \sigma^2\big)$.¹
Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the negative log-posterior,
$$\operatorname*{argmin}_{x_{t,i}} \; \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (\log P_i)\, x_{t+1,i}$$
$$\text{subject to } x_{t,i} \in \{0,1\}^{K_i}, \quad \mathbf{1}^\top x_{t,i} = 1, \quad i \in [M] \text{ and } t \in [T], \qquad (1)$$
where $\log P_i$ denotes the matrix obtained from $P_i$ by taking the logarithm of each entry.

¹Alternatively, we can assume that the power consumption $y_{t,i}$ of each appliance is normally distributed with mean $\mu_i^\top x_{t,i}$ and variance $\sigma_i^2$, where $\sigma^2 = \sum_{i\in[M]} \sigma_i^2$, and $y_t = \sum_{i\in[M]} y_{t,i}$.
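To make the model concrete, the following sketch (with our own illustrative parameters, not the paper's) samples from the additive FHMM and evaluates the objective of (1) for a given state sequence:

```python
# A minimal sketch (not the authors' code) of the additive FHMM generative model
# and the negative log-posterior of Eq. (1). All names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, K, T, sigma = 3, 2, 200, 5.0           # appliances, states, horizon, noise std

P = [np.array([[0.95, 0.05], [0.10, 0.90]]) for _ in range(M)]   # transition matrices
mu = [rng.uniform(0, 300, size=K) for _ in range(M)]             # per-state power levels

# Sample the state sequences and the aggregate observation y_t.
states = np.zeros((T, M), dtype=int)
for i in range(M):
    for t in range(1, T):
        states[t, i] = rng.choice(K, p=P[i][states[t - 1, i]])
y = np.array([sum(mu[i][states[t, i]] for i in range(M)) for t in range(T)])
y += rng.normal(0.0, sigma, size=T)

def neg_log_posterior(states, y):
    """Objective of Eq. (1): quadratic data term minus log transition probabilities."""
    data = sum((y[t] - sum(mu[i][states[t, i]] for i in range(M))) ** 2
               for t in range(T)) / (2 * sigma ** 2)
    trans = sum(np.log(P[i][states[t, i], states[t + 1, i]])
                for t in range(T - 1) for i in range(M))
    return data - trans

print(neg_log_posterior(states, y))
```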
In our particular application, in addition to the signal’s temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
Formally, let $\Delta y_t = y_{t+1} - y_t$ and $\Delta\mu^{(i)}_{m,k} = \mu_{i,k} - \mu_{i,m}$, and define the matrices $E_{t,i} \in \mathbb{R}^{K_i \times K_i}$ by $(E_{t,i})_{m,k} = -\big(\Delta y_t - \Delta\mu^{(i)}_{m,k}\big)^2 / (2\sigma^2_{\text{diff}})$, for some constant $\sigma_{\text{diff}} > 0$. Intuitively, $-(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_t$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma^2_{\text{diff}}$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term $\big(-\sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top E_{t,i}\, x_{t+1,i}\big)$ to the objective of (1), arriving at
$$\operatorname*{argmin}_{x_{t,i}} \; f(x_1, \ldots, x_T) := \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$$
$$\text{subject to } x_{t,i} \in \{0,1\}^{K_i}, \quad \mathbf{1}^\top x_{t,i} = 1, \quad i \in [M] \text{ and } t \in [T]. \qquad (2)$$
In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.
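As a self-contained sketch (illustrative sizes, not the authors' code), the amended objective $f$ of (2) can be evaluated as follows, using the sign convention reconstructed above, $(E_{t,i})_{m,k} = -(\Delta y_t - \Delta\mu^{(i)}_{m,k})^2/(2\sigma^2_{\text{diff}})$:

```python
# Evaluate f of Eq. (2) on an arbitrary candidate state sequence.
import numpy as np

rng = np.random.default_rng(0)
M, K, T, sigma, sigma_diff = 3, 2, 200, 5.0, 10.0
P = [np.array([[0.95, 0.05], [0.10, 0.90]]) for _ in range(M)]
mu = [rng.uniform(0, 300, size=K) for _ in range(M)]
states = rng.integers(0, K, size=(T, M))          # arbitrary candidate states
y = np.array([sum(mu[i][s[i]] for i in range(M)) for s in states])
y += rng.normal(0, sigma, size=T)
dy = np.diff(y)                                   # \Delta y_t = y_{t+1} - y_t

def E_matrix(t, i):
    dmu = mu[i][None, :] - mu[i][:, None]         # dmu[m, k] = mu_{i,k} - mu_{i,m}
    return -((dy[t] - dmu) ** 2) / (2 * sigma_diff ** 2)

def one_hot(s):
    v = np.zeros(K); v[s] = 1.0
    return v

def f(states):
    data = sum((y[t] - sum(mu[i][states[t, i]] for i in range(M))) ** 2
               for t in range(T)) / (2 * sigma ** 2)
    pair = sum(one_hot(states[t, i]) @ (E_matrix(t, i) + np.log(P[i]))
               @ one_hot(states[t + 1, i])
               for t in range(T - 1) for i in range(M))
    return data - pair

print(f(states))
```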
3 SDP Relaxation and Randomized Rounding
There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors $x_{t,i}$; and (ii) the objective function $f$, even when considering its extension to a convex domain, is in general non-convex (due to the second term). As a remedy, we first relax (2) into an integer quadratic programming problem, and then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start by reviewing the latter methods.
3.1 Approximate Solutions for Integer Quadratic Programming
In this section we consider approximate solutions to the integer quadratic programming problem
$$\text{minimize } f(x) = x^\top D x + 2 d^\top x \quad \text{subject to } x \in \{0,1\}^n, \qquad (3)$$
where $D \in S^n_+$ is positive semidefinite and $d \in \mathbb{R}^n$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large-scale problems.
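For intuition on why relaxations are needed, a brute-force solver for (3) (on a random illustrative instance of our own making) might look as follows; its $2^n$ loop is exactly what rules this approach out at our scale:

```python
# Exhaustive search for the binary QP (3); feasible only for tiny n.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = rng.normal(size=(n, n))
D = A @ A.T                 # a random positive semidefinite matrix
d = rng.normal(size=n)

best_val, best_x = np.inf, None
for bits in itertools.product([0, 1], repeat=n):   # 2^n candidate points
    x = np.array(bits, dtype=float)
    val = x @ D @ x + 2 * d @ x
    if val < best_val:
        best_val, best_x = val, x
print(best_val, best_x)
```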
One way to avoid exponential running times is to replace (3) with a convex problem, with the hope that the solution of the convex problem can serve as a good starting point for finding high-quality solutions to (3). The standard approach is to linearize (3) by introducing a new variable $X \in S^n_+$ tied to $x$ through $X = xx^\top$, so that $x^\top D x = \operatorname{trace}(DX)$, and then relax the nonconvex constraints $X = xx^\top$, $x \in \{0,1\}^n$ to $X \succeq xx^\top$, $\operatorname{diag}(X) = x$, $x \in [0,1]^n$. This leads to the relaxed SDP problem
$$\text{minimize } \operatorname{trace}(D^\top X) + 2 d^\top x \quad \text{subject to } \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0, \;\; \operatorname{diag}(X) = x, \;\; x \in [0,1]^n. \qquad (4)$$
By introducing $\hat X = \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix}$, this can be written in the compact SDP form
$$\text{minimize } \operatorname{trace}(\hat D^\top \hat X) \quad \text{subject to } \hat X \succeq 0, \;\; \mathcal{A}\hat X = b, \qquad (5)$$
where $\hat D = \begin{bmatrix} 0 & d^\top \\ d & D \end{bmatrix}$, $b \in \mathbb{R}^m$, and $\mathcal{A}: S^{n+1}_+ \to \mathbb{R}^m$ is an appropriate linear operator. This general SDP optimization problem can be solved to arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both running time and memory if either the number of variables or the number of constraints is large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5.
Note that by introducing the new variable $X$, the problem is lifted into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991, Burer and Vandenbussche, 2006).
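A minimal CVXPY sketch of the lifted relaxation (4) is given below; it assumes cvxpy (with its bundled conic solver) is installed, and the random instance and sizes are purely illustrative:

```python
# Solve the SDP relaxation (4) via the lifted matrix Xhat of (5).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = rng.normal(size=(n, n))
D = A @ A.T                                   # random PSD objective matrix
d = rng.normal(size=n)

Xhat = cp.Variable((n + 1, n + 1), symmetric=True)
x = Xhat[0, 1:]                               # the relaxed binary vector
X = Xhat[1:, 1:]                              # the lifted variable X ~ x x^T
constraints = [Xhat >> 0,                     # [[1, x^T], [x, X]] is PSD
               Xhat[0, 0] == 1,
               cp.diag(X) == x,
               x >= 0, x <= 1]
prob = cp.Problem(cp.Minimize(cp.trace(D @ X) + 2 * d @ x), constraints)
prob.solve()
print(prob.value)                             # a lower bound on the optimum of (3)
```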
To obtain a feasible point of (3) from the solution of (5), we still need to convert the solution $x$ into a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: instead of letting $x \in [0,1]^n$, the integrality constraint $x \in \{0,1\}^n$ in (3) can be replaced by the inequalities $x_i(x_i - 1) \ge 0$ for all $i \in [n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem
$$\text{minimize } \mathbb{E}_{w\sim\mathcal{N}(\mu,\Sigma)}\big[w^\top D w + 2 d^\top w\big] \quad \text{subject to } \mathbb{E}_{w\sim\mathcal{N}(\mu,\Sigma)}[w_i(w_i-1)] \ge 0, \; i \in [n], \;\; \mu \in \mathbb{R}^n, \; \Sigma \succeq 0$$
is equivalent to
$$\text{minimize } \operatorname{trace}\big((\Sigma + \mu\mu^\top) D\big) + 2 d^\top \mu \quad \text{subject to } \Sigma_{i,i} + \mu_i^2 - \mu_i \ge 0, \; i \in [n], \qquad (6)$$
which is in the form of (4) with $X = \Sigma + \mu\mu^\top$ and $x = \mu$ (above, $\mathbb{E}_{x\sim P}[f(x)]$ stands for $\int f(x)\,dP(x)$). This leads to the following rounding procedure: starting from a solution $(x^*, X^*)$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^*, X^* - x^* x^{*\top})$, round each $w^{(j)}_i$ to 0 or 1 to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than naively rounding the coordinates of $x^*$.
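A sketch of this rounding loop follows; the pair `(x_star, X_star)` is a synthetic stand-in for an SDP solution (it would normally come from solving (4) as above), and the jitter added before the Cholesky factorization is a numerical safeguard of ours:

```python
# Gaussian randomized rounding in the spirit of Park and Boyd [2015].
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = rng.normal(size=(n, n))
D = A @ A.T
d = rng.normal(size=n)

x_star = rng.uniform(0.2, 0.8, size=n)          # stand-in for the SDP solution
X_star = np.outer(x_star, x_star) + 0.05 * np.eye(n)

Sigma = X_star - np.outer(x_star, x_star)       # covariance of N(x*, X* - x* x*^T)
L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n))

def obj(x):
    return x @ D @ x + 2 * d @ x

best_val, best_x = np.inf, None
for _ in range(100):
    w = x_star + L @ rng.normal(size=n)         # sample w ~ N(x*, Sigma)
    x = np.clip(np.rint(w), 0, 1)               # round to the nearest binary point
    if obj(x) < best_val:
        best_val, best_x = obj(x), x
print(best_val, best_x)
```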
4 An Efficient Algorithm for Inference in FHMMs
To arrive at our method, we apply the results of the previous section to (2). To do so, as mentioned at the beginning of the section, we need to make the problem convex, since the terms $x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$ in the second sum of the objective of (2) are not convex. To address this issue, we relax the problem by introducing new variables $Z_{t,i} = x_{t,i}\, x_{t+1,i}^\top$ and replacing the constraint $Z_{t,i} = x_{t,i}\, x_{t+1,i}^\top$ with two new ones: $Z_{t,i}\mathbf{1} = x_{t,i}$ and $Z_{t,i}^\top \mathbf{1} = x_{t+1,i}$.
To simplify the presentation, we will assume that $K_i = K$ for all $i \in [M]$. Then problem (2) becomes
$$\operatorname*{argmin}_{x_t,\, z_t} \; \sum_{t=1}^{T} \left\{ \frac{1}{2\sigma^2} \big(y_t - x_t^\top \mu\big)^2 - p_t^\top z_t \right\}$$
$$\begin{aligned} \text{subject to } \; & x_t \in \{0,1\}^{MK}, \; t \in [T], \qquad z_t \in \{0,1\}^{MKK}, \; t \in [T-1],\\ & \mathbf{1}^\top x_{t,i} = 1, \; t \in [T] \text{ and } i \in [M],\\ & Z_{t,i}\mathbf{1} = x_{t,i}, \;\; Z_{t,i}^\top \mathbf{1} = x_{t+1,i}, \; t \in [T-1] \text{ and } i \in [M], \end{aligned} \qquad (7)$$
Algorithm 1 ADMM-RR: Randomized rounding algorithm for a suboptimal solution to (2)
Given: number of iterations: itermax; length of input data: T
Solve the optimization problem (8): run Algorithm 2 to get $X^*_t$ and $z^*_t$
Set $x^{\text{best}}_t := z^*_t$ and $X^{\text{best}}_t := X^*_t$ for $t = 1, \ldots, T$
for $t = 2, \ldots, T-1$ do
    Set $x := [x^{\text{best}\top}_{t-1}, x^{\text{best}\top}_{t}, x^{\text{best}\top}_{t+1}]^\top$
    Set $X := \operatorname{block}(X^{\text{best}}_{t-1}, X^{\text{best}}_{t}, X^{\text{best}}_{t+1})$, where $\operatorname{block}(\cdot,\cdot)$ constructs a block-diagonal matrix from its input arguments
    Set $f^{\text{best}} := \infty$
    Form the covariance matrix $\Sigma := X - xx^\top$ and find its Cholesky factorization $LL^\top = \Sigma$
    for $k = 1, 2, \ldots,$ itermax do
        Random sampling: $z^k := x + Lw$, where $w \sim \mathcal{N}(0, I)$
        Round $z^k$ to the nearest integer point $x^k$ that satisfies the constraints of (7)
        If $f^{\text{best}} > f_t(x^k)$, update $x^{\text{best}}_t$ and $X^{\text{best}}_t$ from the corresponding entries of $x^k$ and $x^k x^{k\top}$, respectively
    end for
end for
where $x_t^\top = [x_{t,1}^\top, \ldots, x_{t,M}^\top]$, $\mu^\top = [\mu_1^\top, \ldots, \mu_M^\top]$, $z_t^\top = [\operatorname{vec}(Z_{t,1})^\top, \ldots, \operatorname{vec}(Z_{t,M})^\top]$ and $p_t^\top = [\operatorname{vec}(E_{t,1} + \log P_1)^\top, \ldots, \operatorname{vec}(E_{t,M} + \log P_M)^\top]$, with $\operatorname{vec}(A)$ denoting the column vector obtained by concatenating the columns of a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²
$$\operatorname*{arg\,min}_{X_t,\, z_t} \; \sum_{t=1}^{T} \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\text{subject to } \mathcal{A}X_t = b, \quad \mathcal{B}X_t + C z_t + \mathcal{E}X_{t+1} = g, \quad X_t \succeq 0, \;\; X_t, z_t \ge 0. \qquad (8)$$
Here $\mathcal{A}: S^{MK+1}_+ \to \mathbb{R}^m$, $\mathcal{B}, \mathcal{E}: S^{MK+1}_+ \to \mathbb{R}^{m'}$ and $C \in \mathbb{R}^{m' \times MKK}$ are appropriate linear operators, the integers $m$ and $m'$ are determined by the number of equality constraints, while
$$D_t = \frac{1}{2\sigma^2} \begin{bmatrix} 0 & -y_t \mu^\top \\ -y_t \mu & \mu\mu^\top \end{bmatrix} \quad \text{and} \quad d_t = -p_t.$$
Notice that (8) is a simple, though huge-dimensional, SDP problem in the form of (5) where $\hat D$ has a special block structure.
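Before moving on, here is a quick numerical sanity check (illustrative, using numpy) of the linearization behind (7) and (8): when $Z = x\,x_+^\top$ for one-hot $x$, $x_+$, the marginalization constraints hold and $\operatorname{vec}(A)^\top \operatorname{vec}(Z) = x^\top A\, x_+$, so the linear term $p_t^\top z_t$ indeed reproduces the bilinear transition cost:

```python
# Verify vec(A)^T vec(x x_+^T) = x^T A x_+ and the row/column-sum constraints.
import numpy as np

rng = np.random.default_rng(0)
K = 4
A = rng.normal(size=(K, K))                   # stands in for E_{t,i} + log P_i
x = np.eye(K)[rng.integers(K)]                # one-hot state indicators
x_next = np.eye(K)[rng.integers(K)]

Z = np.outer(x, x_next)                       # Z_{t,i} = x_{t,i} x_{t+1,i}^T
assert np.allclose(Z @ np.ones(K), x)         # Z 1 = x_{t,i}
assert np.allclose(Z.T @ np.ones(K), x_next)  # Z^T 1 = x_{t+1,i}

lhs = A.flatten(order="F") @ Z.flatten(order="F")   # p^T z with column-major vec
rhs = x @ A @ x_next
print(np.isclose(lhs, rhs))                   # True
```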
Next we apply the randomized rounding method from Section 3.1 to obtain an approximate solution to our original problem (2). Starting from an optimal solution $(z^*, X^*)$ of (8), and utilizing the fact that we have an SDP problem for each time step $t$, we obtain Algorithm 1, which performs the rounding sequentially for $t = 1, 2, \ldots, T$. However, we run the randomized method over three consecutive time steps, since $X_t$ appears at time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation (9)). Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x^k$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
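A sketch of this greedy post-processing is given below; `objective` stands in for the (restricted) objective $f_t$, and everything here (names, sizes, the toy objective) is illustrative:

```python
# Greedy single-flip search: change one appliance's state at one time step
# while this improves the objective; stop when no single change helps.
import numpy as np

def greedy_improve(states, objective, K):
    states = states.copy()
    best = objective(states)
    improved = True
    while improved:
        improved = False
        T, M = states.shape
        for t in range(T):
            for i in range(M):
                old = states[t, i]
                for s in range(K):
                    if s == old:
                        continue
                    states[t, i] = s
                    val = objective(states)
                    if val < best:
                        best, old, improved = val, s, True
                states[t, i] = old          # keep the best state found so far
    return states, best

# Toy usage with a separable stand-in objective:
rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=(10, 3))
states, best = greedy_improve(rng.integers(0, 2, size=(10, 3)),
                              lambda s: np.sum(s != target), K=2)
print(best)   # 0: the greedy search recovers the target for this toy objective
```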
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems
Given the relaxation and randomized rounding presented in the previous sections, all that remains is to find $X^*_t$, $z^*_t$ to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5} \log(1/\epsilon)$ [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case since the number of variables scales linearly with the time horizon $T$.
As an alternative, first-order methods can be used for large-scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem in which the objective function is separable, ADMM is a promising candidate for finding a near-optimal solution. To apply ADMM, we use Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we consider.

²The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
Algorithm 2 ADMM for sparse SDPs of the form (8)
Given: length of input data: T; number of iterations: itermax
Set the initial values to zero: $W^0_t, P^0_t, S^0_t = 0$, $\lambda^0_t = 0$, $\nu^0_t = 0$, and $r^0_t, h^0_t = 0$
Set $\mu = 0.001$ {default step-size value}
for $k = 0, 1, \ldots,$ itermax do
    for $t = 1, 2, \ldots, T$ do
        Update $P^k_t$, $W^k_t$, $\lambda^k_t$, $S^k_t$, $r^k_t$, $h^k_t$, and $\nu^k_t$, respectively, according to (11) (Appendix A)
    end for
end for
When implementing ADMM over the variables $(X_t, z_t)_t$, the sparse structure of our constraints allows us to consider the SDP problem for each time step $t$ sequentially:
$$\operatorname*{arg\,min}_{X_t,\, z_t} \; \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\begin{aligned} \text{subject to } \; & \mathcal{A}X_t = b,\\ & \mathcal{B}X_t + C z_t + \mathcal{E}X_{t+1} = g,\\ & \mathcal{B}X_{t-1} + C z_{t-1} + \mathcal{E}X_t = g,\\ & X_t \succeq 0, \;\; X_t, z_t \ge 0. \end{aligned} \qquad (9)$$
The regularized Lagrangian function for (9) is³
$$\begin{aligned} L_\mu =\; & \operatorname{trace}(D^\top X) + d^\top z + \frac{1}{2\mu}\|X - S\|_F^2 + \frac{1}{2\mu}\|z - r\|_2^2 + \lambda^\top (b - \mathcal{A}X)\\ & + \nu^\top (g - \mathcal{B}X - Cz - \mathcal{E}X_+) + \nu_-^\top (g - \mathcal{B}X_- - Cz_- - \mathcal{E}X)\\ & - \operatorname{trace}(W^\top X) - \operatorname{trace}(P^\top X) - h^\top z, \end{aligned} \qquad (10)$$
where $\lambda$, $\nu$, $\nu_-$, $W \ge 0$, $P \succeq 0$, and $h \ge 0$ are dual variables, and $\mu > 0$ is a constant. By taking the derivatives of $L_\mu$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each $t$ sequentially, is given by Algorithm 2.
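The concrete updates (11) live in Appendix A and are not reproduced here. As a hedged illustration of the kind of primitive such updates rely on, the sketch below projects a symmetric matrix onto the PSD cone by eigenvalue clipping, the standard step whenever a matrix variable is constrained to be positive semidefinite in ADMM; all names are ours:

```python
# Euclidean projection of a symmetric matrix onto the PSD cone S^n_+.
import numpy as np

def project_psd(S):
    """Eigenvalue clipping: zero out the negative eigenvalues of symmetric S."""
    vals, vecs = np.linalg.eigh((S + S.T) / 2)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T

rng = np.random.default_rng(0)
S = rng.normal(size=(5, 5)); S = (S + S.T) / 2
X = project_psd(S)
print(np.linalg.eigvalsh(X).min() >= -1e-10)   # True: X is PSD
```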
Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2) and thus also to the inference problem of additive FHMMs.
6 Learning the Model
The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrix, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters.
However, when it comes to the specific application of NILM, the problem of unknown, time-varying bias also needs to be addressed, which appears due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a “generic model” whose contribution to the objective function is downweighted. Surprisingly, incorporating this idea in the FHMM inference creates some unexpected challenges.4
Therefore, in this work we come up with a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change $\Delta y_t$ in the power usage (using an ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K-1$ clusters, and an HMM model is built where each cluster is regarded as the power usage of a single state of the unregistered appliances. We also allow an "off" state with power usage 0.
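A rough sketch of this heuristic follows; the threshold, signal, tolerance, and the tiny 1-D k-means are illustrative stand-ins of ours, not the authors' choices:

```python
# Threshold large jumps, drop those explained by known level changes, and
# cluster the rest into K-1 generic power levels (plus an off state).
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 2, size=500)) + 200 * (rng.random(500) < 0.02)
dy = np.diff(y)
threshold = 50.0
jumps = dy[np.abs(dy) > threshold]                        # electric events

known_changes = np.array([120.0, -120.0, 300.0, -300.0])  # known Delta-mu values
unexplained = jumps[np.min(np.abs(jumps[:, None] - known_changes[None, :]),
                           axis=1) > threshold / 2]

def kmeans_1d(x, k, iters=50):                            # tiny 1-D k-means
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[lab == j].mean() if np.any(lab == j) else centers[j]
                            for j in range(k)])
    return centers

K = 4
if unexplained.size >= K - 1:
    levels = np.concatenate([[0.0], kmeans_1d(np.abs(unexplained), K - 1)])
    print(levels)   # candidate power levels of the generic HMM, including "off"
```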
³We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
⁴For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion of this.
7 Experimental Results
We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, and the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012], and that of Zhong et al. [2014]; we refer to the latter two algorithms as KJ and ZGS, respectively.
7.1 Experimental Results: Synthetic Data
The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014], which measures the reconstruction error for each individual appliance. Given the true output $y_{t,i}$ and the estimated output $\hat y_{t,i}$ (i.e., $\hat y_{t,i} = \mu_i^\top \hat x_{t,i}$), the error measure is defined as
$$\text{NDE} = \sqrt{\sum_{t,i} \big(y_{t,i} - \hat y_{t,i}\big)^2 \Big/ \sum_{t,i} \big(y_{t,i}\big)^2}\,.$$
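The formula translates directly into code; the arrays below are illustrative stand-ins for per-appliance ground truth and estimates:

```python
# Normalized disaggregation error (NDE) on a toy example.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.uniform(0, 300, size=(1000, 3))     # per-appliance ground truth
y_hat = y_true + rng.normal(0, 20, size=y_true.shape)

nde = np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))
print(nde)
```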
Figures 1 and 2 show the performance of the algorithms as the number of HMMs ($M$) and the number of states ($K$), respectively, are varied. Each plot reports results for $T = 1000$ steps, averaged over 100 random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of every 250 iterations, and chooses the result with the maximum likelihood. ADMM denotes the variant that applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtains better results than its competitors, with KJ coming second and ZGS third.
7.2 Experimental Results: Real Data
In this section we compare the three best methods on the real REDD dataset [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e., appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main-panel data. In this set of experiments we monitor appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, the local search is run at the end of every 250 iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need some prior information about the usage of each appliance; the authors' suggestion is to use national energy surveys, but in the absence of this information (as well as information about the number of residents, the type of house, etc.) we used the training data to extract this prior knowledge, which is expected to help this method.

⁵Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.
Detailed results on the precision and recall of estimating which appliances are 'on' at any given time are given in Table 1. In Appendix D we also report the error in the total power usage assigned to the different appliances (Table 2), as well as the amount of power assigned to each appliance as a percentage of the total power (Figure 3). In summary, our method consistently outperformed the others, achieving an average precision and recall of 60.97% and 78.56%: about 50% better precision than KJ (38.68%/75.02%) with essentially the same recall, and a significant improvement over ZGS (17.97%/36.22%). Considering the error in assigning the power consumption to the different appliances, our method achieved a 30-35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors.
In our real-data experiments there are about 1 million decision variables: $M = 7$ or 6 appliances (for phase A and B power, respectively), with $K = 4$ states each, over about $T = 30{,}000$ time steps covering one day (one sample every 6 seconds). KJ and ZGS solve quadratic programs, increasing their memory usage (14 GB, versus 6 GB in our case). On the other hand, our implementation of their methods, using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5 minutes, while our purely Matlab-based algorithm takes 5 hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up over the current implementation.
8 Conclusion
FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard FHMM inference algorithms infeasible even for only a handful of appliances. In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments. A crucial component of our solution is a scalable ADMM method that exploits the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large-scale integer quadratic programming.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.

1. What are the contributions and advancements proposed by the paper in additive factorial HMMs?
2. How does the reviewer assess the technical quality and presentation of the paper?
3. What are the strengths and weaknesses of the proposed approach compared to prior works?
4. How does the reviewer evaluate the potential societal impact and originality of the work?
5. Are there any concerns or suggestions from the reviewer regarding the novelty, scalability, and comparisons with other works?

Review:
The paper proposes refined optimization methods for learning additive factorial HMMs, motivated by the application to energy disaggregation. The method is compared to baseline approaches on both synthetic and real-world data sets. The technical quality seems very solid. Overall, the presentation is good, but the authors should do a check for grammatical errors. The potential societal impact of this work is high. I would rank the originality lower, as there already exists quite a bit of work in this direction. The experiments suggest substantial improvements over existing methods, so I'd consider this a significant improvement over the state-of-the-art. In their rebuttal, the authors carefully addressed concerns by some of the reviewers about the novelty and scalability of their method, and the comparison with results obtained by Kolter and Jaakkola (2012).
NIPS
Title: SDP Relaxation with Randomized Rounding for Energy Disaggregation
Abstract
We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations and randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results on both synthetic and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant savings in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improve safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households.
1. What is the main contribution of the paper in energy disaggregation?
2. What are the strengths of the proposed approach compared to previous methods?
3. How does the reviewer assess the quality and clarity of the paper's content?
4. Are there any suggestions for improving the organization of the paper?
5. What is the significance of the techniques used in this paper for the machine learning community?

Review:
This paper presents an efficient approximate solution for the task of energy disaggregation, formulated as a binary quadratic program (equation (2)). First, equation (2) is relaxed into a convex problem (7) by introducing new variables. Then it is relaxed again into an SDP problem with continuous variables, leading to problem (8). The authors also propose a variant of the alternating direction method of multipliers for solving this large SDP; binary variables are then obtained via randomized rounding. The proposed solution appears to be superior to the one previously proposed by Kolter & Jaakkola (2012) with respect to several measures quantifying the accuracy of load disaggregation solutions. The paper is well-written, almost self-contained, easy to read, and represents a significant piece of work. Its global organization could be improved, since we have a subsection 1.1 without 1.2, a 3.1 without 3.2, and a 4.1 without 4.2. The interesting point of the paper is the global methodology used to solve the initial optimization problem; the whole machine learning community would benefit from the techniques used in this paper. Details/typos: L75: in the indicator function; equations 1 and 2: K -> K_i; L189: yet.Kolter -> yet. Kolter
NIPS | Title
SDP Relaxation with Randomized Rounding for Energy Disaggregation
Abstract
We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results both in synthetic and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant saving in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households.
The bulk of the research in NILM has mostly concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in total power measurements. Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012, Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbor (k-NN) [Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], or ad-hoc heuristic methods [Dong et al., 2012] have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data[Zia et al., 2011, Kolter and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015], resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling times series generated from multiple independent sources, and are great for modeling speech with multiple people simultaneously talking [Rennie et al., 2009], or energy monitoring which we consider here [Kim et al., 2011]. Doing exact inference in FHMMs is NP hard; therefore, computationally efficient approximate methods have been the subject of study. Classic approaches include sampling methods, such as MCMC or particle filtering [Koller and Friedman, 2009] and variational Bayes methods [Wainwright and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both methods are nontrivial to make work and we are not aware of any works that would have demonstrated good results in our application domain with the type of FHMMs we need to work and at practical scales.
In this paper we follow the work of Kolter and Jaakkola [2012] to model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the output of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions are small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex relaxation based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations to the integer programming formulation of Kolter and Jaakkola [2012]. In particular, we replace the quadratic programming relaxation of Kolter and Jaakkola, 2012 with a relaxation to an semi-definite program (SDP), which, based on the literature of relaxations is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time [Malick et al., 2009], IP scales poorly with the size of the problem and is thus unsuitable to our large scale problem which may involve as many a million variables. To address this problem, capitalizing on the structure of our relaxation coming from our FHMM model, we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization and combine it with a version of randomized rounding that is inspired by the the recent work of Park and Boyd [2015]. Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find its applications in other FHMM inference problems, too.
1.1 Notation
Throughout the paper, we use the following notation: R denotes the set of real numbers, Sn+ denotes the set of n ⇥ n positive semidefinite matrices, I{E} denotes the indicator function of an event E (that is, it is 1 if the event is true and zero otherwise), 1 denotes a vector of appropriate dimension whose entries are all 1. For an integer K, [K] denotes the set {1, 2, . . . ,K}. N (µ,⌃) denotes the Gaussian distribution with mean µ and covariance matrix ⌃. For a matrix A, trace(A) denotes its trace and diag(A) denotes the vector formed by the diagonal entries of A.
2 System Model
Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are M appliances in a household. Each of them is modeled via an HMM: let P
i 2 RKi⇥Ki denote the transition-probability matrix of appliance i 2 [M ], and assume that for each state s 2 [K
i ], the energy consumption of the appliance is constant µ
i,s (µ i denotes the corresponding K i -dimensional column vector (µ i,1, . . . , µi,Ki) >). Denoting by x
t,i 2 {0, 1}Ki the indicator vector of the state s t,i of appliance i at time t (i.e., x
t,i,s = I{st,i=s}), the total power consumption at time t is P i2[M ] µ > i x t,i
, which we assume is observed with some additive zero mean Gaussian noise of variance 2: y
t
⇠ N ( P
i2[M ] µ > i x t,i , 2 ).1
Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the negative log-posterior function

$$\operatorname*{argmin}_{x_{t,i}} \; \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (\log P_i)\, x_{t+1,i}$$
$$\text{subject to } x_{t,i} \in \{0,1\}^{K_i}, \;\; \mathbf{1}^\top x_{t,i} = 1, \;\; i \in [M] \text{ and } t \in [T], \tag{1}$$

where $\log P_i$ denotes the matrix obtained from $P_i$ by taking the logarithm of each entry.

¹Alternatively, we can assume that the power consumption $y_{t,i}$ of each appliance is normally distributed with mean $\mu_i^\top x_{t,i}$ and variance $\sigma_i^2$, where $\sigma^2 = \sum_{i \in [M]} \sigma_i^2$, and $y_t = \sum_{i \in [M]} y_{t,i}$.
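As a sanity check on the objective (1), a direct, illustrative implementation for a candidate one-hot assignment could look as follows (the data layout is an assumption we make for the example):

```python
import numpy as np

def neg_log_posterior(x, y, P, mu, sigma):
    """Objective of (1) for a candidate assignment.

    x: list over t of lists over i of one-hot numpy vectors x[t][i]
    """
    T, M = len(y), len(P)
    data_term = sum(
        (y[t] - sum(mu[i] @ x[t][i] for i in range(M))) ** 2
        for t in range(T)
    ) / (2.0 * sigma ** 2)
    transition_term = sum(
        x[t][i] @ np.log(P[i]) @ x[t + 1][i]
        for t in range(T - 1) for i in range(M)
    )
    return data_term - transition_term
```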
In our particular application, in addition to the signal’s temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
Formally, let $\Delta y_t = y_{t+1} - y_t$ and $\Delta\mu^{(i)}_{m,k} = \mu_{i,k} - \mu_{i,m}$, and define the matrices $E_{t,i} \in \mathbb{R}^{K_i \times K_i}$ by $(E_{t,i})_{m,k} = -(\Delta y_t - \Delta\mu^{(i)}_{m,k})^2 / (2\sigma^2_{\mathrm{diff}})$, for some constant $\sigma_{\mathrm{diff}} > 0$. Intuitively, $-(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_t$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma^2_{\mathrm{diff}}$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term $-\big(\sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top E_{t,i}\, x_{t+1,i}\big)$ to the objective of (1), arriving at
$$\operatorname*{argmin}_{x_{t,i}} \; f(x_1, \ldots, x_T) := \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$$
$$\text{subject to } x_{t,i} \in \{0,1\}^{K_i}, \;\; \mathbf{1}^\top x_{t,i} = 1, \;\; i \in [M] \text{ and } t \in [T]. \tag{2}$$
In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.
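Before moving on, here is a concrete illustration of the difference matrices just defined (our own sketch; it follows the sign convention reconstructed above, which should be treated as an assumption):

```python
import numpy as np

def diff_matrices(y, mu, sigma_diff):
    """E[t][i][m, k] = -(dy_t - (mu[i][k] - mu[i][m]))^2 / (2 sigma_diff^2)."""
    T, M = len(y), len(mu)
    E = []
    for t in range(T - 1):
        dy = y[t + 1] - y[t]
        E_t = []
        for i in range(M):
            # Outer difference of power levels: dmu[m, k] = mu[i][k] - mu[i][m]
            dmu = mu[i][None, :] - mu[i][:, None]
            E_t.append(-(dy - dmu) ** 2 / (2.0 * sigma_diff ** 2))
        E.append(E_t)
    return E
```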
3 SDP Relaxation and Randomized Rounding
There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors $x_{t,i}$; and (ii) the objective function $f$, even when considering its extension to a convex domain, is in general non-convex (due to the second term). As a remedy, we will relax (2) to make it an integer quadratic programming problem, then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start by reviewing the latter methods.
3.1 Approximate Solutions for Integer Quadratic Programming
In this section we consider approximate solutions to the integer quadratic programming problem
$$\text{minimize } f(x) = x^\top D x + 2 d^\top x \quad \text{subject to } x \in \{0,1\}^n, \tag{3}$$
where $D \in \mathbb{S}^n_+$ is positive semidefinite and $d \in \mathbb{R}^n$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large-scale problems.
One way to avoid exponential running times is to replace (3) with a convex problem, with the hope that the solutions of the convex problem can serve as a good starting point for finding high-quality solutions to (3). The standard approach is to linearize (3) by introducing a new variable $X \in \mathbb{S}^n_+$ tied to $x$ through $X = xx^\top$, so that $x^\top D x = \mathrm{trace}(DX)$, and then relax the nonconvex constraints $X = xx^\top$, $x \in \{0,1\}^n$ to $X \succeq xx^\top$, $\mathrm{diag}(X) = x$, $x \in [0,1]^n$. This leads to the relaxed SDP problem
$$\text{minimize } \mathrm{trace}(D^\top X) + 2 d^\top x \quad \text{subject to } \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0, \;\; \mathrm{diag}(X) = x, \;\; x \in [0,1]^n. \tag{4}$$
By introducing $\hat{X} = \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix}$, this can be written in the compact SDP form

$$\text{minimize } \mathrm{trace}(\hat{D}^\top \hat{X}) \quad \text{subject to } \hat{X} \succeq 0, \;\; \mathcal{A}\hat{X} = b, \tag{5}$$
where $\hat{D} = \begin{pmatrix} 0 & d^\top \\ d & D \end{pmatrix} \in \mathbb{S}^{n+1}_+$, $b \in \mathbb{R}^m$, and $\mathcal{A} : \mathbb{S}^{n+1}_+ \to \mathbb{R}^m$ is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both the running time and the required memory if either the number of variables or the number of optimization constraints is large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5.
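As an illustration of the relaxation (4)-(5) (an example we add for concreteness, not code from the paper), the lifted problem can be expressed in a few lines with a modeling tool such as CVXPY:

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(D, d):
    """Solve the SDP relaxation (4), written in the lifted form (5)."""
    n = d.shape[0]
    Xhat = cp.Variable((n + 1, n + 1), PSD=True)  # Xhat = [[1, x^T], [x, X]]
    x, X = Xhat[0, 1:], Xhat[1:, 1:]
    constraints = [Xhat[0, 0] == 1, cp.diag(X) == x, x >= 0, x <= 1]
    prob = cp.Problem(cp.Minimize(cp.trace(D.T @ X) + 2 * d @ x), constraints)
    prob.solve()
    return x.value, X.value
```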
Note that by introducing the new variable $X$, the problem is lifted into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991, Burer and Vandenbussche, 2006).
To obtain a feasible point of (3) from the solution of (5), we still need to convert the solution $x$ to a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: instead of letting $x \in [0,1]^n$, the integrality constraint $x \in \{0,1\}^n$ in (3) can be replaced by the inequalities $x_i(x_i - 1) \ge 0$ for all $i \in [n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem

$$\text{minimize } \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\big[w^\top D w + 2 d^\top w\big]$$
$$\text{subject to } \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}[w_i(w_i - 1)] \ge 0, \;\; i \in [n], \;\; \mu \in \mathbb{R}^n, \;\; \Sigma \succeq 0$$

is equivalent to

$$\text{minimize } \mathrm{trace}\big((\Sigma + \mu\mu^\top) D\big) + 2 d^\top \mu \quad \text{subject to } \Sigma_{i,i} + \mu_i^2 - \mu_i \ge 0, \;\; i \in [n], \tag{6}$$

which is in the form of (4) with $X = \Sigma + \mu\mu^\top$ and $x = \mu$ (above, $\mathbb{E}_{x \sim P}[f(x)]$ stands for $\int f(x)\, dP(x)$). This leads to the following rounding procedure: starting from a solution $(x^*, X^*)$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^*, X^* - x^* x^{*\top})$, round each $w^{(j)}_i$ to 0 or 1 to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than just naively rounding the coordinates of $x^*$.
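A minimal sketch of this rounding step (our own illustrative code; the small regularization of the covariance is an assumption we add for numerical safety):

```python
import numpy as np

def randomized_rounding(x_star, X_star, f, num_samples=100, rng=None):
    """Sample w ~ N(x*, X* - x* x*^T), round to {0,1}^n, keep the best point."""
    rng = np.random.default_rng() if rng is None else rng
    n = x_star.shape[0]
    cov = X_star - np.outer(x_star, x_star) + 1e-9 * np.eye(n)  # keep PSD numerically
    best_x, best_val = None, np.inf
    for _ in range(num_samples):
        w = rng.multivariate_normal(x_star, cov)
        x = (w > 0.5).astype(float)  # round each coordinate to the nearest of 0 and 1
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```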
4 An Efficient Algorithm for Inference in FHMMs
To arrive at our method, we apply the results of the previous subsection to (2). To do so, as mentioned at the beginning of the section, we need to change the problem to a convex one, since the elements of the second term in the objective of (2), $x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$, are not convex. To address this issue, we relax the problem by introducing new variables $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ and replace the constraint $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ with two new ones: $Z_{t,i} \mathbf{1} = x_{t,i}$ and $Z_{t,i}^\top \mathbf{1} = x_{t+1,i}$.
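These two linear constraints are exactly what survives of the bilinear equation under the one-hot constraints; the following short calculation, which we spell out here for clarity, shows why: if $\mathbf{1}^\top x_{t,i} = \mathbf{1}^\top x_{t+1,i} = 1$ and $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$, then

$$Z_{t,i}\mathbf{1} = x_{t,i}\,(x_{t+1,i}^\top \mathbf{1}) = x_{t,i}, \qquad Z_{t,i}^\top \mathbf{1} = x_{t+1,i}\,(x_{t,i}^\top \mathbf{1}) = x_{t+1,i}.$$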
To simplify the presentation, we will assume that $K_i = K$ for all $i \in [M]$. Then problem (2) becomes

$$\operatorname*{argmin}_{x_t, z_t} \; \sum_{t=1}^{T} \Big\{ \frac{1}{2\sigma^2} \big(y_t - x_t^\top \mu\big)^2 - p_t^\top z_t \Big\}$$
$$\begin{aligned}
\text{subject to } & x_t \in \{0,1\}^{MK}, \;\; t \in [T], \\
& z_t \in \{0,1\}^{MKK}, \;\; t \in [T-1], \\
& \mathbf{1}^\top x_{t,i} = 1, \;\; t \in [T] \text{ and } i \in [M], \\
& Z_{t,i} \mathbf{1} = x_{t,i}, \;\; Z_{t,i}^\top \mathbf{1} = x_{t+1,i}, \;\; t \in [T-1] \text{ and } i \in [M],
\end{aligned} \tag{7}$$
Algorithm 1 ADMM-RR: Randomized rounding algorithm for a suboptimal solution to (2)

Given: number of iterations itermax, length of input data T
Solve the optimization problem (8): run Algorithm 2 to get $X^*_t$ and $z^*_t$
Set $x^{\text{best}}_t := z^*_t$ and $X^{\text{best}}_t := X^*_t$ for $t = 1, \ldots, T$
for $t = 2, \ldots, T-1$ do
  Set $x := [x^{\text{best}\,\top}_{t-1}, x^{\text{best}\,\top}_{t}, x^{\text{best}\,\top}_{t+1}]^\top$
  Set $X := \mathrm{block}(X^{\text{best}}_{t-1}, X^{\text{best}}_{t}, X^{\text{best}}_{t+1})$, where $\mathrm{block}(\cdot,\cdot)$ constructs a block-diagonal matrix from its input arguments
  Set $f^{\text{best}} := \infty$
  Form the covariance matrix $\Sigma := X - xx^\top$ and find its Cholesky factorization $LL^\top = \Sigma$
  for $k = 1, 2, \ldots, \text{itermax}$ do
    Random sampling: $z_k := x + Lw$, where $w \sim \mathcal{N}(0, I)$
    Round $z_k$ to the nearest integer point $x_k$ that satisfies the constraints of (7)
    If $f^{\text{best}} > f_t(x_k)$, then update $x^{\text{best}}_t$ and $X^{\text{best}}_t$ from the corresponding entries of $x_k$ and $x_k x_k^\top$, respectively
  end for
end for
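The rounding step in Algorithm 1 must respect the one-hot constraints of (7); a simple way to do this (our own illustrative sketch, not the paper's code) is to pick, for each appliance, the state with the largest sampled coordinate:

```python
import numpy as np

def round_to_one_hot(z, M, K):
    """Map a real vector z of length M*K to the nearest feasible indicator vector."""
    x = np.zeros(M * K)
    for i in range(M):
        block = z[i * K:(i + 1) * K]
        x[i * K + int(np.argmax(block))] = 1.0  # one active state per appliance
    return x
```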
where $x_t^\top = [x_{t,1}^\top, \ldots, x_{t,M}^\top]$, $\mu^\top = [\mu_1^\top, \ldots, \mu_M^\top]$, $z_t^\top = [\mathrm{vec}(Z_{t,1})^\top, \ldots, \mathrm{vec}(Z_{t,M})^\top]$, and $p_t^\top = [\mathrm{vec}(E_{t,1} + \log P_1)^\top, \ldots, \mathrm{vec}(E_{t,M} + \log P_M)^\top]$, with $\mathrm{vec}(A)$ denoting the column vector obtained by concatenating the columns of a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²
$$\operatorname*{arg\,min}_{X_t, z_t} \; \sum_{t=1}^{T} \mathrm{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\text{subject to } \mathcal{A} X_t = b, \;\; \mathcal{B} X_t + C z_t + \mathcal{E} X_{t+1} = g, \;\; X_t \succeq 0, \;\; X_t, z_t \ge 0. \tag{8}$$
Here $\mathcal{A} : \mathbb{S}^{MK+1}_+ \to \mathbb{R}^m$, $\mathcal{B}, \mathcal{E} : \mathbb{S}^{MK+1}_+ \to \mathbb{R}^{m'}$ and $C \in \mathbb{R}^{m' \times MKK}$ are all appropriate linear operators, and the integers $m$ and $m'$ are determined by the number of equality constraints, while

$$D_t = \frac{1}{2\sigma^2} \begin{pmatrix} 0 & -y_t \mu^\top \\ -y_t \mu & \mu\mu^\top \end{pmatrix} \quad \text{and} \quad d_t = -p_t.$$

Notice that (8) is a simple, though huge-dimensional, SDP problem in the form of (5) where $\hat{D}$ has a special block structure.
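Under the sign conventions of our reconstruction above (treat the exact signs as an assumption; the helper itself is ours), $D_t$ and $d_t$ can be assembled as:

```python
import numpy as np

def build_cost(y_t, mu, p_t, sigma2):
    """Return (D_t, d_t) for the per-step objective trace(D_t^T X_t) + d_t^T z_t."""
    n = mu.shape[0]
    D = np.zeros((n + 1, n + 1))
    D[0, 1:] = -y_t * mu      # cross terms reproduce -2 y_t mu^T x in the expansion
    D[1:, 0] = -y_t * mu
    D[1:, 1:] = np.outer(mu, mu)
    return D / (2.0 * sigma2), -p_t
```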
Next, we apply the randomized rounding method from Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution $(z^*, X^*)$ of (8), and utilizing the fact that we have an SDP problem for each time step $t$, we obtain Algorithm 1, which performs the rounding sequentially for $t = 1, 2, \ldots, T$. However, we run the randomized method over three consecutive time steps, since $X_t$ appears at both time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation (9)). Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x_k$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems
Given the relaxation and randomized rounding presented in the previous section, all that remains is to find $X^*_t$, $z^*_t$ to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5} \log(1/\epsilon)$ [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case since the number of variables scales linearly with the time horizon $T$.
As an alternative solution, first-order methods can be used for large scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem where the objective function is separable, ADMM is a promising candidate to find a near-optimal solution. To apply ADMM, we use the Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we
²The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
Algorithm 2 ADMM for sparse SDPs of the form (8)

Given: length of input data T, number of iterations itermax.
Set the initial values to zero: $W^0_t, P^0_t, S^0_t = 0$, $\lambda^0_t = 0$, $\nu^0_t = 0$, and $r^0_t, h^0_t = 0$
Set $\mu = 0.001$ {Default step-size value}
for $k = 0, 1, \ldots, \text{itermax}$ do
  for $t = 1, 2, \ldots, T$ do
    Update $P^k_t$, $W^k_t$, $\lambda^k_t$, $S^k_t$, $r^k_t$, $h^k_t$, and $\nu^k_t$, respectively, according to (11) (Appendix A).
  end for
end for
consider. When implementing ADMM over the variables $(X_t, z_t)_t$, the sparse structure of our constraints allows us to consider the SDP problems for each time step $t$ sequentially:

$$\operatorname*{arg\,min}_{X_t, z_t} \; \mathrm{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\begin{aligned}
\text{subject to } & \mathcal{A} X_t = b, \\
& \mathcal{B} X_t + C z_t + \mathcal{E} X_{t+1} = g, \\
& \mathcal{B} X_{t-1} + C z_{t-1} + \mathcal{E} X_t = g, \\
& X_t \succeq 0, \;\; X_t, z_t \ge 0.
\end{aligned} \tag{9}$$
The regularized Lagrangian function for (9) is³

$$\begin{aligned}
L_\mu = {} & \mathrm{trace}(D^\top X) + d^\top z + \frac{1}{2\mu}\|X - S\|_F^2 + \frac{1}{2\mu}\|z - r\|_2^2 + \lambda^\top (b - \mathcal{A}X) \\
& + \nu_+^\top (g - \mathcal{B}X - Cz - \mathcal{E}X_+) + \nu_-^\top (g - \mathcal{B}X_- - Cz_- - \mathcal{E}X) \\
& - \mathrm{trace}(W^\top X) - \mathrm{trace}(P^\top X) - h^\top z,
\end{aligned} \tag{10}$$

where $\lambda$, $\nu_\pm$, $W \ge 0$, $P \succeq 0$, and $h \ge 0$ are dual variables, and $\mu > 0$ is a constant. By taking the derivatives of $L_\mu$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each $t$ sequentially, is given by Algorithm 2.
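Schematically, each sweep of the method has the familiar ADMM shape; the following summary is ours (the exact updates are Equation (11) in Appendix A, which is not reproduced here):

$$X_t^{k+1} = \operatorname*{arg\,min}_{X \succeq 0} L_\mu\big(X, z_t^k; \lambda_t^k, \nu_t^k, \ldots\big), \qquad z_t^{k+1} = \operatorname*{arg\,min}_{z \ge 0} L_\mu\big(X_t^{k+1}, z; \cdot\big),$$
$$S_t^{k+1} = X_t^{k+1}, \qquad r_t^{k+1} = z_t^{k+1}, \qquad \text{dual variables updated by a gradient step on } L_\mu.$$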
Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2) and thus also to the inference problem of additive FHMMs.
6 Learning the Model
The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrix, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters.
However, when it comes to the specific application of NILM, the problem of an unknown, time-varying bias also needs to be addressed, which arises due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a "generic model" whose contribution to the objective function is downweighted. Surprisingly, incorporating this idea into the FHMM inference creates some unexpected challenges.⁴
Therefore, in this work we come up with a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change $\Delta y_t$ in the power usage (using some ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K-1$ clusters, and an HMM model is built where each cluster is regarded as power usage coming from a single state of the unregistered appliances. We also allow an "off state" with power usage 0.
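A possible instantiation of this heuristic (our own sketch; the thresholding rule and the use of k-means are assumptions on our part, matching the description above) is:

```python
import numpy as np
from sklearn.cluster import KMeans

def generic_hmm_levels(y, known_level_changes, K, jump_threshold, tol):
    """Cluster unexplained power jumps into K - 1 levels plus an off state.

    known_level_changes: numpy array of all possible single-appliance level changes
    """
    dy = np.diff(y)
    jumps = dy[np.abs(dy) > jump_threshold]  # electric events
    # Discard events explained by a single known appliance transition.
    unexplained = [j for j in jumps
                   if np.min(np.abs(known_level_changes - j)) > tol]
    km = KMeans(n_clusters=K - 1, n_init=10).fit(np.reshape(unexplained, (-1, 1)))
    levels = np.sort(km.cluster_centers_.ravel())
    return np.concatenate(([0.0], levels))  # state 0 is the "off" state
```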
³We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
⁴For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion of this.
7 Experimental Results
We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, and the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012], and that of Zhong et al. [2014]; we shall refer to the last two algorithms as KJ and ZGS, respectively.
7.1 Experimental Results: Synthetic Data
The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error as suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014]. This measures the reconstruction error for each individual appliance. Given the true output $y_{t,i}$ and the estimated output $\hat{y}_{t,i}$ (i.e., $\hat{y}_{t,i} = \mu_i^\top \hat{x}_{t,i}$), the error measure is defined as

$$\mathrm{NDE} = \sqrt{\sum_{t,i} (y_{t,i} - \hat{y}_{t,i})^2 \Big/ \sum_{t,i} (y_{t,i})^2}.$$
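In code, this metric is a one-liner (our illustrative helper):

```python
import numpy as np

def nde(y_true, y_hat):
    """Normalized disaggregation error over all time steps t and appliances i."""
    y_true, y_hat = np.asarray(y_true), np.asarray(y_hat)
    return np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))
```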
Figures 1 and 2 show the performance of the algorithms as the number of HMMs ($M$) (resp., the number of states, $K$) is varied. Each plot reports results for $T = 1000$ steps averaged over 100 random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of every 250 iterations, and chooses the result that has the maximum likelihood. ADMM is the algorithm that applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtained better results than its competitors, KJ coming second and ZGS third.
7.2 Experimental Results: Real Data
In this section we compare the three best methods on the real dataset REDD [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e.,
⁵Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.
appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main panel data. In this set of experiments we monitor appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, the local search is run at the end of every 250 iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need some prior information about the usage of each appliance; the authors' suggestion is to use national energy surveys, but in the absence of this information (also about the number of residents, type of houses, etc.) we used the training data to extract this prior knowledge, which is expected to help this method.
Detailed results about the precision and recall of estimating which appliances are 'on' at any given time are given in Table 1. In Appendix D we also report the error of the total power usage assigned to different appliances (Table 2), as well as the amount of power assigned to each appliance as a percentage of total power (Figure 3). In summary, our method consistently outperformed the others, achieving an average precision and recall of 60.97% and 78.56%, with about 50% better precision than KJ at essentially the same recall (38.68%/75.02%), while significantly improving upon ZGS (17.97%/36.22%). Considering the error in assigning the power consumption to different appliances, our method achieved about 30-35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors.
In our real-data experiments, there are about 1 million decision variables: $M = 7$ or $6$ appliances (for phase A and B power, respectively) with $K = 4$ states each, and about $T = 30{,}000$ time steps for one day at 1 sample every 6 seconds. KJ and ZGS solve quadratic programs, increasing their memory usage (14GB vs. 6GB in our case). On the other hand, our implementation of their methods, using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5 minutes, while our algorithm, which is purely Matlab-based, takes 5 hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up compared to our current implementation.
8 Conclusion
FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard FHMM inference algorithms infeasible even for only a handful of appliances. In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments. A crucial component of our solution is a scalable ADMM method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large-scale integer quadratic programming.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.

1. What is the focus of the paper, and what are the authors' contributions to energy disaggregation load monitoring?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly regarding its ability to handle various challenges in energy disaggregation?
3. Do you have any concerns about the experimental study, such as comparisons with other methods, running times, and the choice of datasets?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding its theoretical analysis and experimental results?
5. Are there any minor issues or suggestions for improvement in the paper, such as unclear notation, missing information, or inconsistencies in the presentation?

Review
This paper studies the task of energy disaggregation load monitoring. The authors formulate the problem as an integer quadratic programming optimization problem, and then apply standard SDP relaxation and randomized rounding. The theory part of this paper is a bit incremental. My main concern is with the experimental study of this paper: (1) Since the real dataset (REDD) used in this paper is the same as that in Kolter & Johnson (KJ), I took a quick look at the experiments in KJ. It seems that the precision/recall in KJ is much higher (average 87.2%/60.3% over 7 appliances), even higher than the proposed algorithm of this paper (in Table 1). This looks a bit strange to me; one of the two results must have issues. (2) There is no comparison of running times in the experimental study. I feel this comparison is necessary since, as mentioned in the introduction, the goal of this paper is to "develop a scalable, computationally efficient method". I actually don't know how large the tested dataset is, and thus I don't know why solving the SDP directly is infeasible here. About related work (second paragraph of the intro): there are quite a few papers mentioned in the introduction that were published after 2012 (e.g., Zhong et al. 2014). It would help to explain why KJ is the state of the art.
Minor comments:
-- Line 75, what is the meaning of $\mathbb{I}\{s_{t,i} = s\}$?
-- Title of Sec. 6: why say "Synthetic Data Set"?
1. What is the focus of the paper, and how does it contribute to the issue of energy consumption management?
2. What are the strengths of the proposed approach, particularly in terms of its technical soundness and novelty?
3. What are the weaknesses of the paper, especially regarding computational complexity and presentation suitability?
4. Do you have any concerns or suggestions regarding the authors' responses to reviewer comments?

Review
The authors provide a method to estimate the energy consumption in a domestic house. They propose a system based on an additive factorial hidden Markov model and, with respect to previous implementations, they add an additional constraint to the objective function to be minimized, in order to account for sudden changes in the level of power consumption. The paper is really interesting and proposes a technically sound methodology, which can be applied to tackle the important issue of managing energy consumption. I was already positive about the novelty of the presented study, and the reviewers' comments and the authors' feedback confirmed my position. My most serious concern at this point is the same one raised by some reviewers: the need for a study of the computational complexity of the proposed method. Indeed, the authors discuss it in the rebuttal, and I think it should be included in the final version of the paper. However, given the nature of the problem tackled, I believe a more formal study should be provided. Finally, the paper is quite technical, and I believe a poster presentation would be much more suitable for it.
NIPS | Title
SDP Relaxation with Randomized Rounding for Energy Disaggregation
Abstract
We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results both in synthetic and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant saving in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households.
The bulk of the research in NILM has mostly concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in total power measurements. Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012, Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbor (k-NN) [Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], or ad-hoc heuristic methods [Dong et al., 2012] have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data[Zia et al., 2011, Kolter and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015], resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling times series generated from multiple independent sources, and are great for modeling speech with multiple people simultaneously talking [Rennie et al., 2009], or energy monitoring which we consider here [Kim et al., 2011]. Doing exact inference in FHMMs is NP hard; therefore, computationally efficient approximate methods have been the subject of study. Classic approaches include sampling methods, such as MCMC or particle filtering [Koller and Friedman, 2009] and variational Bayes methods [Wainwright and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both methods are nontrivial to make work and we are not aware of any works that would have demonstrated good results in our application domain with the type of FHMMs we need to work and at practical scales.
In this paper we follow the work of Kolter and Jaakkola [2012] to model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the output of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions are small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex relaxation based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations to the integer programming formulation of Kolter and Jaakkola [2012]. In particular, we replace the quadratic programming relaxation of Kolter and Jaakkola, 2012 with a relaxation to an semi-definite program (SDP), which, based on the literature of relaxations is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time [Malick et al., 2009], IP scales poorly with the size of the problem and is thus unsuitable to our large scale problem which may involve as many a million variables. To address this problem, capitalizing on the structure of our relaxation coming from our FHMM model, we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization and combine it with a version of randomized rounding that is inspired by the the recent work of Park and Boyd [2015]. Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find its applications in other FHMM inference problems, too.
1.1 Notation
Throughout the paper, we use the following notation: R denotes the set of real numbers, Sn+ denotes the set of n ⇥ n positive semidefinite matrices, I{E} denotes the indicator function of an event E (that is, it is 1 if the event is true and zero otherwise), 1 denotes a vector of appropriate dimension whose entries are all 1. For an integer K, [K] denotes the set {1, 2, . . . ,K}. N (µ,⌃) denotes the Gaussian distribution with mean µ and covariance matrix ⌃. For a matrix A, trace(A) denotes its trace and diag(A) denotes the vector formed by the diagonal entries of A.
2 System Model
Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are M appliances in a household. Each of them is modeled via an HMM: let P
i 2 RKi⇥Ki denote the transition-probability matrix of appliance i 2 [M ], and assume that for each state s 2 [K
i ], the energy consumption of the appliance is constant µ
i,s (µ i denotes the corresponding K i -dimensional column vector (µ i,1, . . . , µi,Ki) >). Denoting by x
t,i 2 {0, 1}Ki the indicator vector of the state s t,i of appliance i at time t (i.e., x
t,i,s = I{st,i=s}), the total power consumption at time t is P i2[M ] µ > i x t,i
, which we assume is observed with some additive zero mean Gaussian noise of variance 2: y
t
⇠ N ( P
i2[M ] µ > i x t,i , 2 ).1
Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the log-posterior function
argmin
xt,i
TX
t=1
(y
t
P
M i=1 x > t,i µ i ) 2
2
2
T 1X
t=1
MX
i=1
x > t,i (logP i )x t+1,i
subject to x t,i 2 {0, 1}Ki , 1>x t,i = 1, i 2 [M ] and t 2 [T ],
(1)
1Alternatively, we can assume that the power consumption yt,iof each appliance is normally distributed with mean µ>i xt,i and variance 2i , where 2 = P i2[M ] 2 i , and yt = P i2[M ] yt,i.
where logP i denotes a matrix obtained from P i by taking the logarithm of each entry.
In our particular application, in addition to the signal’s temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
Formally, let $\Delta y_t = y_{t+1} - y_t$ and $\Delta\mu^{(i)}_{m,k} = \mu_{i,k} - \mu_{i,m}$, and define the matrices $E_{t,i} \in \mathbb{R}^{K_i \times K_i}$ by $(E_{t,i})_{m,k} = -\big(\Delta y_t - \Delta\mu^{(i)}_{m,k}\big)^2 / (2\sigma_{\mathrm{diff}}^2)$, for some constant $\sigma_{\mathrm{diff}} > 0$. Intuitively, $-(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_t$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma_{\mathrm{diff}}^2$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term $\big(-\sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top E_{t,i}\, x_{t+1,i}\big)$ to the objective of (1), arriving at
$$\operatorname*{argmin}_{x_{t,i}} \; f(x_1, \ldots, x_T) := \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$$
$$\text{subject to } x_{t,i} \in \{0,1\}^{K_i},\; \mathbf{1}^\top x_{t,i} = 1,\; i \in [M] \text{ and } t \in [T]. \qquad (2)$$
In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.
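For concreteness, a minimal sketch of how the difference matrices $E_{t,i}$ entering (2) can be built from the observed signal; the sign convention follows our reconstruction of (2) above, and all names are ours rather than the paper's.

```python
import numpy as np

def difference_matrices(y, mu, sigma2_diff):
    """Build the matrices E_{t,i} of (2) from the observed signal differences.

    y:           length-T aggregate signal.
    mu:          list of M arrays; mu[i] has shape (K_i,).
    sigma2_diff: variance of the change-detection noise model.
    Returns a list over t of lists over i of (K_i, K_i) arrays.
    """
    dy = np.diff(y)                                   # Delta y_t = y_{t+1} - y_t
    E = []
    for t in range(len(dy)):
        E_t = []
        for mu_i in mu:
            dmu = mu_i[None, :] - mu_i[:, None]       # Delta mu^{(i)}_{m,k} = mu_{i,k} - mu_{i,m}
            E_t.append(-(dy[t] - dmu) ** 2 / (2.0 * sigma2_diff))
        E.append(E_t)
    return E
```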
3 SDP Relaxation and Randomized Rounding
There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors $x_{t,i}$; and (ii) the objective function $f$, even when considering its extension to a convex domain, is in general non-convex (due to the second term). As a remedy, we will relax (2) to make it an integer quadratic programming problem, then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start with reviewing the latter methods.
3.1 Approximate Solutions for Integer Quadratic Programming
In this section we consider approximate solutions to the integer quadratic programming problem
$$\text{minimize } f(x) = x^\top D x + 2 d^\top x \quad \text{subject to } x \in \{0,1\}^n, \qquad (3)$$
where $D \in \mathbb{S}^n_+$ is positive semidefinite and $d \in \mathbb{R}^n$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large-scale problems.
One way to avoid exponential running times is to replace (3) with a convex problem, with the hope that the solutions of the convex problem can serve as a good starting point for finding high-quality solutions to (3). The standard approach is to linearize (3) by introducing a new variable $X \in \mathbb{S}^n_+$ tied to $x$ through $X = xx^\top$, so that $x^\top D x = \operatorname{trace}(DX)$, and then relax the nonconvex constraints $X = xx^\top$, $x \in \{0,1\}^n$ to $X \succeq xx^\top$, $\operatorname{diag}(X) = x$, $x \in [0,1]^n$. This leads to the relaxed SDP problem

$$\text{minimize } \operatorname{trace}(D^\top X) + 2 d^\top x \quad \text{subject to } \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0,\; \operatorname{diag}(X) = x,\; x \in [0,1]^n. \qquad (4)$$
By introducing $\hat{X} = \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix}$, this can be written in the compact SDP form

$$\text{minimize } \operatorname{trace}(\hat{D}^\top \hat{X}) \quad \text{subject to } \hat{X} \succeq 0,\; \mathcal{A}\hat{X} = b, \qquad (5)$$

where $\hat{D} = \begin{pmatrix} 0 & d^\top \\ d & D \end{pmatrix} \in \mathbb{S}^{n+1}_+$, $b \in \mathbb{R}^m$, and $\mathcal{A}: \mathbb{S}^{n+1}_+ \to \mathbb{R}^m$ is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both the running time and the required memory if either the number of variables or the number of constraints is large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5.
Note that by introducing the new variable $X$, the problem is lifted into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991, Burer and Vandenbussche, 2006).
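For small instances, the relaxation (4) can be prototyped directly with a generic modeling tool; the sketch below uses CVXPY and is meant only to make the lifted formulation explicit — it is exactly the kind of generic solver that Section 5 replaces with a scalable ADMM.

```python
import numpy as np
import cvxpy as cp

def solve_sdp_relaxation(D, d):
    """Solve the lifted SDP relaxation (4)/(5) of the integer QP (3).

    Returns (x_star, X_star), to be used by randomized rounding.
    """
    n = D.shape[0]
    Xhat = cp.Variable((n + 1, n + 1), symmetric=True)  # [[1, x^T], [x, X]]
    x, X = Xhat[0, 1:], Xhat[1:, 1:]
    constraints = [
        Xhat >> 0,               # PSD constraint replaces X = x x^T
        Xhat[0, 0] == 1,
        cp.diag(X) == x,         # diag(X) = x
        x >= 0, x <= 1,          # box relaxation of x in {0,1}^n
    ]
    prob = cp.Problem(cp.Minimize(cp.trace(D @ X) + 2 * d @ x), constraints)
    prob.solve()
    return x.value, X.value
```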
To obtain a feasible point of (3) from the solution of (5), we still need to convert the solution $x$ to a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: instead of letting $x \in [0,1]^n$, the integrality constraint $x \in \{0,1\}^n$ in (3) can be replaced by the inequalities $x_i(x_i - 1) \ge 0$ for all $i \in [n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem

$$\text{minimize } \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\big[w^\top D w + 2 d^\top w\big] \quad \text{subject to } \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\big[w_i(w_i - 1)\big] \ge 0,\; i \in [n],\; \mu \in \mathbb{R}^n,\; \Sigma \succeq 0$$

is equivalent to

$$\text{minimize } \operatorname{trace}\big((\Sigma + \mu\mu^\top) D\big) + 2 d^\top \mu \quad \text{subject to } \Sigma_{i,i} + \mu_i^2 - \mu_i \ge 0,\; i \in [n], \qquad (6)$$

which is in the form of (4) with $X = \Sigma + \mu\mu^\top$ and $x = \mu$ (above, $\mathbb{E}_{x \sim P}[f(x)]$ stands for $\int f(x)\, dP(x)$). This leads to the following rounding procedure: starting from a solution $(x^*, X^*)$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^*, X^* - x^* x^{*\top})$, round $w^{(j)}_i$ to 0 or 1 to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than just naively rounding the coordinates of $x^*$.
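A minimal sketch of this rounding procedure, assuming the solution $(x^*, X^*)$ of (4) is available as NumPy arrays; the small diagonal regularization is our own numerical safeguard.

```python
import numpy as np

def randomized_rounding(D, d, x_star, X_star, num_samples=100, rng=None):
    """Round an SDP solution (x*, X*) to a binary point, as in Park and Boyd [2015].

    Draws samples from N(x*, X* - x* x*^T), rounds each coordinate to {0, 1},
    and keeps the sample with the smallest objective value of (3).
    """
    rng = rng or np.random.default_rng()
    n = len(x_star)
    cov = X_star - np.outer(x_star, x_star)
    cov = (cov + cov.T) / 2 + 1e-9 * np.eye(n)   # symmetrize + regularize
    best_x, best_val = None, np.inf
    for w in rng.multivariate_normal(x_star, cov, size=num_samples):
        x = np.clip(np.rint(w), 0, 1)
        val = x @ D @ x + 2 * d @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```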
4 An Efficient Algorithm for Inference in FHMMs
To arrive at our method we apply the results of the previous section to (2). To do so, as mentioned at the beginning of the section, we need to change the problem to a convex one, since the elements of the second term in the objective of (2), $x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$, are not convex. To address this issue, we relax the problem by introducing new variables $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ and replacing the constraint $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ with two new ones: $Z_{t,i} \mathbf{1} = x_{t,i}$ and $Z_{t,i}^\top \mathbf{1} = x_{t+1,i}$.
To simplify the presentation, we will assume that $K_i = K$ for all $i \in [M]$. Then problem (2) becomes

$$\operatorname*{argmin}_{x_t, z_t} \;\sum_{t=1}^{T} \frac{1}{2\sigma^2} \big(y_t - x_t^\top \mu\big)^2 \;-\; \sum_{t=1}^{T-1} p_t^\top z_t$$
$$\text{subject to } x_t \in \{0,1\}^{MK},\; t \in [T], \quad z_t \in \{0,1\}^{MK^2},\; t \in [T-1],$$
$$\mathbf{1}^\top x_{t,i} = 1,\; t \in [T] \text{ and } i \in [M], \quad Z_{t,i}\mathbf{1} = x_{t,i},\; Z_{t,i}^\top \mathbf{1} = x_{t+1,i},\; t \in [T-1] \text{ and } i \in [M], \qquad (7)$$
Algorithm 1 ADMM-RR: Randomized rounding algorithm for a suboptimal solution to (2)
Given: number of iterations itermax, length of input data T.
Solve the optimization problem (8): run Algorithm 2 to get $X_t^*$ and $z_t^*$.
Set $x_t^{\text{best}} := z_t^*$ and $X_t^{\text{best}} := X_t^*$ for $t = 1, \ldots, T$.
for $t = 2, \ldots, T-1$ do
  Set $x := [x_{t-1}^{\text{best}\,\top}, x_t^{\text{best}\,\top}, x_{t+1}^{\text{best}\,\top}]^\top$.
  Set $X := \operatorname{block}(X_{t-1}^{\text{best}}, X_t^{\text{best}}, X_{t+1}^{\text{best}})$, where $\operatorname{block}(\cdot)$ constructs a block-diagonal matrix from its input arguments.
  Set $f^{\text{best}} := \infty$.
  Form the covariance matrix $\Sigma := X - xx^\top$ and find its Cholesky factorization $LL^\top = \Sigma$.
  for $k = 1, 2, \ldots,$ itermax do
    Random sampling: $z_k := x + Lw$, where $w \sim \mathcal{N}(0, I)$.
    Round $z_k$ to the nearest integer point $x_k$ that satisfies the constraints of (7).
    If $f^{\text{best}} > f_t(x_k)$, update $x_t^{\text{best}}$ and $X_t^{\text{best}}$ from the corresponding entries of $x_k$ and $x_k x_k^\top$, respectively.
  end for
end for
where $x_t^\top = [x_{t,1}^\top, \ldots, x_{t,M}^\top]$, $\mu^\top = [\mu_1^\top, \ldots, \mu_M^\top]$, $z_t^\top = [\operatorname{vec}(Z_{t,1})^\top, \ldots, \operatorname{vec}(Z_{t,M})^\top]$ and $p_t^\top = [\operatorname{vec}(E_{t,1} + \log P_1)^\top, \ldots, \operatorname{vec}(E_{t,M} + \log P_M)^\top]$, with $\operatorname{vec}(A)$ denoting the column vector obtained by concatenating the columns of a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²

$$\operatorname*{argmin}_{X_t, z_t} \;\sum_{t=1}^{T} \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\text{subject to } \mathcal{A} X_t = b, \quad \mathcal{B} X_t + C z_t + \mathcal{E} X_{t+1} = g, \quad X_t \succeq 0, \quad X_t, z_t \ge 0. \qquad (8)$$
Here $\mathcal{A}: \mathbb{S}^{MK+1}_+ \to \mathbb{R}^m$, $\mathcal{B}, \mathcal{E}: \mathbb{S}^{MK+1}_+ \to \mathbb{R}^{m'}$ and $C \in \mathbb{R}^{m' \times MK^2}$ are all appropriate linear operators, the integers $m$ and $m'$ are determined by the number of equality constraints, while

$$D_t = \frac{1}{2\sigma^2} \begin{pmatrix} 0 & -y_t \mu^\top \\ -y_t \mu & \mu\mu^\top \end{pmatrix} \quad \text{and} \quad d_t = -p_t.$$

Notice that (8) is a simple, though huge-dimensional, SDP problem in the form of (5) where $\hat{D}$ has a special block structure.
Next we apply the randomized rounding method from Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution $(z^*, X^*)$ of (8), and utilizing that we have an SDP problem for each time step $t$, we obtain Algorithm 1, which performs the rounding sequentially for $t = 1, 2, \ldots, T$. However, we run the randomized method over three consecutive time steps, since $X_t$ appears at both time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation (9)). Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x_k$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
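A sketch of the greedy refinement just described, written generically over a full state array; in Algorithm 1 it is applied per time step $t$, and the `objective` callable stands in for $f_t$. All names are ours.

```python
def greedy_local_search(states, objective, num_states):
    """Greedy improvement step used after randomized rounding.

    states:     (T, M) array of appliance states.
    objective:  callable mapping a state array to the value of (2).
    num_states: list of K_i values, one per appliance.
    Repeatedly changes the state of a single appliance at a single time
    instant whenever this decreases the objective; stops at a local optimum.
    """
    best = objective(states)
    improved = True
    while improved:
        improved = False
        T, M = states.shape
        for t in range(T):
            for i in range(M):
                old = states[t, i]
                for s in range(num_states[i]):
                    if s == old:
                        continue
                    states[t, i] = s               # try a single-entry change
                    val = objective(states)
                    if val < best:
                        best, old, improved = val, s, True
                states[t, i] = old                 # keep the best state found
    return states, best
```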
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems
Given the relaxation and randomized rounding presented in the previous sections, all that remains is to find $X_t^*$, $z_t^*$ to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5} \log(1/\epsilon)$ [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case since the number of variables scales linearly with the time horizon $T$.
As an alternative solution, first-order methods can be used for large scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem where the objective function is separable, ADMM is a promising candidate to find a near-optimal solution. To apply ADMM, we use the Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we
2The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
Algorithm 2 ADMM for sparse SDPs of the form (8)
Given: length of input data T, number of iterations itermax.
Set the initial values to zero: $W_t^0, P_t^0, S_t^0 = 0$, $\lambda_t^0 = 0$, $\nu_t^0 = 0$, and $r_t^0, h_t^0 = 0$.
Set $\mu = 0.001$ {default step-size value}.
for $k = 0, 1, \ldots,$ itermax do
  for $t = 1, 2, \ldots, T$ do
    Update $P_t^k$, $W_t^k$, $\lambda_t^k$, $S_t^k$, $r_t^k$, $h_t^k$, and $\nu_t^k$, respectively, according to (11) (Appendix A).
  end for
end for
consider. When implementing ADMM over the variables $(X_t, z_t)_t$, the sparse structure of our constraints allows us to consider the SDP problems for each time step $t$ sequentially:

$$\operatorname*{argmin}_{X_t, z_t} \;\operatorname{trace}(D_t^\top X_t) + d_t^\top z_t$$
$$\text{subject to } \mathcal{A} X_t = b, \quad \mathcal{B} X_t + C z_t + \mathcal{E} X_{t+1} = g, \quad \mathcal{B} X_{t-1} + C z_{t-1} + \mathcal{E} X_t = g, \quad X_t \succeq 0, \quad X_t, z_t \ge 0. \qquad (9)$$
The regularized Lagrangian function for (9) is³

$$L_\mu = \operatorname{trace}(D^\top X) + d^\top z + \frac{1}{2\mu}\|X - S\|_F^2 + \frac{1}{2\mu}\|z - r\|_2^2 + \lambda^\top (b - \mathcal{A}X) + \nu^\top (g - \mathcal{B}X - Cz - \mathcal{E}X_+) + \nu_-^\top (g - \mathcal{B}X_- - Cz_- - \mathcal{E}X) - \operatorname{trace}(W^\top X) - \operatorname{trace}(P^\top X) - h^\top z, \qquad (10)$$

where $\lambda$, $\nu$, $W \succeq 0$, $P \ge 0$, and $h \ge 0$ are dual variables, and $\mu > 0$ is a constant. By taking the derivatives of $L_\mu$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each $t$ sequentially, is given by Algorithm 2.
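The concrete updates (11) are deferred to Appendix A, but one recurring building block in ADMM-type solvers with a PSD constraint is the Euclidean projection onto the PSD cone (used, for instance, when handling the multiplier associated with $X \succeq 0$); a standard eigendecomposition-based sketch, our own rather than the paper's update, is given below.

```python
import numpy as np

def project_psd(A):
    """Euclidean projection of a symmetric matrix onto the PSD cone.

    Standard building block for ADMM-type SDP solvers: clip negative
    eigenvalues to zero and reassemble the matrix.
    """
    A = (A + A.T) / 2                      # enforce symmetry numerically
    eigval, eigvec = np.linalg.eigh(A)
    return (eigvec * np.maximum(eigval, 0)) @ eigvec.T
```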
Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2) and thus also to the inference problem of additive FHMMs.
6 Learning the Model
The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrix, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters.
However, when it comes to the specific application of NILM, the problem of unknown, time-varying bias also needs to be addressed, which appears due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a “generic model” whose contribution to the objective function is downweighted. Surprisingly, incorporating this idea in the FHMM inference creates some unexpected challenges.4
Therefore, in this work we come up with a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change $\Delta y_t$ in the power usage (using some ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$ of the modeled appliances. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K-1$ clusters, and an HMM model is built where each cluster is regarded as the power usage coming from a single state of the unregistered appliances. We also allow an “off” state with power usage 0.
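A possible implementation of this heuristic is sketched below; the jump threshold, the matching tolerance, and the use of absolute jump sizes are our own ad-hoc choices, in line with the ad-hoc nature of the procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_generic_model(y, known_level_changes, K, threshold, tol):
    """Heuristic of Section 6: cluster unexplained jumps into K-1 states.

    y:                   aggregate power signal.
    known_level_changes: all possible Delta mu^{(i)}_{m,k} of modeled appliances.
    Returns the power levels of a generic HMM (including an off state at 0).
    """
    dy = np.diff(y)
    events = dy[np.abs(dy) > threshold]                  # large changes only
    known = np.asarray(known_level_changes)
    unexplained = np.array([e for e in events
                            if np.min(np.abs(known - e)) > tol])
    levels = KMeans(n_clusters=K - 1).fit(
        np.abs(unexplained).reshape(-1, 1)).cluster_centers_.ravel()
    return np.concatenate(([0.0], np.sort(levels)))      # add the off state
```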
³We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
⁴For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion of this.
7 Experimental Results
We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, and the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, “real” data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012], and that of Zhong et al. [2014]; we shall refer to the last two algorithms as KJ and ZGS, respectively.
7.1 Experimental Results: Synthetic Data
The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error as suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014]. This measures the reconstruction error for each individual appliance. Given the true output $y_{t,i}$ and the estimated output $\hat{y}_{t,i}$ (i.e., $\hat{y}_{t,i} = \mu_i^\top \hat{x}_{t,i}$), the error measure is defined as

$$\mathrm{NDE} = \sqrt{\textstyle\sum_{t,i} (y_{t,i} - \hat{y}_{t,i})^2 \,\big/\, \sum_{t,i} (y_{t,i})^2}.$$
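In code, the NDE reduces to a one-liner over the stacked appliance outputs:

```python
import numpy as np

def normalized_disaggregation_error(y_true, y_hat):
    """NDE of Section 7.1: per-appliance reconstruction error.

    y_true, y_hat: (T, M) arrays of true and estimated appliance outputs.
    """
    return np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))
```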
Figures 1 and 2 show the performance of the algorithms as the number of HMMs ($M$) (resp., the number of states, $K$) is varied. Each plot reports results for $T = 1000$ steps averaged over 100 random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of every 250 iterations, and chooses the result that has the maximum likelihood. ADMM is the algorithm that applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtained better results than its competitors, KJ coming second and ZGS third.
7.2 Experimental Results: Real Data
In this section, we also compare the three best methods on the real dataset REDD [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e.,
5Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.
appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main panel data. In this set of experiments we monitor appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, the local search is run at the end of every 250 iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need some prior information about the usage of each appliance; the authors' suggestion is to use national energy surveys, but lacking this information (as well as information about the number of residents, the type of the houses, etc.) we used the training data to extract this prior knowledge, which is expected to help this method.
Detailed results about the precision and recall of estimating which appliances are ‘on’ at any given time are given in Table 1. In Appendix D we also report the error of the total power usage assigned to different appliances (Table 2), as well as the amount of power assigned to each appliance as a percentage of total power (Figure 3). As a summary, we can see that our method consistently outperformed the others, achieving an average precision and recall of 60.97% and 78.56%, with about 50% better precision than KJ at essentially the same recall (38.68%/75.02%), while significantly improving upon ZGS (17.97%/36.22%). Considering the error in assigning the power consumption to different appliances, our method achieved about 30–35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors.
In our real-data experiments, there are about 1 million decision variables: $M = 7$ or $6$ appliances (for phase A and B power, respectively) with $K = 4$ states each, over about $T = 30{,}000$ time steps for one day (1 sample every 6 seconds). KJ and ZGS solve quadratic programs, increasing their memory usage (14GB vs 6GB in our case). On the other hand, our implementation of their methods, using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5 minutes, while our algorithm, which is purely Matlab-based, takes 5 hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up compared to our current implementation.
8 Conclusion
FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard FHMM inference algorithms infeasible even for only a handful of appliances. In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments. A crucial component of our solution is a scalable ADMM method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large-scale integer quadratic programming.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.
2. What are the strengths of the proposed approach, particularly in terms of performance and scalability?
3. What are the weaknesses of the paper regarding its lack of novelty and originality?
4. Do you have any concerns regarding the computational complexity of the proposed method?
5. How does the reviewer assess the clarity and quality of the paper's content? | Review | Review
This paper proposed a method to find better approximate solutions for the energy disaggregation or non-intrusive load monitoring (NILM) problem, based on the previous work by Kolter & Jaakkola (2012), who used an additive factorial HMM to model the energy consumption. The authors combined an SDP relaxation with randomized rounding (Park & Boyd, 2015) and applied ADMM (Boyd, 2010). The proposed method achieved performance superior to the existing methods. It is meaningful that the proposed method performs outstandingly well in the simulation results in terms of accuracy. However, this paper lacks academic novelty and originality in that the proposed method mainly combines existing works. The authors argue that the proposed algorithm is scalable and computationally efficient, but there is no analysis of computational complexity for the method and no supporting experimental results. It is necessary to show effectiveness in terms of computational complexity. Alternatively, comparisons with existing methods on execution time, or experiments using large-scale data, could be provided. This paper is well written overall and states the essential information. The authors should fix some mistakes in expression, including the title of section 6 and its subsection, line 75 and line 223. It would be better to present important figures or tables in the manuscript rather than in the supplementary material, in spite of the limited space. In addition, the conclusion is too short.
NIPS | Title
Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization
Abstract
We consider a generic empirical composition optimization problem, where there are empirical averages present both outside and inside nonlinear loss functions. Such a problem is of interest in various machine learning applications, and cannot be directly solved by standard methods such as stochastic gradient descent. We take a novel approach to solving this problem by reformulating the original minimization objective into an equivalent min-max objective, which brings out all the empirical averages that are originally inside the nonlinear loss functions. We exploit the rich structures of the reformulated problem and develop a stochastic primal-dual algorithm, SVRPDA-I, to solve the problem efficiently. We carry out extensive theoretical analysis of the proposed algorithm, obtaining the convergence rate, the computation complexity and the storage complexity. In particular, the algorithm is shown to converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm, named SVRPDA-II, which further reduces the memory requirement. Finally, we evaluate our proposed algorithms on several real-world benchmarks, and experimental results show that the proposed algorithms significantly outperform existing techniques.
1 Introduction
In this paper, we consider the following regularized empirical composition optimization problem:
$$\min_\theta \; \frac{1}{n_X} \sum_{i=0}^{n_X-1} \phi_i\Big( \frac{1}{n_{Y_i}} \sum_{j=0}^{n_{Y_i}-1} f_\theta(x_i, y_{ij}) \Big) + g(\theta), \qquad (1)$$
where $(x_i, y_{ij}) \in \mathbb{R}^{m_x} \times \mathbb{R}^{m_y}$ is the $(i,j)$-th data sample, $f_\theta: \mathbb{R}^{m_x} \times \mathbb{R}^{m_y} \to \mathbb{R}^\ell$ is a function parameterized by $\theta \in \mathbb{R}^d$, $\phi_i: \mathbb{R}^\ell \to \mathbb{R}_+$ is a convex merit function, which measures a certain loss of the parametric function $f_\theta$, and $g(\theta)$ is a $\mu$-strongly convex regularization term.
Problems of the form (1) widely appear in many machine learning applications such as reinforcement learning [5, 3, 2, 13], unsupervised sequence classification [12, 21] and risk-averse learning [15, 18, 9, 10, 19] — see our detailed discussion in Section 2. Note that the cost function (1) has an empirical average (over xi) outside the (nonlinear) merit function φi(·) and an empirical average (over yij) inside the merit function, which makes it different from the empirical risk minimization problems that are common in machine learning [17]. Problem (1) can be understood as a generalized version of the one considered in [9, 10].3 In these prior works, yij and nYi are assumed to be independent of
∗Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA. Email: adithyamdevraj@ufl.edu. The work was done during an internship at Tencent AI Lab, Bellevue, WA. †Tencent AI Lab, Bellevue, WA, USA. Email: jianshuchen@tencent.com. 3In addition to the term in (2), the cost function in [10] also has another convex regularization term.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
i and fθ is only a function of yj so that problem (1) can be reduced to the following special case:
min θ
1
nX nX−1∑ i=0 φi ( 1 nY nY −1∑ j=0 fθ(yj) ) . (2)
Our more general problem formulation (1) encompasses wider applications (see Section 2). Furthermore, different from [2, 19, 18], we focus on the finite sample setting, where we have empirical averages (instead of expectations) in (1). As we shall see below, the finite-sum structures allows us to develop efficient stochastic gradient methods that converges at linear rate.
While problem (1) is important in many machine learning applications, there are several key challenges in solving it efficiently. First, the number of samples (i.e., nX and nYi) could be extremely large: they could be larger than one million or even one billion. Therefore, it is unrealistic to use batch gradient descent algorithm to solve the problem, which requires going over all the data samples at each gradient update step. Moreover, since there is an empirical average inside the nonlinear merit function φi(·), it is not possible to directly apply the classical stochastic gradient descent (SGD) algorithm. This is because sampling from both empirical averages outside and inside φi(·) simultaneously would make the stochastic gradients intrinsically biased (see Appendix A for a discussion).
To address these challenges, in this paper, we first reformulate the original problem (1) into an equivalent saddle point problem (i.e., min-max problem), which brings out all the empirical averages inside φi(·) and exhibits useful dual decomposition and finite-sum structures (Section 3.1). To fully exploit these properties, we develop a stochastic primal-dual algorithm that alternates between a dual step of stochastic variance reduced coordinate ascent and a primal step of stochastic variance reduced gradient descent (Section 3.2). In particular, we develop a novel variance reduced stochastic gradient estimator for the primal step, which achieves better variance reduction with low complexity (Section 3.3). We derive the convergence rate, the finite-time complexity bound, and the storage complexity of our proposed algorithm (Section 4). In particular, it is shown that the proposed algorithms converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm that further reduces the storage complexity without much performance degradation in experiments. We evaluate the performance of our algorithms on several real-world benchmarks, where the experimental results show that they significantly outperform existing methods (Section 5). Finally, we discuss related works in Section 6 and conclude our paper in Section 7.
2 Motivation and Applications
To motivate our composition optimization problem (1), we discuss several important machine learning applications where cost functions of the form (1) arise naturally.
Unsupervised sequence classification: Developing algorithms that can learn classifiers from unlabeled data could benefit many machine learning systems, which could save a huge amount of human labeling costs. In [12, 21], the authors proposed such unsupervised learning algorithms by exploiting the sequential output structures. The developed algorithms are applied to optical character recognition (OCR) problems and automatic speech recognition (ASR) problems. In these works, the learning algorithms seek to learn a sequence classifier by optimizing the empirical output distribution match (Empirical-ODM) cost, which is in the following form (written in our notation):
$$\min_\theta \Big\{ -\sum_{i=0}^{n_X-1} p_{\mathrm{LM}}(x_i) \log\Big( \frac{1}{n_Y} \sum_{j=0}^{n_Y-1} f_\theta(x_i, y_j) \Big) \Big\}, \qquad (3)$$

where $p_{\mathrm{LM}}$ is a known language model (LM) that describes the distribution of the output sequence (e.g., $x_i$ represents different n-grams), and $f_\theta$ is a functional of the sequence classifier to be learned, with $\theta$ being its model parameter vector. The key idea is to learn the classifier so that its predicted output n-gram distribution is close to the prior n-gram distribution $p_{\mathrm{LM}}$ (see [12, 21] for more details). The cost function (3) can be viewed as a special case of (1) by setting $n_{Y_i} = n_Y$, $y_{ij} = y_j$ and $\phi_i(u) = -p_{\mathrm{LM}}(x_i) \log(u)$. Note that the formulation (2) cannot be directly used here, because of the dependency of the function $f_\theta$ on both $x_i$ and $y_j$.
Risk-averse learning: Another application where (1) arises naturally is the risk-averse learning problem, which is common in finance [15, 18, 9, 10, 19, 20]. Let xi ∈ Rd be a vector consisting of
the rewards from d assets at the i-th instance, where 0 ≤ i ≤ n − 1. The objective in risk-averse learning is to find the optimal weights of the d assets so that the average returns are maximized while the risk is minimized. It could be formulated as the following optimization problem:
$$\min_\theta \; -\frac{1}{n} \sum_{i=0}^{n-1} \langle x_i, \theta \rangle + \frac{1}{n} \sum_{i=0}^{n-1} \Big( \langle x_i, \theta \rangle - \frac{1}{n} \sum_{j=0}^{n-1} \langle x_j, \theta \rangle \Big)^2, \qquad (4)$$
where θ ∈ Rd denotes the weight vector. The objective function in (4) seeks a tradeoff between the mean (the first term) and the variance (the second term). It can be understood as a special case of (2) (which is a further special case of (1)) by making the following identifications:
$$n_X = n_Y = n, \quad y_i \equiv x_i, \quad f_\theta(y_j) = [\theta^\top, -\langle y_j, \theta \rangle]^\top, \quad \phi_i(u) = (\langle x_i, u_{0:d-1} \rangle + u_d)^2 - \langle x_i, u_{0:d-1} \rangle, \qquad (5)$$

where $u_{0:d-1}$ denotes the subvector constructed from the first $d$ elements of $u$, and $u_d$ denotes the $d$-th element. An alternative yet simpler way of dealing with (4) is to treat the second term in (4) as a special case of (1) by setting
$$n_X = n_{Y_i} = n, \quad y_{ij} \equiv x_j, \quad f_\theta(x_i, y_{ij}) = \langle x_i - y_{ij}, \theta \rangle, \quad \phi_i(u) = u^2, \; u \in \mathbb{R}. \qquad (6)$$
In addition, we observe that the first term in (4) is in standard empirical risk minimization form, which can be dealt with in a straightforward manner. This second formulation leads to algorithms with lower complexity due to the lower dimension of the functions: ` = 1 instead of ` = d+ 1 in the first formulation. Therefore, we will adopt this formulation in our experiment section (Section 5).
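For reference, a direct evaluation of the mean-variance objective (4); this is only the batch objective used to measure progress, not the stochastic algorithm itself, and the names are ours.

```python
import numpy as np

def mean_variance_objective(theta, X):
    """Evaluate the risk-averse objective (4) for portfolio weights theta.

    X: (n, d) array of reward vectors x_i; theta: (d,) weight vector.
    The second term is the empirical variance of the returns <x_i, theta>,
    matching the composition form (6) with phi_i(u) = u^2.
    """
    returns = X @ theta                        # <x_i, theta> for all i
    return -returns.mean() + np.mean((returns - returns.mean()) ** 2)
```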
Other applications: Cost functions of the form (1) also appear in reinforcement learning [5, 2, 3] and other applications [18]. In Appendix D, we demonstrate its applications in policy evaluation.
3 Algorithms
3.1 Saddle point formulation
Recall from (1) that there is an empirical average inside each (nonlinear) merit function $\phi_i(\cdot)$, which prevents the direct application of stochastic gradient descent to (1) due to the inherent bias (see Appendix A for more discussion). Nevertheless, we will show that minimizing the original cost function (1) can be transformed into an equivalent saddle point problem, which brings out all the empirical averages inside $\phi_i(\cdot)$. In what follows, we will use the machinery of convex conjugate functions [14]. For a function $\psi: \mathbb{R}^\ell \to \mathbb{R}$, its convex conjugate function $\psi^*: \mathbb{R}^\ell \to \mathbb{R}$ is defined as $\psi^*(y) = \sup_{x \in \mathbb{R}^\ell} (\langle x, y \rangle - \psi(x))$. Under certain mild conditions on $\psi(x)$ [14], one can also express $\psi(x)$ as a functional of its conjugate function: $\psi(x) = \sup_{y \in \mathbb{R}^\ell} (\langle x, y \rangle - \psi^*(y))$. Let $\phi_i^*(w_i)$ denote the conjugate function of $\phi_i(u)$. Then, we can express $\phi_i(u)$ as

$$\phi_i(u) = \sup_{w_i \in \mathbb{R}^\ell} \big( \langle u, w_i \rangle - \phi_i^*(w_i) \big), \qquad (7)$$
where $w_i$ is the corresponding dual variable. Substituting (7) into the original minimization problem (1), we obtain its equivalent min-max problem:

$$\min_\theta \max_w \Big\{ L(\theta, w) + g(\theta) \triangleq \frac{1}{n_X} \sum_{i=0}^{n_X-1} \Big[ \Big\langle \frac{1}{n_{Y_i}} \sum_{j=0}^{n_{Y_i}-1} f_\theta(x_i, y_{ij}), \, w_i \Big\rangle - \phi_i^*(w_i) \Big] + g(\theta) \Big\}, \qquad (8)$$

where $w \triangleq \{w_0, \ldots, w_{n_X-1}\}$ is a collection of all dual variables. We note that the transformation of the original problem (1) into (8) brings out all the empirical averages that are present inside $\phi_i(\cdot)$. This new formulation allows us to develop the stochastic variance reduced algorithms below.
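As a concrete instance of (7)–(8), take the squared merit function $\phi_i(u) = u^2$ from the risk-averse example; its conjugate is a standard computation (ours, not reproduced from the paper):

$$\phi_i^*(w_i) = \sup_{u \in \mathbb{R}} \big( u\, w_i - u^2 \big) = \frac{w_i^2}{4}, \qquad \text{so} \qquad \phi_i(u) = \sup_{w_i \in \mathbb{R}} \Big( u\, w_i - \frac{w_i^2}{4} \Big),$$

and for a fixed $\theta$ the inner maximization over $w_i$ in (8) then has the closed-form solution $w_i = \frac{2}{n_{Y_i}} \sum_{j} f_\theta(x_i, y_{ij})$.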
3.2 Stochastic variance reduced primal-dual algorithm
One common solution for the min-max problem (8) is to alternate between the step of minimization (with respect to the primal variable θ) and the step of maximization (with respect to the dual variable w). However, such an approach generally suffers from high computation complexity because each minimization/maximization step requires a summation over many components and requires a full
pass over all the data samples. The complexity of such a batch algorithm would be prohibitively high when the number of data samples (i.e., nX and nYi ) is large (e.g., they could be larger than one million or even one billion in applications like unsupervised speech recognition [21]). On the other hand, problem (8) indeed has rich structures that we can exploit to develop more efficient solutions.
To this end, we make the following observations. First, expression (8) implies that when θ is fixed, the maximization over the dual variable w can be decoupled into a total of nX individual maximizations over different wi’s. Second, the objective function in each individual maximization (with respect to wi) contains a finite-sum structure over j. Third, by (8), for a fixed w, the minimization with respect to the primal variable θ is also performed over an objective function with a finite-sum structure. Based on these observations, we will develop an efficient stochastic variance reduced primal-dual algorithm (named SVRPDA-I). It alternates between (i) a dual step of stochastic variance reduced coordinate ascent and (ii) a primal step of stochastic variance reduced gradient descent. The full algorithm is summarized in Algorithm 1, with its key ideas explained below.
Dual step: stochastic variance reduced coordinate ascent. To exploit the decoupled dual maximization over $w$ in (8), we can randomly sample an index $i$ and update $w_i$ according to

$$w_i^{(k)} = \operatorname*{argmin}_{w_i} \Big\{ -\Big\langle \frac{1}{n_{Y_i}} \sum_{j=0}^{n_{Y_i}-1} f_{\theta^{(k-1)}}(x_i, y_{ij}), \, w_i \Big\rangle + \phi_i^*(w_i) + \frac{1}{2\alpha_w} \|w_i - w_i^{(k-1)}\|^2 \Big\}, \qquad (9)$$

while keeping all other $w_j$'s ($j \neq i$) unchanged, where $\alpha_w$ denotes a step-size. Note that each step of recursion (9) still requires a summation over $n_{Y_i}$ components. To further reduce the complexity, we approximate the sum over $j$ by a variance reduced stochastic estimator defined in (12) (to be discussed in Section 3.3). The dual step in our algorithm is summarized in (13), where we assume that the function $\phi_i^*(w_i)$ is in a simple form so that the argmin can be solved in closed form. Note that we flip the sign of the objective function to change maximization to minimization and apply coordinate descent. We will still refer to the dual step as “coordinate ascent” (instead of descent).
Primal step: stochastic variance reduced gradient descent. We now consider the minimization in (8) with respect to $\theta$ when $w$ is fixed. The gradient descent step for minimizing $L(\theta, w)$ is given by

$$\theta^{(k)} = \operatorname*{argmin}_\theta \Big\{ \Big\langle \sum_{i=0}^{n_X-1} \sum_{j=0}^{n_{Y_i}-1} \frac{1}{n_X n_{Y_i}} f'_{\theta^{(k-1)}}(x_i, y_{ij})\, w_i^{(k)}, \, \theta \Big\rangle + \frac{1}{2\alpha_\theta} \|\theta - \theta^{(k-1)}\|^2 \Big\}, \qquad (10)$$

where $\alpha_\theta$ denotes a step-size. It is easy to see that the update equation (10) has high complexity: it requires evaluating and averaging the gradient $f'_\theta(\cdot, \cdot)$ at every data sample. To reduce the complexity, we use a variance reduced gradient estimator, defined in (15), to approximate the sums in (10) (to be discussed in Section 3.3). The primal step in our algorithm is summarized in (16) in Algorithm 1.
3.3 Low-complexity stochastic variance reduced estimators
We now proceed to explain the design of the variance reduced gradient estimators in both the dual and the primal updates. The main idea is inspired by the stochastic variance reduced gradient (SVRG) algorithm [7]. Specifically, for a vector-valued function $h(\theta) = \frac{1}{n} \sum_{i=0}^{n-1} h_i(\theta)$, we can construct its SVRG estimator $\delta_k$ at each iteration step $k$ by using the following expression:

$$\delta_k = h_{i_k}(\theta) - h_{i_k}(\tilde{\theta}) + h(\tilde{\theta}), \qquad (17)$$

where $i_k$ is a randomly sampled index from $\{0, \ldots, n-1\}$, and $\tilde{\theta}$ is a reference variable that is updated periodically (to be explained below). The first term $h_{i_k}(\theta)$ in (17) is an unbiased estimator of $h(\theta)$ and is generally known as the stochastic gradient when $h(\theta)$ is the gradient of a certain cost function. The last two terms in (17) construct a control variate that has zero mean and is negatively correlated with $h_{i_k}(\theta)$, which keeps $\delta_k$ unbiased while significantly reducing its variance. The reference variable $\tilde{\theta}$ is usually set to be a delayed version of $\theta$: for example, after every $M$ updates of $\theta$, it can be reset to the most recent iterate of $\theta$. Note that there is a trade-off in the choice of $M$: a smaller $M$ further reduces the variance of $\delta_k$, since $\tilde{\theta}$ will be closer to $\theta$ and the first two terms in (17) cancel more with each other; on the other hand, it also requires more frequent evaluations of the costly batch term $h(\tilde{\theta})$, which has a complexity of $O(n)$.
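A generic sketch of the estimator (17) and the periodic reference refresh; this is plain SVRG on a finite sum, not the full SVRPDA-I loop, and all names are ours.

```python
import numpy as np

def svrg_estimator(h_components, theta, theta_ref, h_ref, i_k):
    """Variance reduced estimator (17) for h(theta) = (1/n) sum_i h_i(theta).

    h_components: list of callables h_i.
    theta_ref:    the (periodically refreshed) reference point tilde-theta.
    h_ref:        the cached batch average h(tilde-theta).
    i_k:          the sampled component index.
    """
    h_i = h_components[i_k]
    return h_i(theta) - h_i(theta_ref) + h_ref

def run_svrg_style_loop(h_components, theta, M, step, rng):
    """One outer stage: refresh the reference, then M cheap inner updates."""
    n = len(h_components)
    theta_ref = theta.copy()
    h_ref = sum(h(theta_ref) for h in h_components) / n   # O(n) batch pass
    for _ in range(M):
        delta = svrg_estimator(h_components, theta, theta_ref, h_ref,
                               rng.integers(n))
        theta = theta - step * delta                       # unbiased, low-variance step
    return theta
```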
Algorithm 1 SVRPDA-I
1: Inputs: data $\{(x_i, y_{ij}) : 0 \le i < n_X, \, 0 \le j < n_{Y_i}\}$; step-sizes $\alpha_\theta$ and $\alpha_w$; # inner iterations $M$.
2: Initialization: $\tilde{\theta}_0 \in \mathbb{R}^d$ and $\tilde{w}_0 \in \mathbb{R}^{\ell n_X}$.
3: for $s = 1, 2, \ldots$ do
4: Set $\tilde{\theta} = \tilde{\theta}_{s-1}$, $\theta^{(0)} = \tilde{\theta}$, $\tilde{w} = \tilde{w}_{s-1}$, $w^{(0)} = \tilde{w}_{s-1}$, and compute the batch quantities (for each $0 \le i < n_X$):
$$U_0 = \sum_{i=0}^{n_X-1} \sum_{j=0}^{n_{Y_i}-1} \frac{f'_{\tilde{\theta}}(x_i, y_{ij})\, w_i^{(0)}}{n_X n_{Y_i}}, \quad \bar{f}_i(\tilde{\theta}) \triangleq \sum_{j=0}^{n_{Y_i}-1} \frac{f_{\tilde{\theta}}(x_i, y_{ij})}{n_{Y_i}}, \quad \bar{f}'_i(\tilde{\theta}) = \sum_{j=0}^{n_{Y_i}-1} \frac{f'_{\tilde{\theta}}(x_i, y_{ij})}{n_{Y_i}}. \qquad (11)$$
5: for $k = 1$ to $M$ do
6: Randomly sample $i_k \in \{0, \ldots, n_X-1\}$ and then $j_k \in \{0, \ldots, n_{Y_{i_k}}-1\}$ uniformly.
7: Compute the stochastic variance reduced gradient for the dual update:
$$\delta_k^w = f_{\theta^{(k-1)}}(x_{i_k}, y_{i_k j_k}) - f_{\tilde{\theta}}(x_{i_k}, y_{i_k j_k}) + \bar{f}_{i_k}(\tilde{\theta}). \qquad (12)$$
8: Update the dual variables:
$$w_i^{(k)} = \begin{cases} \operatorname*{argmin}_{w_i} \big[ -\langle \delta_k^w, w_i \rangle + \phi_i^*(w_i) + \tfrac{1}{2\alpha_w} \|w_i - w_i^{(k-1)}\|^2 \big] & \text{if } i = i_k \\ w_i^{(k-1)} & \text{if } i \neq i_k \end{cases}. \qquad (13)$$
9: Update $U_k$ (the primal batch gradient at $\tilde{\theta}$ and $w^{(k)}$) according to the following recursion:
$$U_k = U_{k-1} + \frac{1}{n_X} \bar{f}'_{i_k}(\tilde{\theta}) \big( w_{i_k}^{(k)} - w_{i_k}^{(k-1)} \big). \qquad (14)$$
10: Randomly sample $i'_k \in \{0, \ldots, n_X - 1\}$ and then $j'_k \in \{0, \ldots, n_{Y_{i'_k}} - 1\}$, independent of $i_k$ and $j_k$, and compute the stochastic variance reduced gradient for the primal update:
$$\delta_k^\theta = f'_{\theta^{(k-1)}}(x_{i'_k}, y_{i'_k j'_k})\, w_{i'_k}^{(k)} - f'_{\tilde{\theta}}(x_{i'_k}, y_{i'_k j'_k})\, w_{i'_k}^{(k)} + U_k. \qquad (15)$$
11: Update the primal variable:
$$\theta^{(k)} = \operatorname*{argmin}_\theta \Big[ \langle \delta_k^\theta, \theta \rangle + g(\theta) + \frac{1}{2\alpha_\theta} \|\theta - \theta^{(k-1)}\|^2 \Big]. \qquad (16)$$
12: end for
13: Option I: Set $\tilde{w}_s = w^{(M)}$ and $\tilde{\theta}_s = \theta^{(M)}$.
14: Option II: Set $\tilde{w}_s = w^{(M)}$ and $\tilde{\theta}_s = \theta^{(t)}$ for a randomly sampled $t \in \{0, \ldots, M-1\}$.
15: end for
16: Output: $\tilde{\theta}_s$ at the last outer-loop iteration.
Based on (17), we develop two stochastic variance reduced estimators, (12) and (15), to approximate the finite sums in (9) and (10), respectively. The dual gradient estimator $\delta_k^w$ in (12) is constructed in a standard manner using (17), where the reference variable $\tilde{\theta}$ is a delayed version of $\theta^{(k)}$.⁴ On the other hand, the primal gradient estimator $\delta_k^\theta$ in (15) is constructed by using the reference variables $(\tilde{\theta}, w^{(k)})$; that is, we use the most recent $w^{(k)}$ as the dual reference variable, without any delay. As discussed earlier, such a choice leads to a smaller variance in the stochastic estimator $\delta_k^\theta$ at a potentially higher computation cost (from more frequent evaluation of the batch term). Nevertheless, we are able to show that, with the dual coordinate ascent structure in our algorithm, the batch term $U_k$ in (15), which is the summation in (10) evaluated at $(\tilde{\theta}, w^{(k)})$, can be computed efficiently. To see this, note that after each dual update step in (13), only one term inside this summation in (10) has changed, namely the one associated with $i = i_k$. Therefore, we can correct $U_k$ for this term by using recursion (14), which only requires an extra $O(d\ell)$ complexity per step (the same complexity as (15)).

Note that SVRPDA-I (Algorithm 1) requires computing and storing all the $\bar{f}'_i(\tilde{\theta})$ in (11), which is $O(n_X d\ell)$ in storage and could be expensive in some applications. To avoid this cost, we develop a variant of Algorithm 1, named SVRPDA-II (see Algorithm 1 in the supplementary material), by approximating $\bar{f}'_{i_k}(\tilde{\theta})$ in (14) with $f'_{\tilde{\theta}}(x_{i_k}, y_{i_k j''_k})$, where $j''_k$ is another randomly sampled index from $\{0, \ldots, n_{Y_{i_k}} - 1\}$, independent of all other indexes. By doing this, we can significantly
4As in [7], we also consider Option II wherein θ̃ is randomly chosen from the previous M θ(k)’s.
reduce the memory requirement from $O(n_X d\ell)$ in SVRPDA-I to $O(d + n_X \ell)$ in SVRPDA-II (see Section 4.2). In addition, the experimental results in Section 5 will show that such an approximation causes only a slight performance loss compared to the SVRPDA-I algorithm.
4 Theoretical Analysis
4.1 Computation complexity
We now perform convergence analysis for the SVRPDA-I algorithm and also derive its complexity in computation and storage. To begin with, we introduce the following assumptions.

Assumption 4.1. The function $g(\theta)$ is $\mu$-strongly convex in $\theta$, and each $\phi_i$ is $1/\gamma$-smooth.
Assumption 4.2. The merit functions $\phi_i(u)$ are Lipschitz with a uniform constant $B_w$: $|\phi_i(u) - \phi_i(u')| \le B_w \|u - u'\|$, $\forall u, u'$; $\forall i = 0, \ldots, n_X - 1$.
Assumption 4.3. $f_\theta(x_i, y_{ij})$ is $B_\theta$-smooth in $\theta$ and has bounded gradients with constant $B_f$:
$$\|f'_{\theta_1}(x_i, y_{ij}) - f'_{\theta_2}(x_i, y_{ij})\| \le B_\theta \|\theta_1 - \theta_2\|, \quad \|f'_\theta(x_i, y_{ij})\| \le B_f, \quad \forall \theta, \theta_1, \theta_2, \; \forall i, j.$$
Assumption 4.4. For each given $w$ in its domain, the function $L(\theta, w)$ defined in (8) is convex in $\theta$:
$$L(\theta_1, w) - L(\theta_2, w) \ge \langle L'_\theta(\theta_2, w), \theta_1 - \theta_2 \rangle, \quad \forall \theta_1, \theta_2.$$
The above assumptions are commonly used in existing compositional optimization works [9, 10, 18, 19, 22]. Based on these assumptions, we establish non-asymptotic error bounds for SVRPDA-I (using either Option I or Option II in Algorithm 1). The main results are summarized in the following theorems, and their proofs can be found in Appendix E.

Theorem 4.5. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option I) we choose
$$\alpha_\theta = \frac{1}{n_X \mu (64\kappa + 1)}, \quad \alpha_w = \frac{n_X \mu}{\gamma}\, \alpha_\theta, \quad M = \big\lceil 78.8\, n_X \kappa + 1.3\, n_X + 1.3 \big\rceil,$$
where $\lceil x \rceil$ denotes the round-up operation and $\kappa = B_f^2/(\gamma\mu) + B_w^2 B_\theta^2/\mu^2$, then the Lyapunov function $P_s := \mathbb{E}\|\tilde{\theta}_s - \theta^*\|^2 + \frac{\gamma}{\mu} \cdot \frac{64\kappa + 3}{64 n_X \kappa + n_X + 1}\, \mathbb{E}\|\tilde{w}_s - w^*\|^2$ satisfies $P_s \le (3/4)^s P_0$. Furthermore, the overall computational cost (in number of oracle calls⁵) for reaching $P_s \le \epsilon$ is upper bounded by
$$O\big( (n_X n_Y + n_X \kappa + n_X) \ln(1/\epsilon) \big), \qquad (18)$$
where, with a slight abuse of notation, $n_Y$ is defined as $n_Y = (n_{Y_0} + \cdots + n_{Y_{n_X-1}})/n_X$.

Theorem 4.6. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option II) we choose
$$\alpha_\theta = \Big( \frac{25 B_f^2}{\gamma} + 10 B_\theta B_w + \frac{80 B_w^2 B_\theta^2}{\mu} \Big)^{-1}, \quad \alpha_w = \frac{\mu}{40 B_f^2}, \quad M = \max\Big( \frac{10}{\alpha_\theta \mu}, \frac{2 n_X}{\alpha_w \gamma}, 4 n_X \Big),$$
then $P_s := \mathbb{E}\|\tilde{\theta}_s - \theta^*\|^2 + \frac{\gamma}{n_X \mu}\, \mathbb{E}\|\tilde{w}_s - w^*\|^2 \le (5/8)^s P_0$. Furthermore, let $\kappa = \frac{B_f^2}{\gamma\mu} + \frac{B_w^2 B_\theta^2}{\mu^2}$. Then, the overall computational cost (in number of oracle calls) for reaching $P_s \le \epsilon$ is upper bounded by
$$O\big( (n_X n_Y + n_X \kappa + n_X) \ln(1/\epsilon) \big). \qquad (19)$$
The above theorems show that the Lyapunov function $P_s$ for SVRPDA-I converges to zero at a linear rate when either Option I or II is used. Since $\mathbb{E}\|\tilde{\theta}_s - \theta^*\|^2 \le P_s$, they imply that the computational cost (in number of oracle calls) for reaching $\mathbb{E}\|\tilde{\theta}_s - \theta^*\|^2 \le \epsilon$ is also upper bounded by (18) and (19).

⁵One oracle call is defined as querying $f_\theta$, $f'_\theta$, or $\phi_i(u)$ for any $0 \le i < n$ and $u \in \mathbb{R}^\ell$.
Comparison with existing composition optimization algorithms. Table 1 summarizes the complexity bounds for our SVRPDA-I algorithm and compares them with existing stochastic composition optimization algorithms. First, to our best knowledge, none of the existing methods consider the general objective function (1) as we do. Instead, they consider its special case (2), and even in this special case, our algorithm still has a better (or comparable) complexity bound than the other methods. For example, our bound is better than that of [9] since $\kappa^2 > n_X$ generally holds, and it is better than that of ASCVRG, which does not achieve a linear convergence rate (as no strong convexity is assumed). In addition, our method has better complexity than the C-SAGA algorithm when $n_X = 1$ (regardless of the mini-batch size in C-SAGA), and it is better than C-SAGA for (2) when the mini-batch size is 1.⁶ However, since we have not derived our bound for the mini-batch setting, it is unclear which one is better in that case, and this is an interesting topic for future work. One notable fact from Table 1 is that in the special case (2), the complexity of SVRPDA-I is reduced from $O((n_X n_Y + n_X \kappa) \ln \frac{1}{\epsilon})$ to $O((n_X + n_Y + n_X \kappa) \ln \frac{1}{\epsilon})$. This is because the complexity of evaluating the batch quantities in (11) (Algorithm 1) can be reduced from $O(n_X n_Y)$ in the general case (1) to $O(n_X + n_Y)$ in the special case (2). To see this, note that $f_\theta$ and $n_{Y_i} = n_Y$ become independent of $i$ in (2) and (11), meaning that we can factor $U_0$ in (11) as $U_0 = \frac{1}{n_X n_Y} \sum_{j=0}^{n_Y-1} f'_{\tilde{\theta}}(y_j) \sum_{i=0}^{n_X-1} w_i^{(0)}$, where the two sums can be evaluated independently with complexity $O(n_Y)$ and $O(n_X)$, respectively. The other two quantities in (11) need only $O(n_Y)$ due to their independence of $i$. Second, we consider the further special case of (2) with $n_X = 1$, which simplifies the objective function (1) so that there is no empirical average outside $\phi_i(\cdot)$. This takes the form of the unsupervised learning objective function that appears in [12]. Note that our result $O((n_Y + \kappa) \log \frac{1}{\epsilon})$ enjoys a linear convergence rate (i.e., log-dependency on $\epsilon$) thanks to the variance reduction technique. In contrast, the stochastic primal-dual gradient (SPDG) method in [12], which does not use variance reduction, can only have a sublinear convergence rate (i.e., $O(\frac{1}{\epsilon})$).
Relation to SPDC [23]. Lastly, we consider the case where $n_{Y_i} = 1$ for all $1 \le i \le n_X$ and $f_\theta$ is a linear function in $\theta$. This simplifies (1) to the problem considered in [23], known as regularized empirical risk minimization with linear predictors. It has applications in support vector machines, regularized logistic regression, and more, depending on how the merit function $\phi_i$ is defined. In this special case, the overall complexity of SVRPDA-I becomes (see Appendix F)

$$O\big( (n_X + \kappa) \ln(1/\epsilon) \big), \qquad (20)$$

where the condition number $\kappa = B_f^2/(\mu\gamma)$. In comparison, the authors in [23] propose a stochastic primal-dual coordinate (SPDC) algorithm for this special case and prove an overall complexity of $O\big( (n_X + \sqrt{n_X \kappa}) \ln(1/\epsilon) \big)$ for achieving an $\epsilon$-error solution. It is interesting to note that the complexity result in (20) and the complexity result in [23] only differ in their dependency on $\kappa$. This difference is most likely due to the acceleration technique that is employed in the primal update of the SPDC algorithm. We conjecture that the dependency of SVRPDA-I on the condition number can be further improved using a similar acceleration technique.
4.2 Storage complexity
We now briefly discuss and compare the storage complexities of SVRPDA-I and SVRPDA-II. In Table 2, we report the itemized and total storage complexities for both algorithms, which shows that SVRPDA-II significantly reduces the memory footprint. We also observe that the batch quantities in (11), especially $\bar{f}'_i(\tilde{\theta})$, dominate the storage complexity of SVRPDA-I. On the other hand, the memory usage in SVRPDA-II is more uniformly distributed over the different quantities. Furthermore, although the total complexity of SVRPDA-II, $O(d + n_X \ell)$, grows with the number of samples $n_X$, the $n_X \ell$ term is relatively small because the dimension $\ell$ is small in many practical problems (e.g., $\ell = 1$ in (3) and (4)). This is similar to the storage requirement in SPDC [23] and SAGA [4].
6In Appendix D, we also show that our algorithms outperform C-SAGA in experiments.
5 Experiments
In this section we consider the problem of risk-averse learning for portfolio management optimization [9, 10], introduced in Section 2.⁷ Specifically, we want to solve the optimization problem (4) for a given set of reward vectors $\{x_i \in \mathbb{R}^d : 0 \le i \le n-1\}$. As discussed in Section 2, we adopt the alternative formulation (6) for the second term so that it becomes a special case of our general problem (1). Then, we rewrite the cost function as a min-max problem by following the argument in Section 3.1 and apply our SVRPDA-I and SVRPDA-II algorithms (see Appendix C.1 for the details).

We evaluate our algorithms on 18 real-world US Research Returns datasets obtained from the Center for Research in Security Prices (CRSP) website,⁸ with the same setup as in [10]. In each of these datasets, we have $d = 25$ and $n = 7240$. We compare the performance of our proposed SVRPDA-I and SVRPDA-II algorithms⁹ with the following state-of-the-art algorithms designed to solve composition optimization problems: (i) Compositional-SVRG-1 (Algorithm 2 of [9]), (ii) Compositional-SVRG-2 (Algorithm 3 of [9]), (iii) full batch gradient descent, and (iv) the ASCVRG algorithm [10]. For the compositional-SVRG algorithms, we follow [9] to formulate the problem as a special case of the form (2) by using the identification (5). Note that we cannot use the identification (6) for the compositional-SVRG algorithms because it would lead to the more general formulation (1) with $f_\theta$ depending on both $x_i$ and $y_{ij} \equiv x_j$. For further details, the reader is referred to [9]. As in previous works, we compare the different algorithms based on the number of oracle calls required to achieve a certain objective gap (the difference between the objective function evaluated at the current iterate and at the optimal parameters). One oracle call is defined as accessing the function $f_\theta$, its derivative $f'_\theta$, or $\phi_i(u)$ for any $0 \le i < n$ and $u \in \mathbb{R}^\ell$. The results are shown in Figure 1, which shows that our proposed algorithms significantly outperform the baseline methods on all datasets. In addition, we also observe that SVRPDA-II converges at a linear rate, and the performance loss caused by the approximation is relatively small compared to SVRPDA-I.
7Additional experiments on the application to policy evaluation in MDPs can be found in Appendix D. 8The processed data in the form of .mat file was obtained from https://github.com/tyDLin/SCVRG 9The choice of the hyper-parameters can be found in Appendix C.2, and the code will be released publicly.
6 Related Works
Composition optimization has attracted significant attention in the optimization literature. The stochastic version of problem (2), where the empirical averages are replaced by expectations, is studied in [18]. The authors propose a two-timescale stochastic approximation algorithm known as SCGD, and establish sublinear convergence rates. In [19], the authors propose the ASC-PG algorithm by using a proximal gradient method to deal with nonsmooth regularizations. The works that are most closely related to our setting are [9] and [10], which consider a finite-sum minimization problem (2) (a special case of our general formulation (1)). In [9], the authors propose the compositional-SVRG methods, which combine SCGD with the SVRG technique from [7] and obtain linear convergence rates. In [10], the authors propose the ASCVRG algorithm, which extends to convex but non-smooth objectives. Recently, the authors in [22] proposed a C-SAGA algorithm to solve the special case of (2) with $n_X = 1$, and extended it to general $n_X$. Different from these works, we take an efficient primal-dual approach that fully exploits the dual decomposition and the finite-sum structures.
On the other hand, problems similar to (1) (and its stochastic versions) have also been examined in specific machine learning problems. [16] considers the minimization of the mean squared projected Bellman error (MSPBE) for policy evaluation, which has an expectation inside a quadratic loss. The authors propose a two-timescale stochastic approximation algorithm, GTD2, and establish its asymptotic convergence. [11] and [13] independently showed that GTD2 is a stochastic gradient method for solving an equivalent saddle-point problem. In [2] and [3], the authors derived saddle-point formulations for two other variants of costs (MSBE and MSCBE) in the policy evaluation and control settings, and developed the corresponding stochastic primal-dual algorithms. All these works consider the stochastic version of composition optimization, and the proposed algorithms have sublinear convergence rates. In [5], different variance reduction methods are developed to solve the finite-sum version of MSPBE and achieve a linear rate even without strongly convex regularization. The authors in [6] then extend this linear convergence result to general convex-concave problems with linear coupling and without strong convexity. Besides, problems of the form (1) were also studied in the context of unsupervised learning [12, 21] in the stochastic setting (with expectations in (1)).
Finally, our work is inspired by the stochastic variance reduction techniques in optimization [8, 7, 4, 1, 23], which consider the minimization of a cost that is a finite sum of many component functions. Different versions of variance reduced stochastic gradients are constructed in these works to achieve a linear convergence rate. In particular, our variance reduced stochastic estimators are constructed based on the idea of SVRG [7] with a novel design of the control variates. Our work is also related to the SPDC algorithm [23], which also integrates dual coordinate ascent with variance reduced primal gradients. However, our work differs from SPDC in the following aspects. First, we consider a more general composition optimization problem (1), while SPDC focuses on regularized empirical risk minimization with linear predictors, i.e., $n_{Y_i} \equiv 1$ and $f_\theta$ linear in $\theta$. Second, because of the composition structure in the problem, our algorithm also needs SVRG in the dual coordinate ascent update, while SPDC does not. Third, the primal update in SPDC is specifically designed for linear predictors. In contrast, our work is not restricted to these by using a novel variance reduced gradient.
7 Conclusions and Future Work
We developed a stochastic primal-dual algorithm, SVRPDA-I, to efficiently solve the empirical composition optimization problem. This is achieved by fully exploiting the rich structures inherent in the reformulated min-max problem, including the dual decomposition and the finite-sum structures. It alternates between (i) a dual step of stochastic variance reduced coordinate ascent and (ii) a primal step of stochastic variance reduced gradient descent. In particular, we proposed a novel variance reduced gradient for the primal update, which achieves better variance reduction with low complexity. We derived a non-asymptotic bound for the error sequence and showed that it converges at a linear rate when the problem is strongly convex. Moreover, we also developed an approximate version of the algorithm, named SVRPDA-II, which further reduces the storage complexity. Experimental results on several real-world benchmarks showed that both SVRPDA-I and SVRPDA-II significantly outperform existing techniques on all these tasks, and the approximation in SVRPDA-II causes only a slight performance loss. Future extensions of our work include the theoretical analysis of SVRPDA-II, the generalization of our algorithms to Bregman divergences, and applying them to large-scale machine learning problems with non-convex cost functions (e.g., unsupervised sequence classification).
2. What are the strengths of the proposed method in comparison to other approaches like vanilla SGD and SCGD with SVRG?
3. How does the reviewer assess the originality and significance of the paper's contribution regarding composition optimization problems?
4. Are there any concerns or suggestions regarding the theoretical analysis or experimental results presented in the paper? | Review | Review
This paper proposes a new method for empirical composition optimization problems to which the vanilla SGD is not applicable because of a finite-sum structure inside non-linear loss functions. The method is a type of primal-dual methods with variance reduction for saddle-point problems which is a reformulation of the original problem. In a theoretical analysis part, a linear convergence rate of the method is provided under the strong convexity. In experiments, the superior performance of the method is verified empirically over competitors on portfolio management optimization problems. Clarity: The paper is clear and well written. Quality: The work is of good quality and is technically sound. Originality and significance: The problem (1) treated in this paper is important and contains several applications as mentioned in the paper. However, there seems to be no method that can converge at the linear rate for this problem. As for sub-problems (2), SCGD with SVRG [8] exhibits the linear convergence, but there are some important machine learning tasks not to be covered by (2) as explained in the paper. In addition, it is confirmed in experiments that the proposed method can significantly outperform existing methods including SCGD with SVRG. Hence, this paper makes certain contributions to both theorists and practitioners. If the authors can show a theoretical advantage of the proposed method over SCGD with SVRG [8] on the problem (2) besides the empirical performance, it will make the paper stronger. Minor comments: A regularization term $g$ should be added to equation (10). ----- I have read the author's response and I keep the score. |
NIPS | Title
Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization
Abstract
We consider a generic empirical composition optimization problem, where there are empirical averages present both outside and inside nonlinear loss functions. Such a problem is of interest in various machine learning applications, and cannot be directly solved by standard methods such as stochastic gradient descent. We take a novel approach to solving this problem by reformulating the original minimization objective into an equivalent min-max objective, which brings out all the empirical averages that are originally inside the nonlinear loss functions. We exploit the rich structures of the reformulated problem and develop a stochastic primal-dual algorithm, SVRPDA-I, to solve the problem efficiently. We carry out extensive theoretical analysis of the proposed algorithm, obtaining the convergence rate, the computation complexity and the storage complexity. In particular, the algorithm is shown to converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm, named SVRPDA-II, which further reduces the memory requirement. Finally, we evaluate our proposed algorithms on several real-world benchmarks, and experimental results show that the proposed algorithms significantly outperform existing techniques.
1 Introduction
In this paper, we consider the following regularized empirical composition optimization problem:
$$\min_{\theta}\;\frac{1}{n_X}\sum_{i=0}^{n_X-1}\phi_i\Big(\frac{1}{n_{Y_i}}\sum_{j=0}^{n_{Y_i}-1} f_\theta(x_i, y_{ij})\Big) + g(\theta), \qquad (1)$$
where $(x_i, y_{ij}) \in \mathbb{R}^{m_x} \times \mathbb{R}^{m_y}$ is the $(i,j)$-th data sample, $f_\theta : \mathbb{R}^{m_x} \times \mathbb{R}^{m_y} \to \mathbb{R}^{\ell}$ is a function parameterized by $\theta \in \mathbb{R}^d$, $\phi_i : \mathbb{R}^{\ell} \to \mathbb{R}_+$ is a convex merit function, which measures a certain loss of the parametric function $f_\theta$, and $g(\theta)$ is a $\mu$-strongly convex regularization term.
Problems of the form (1) widely appear in many machine learning applications such as reinforcement learning [5, 3, 2, 13], unsupervised sequence classification [12, 21] and risk-averse learning [15, 18, 9, 10, 19] — see our detailed discussion in Section 2. Note that the cost function (1) has an empirical average (over $x_i$) outside the (nonlinear) merit function $\phi_i(\cdot)$ and an empirical average (over $y_{ij}$) inside the merit function, which makes it different from the empirical risk minimization problems that are common in machine learning [17]. Problem (1) can be understood as a generalized version of the one considered in [9, 10].³ In these prior works, $y_{ij}$ and $n_{Y_i}$ are assumed to be independent of
∗Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA. Email: adithyamdevraj@ufl.edu. The work was done during an internship at Tencent AI Lab, Bellevue, WA. †Tencent AI Lab, Bellevue, WA, USA. Email: jianshuchen@tencent.com. ³In addition to the term in (2), the cost function in [10] also has another convex regularization term.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
$i$, and $f_\theta$ is only a function of $y_j$, so that problem (1) can be reduced to the following special case:
$$\min_{\theta}\;\frac{1}{n_X}\sum_{i=0}^{n_X-1}\phi_i\Big(\frac{1}{n_Y}\sum_{j=0}^{n_Y-1} f_\theta(y_j)\Big). \qquad (2)$$
Our more general problem formulation (1) encompasses wider applications (see Section 2). Furthermore, different from [2, 19, 18], we focus on the finite-sample setting, where we have empirical averages (instead of expectations) in (1). As we shall see below, the finite-sum structure allows us to develop efficient stochastic gradient methods that converge at a linear rate.
While problem (1) is important in many machine learning applications, there are several key challenges in solving it efficiently. First, the numbers of samples (i.e., $n_X$ and $n_{Y_i}$) could be extremely large: they could exceed one million or even one billion. Therefore, it is unrealistic to use a batch gradient descent algorithm to solve the problem, which requires going over all the data samples at each gradient update step. Moreover, since there is an empirical average inside the nonlinear merit function $\phi_i(\cdot)$, it is not possible to directly apply the classical stochastic gradient descent (SGD) algorithm. This is because sampling from both the empirical averages outside and inside $\phi_i(\cdot)$ simultaneously would make the stochastic gradients intrinsically biased (see Appendix A for a discussion).
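To see the bias concretely, consider the quadratic merit function $\phi(u) = u^2$; the following is a minimal numeric sketch (our own illustration, not from the paper) showing that averaging $\phi$ over per-sample values overestimates $\phi$ of the inner average by exactly the variance of $f$:

```python
# Minimal sketch (our own, not the paper's code): for phi(u) = u^2, the naive
# per-sample estimate E[phi(f(Y))] differs from the true phi(E[f(Y)]) by Var(f),
# which is why plain SGD applied directly to (1) yields biased gradients.
import numpy as np

rng = np.random.default_rng(0)
f_vals = rng.normal(size=100_000)        # stand-in for the values f_theta(x_i, y_ij)

true_obj = f_vals.mean() ** 2            # phi applied to the inner empirical average
naive_obj = (f_vals ** 2).mean()         # phi applied per sample, then averaged

print(f"phi(mean f) = {true_obj:.4f}")   # ~ 0.0
print(f"mean phi(f) = {naive_obj:.4f}")  # ~ 1.0, off by Var(f): the bias
```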
To address these challenges, in this paper, we first reformulate the original problem (1) into an equivalent saddle point problem (i.e., min-max problem), which brings out all the empirical averages inside φi(·) and exhibits useful dual decomposition and finite-sum structures (Section 3.1). To fully exploit these properties, we develop a stochastic primal-dual algorithm that alternates between a dual step of stochastic variance reduced coordinate ascent and a primal step of stochastic variance reduced gradient descent (Section 3.2). In particular, we develop a novel variance reduced stochastic gradient estimator for the primal step, which achieves better variance reduction with low complexity (Section 3.3). We derive the convergence rate, the finite-time complexity bound, and the storage complexity of our proposed algorithm (Section 4). In particular, it is shown that the proposed algorithms converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm that further reduces the storage complexity without much performance degradation in experiments. We evaluate the performance of our algorithms on several real-world benchmarks, where the experimental results show that they significantly outperform existing methods (Section 5). Finally, we discuss related works in Section 6 and conclude our paper in Section 7.
2 Motivation and Applications
To motivate our composition optimization problem (1), we discuss several important machine learning applications where cost functions of the form (1) arise naturally.
Unsupervised sequence classification: Developing algorithms that learn classifiers from unlabeled data could benefit many machine learning systems by saving a huge amount of human labeling cost. In [12, 21], the authors proposed such unsupervised learning algorithms by exploiting the sequential output structure; the developed algorithms are applied to optical character recognition (OCR) and automatic speech recognition (ASR) problems. In these works, the learning algorithms seek to learn a sequence classifier by optimizing the empirical output distribution match (Empirical-ODM) cost, which takes the following form (written in our notation):
$$\min_{\theta}\;\Big\{-\sum_{i=0}^{n_X-1} p_{\mathrm{LM}}(x_i)\,\log\Big(\frac{1}{n_Y}\sum_{j=0}^{n_Y-1} f_\theta(x_i, y_j)\Big)\Big\}, \qquad (3)$$
where $p_{\mathrm{LM}}$ is a known language model (LM) that describes the distribution of the output sequence (e.g., $x_i$ represents different n-grams), and $f_\theta$ is a functional of the sequence classifier to be learned, with $\theta$ being its model parameter vector. The key idea is to learn the classifier so that its predicted output n-gram distribution is close to the prior n-gram distribution $p_{\mathrm{LM}}$ (see [12, 21] for more details). The cost function (3) can be viewed as a special case of (1) by setting $n_{Y_i} = n_Y$, $y_{ij} = y_j$ and $\phi_i(u) = -p_{\mathrm{LM}}(x_i)\log(u)$. Note that the formulation (2) cannot be directly used here, because of the dependency of the function $f_\theta$ on both $x_i$ and $y_j$.
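As a sketch (our own reading of (3), with hypothetical array names `p_lm` and `F`), the cost can be evaluated as follows once the classifier scores are collected:

```python
# Sketch (our notation, not the authors' code) of the Empirical-ODM cost (3).
# p_lm[i] = p_LM(x_i); F[i, j] = f_theta(x_i, y_j), precomputed for illustration.
# F is assumed entrywise positive (e.g., per-sample posteriors), so log is defined.
import numpy as np

def empirical_odm_cost(p_lm: np.ndarray, F: np.ndarray) -> float:
    inner = F.mean(axis=1)             # (1/n_Y) * sum_j f_theta(x_i, y_j) for each i
    return float(-(p_lm * np.log(inner)).sum())
```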
Risk-averse learning: Another application where (1) arises naturally is the risk-averse learning problem, which is common in finance [15, 18, 9, 10, 19, 20]. Let $x_i \in \mathbb{R}^d$ be a vector consisting of the rewards from $d$ assets at the $i$-th instance, where $0 \le i \le n-1$. The objective in risk-averse learning is to find the optimal weights of the $d$ assets so that the average returns are maximized while the risk is minimized. It can be formulated as the following optimization problem:
$$\min_{\theta}\;-\frac{1}{n}\sum_{i=0}^{n-1}\langle x_i, \theta\rangle + \frac{1}{n}\sum_{i=0}^{n-1}\Big(\langle x_i, \theta\rangle - \frac{1}{n}\sum_{j=0}^{n-1}\langle x_j, \theta\rangle\Big)^2, \qquad (4)$$
where $\theta \in \mathbb{R}^d$ denotes the weight vector. The objective function in (4) seeks a tradeoff between the mean (the first term) and the variance (the second term). It can be understood as a special case of (2) (itself a further special case of (1)) by making the following identifications:
$$n_X = n_Y = n, \quad y_i \equiv x_i, \quad f_\theta(y_j) = [\theta^T,\, -\langle y_j, \theta\rangle]^T, \quad \phi_i(u) = (\langle x_i, u_{0:d-1}\rangle + u_d)^2 - \langle x_i, u_{0:d-1}\rangle, \qquad (5)$$
where $u_{0:d-1}$ denotes the subvector constructed from the first $d$ elements of $u$, and $u_d$ denotes the $d$-th element. An alternative yet simpler way of dealing with (4) is to treat the second term in (4) as a special case of (1) by setting
$$n_X = n_{Y_i} = n, \quad y_{ij} \equiv x_j, \quad f_\theta(x_i, y_{ij}) = \langle x_i - y_{ij}, \theta\rangle, \quad \phi_i(u) = u^2, \quad u \in \mathbb{R}. \qquad (6)$$
In addition, we observe that the first term in (4) is in the standard empirical risk minimization form, which can be dealt with in a straightforward manner. This second formulation leads to algorithms with lower complexity due to the lower dimension of the functions: $\ell = 1$ instead of $\ell = d+1$ in the first formulation. Therefore, we will adopt this formulation in our experiment section (Section 5).
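For concreteness, the full objective (4) can be evaluated in a few lines (our own code, not the authors'):

```python
# Sketch (our own) of the mean-variance objective (4): X has shape (n, d),
# theta has shape (d,). The second (variance) term is exactly the sample
# variance of the portfolio returns <x_i, theta>.
import numpy as np

def risk_averse_objective(theta: np.ndarray, X: np.ndarray) -> float:
    returns = X @ theta                       # <x_i, theta> for all i
    return float(-returns.mean() + returns.var())
```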
Other applications: Cost functions of the form (1) also appear in reinforcement learning [5, 2, 3] and other applications [18]. In Appendix D, we demonstrate its application to policy evaluation.
3 Algorithms
3.1 Saddle point formulation
Recall from (1) that there is an empirical average inside each (nonlinear) merit function $\phi_i(\cdot)$, which prevents the direct application of stochastic gradient descent to (1) due to the inherent bias (see Appendix A for more discussions). Nevertheless, we will show that minimizing the original cost function (1) can be transformed into an equivalent saddle-point problem, which brings out all the empirical averages inside $\phi_i(\cdot)$. In what follows, we will use the machinery of convex conjugate functions [14]. For a function $\psi : \mathbb{R}^{\ell} \to \mathbb{R}$, its convex conjugate function $\psi^* : \mathbb{R}^{\ell} \to \mathbb{R}$ is defined as $\psi^*(y) = \sup_{x \in \mathbb{R}^{\ell}}(\langle x, y\rangle - \psi(x))$. Under certain mild conditions on $\psi(x)$ [14], one can also express $\psi(x)$ in terms of its conjugate function: $\psi(x) = \sup_{y \in \mathbb{R}^{\ell}}(\langle x, y\rangle - \psi^*(y))$. Let $\phi_i^*(w_i)$ denote the conjugate function of $\phi_i(u)$. Then, we can express $\phi_i(u)$ as
$$\phi_i(u) = \sup_{w_i \in \mathbb{R}^{\ell}}\big(\langle u, w_i\rangle - \phi_i^*(w_i)\big), \qquad (7)$$
where $w_i$ is the corresponding dual variable. Substituting (7) into the original minimization problem (1), we obtain the equivalent min-max problem:
$$\min_{\theta}\max_{w}\;\Big\{L(\theta, w) + g(\theta) \triangleq \frac{1}{n_X}\sum_{i=0}^{n_X-1}\Big[\Big\langle \frac{1}{n_{Y_i}}\sum_{j=0}^{n_{Y_i}-1} f_\theta(x_i, y_{ij}),\, w_i\Big\rangle - \phi_i^*(w_i)\Big] + g(\theta)\Big\}, \qquad (8)$$
where $w \triangleq \{w_0, \dots, w_{n_X-1}\}$ is the collection of all dual variables. We note that the transformation of the original problem (1) into (8) brings out all the empirical averages that were present inside $\phi_i(\cdot)$. This new formulation allows us to develop the stochastic variance reduced algorithms below.
3.2 Stochastic variance reduced primal-dual algorithm
One common solution for the min-max problem (8) is to alternate between a minimization step (with respect to the primal variable $\theta$) and a maximization step (with respect to the dual variable $w$). However, such an approach generally suffers from high computation complexity, because each minimization/maximization step involves a summation over many components and requires a full pass over all the data samples. The complexity of such a batch algorithm would be prohibitively high when the number of data samples (i.e., $n_X$ and $n_{Y_i}$) is large (e.g., they could be larger than one million or even one billion in applications like unsupervised speech recognition [21]). On the other hand, problem (8) has rich structures that we can exploit to develop more efficient solutions.
To this end, we make the following observations. First, expression (8) implies that when $\theta$ is fixed, the maximization over the dual variable $w$ can be decoupled into a total of $n_X$ individual maximizations over the different $w_i$'s. Second, the objective function in each individual maximization (with respect to $w_i$) contains a finite-sum structure over $j$. Third, by (8), for a fixed $w$, the minimization with respect to the primal variable $\theta$ is also performed over an objective function with a finite-sum structure. Based on these observations, we develop an efficient stochastic variance reduced primal-dual algorithm (named SVRPDA-I). It alternates between (i) a dual step of stochastic variance reduced coordinate ascent and (ii) a primal step of stochastic variance reduced gradient descent. The full algorithm is summarized in Algorithm 1, with its key ideas explained below.
Dual step: stochastic variance reduced coordinate ascent. To exploit the decoupled dual maximization over $w$ in (8), we can randomly sample an index $i$ and update $w_i$ according to:
$$w_i^{(k)} = \arg\min_{w_i}\Big\{-\Big\langle \frac{1}{n_{Y_i}}\sum_{j=0}^{n_{Y_i}-1} f_{\theta^{(k-1)}}(x_i, y_{ij}),\, w_i\Big\rangle + \phi_i^*(w_i) + \frac{1}{2\alpha_w}\|w_i - w_i^{(k-1)}\|^2\Big\}, \qquad (9)$$
while keeping all the other $w_j$'s ($j \neq i$) unchanged, where $\alpha_w$ denotes a step-size. Note that each step of recursion (9) still requires a summation over $n_{Y_i}$ components. To further reduce the complexity, we approximate the sum over $j$ by the variance reduced stochastic estimator defined in (12) (to be discussed in Section 3.3). The dual step in our algorithm is summarized in (13), where we assume that the function $\phi_i^*(w_i)$ is in a simple form so that the argmin can be solved in closed form. Note that we flip the sign of the objective function to change maximization into minimization and apply coordinate descent; we will still refer to the dual step as “coordinate ascent” (instead of descent).
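As a worked instance (our own, not from the paper), consider the quadratic merit function $\phi_i(u) = u^2$ from the risk-averse formulation (6). Its conjugate is $\phi_i^*(w) = \sup_u (uw - u^2) = w^2/4$, and the dual step (13) then has an explicit closed form:

```latex
% Closed-form dual update for phi_i(u) = u^2, so phi_i^*(w) = w^2/4 (scalar case, l = 1).
% Setting the derivative of the objective in (13) to zero:
\[
-\delta_k^w + \frac{w}{2} + \frac{1}{\alpha_w}\bigl(w - w_i^{(k-1)}\bigr) = 0
\quad\Longrightarrow\quad
w_i^{(k)} = \frac{2\bigl(\alpha_w\,\delta_k^w + w_i^{(k-1)}\bigr)}{\alpha_w + 2}.
\]
```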
Primal step: stochastic variance reduced gradient descent. We now consider the minimization in (8) with respect to $\theta$ when $w$ is fixed. The gradient descent step for minimizing $L(\theta, w)$ is given by
$$\theta^{(k)} = \arg\min_{\theta}\Big\{\Big\langle \sum_{i=0}^{n_X-1}\sum_{j=0}^{n_{Y_i}-1} \frac{1}{n_X n_{Y_i}}\, f'_{\theta^{(k-1)}}(x_i, y_{ij})\, w_i^{(k)},\, \theta\Big\rangle + \frac{1}{2\alpha_\theta}\|\theta - \theta^{(k-1)}\|^2\Big\}, \qquad (10)$$
where $\alpha_\theta$ denotes a step-size. It is easy to see that the update equation (10) has high complexity: it requires evaluating and averaging the gradient $f'_\theta(\cdot,\cdot)$ at every data sample. To reduce the complexity, we use the variance reduced gradient estimator defined in (15) to approximate the sums in (10) (to be discussed in Section 3.3). The primal step in our algorithm is summarized in (16) in Algorithm 1.
3.3 Low-complexity stochastic variance reduced estimators
We now proceed to explain the design of the variance reduced gradient estimators in both the dual and the primal updates. The main idea is inspired by the stochastic variance reduced gradient (SVRG) algorithm [7]. Specifically, for a vector-valued function $h(\theta) = \frac{1}{n}\sum_{i=0}^{n-1} h_i(\theta)$, we can construct its SVRG estimator $\delta_k$ at each iteration step $k$ by using the following expression:
$$\delta_k = h_{i_k}(\theta) - h_{i_k}(\tilde\theta) + h(\tilde\theta), \qquad (17)$$
where $i_k$ is a randomly sampled index from $\{0, \dots, n-1\}$, and $\tilde\theta$ is a reference variable that is updated periodically (to be explained below). The first term $h_{i_k}(\theta)$ in (17) is an unbiased estimator of $h(\theta)$ and is generally known as the stochastic gradient when $h(\theta)$ is the gradient of a certain cost function. The last two terms in (17) form a control variate that has zero mean and is negatively correlated with $h_{i_k}(\theta)$, which keeps $\delta_k$ unbiased while significantly reducing its variance. The reference variable $\tilde\theta$ is usually set to be a delayed version of $\theta$: for example, after every $M$ updates of $\theta$, it can be reset to the most recent iterate of $\theta$. Note that there is a trade-off in the choice of $M$: a smaller $M$ further reduces the variance of $\delta_k$, since $\tilde\theta$ stays closer to $\theta$ and the first two terms in (17) cancel more with each other; on the other hand, it also requires more frequent evaluations of the costly batch term $h(\tilde\theta)$, which has a complexity of $O(n)$.
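A minimal sketch of the estimator (17) follows (our own code; `h` and `h_bar_ref` are hypothetical stand-ins for the component gradients and the cached batch gradient):

```python
# Sketch (our own) of the SVRG estimator (17): unbiased for the full average
# because the control variate -h(i, theta_ref) + h_bar_ref has zero mean.
import numpy as np

def svrg_estimator(h, theta, theta_ref, h_bar_ref, n, rng):
    """h(i, x): gradient of the i-th component at x;
    h_bar_ref: full batch gradient (1/n) * sum_i h(i, theta_ref),
    precomputed once per outer loop at the reference point."""
    i = int(rng.integers(n))                      # uniformly sampled index i_k
    return h(i, theta) - h(i, theta_ref) + h_bar_ref
```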
Algorithm 1 SVRPDA-I
1: Inputs: data $\{(x_i, y_{ij}) : 0 \le i < n_X,\ 0 \le j < n_{Y_i}\}$; step-sizes $\alpha_\theta$ and $\alpha_w$; number of inner iterations $M$.
2: Initialization: $\tilde\theta_0 \in \mathbb{R}^d$ and $\tilde w_0 \in \mathbb{R}^{\ell n_X}$.
3: for $s = 1, 2, \dots$ do
4: Set $\tilde\theta = \tilde\theta_{s-1}$, $\theta^{(0)} = \tilde\theta$, $\tilde w = \tilde w_{s-1}$, $w^{(0)} = \tilde w_{s-1}$, and compute the batch quantities (for each $0 \le i < n_X$):
$$U_0 = \sum_{i=0}^{n_X-1}\sum_{j=0}^{n_{Y_i}-1} \frac{f'_{\tilde\theta}(x_i, y_{ij})\, w_i^{(0)}}{n_X n_{Y_i}}, \quad \bar f_i(\tilde\theta) \triangleq \sum_{j=0}^{n_{Y_i}-1} \frac{f_{\tilde\theta}(x_i, y_{ij})}{n_{Y_i}}, \quad \bar f'_i(\tilde\theta) = \sum_{j=0}^{n_{Y_i}-1} \frac{f'_{\tilde\theta}(x_i, y_{ij})}{n_{Y_i}}. \qquad (11)$$
5: for $k = 1$ to $M$ do
6: Randomly sample $i_k \in \{0, \dots, n_X-1\}$ and then $j_k \in \{0, \dots, n_{Y_{i_k}}-1\}$ uniformly.
7: Compute the stochastic variance reduced gradient for the dual update:
$$\delta_k^w = f_{\theta^{(k-1)}}(x_{i_k}, y_{i_k j_k}) - f_{\tilde\theta}(x_{i_k}, y_{i_k j_k}) + \bar f_{i_k}(\tilde\theta). \qquad (12)$$
8: Update the dual variables:
$$w_i^{(k)} = \begin{cases} \arg\min_{w_i}\Big[-\langle \delta_k^w, w_i\rangle + \phi_i^*(w_i) + \frac{1}{2\alpha_w}\|w_i - w_i^{(k-1)}\|^2\Big] & \text{if } i = i_k, \\ w_i^{(k-1)} & \text{if } i \neq i_k. \end{cases} \qquad (13)$$
9: Update $U_k$ (the primal batch gradient at $\tilde\theta$ and $w^{(k)}$) according to the following recursion:
$$U_k = U_{k-1} + \frac{1}{n_X}\,\bar f'_{i_k}(\tilde\theta)\big(w_{i_k}^{(k)} - w_{i_k}^{(k-1)}\big). \qquad (14)$$
10: Randomly sample $i'_k \in \{0, \dots, n_X-1\}$ and then $j'_k \in \{0, \dots, n_{Y_{i'_k}}-1\}$, independent of $i_k$ and $j_k$, and compute the stochastic variance reduced gradient for the primal update:
$$\delta_k^\theta = f'_{\theta^{(k-1)}}(x_{i'_k}, y_{i'_k j'_k})\, w_{i'_k}^{(k)} - f'_{\tilde\theta}(x_{i'_k}, y_{i'_k j'_k})\, w_{i'_k}^{(k)} + U_k. \qquad (15)$$
11: Update the primal variable:
$$\theta^{(k)} = \arg\min_{\theta}\Big[\langle \delta_k^\theta, \theta\rangle + g(\theta) + \frac{1}{2\alpha_\theta}\|\theta - \theta^{(k-1)}\|^2\Big]. \qquad (16)$$
12: end for
13: Option I: Set $\tilde w_s = w^{(M)}$ and $\tilde\theta_s = \theta^{(M)}$.
14: Option II: Set $\tilde w_s = w^{(M)}$ and $\tilde\theta_s = \theta^{(t)}$ for a randomly sampled $t \in \{0, \dots, M-1\}$.
15: end for
16: Output: $\tilde\theta_s$ at the last outer-loop iteration.
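To make the control flow concrete, here is a compact sketch of SVRPDA-I specialized to the quadratic portfolio case (6), with our own illustrative names and arbitrary step sizes (not the authors' implementation); the mean term of (4) is omitted and $g(\theta) = \frac{\mu}{2}\|\theta\|^2$ is assumed:

```python
# Sketch of SVRPDA-I for phi_i(u) = u^2 (so phi_i^*(w) = w^2/4),
# f_theta(x_i, y_ij) = <x_i - x_j, theta>, l = 1, g(theta) = (mu/2)||theta||^2.
import numpy as np

def svrpda1_quadratic(X, n_epochs=50, M=None, a_th=1e-3, a_w=1e-1, mu=1e-4, seed=0):
    n, d = X.shape
    M = M or 4 * n
    rng = np.random.default_rng(seed)
    theta, w = np.zeros(d), np.zeros(n)
    for _ in range(n_epochs):
        th_ref = theta.copy()
        # Batch quantities (11). Here f'_theta(x_i, x_j) = x_i - x_j does not
        # depend on theta, so the cached averages simplify to x_i - mean(X).
        xbar = X.mean(axis=0)
        fbar = (X - xbar) @ th_ref          # \bar f_i(theta_ref), shape (n,)
        fbar_prime = X - xbar               # \bar f'_i(theta_ref), shape (n, d)
        U = fbar_prime.T @ w / n            # U_0: batch primal gradient term
        for _ in range(M):
            i, j = int(rng.integers(n)), int(rng.integers(n))
            # Dual estimator (12); the theta-dependent terms telescope into
            # an inner product with (theta - th_ref).
            dw = (X[i] - X[j]) @ (theta - th_ref) + fbar[i]
            # Dual step (13): closed-form prox for phi^*(w) = w^2/4.
            w_new = 2.0 * (a_w * dw + w[i]) / (a_w + 2.0)
            U += fbar_prime[i] * (w_new - w[i]) / n     # recursion (14)
            w[i] = w_new
            # Primal estimator (15): f' is theta-independent here, so its two
            # sampled terms cancel and the estimator reduces to the batch U_k.
            dth = U
            # Primal step (16) with g(theta) = (mu/2)||theta||^2, solved exactly.
            theta = (theta - a_th * dth) / (1.0 + a_th * mu)
        # Option I: the next reference point is the last inner iterate.
    return theta
```

Note that the cancellation in the primal estimator is specific to $f_\theta$ being linear in $\theta$; for a general $f_\theta$, the two sampled terms of (15) do not cancel, and both the cached averages from (11) and the fresh samples enter the updates.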
Based on (17), we develop two stochastic variance reduced estimators, (12) and (15), to approximate the finite sums in (9) and (10), respectively. The dual gradient estimator $\delta_k^w$ in (12) is constructed in the standard manner using (17), where the reference variable $\tilde\theta$ is a delayed version of $\theta^{(k)}$.⁴ On the other hand, the primal gradient estimator $\delta_k^\theta$ in (15) is constructed using the reference variables $(\tilde\theta, w^{(k)})$; that is, we use the most recent $w^{(k)}$ as the dual reference variable, without any delay. As discussed earlier, such a choice leads to a smaller variance in the stochastic estimator $\delta_k^\theta$, at a potentially higher computation cost (from more frequent evaluation of the batch term). Nevertheless, we are able to show that, with the dual coordinate ascent structure in our algorithm, the batch term $U_k$ in (15), which is the summation in (10) evaluated at $(\tilde\theta, w^{(k)})$, can be computed efficiently. To see this, note that after each dual update step in (13), only one term inside this summation in (10) has changed, namely the one associated with $i = i_k$. Therefore, we can correct $U_k$ for this term by using recursion (14), which only requires an extra $O(d\ell)$ complexity per step (the same complexity as (15)).
Note that SVRPDA-I (Algorithm 1) requires computing and storing all the $\bar f'_i(\tilde\theta)$ in (11), which has $O(n_X d\ell)$ storage complexity and could be expensive in some applications. To avoid this cost, we develop a variant of Algorithm 1, named SVRPDA-II (see Algorithm 1 in the supplementary material), by approximating $\bar f'_{i_k}(\tilde\theta)$ in (14) with $f'_{\tilde\theta}(x_{i_k}, y_{i_k j''_k})$, where $j''_k$ is another randomly sampled index from $\{0, \dots, n_{Y_{i_k}}-1\}$, independent of all other indexes. By doing this, we can significantly reduce the memory requirement from $O(n_X d\ell)$ in SVRPDA-I to $O(d + n_X\ell)$ in SVRPDA-II (see Section 4.2). In addition, the experimental results in Section 5 show that this approximation only causes a slight performance loss compared to the SVRPDA-I algorithm.

⁴As in [7], we also consider Option II, wherein $\tilde\theta$ is randomly chosen from the previous $M$ iterates $\theta^{(k)}$.
4 Theoretical Analysis
4.1 Computation complexity
We now present the convergence analysis of the SVRPDA-I algorithm and derive its complexities in computation and storage. To begin with, we introduce the following assumptions.

Assumption 4.1. The function $g(\theta)$ is $\mu$-strongly convex in $\theta$, and each $\phi_i$ is $1/\gamma$-smooth.

Assumption 4.2. The merit functions $\phi_i(u)$ are Lipschitz with a uniform constant $B_w$: $|\phi_i(u) - \phi_i(u')| \le B_w\|u - u'\|$, $\forall u, u'$; $\forall i = 0, \dots, n_X-1$.

Assumption 4.3. $f_\theta(x_i, y_{ij})$ is $B_\theta$-smooth in $\theta$ and has bounded gradients with constant $B_f$:
$$\|f'_{\theta_1}(x_i, y_{ij}) - f'_{\theta_2}(x_i, y_{ij})\| \le B_\theta\|\theta_1 - \theta_2\|, \quad \|f'_\theta(x_i, y_{ij})\| \le B_f, \quad \forall \theta, \theta_1, \theta_2,\ \forall i, j.$$

Assumption 4.4. For each given $w$ in its domain, the function $L(\theta, w)$ defined in (8) is convex in $\theta$:
$$L(\theta_1, w) - L(\theta_2, w) \ge \langle L'_\theta(\theta_2, w),\, \theta_1 - \theta_2\rangle, \quad \forall \theta_1, \theta_2.$$
The above assumptions are commonly used in existing compositional optimization works [9, 10, 18, 19, 22]. Based on these assumptions, we establish non-asymptotic error bounds for SVRPDA-I (using either Option I or Option II in Algorithm 1). The main results are summarized in the following theorems; their proofs can be found in Appendix E.

Theorem 4.5. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option I) we choose
$$\alpha_\theta = \frac{1}{n_X\mu(64\kappa + 1)}, \quad \alpha_w = \frac{n_X\mu}{\gamma}\,\alpha_\theta, \quad M = \big\lceil 78.8\, n_X\kappa + 1.3\, n_X + 1.3 \big\rceil,$$
where $\lceil x\rceil$ denotes the roundup operation and $\kappa = B_f^2/\gamma\mu + B_w^2 B_\theta^2/\mu^2$, then the Lyapunov function $P_s := \mathbb{E}\|\tilde\theta_s - \theta^*\|^2 + \frac{\gamma}{\mu}\cdot\frac{64\kappa + 3}{64 n_X\kappa + n_X + 1}\,\mathbb{E}\|\tilde w_s - w^*\|^2$ satisfies $P_s \le (3/4)^s P_0$. Furthermore, the overall computational cost (in number of oracle calls⁵) for reaching $P_s \le \epsilon$ is upper bounded by
$$O\big((n_X n_Y + n_X\kappa + n_X)\ln(1/\epsilon)\big), \qquad (18)$$
where, with a slight abuse of notation, $n_Y$ is defined as $n_Y = (n_{Y_0} + \cdots + n_{Y_{n_X-1}})/n_X$.

Theorem 4.6. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option II) we choose
$$\alpha_\theta = \Big(\frac{25 B_f^2}{\gamma} + 10 B_\theta B_w + \frac{80 B_w^2 B_\theta^2}{\mu}\Big)^{-1}, \quad \alpha_w = \frac{\mu}{40 B_f^2}, \quad M = \max\Big(\frac{10}{\alpha_\theta\mu},\ \frac{2 n_X}{\alpha_w\gamma},\ 4 n_X\Big),$$
then $P_s := \mathbb{E}\|\tilde\theta_s - \theta^*\|^2 + \frac{\gamma}{n_X\mu}\,\mathbb{E}\|\tilde w_s - w^*\|^2 \le (5/8)^s P_0$. Furthermore, let $\kappa = \frac{B_f^2}{\gamma\mu} + \frac{B_w^2 B_\theta^2}{\mu^2}$. Then the overall computational cost (in number of oracle calls) for reaching $P_s \le \epsilon$ is upper bounded by
$$O\big((n_X n_Y + n_X\kappa + n_X)\ln(1/\epsilon)\big). \qquad (19)$$
The above theorems show that the Lyapunov function $P_s$ for SVRPDA-I converges to zero at a linear rate when either Option I or II is used. Since $\mathbb{E}\|\tilde\theta_s - \theta^*\|^2 \le P_s$, they imply that the computational cost (in number of oracle calls) for reaching $\mathbb{E}\|\tilde\theta_s - \theta^*\|^2 \le \epsilon$ is also upper bounded by (18) and (19).

⁵One oracle call is defined as querying $f_\theta$, $f'_\theta$, or $\phi_i(u)$ for any $0 \le i < n$ and $u \in \mathbb{R}^{\ell}$.
Comparison with existing composition optimization algorithms. Table 1 summarizes the complexity bounds for our SVRPDA-I algorithm and compares them with existing stochastic composition optimization algorithms. First, to the best of our knowledge, none of the existing methods consider the general objective function (1) as we do. Instead, they consider its special case (2), and even in this special case, our algorithm still has a better (or comparable) complexity bound than the other methods. For example, our bound is better than that of [9] since $\kappa^2 > n_X$ generally holds, and it is better than that of ASCVRG, which does not achieve a linear convergence rate (as no strong convexity is assumed). In addition, our method has better complexity than the C-SAGA algorithm when $n_X = 1$ (regardless of the mini-batch size in C-SAGA), and it is better than C-SAGA for (2) when the mini-batch size is 1.⁶ However, since we have not derived our bound in the mini-batch setting, it is unclear which one is better in that case; this is an interesting topic for future work. One notable fact from Table 1 is that in the special case (2), the complexity of SVRPDA-I is reduced from $O((n_X n_Y + n_X\kappa)\ln(1/\epsilon))$ to $O((n_X + n_Y + n_X\kappa)\ln(1/\epsilon))$. This is because the complexity of evaluating the batch quantities in (11) (Algorithm 1) drops from $O(n_X n_Y)$ in the general case (1) to $O(n_X + n_Y)$ in the special case (2). To see this, note that $f_\theta$ and $n_{Y_i} = n_Y$ become independent of $i$ in (2) and (11), meaning that we can factor $U_0$ in (11) as $U_0 = \frac{1}{n_X n_Y}\sum_{j=0}^{n_Y-1} f'_{\tilde\theta}(y_j)\sum_{i=0}^{n_X-1} w_i^{(0)}$, where the two sums can be evaluated independently with complexity $O(n_Y)$ and $O(n_X)$, respectively. The other two quantities in (11) need only $O(n_Y)$ due to their independence of $i$.

Second, we consider the further special case of (2) with $n_X = 1$, which simplifies the objective function (1) so that there is no empirical average outside $\phi_i(\cdot)$. This takes the form of the unsupervised learning objective function that appears in [12]. Note that our result $O((n_Y + \kappa)\log(1/\epsilon))$ enjoys a linear convergence rate (i.e., log-dependency on $\epsilon$) due to the variance reduction technique. In contrast, the stochastic primal-dual gradient (SPDG) method in [12], which does not use variance reduction, can only attain a sublinear convergence rate (i.e., $O(1/\epsilon)$).
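This factorization is easy to check numerically; here is a tiny sketch (our own, with $\ell = 1$ so each $f'_{\tilde\theta}(y_j)$ is a $d$-vector):

```python
# Sketch (our own) verifying the O(n_X + n_Y) factorization of U_0 in the
# special case (2), where f' depends only on y_j: the double sum separates.
import numpy as np

rng = np.random.default_rng(1)
n_X, n_Y, d = 50, 100, 25
Fp = rng.normal(size=(n_Y, d))           # row j: f'_theta_ref(y_j), with l = 1
w = rng.normal(size=n_X)                 # dual variables w_i^(0)

U0_slow = sum(Fp[j] * w[i] for i in range(n_X) for j in range(n_Y)) / (n_X * n_Y)
U0_fast = Fp.sum(axis=0) * w.sum() / (n_X * n_Y)
assert np.allclose(U0_slow, U0_fast)
```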
Relation to SPDC [23]. Lastly, we consider the case where $n_{Y_i} = 1$ for all $i$ and $f_\theta$ is a linear function of $\theta$. This simplifies (1) to the problem considered in [23], known as regularized empirical risk minimization with linear predictors. It has applications in support vector machines, regularized logistic regression, and more, depending on how the merit function $\phi_i$ is defined. In this special case, the overall complexity of SVRPDA-I becomes (see Appendix F)
$$O\big((n_X + \kappa)\ln(1/\epsilon)\big), \qquad (20)$$
where the condition number is $\kappa = B_f^2/\mu\gamma$. In comparison, the authors in [23] propose a stochastic primal-dual coordinate (SPDC) algorithm for this special case and prove an overall complexity of $O\big(\big(n_X + \sqrt{n_X\kappa}\big)\ln(1/\epsilon)\big)$ to achieve an $\epsilon$-error solution. It is interesting to note that the complexity result in (20) and the complexity result in [23] only differ in their dependency on $\kappa$. This difference is most likely due to the acceleration technique employed in the primal update of the SPDC algorithm. We conjecture that the dependency of SVRPDA-I on the condition number can be further improved using a similar acceleration technique.
4.2 Storage complexity
We now briefly discuss and compare the storage complexities of SVRPDA-I and SVRPDA-II. In Table 2, we report the itemized and total storage complexities of both algorithms, which shows that SVRPDA-II significantly reduces the memory footprint. We also observe that the batch quantities in (11), especially $\bar f'_i(\tilde\theta)$, dominate the storage complexity of SVRPDA-I. On the other hand, the memory usage in SVRPDA-II is more uniformly distributed over the different quantities. Furthermore, although the total complexity of SVRPDA-II, $O(d + n_X\ell)$, grows with the number of samples $n_X$, the $n_X\ell$ term is relatively small because the dimension $\ell$ is small in many practical problems (e.g., $\ell = 1$ in (3) and (4)). This is similar to the storage requirement of SPDC [23] and SAGA [4].
⁶In Appendix D, we also show that our algorithms outperform C-SAGA in experiments.
5 Experiments
In this section, we consider the problem of risk-averse learning for portfolio management optimization [9, 10], introduced in Section 2.⁷ Specifically, we want to solve the optimization problem (4) for a given set of reward vectors $\{x_i \in \mathbb{R}^d : 0 \le i \le n-1\}$. As discussed in Section 2, we adopt the alternative formulation (6) for the second term so that it becomes a special case of our general problem (1). Then, we rewrite the cost function as a min-max problem by following the argument in Section 3.1 and apply our SVRPDA-I and SVRPDA-II algorithms (see Appendix C.1 for the details).
We evaluate our algorithms on 18 real-world US Research Returns datasets obtained from the Center for Research in Security Prices (CRSP) website,⁸ with the same setup as in [10]. In each of these datasets, we have $d = 25$ and $n = 7240$. We compare the performance of our proposed SVRPDA-I and SVRPDA-II algorithms⁹ with the following state-of-the-art algorithms designed to solve composition optimization problems: (i) Compositional-SVRG-1 (Algorithm 2 of [9]), (ii) Compositional-SVRG-2 (Algorithm 3 of [9]), (iii) full batch gradient descent, and (iv) the ASCVRG algorithm [10]. For the compositional-SVRG algorithms, we follow [9] to formulate the problem as a special case of the form (2) by using the identification (5). Note that we cannot use the identification (6) for the compositional-SVRG algorithms, because it would lead to the more general formulation (1) with $f_\theta$ depending on both $x_i$ and $y_{ij} \equiv x_j$. For further details, the reader is referred to [9]. As in previous works, we compare the different algorithms based on the number of oracle calls required to achieve a certain objective gap (the difference between the objective function evaluated at the current iterate and at the optimal parameters). One oracle call is defined as accessing the function $f_\theta$, its derivative $f'_\theta$, or $\phi_i(u)$ for any $0 \le i < n$ and $u \in \mathbb{R}^{\ell}$. The results are shown in Figure 1: our proposed algorithms significantly outperform the baseline methods on all datasets. In addition, we observe that SVRPDA-II also converges at a linear rate, and the performance loss caused by its approximation is small relative to SVRPDA-I.
⁷Additional experiments on the application to policy evaluation in MDPs can be found in Appendix D.
⁸The processed data in the form of a .mat file was obtained from https://github.com/tyDLin/SCVRG
⁹The choice of the hyper-parameters can be found in Appendix C.2, and the code will be released publicly.
6 Related Works
Composition optimization has attracted significant attention in the optimization literature. The stochastic version of problem (2), where the empirical averages are replaced by expectations, is studied in [18]. The authors propose a two-timescale stochastic approximation algorithm known as SCGD and establish sublinear convergence rates. In [19], the authors propose the ASC-PG algorithm, using a proximal gradient method to deal with nonsmooth regularizations. The works most closely related to our setting are [9] and [10], which consider the finite-sum minimization problem (2) (a special case of our general formulation (1)). In [9], the authors propose the compositional-SVRG methods, which combine SCGD with the SVRG technique from [7] and obtain linear convergence rates. In [10], the authors propose the ASCVRG algorithms, which extend to convex but non-smooth objectives. Recently, the authors in [22] proposed a C-SAGA algorithm to solve the special case of (2) with $n_X = 1$, and extended it to general $n_X$. Different from these works, we take an efficient primal-dual approach that fully exploits the dual decomposition and finite-sum structures.
On the other hand, problems similar to (1) (and its stochastic versions) are also examined in specific machine learning problems. [16] considers the minimization of the mean square projected Bellman error (MSPBE) for policy evaluation, which has an expectation inside a quadratic loss. The authors propose a two-timescale stochastic approximation algorithm, GTD2, and establish its asymptotic convergence. [11] and [13] independently showed that GTD2 is a stochastic gradient method for solving an equivalent saddle-point problem. In [2] and [3], the authors derived saddle-point formulations for two other variants of costs (MSBE and MSCBE) in the policy evaluation and control settings, and developed the corresponding stochastic primal-dual algorithms. All these works consider the stochastic version of composition optimization, and the proposed algorithms have sublinear convergence rates. In [5], different variance reduction methods are developed to solve the finite-sum version of MSPBE, achieving a linear rate even without strongly convex regularization. The authors in [6] then extended these linear convergence results to general convex-concave problems with linear coupling and without strong convexity. Besides, problems of the form (1) were also studied in the context of unsupervised learning [12, 21] in the stochastic setting (with expectations in (1)).
Finally, our work is inspired by the stochastic variance reduction techniques in optimization [8, 7, 4, 1, 23], which consider the minimization of a cost that is a finite sum of many component functions. Different versions of variance reduced stochastic gradients are constructed in these works to achieve a linear convergence rate. In particular, our variance reduced stochastic estimators are constructed based on the idea of SVRG [7], with a novel design of the control variates. Our work is also related to the SPDC algorithm [23], which likewise integrates dual coordinate ascent with variance reduced primal gradients. However, our work differs from SPDC in the following aspects. First, we consider the more general composition optimization problem (1), while SPDC focuses on regularized empirical risk minimization with linear predictors, i.e., $n_{Y_i} \equiv 1$ and $f_\theta$ linear in $\theta$. Second, because of the composition structure of the problem, our algorithms also need SVRG in the dual coordinate ascent update, while SPDC does not. Third, the primal update in SPDC is specifically designed for linear predictors; in contrast, our work is not restricted to that setting, thanks to the novel variance reduced gradient.
7 Conclusions and Future Work
We developed a stochastic primal-dual algorithm, SVRPDA-I, to efficiently solve the empirical composition optimization problem. This is achieved by fully exploiting the rich structures inherent in the reformulated min-max problem, including the dual decomposition and the finite-sum structures. It alternates between (i) a dual step of stochastic variance reduced coordinate ascent and (ii) a primal step of stochastic variance reduced gradient descent. In particular, we proposed a novel variance reduced gradient for the primal update, which achieves better variance reduction with low complexity. We derived a non-asymptotic bound for the error sequence and showed that it converges at a linear rate when the problem is strongly convex. Moreover, we also developed an approximate version of the algorithm, named SVRPDA-II, which further reduces the storage complexity. Experimental results on several real-world benchmarks showed that both SVRPDA-I and SVRPDA-II significantly outperform existing techniques on all these tasks, and that the approximation in SVRPDA-II only causes a slight performance loss. Future extensions of our work include the theoretical analysis of SVRPDA-II, the generalization of our algorithms to Bregman divergences, and applying them to large-scale machine learning problems with non-convex cost functions (e.g., unsupervised sequence classification). | 1. What are the strengths and weaknesses of the paper regarding its motivation, algorithmic contribution, and theoretical analysis?
2. How does the paper's approach compare to other state-of-the-art algorithms in terms of theoretical convergence results and practical applicability?
3. Are there any concerns about the novelty of the proposed algorithm, particularly in relation to previous works such as SVRG-type methods?
4. What are the limitations of the experimental results, especially considering the small dataset size and lack of diverse applications?
5. How could the authors improve the paper, and what additional aspects should they consider to make it more comprehensive and impactful? | Review | Review
The author response addressed my doubts and questions to my satisfaction, and hence I raised my score. ------------------------------------------------------------------------------------ The motivation of this paper is good, and the algorithmic contribution is solid, with wide applications in many popular areas of machine learning. However, I find the paper weak in several aspects. First, the discussion of the theoretical results is lacking: how do the theoretical convergence results compare to the related state-of-the-art algorithms, such as the ones in the cited papers [1, 5, 8, 9]? How realistic is the bounded gradient assumption ($B_f$)? (Standard SVRG-type methods do not require bounded gradients.) In addition, the algorithmic idea of combining stochastic variance-reduced gradients with a primal-dual reformulation is not brand new [1, 5], but the discussion of the relevance of these works seems to be lacking. The experiments are limited to portfolio management optimization with very small datasets ($d = 25$, $n < 10000$), which is a bit disappointing. I would expect much larger-scale experiments, ideally with additional applications such as policy evaluation [5] in reinforcement learning, to which the proposed algorithm can also be applied. I would also like to see an experimental comparison with the very recent C-SAGA algorithm (Junyu Zhang and Lin Xiao. A composite randomized incremental gradient method. ICML 2019). In short, I think the current version of this paper can be substantially improved, and I encourage the authors to continue working on it.
NIPS | Title
Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization
Abstract
We consider a generic empirical composition optimization problem, where there are empirical averages present both outside and inside nonlinear loss functions. Such a problem is of interest in various machine learning applications, and cannot be directly solved by standard methods such as stochastic gradient descent. We take a novel approach to solving this problem by reformulating the original minimization objective into an equivalent min-max objective, which brings out all the empirical averages that are originally inside the nonlinear loss functions. We exploit the rich structures of the reformulated problem and develop a stochastic primal-dual algorithm, SVRPDA-I, to solve the problem efficiently. We carry out extensive theoretical analysis of the proposed algorithm, obtaining the convergence rate, the computation complexity and the storage complexity. In particular, the algorithm is shown to converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm, named SVRPDA-II, which further reduces the memory requirement. Finally, we evaluate our proposed algorithms on several real-world benchmarks, and experimental results show that the proposed algorithms significantly outperform existing techniques.
1 Introduction
In this paper, we consider the following regularized empirical composition optimization problem:
min θ
1
nX nX−1∑ i=0 φi ( 1 nYi nYi−1∑ j=0 fθ(xi, yij) ) + g(θ), (1)
where (xi, yij) ∈ Rmx × Rmy is the (i, j)-th data sample, fθ : Rmx × Rmy → R` is a function parameterized by θ ∈ Rd, φi : R` → R+ is a convex merit function, which measures a certain loss of the parametric function fθ, and g(θ) is a µ-strongly convex regularization term.
Problems of the form (1) widely appear in many machine learning applications such as reinforcement learning [5, 3, 2, 13], unsupervised sequence classification [12, 21] and risk-averse learning [15, 18, 9, 10, 19] — see our detailed discussion in Section 2. Note that the cost function (1) has an empirical average (over xi) outside the (nonlinear) merit function φi(·) and an empirical average (over yij) inside the merit function, which makes it different from the empirical risk minimization problems that are common in machine learning [17]. Problem (1) can be understood as a generalized version of the one considered in [9, 10].3 In these prior works, yij and nYi are assumed to be independent of
∗Department of Electrical and Computer Engineering, University of Florida, Gainesville, USA. Email: adithyamdevraj@ufl.edu. The work was done during an internship at Tencent AI Lab, Bellevue, WA. †Tencent AI Lab, Bellevue, WA, USA. Email: jianshuchen@tencent.com. 3In addition to the term in (2), the cost function in [10] also has another convex regularization term.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
i and fθ is only a function of yj so that problem (1) can be reduced to the following special case:
min θ
1
nX nX−1∑ i=0 φi ( 1 nY nY −1∑ j=0 fθ(yj) ) . (2)
Our more general problem formulation (1) encompasses wider applications (see Section 2). Furthermore, different from [2, 19, 18], we focus on the finite sample setting, where we have empirical averages (instead of expectations) in (1). As we shall see below, the finite-sum structures allows us to develop efficient stochastic gradient methods that converges at linear rate.
While problem (1) is important in many machine learning applications, there are several key challenges in solving it efficiently. First, the number of samples (i.e., nX and nYi) could be extremely large: they could be larger than one million or even one billion. Therefore, it is unrealistic to use batch gradient descent algorithm to solve the problem, which requires going over all the data samples at each gradient update step. Moreover, since there is an empirical average inside the nonlinear merit function φi(·), it is not possible to directly apply the classical stochastic gradient descent (SGD) algorithm. This is because sampling from both empirical averages outside and inside φi(·) simultaneously would make the stochastic gradients intrinsically biased (see Appendix A for a discussion).
To address these challenges, in this paper, we first reformulate the original problem (1) into an equivalent saddle point problem (i.e., min-max problem), which brings out all the empirical averages inside φi(·) and exhibits useful dual decomposition and finite-sum structures (Section 3.1). To fully exploit these properties, we develop a stochastic primal-dual algorithm that alternates between a dual step of stochastic variance reduced coordinate ascent and a primal step of stochastic variance reduced gradient descent (Section 3.2). In particular, we develop a novel variance reduced stochastic gradient estimator for the primal step, which achieves better variance reduction with low complexity (Section 3.3). We derive the convergence rate, the finite-time complexity bound, and the storage complexity of our proposed algorithm (Section 4). In particular, it is shown that the proposed algorithms converge at a linear rate when the problem is strongly convex. Moreover, we also develop an approximate version of the algorithm that further reduces the storage complexity without much performance degradation in experiments. We evaluate the performance of our algorithms on several real-world benchmarks, where the experimental results show that they significantly outperform existing methods (Section 5). Finally, we discuss related works in Section 6 and conclude our paper in Section 7.
2 Motivation and Applications
To motivate our composition optimization problem (1), we discuss several important machine learning applications where cost functions of the form (1) arise naturally.
Unsupervised sequence classification: Developing algorithms that can learn classifiers from unlabeled data could benefit many machine learning systems, which could save a huge amount of human labeling costs. In [12, 21], the authors proposed such unsupervised learning algorithms by exploiting the sequential output structures. The developed algorithms are applied to optical character recognition (OCR) problems and automatic speech recognition (ASR) problems. In these works, the learning algorithms seek to learn a sequence classifier by optimizing the empirical output distribution match (Empirical-ODM) cost, which is in the following form (written in our notation):
min θ { − nX−1∑ i=0 pLM(xi) log ( 1 nY nY −1∑ j=0 fθ(xi, yj) )} , (3)
where pLM is a known language model (LM) that describes the distribution of output sequence (e.g., xi represents different n-grams), and fθ is a functional of the sequence classifier to be learned, with θ being its model parameter vector. The key idea is to learn the classifier so that its predicted output n-gram distribution is close to the prior n-gram distribution pLM (see [12, 21] for more details). The cost function (3) can be viewed as a special case of (1) by setting nYi = nY , yij = yj and φi(u) = −pLM (xi) log(u). Note that the formulation (2) cannot be directly used here, because of the dependency of the function fθ on both xi and yj .
Risk-averse learning: Another application where (1) arises naturally is the risk-averse learning problem, which is common in finance [15, 18, 9, 10, 19, 20]. Let xi ∈ Rd be a vector consisting of
the rewards from d assets at the i-th instance, where 0 ≤ i ≤ n − 1. The objective in risk-averse learning is to find the optimal weights of the d assets so that the average returns are maximized while the risk is minimized. It could be formulated as the following optimization problem:
min θ − 1 n n−1∑ i=0 〈xi, θ〉+ 1 n n−1∑ i=0 ( 〈xi, θ〉− 1 n n−1∑ j=0 〈xj , θ〉 )2 , (4)
where θ ∈ Rd denotes the weight vector. The objective function in (4) seeks a tradeoff between the mean (the first term) and the variance (the second term). It can be understood as a special case of (2) (which is a further special case of (1)) by making the following identifications:
nX=nY =n, yi≡xi, fθ(yj)=[θT, −〈yj , θ〉]T, φi(u)=(〈xi, u0:d−1〉+ud)2−〈xi, u0:d−1〉, (5)
where u0:d−1 denotes the subvector constructed from the first d elements of u, and ud denotes the d-th element. An alternative yet simpler way of dealing with (4) is to treat the second term in (4) as a special case of (1) by setting
nX = nYi = n, yij ≡ xj , fθ(xi, yij) = 〈xi − yij , θ〉, φi(u) = u2, u ∈ R. (6)
In addition, we observe that the first term in (4) is in standard empirical risk minimization form, which can be dealt with in a straightforward manner. This second formulation leads to algorithms with lower complexity due to the lower dimension of the functions: ` = 1 instead of ` = d+ 1 in the first formulation. Therefore, we will adopt this formulation in our experiment section (Section 5).
Other applications: Cost functions of the form (1) also appear in reinforcement learning [5, 2, 3] and other applications [18]. In Appendix D, we demonstrate its applications in policy evaluation.
3 Algorithms
3.1 Saddle point formulation
Recall from (1) that there is an empirical average inside each (nonlinear) merit function φi(·), which prevents the direct application of stochastic gradient descent to (1) due to the inherent bias (see Appendix A for more discussions). Nevertheless, we will show that minimizing the original cost function (1) can be transformed into an equivalent saddle point problem, which brings out all the empirical averages inside φi(·). In what follows, we will use the machinery of convex conjugate functions [14]. For a function ψ : R` → R, its convex conjugate function ψ∗ : R` → R is defined as ψ∗(y) = supx∈R`(〈x, y〉−ψ(x)). Under certain mild conditions on ψ(x) [14], one can also express ψ(x) as a functional of its conjugate function: ψ(x) = supy∈R`(〈x, y〉−ψ∗(y)). Let φ∗i (wi) denote the conjugate function of φi(u). Then, we can express φi(u) as
φi(u) = sup wi∈R`
(〈u,wi〉 − φ∗i (wi)), (7)
where wi is the corresponding dual variable. Substituting (7) into the original minimization problem (1), we obtain its equivalent min-max problem as:
min θ max w
{ L(θ, w) + g(θ) , 1
nX nX−1∑ i=0 [〈 1 nYi nYi−1∑ j=0 fθ(xi, yij), wi 〉 − φ∗i (wi) ] + g(θ) } , (8)
where w,{w0, . . . , wnX−1}, is a collection of all dual variables. We note that the transformation of the original problem (1) into (8) brings out all the empirical averages that are present inside φi(·). This new formulation allows us to develop stochastic variance reduced algorithms below.
3.2 Stochastic variance reduced primal-dual algorithm
One common solution for the min-max problem (8) is to alternate between the step of minimization (with respect to the primal variable θ) and the step of maximization (with respect to the dual variable w). However, such an approach generally suffers from high computation complexity because each minimization/maximization step requires a summation over many components and requires a full
pass over all the data samples. The complexity of such a batch algorithm would be prohibitively high when the number of data samples (i.e., nX and nYi ) is large (e.g., they could be larger than one million or even one billion in applications like unsupervised speech recognition [21]). On the other hand, problem (8) indeed has rich structures that we can exploit to develop more efficient solutions.
To this end, we make the following observations. First, expression (8) implies that when θ is fixed, the maximization over the dual variable w can be decoupled into a total of nX individual maximizations over different wi’s. Second, the objective function in each individual maximization (with respect to wi) contains a finite-sum structure over j. Third, by (8), for a fixed w, the minimization with respect to the primal variable θ is also performed over an objective function with a finite-sum structure. Based on these observations, we will develop an efficient stochastic variance reduced primal-dual algorithm (named SVRPDA-I). It alternates between (i) a dual step of stochastic variance reduced coordinate ascent and (ii) a primal step of stochastic variance reduced gradient descent. The full algorithm is summarized in Algorithm 1, with its key ideas explained below.
Dual step: stochastic variance reduced coordinate ascent. To exploit the decoupled dual maximization over w in (8), we can randomly sample an index i, and update wi according to:
w (k) i = argminwi { − 〈 1 nYi nYi−1∑ j=0 fθ(k−1)(xi, yij), wi 〉 + φ∗i (wi) + 1 2αw ‖wi − w(k−1)i ‖ 2 } , (9)
while keeping all other wj’s (j 6= i) unchanged, where αw denotes a step-size. Note that each step of recursion (9) still requires a summation over nYi components. To further reduce the complexity, we approximate the sum over j by a variance reduced stochastic estimator defined in (12) (to be discussed in Section 3.3). The dual step in our algorithm is summarized in (13), where we assume that the function φ∗i (wi) is in a simple form so that the argmin could be solved in closed-form. Note that we flip the sign of the objective function to change maximization to minimization and apply coordinate descent. We will still refer to the dual step as “coordinate ascent” (instead of descent).
Primal step: stochastic variance reduced gradient descent We now consider the minimization in (8) with respect to θ when w is fixed. The gradient descent step for minimizing L(θ, w) is given by
θ(k) = argmin θ {〈 nX−1∑ i=0 nYi−1∑ j=0 1 nXnYi f ′θ(k−1)(xi, yij)w (k) i , θ 〉 + 1 2αθ ‖θ − θ(k−1)‖2 } , (10)
where αθ denotes a step-size. It is easy to see that the update equation (10) has high complexity, it requires evaluating and averaging the gradient f ′θ(·, ·) at every data sample. To reduce the complexity, we use a variance reduced gradient estimator, defined in (15), to approximate the sums in (10) (to be discussed in Section 3.3). The primal step in our algorithm is summarized in (16) in Algorithm 1.
3.3 Low-complexity stochastic variance reduced estimators
We now proceed to explain the design of the variance reduced gradient estimators in both the dual and the primal updates. The main idea is inspired by the stochastic variance reduced gradient (SVRG) algorithm [7]. Specifically, for a vector-valued function h(θ) = 1n ∑n−1 i=0 hi(θ), we can construct its SVRG estimator δk at each iteration step k by using the following expression:
δk = hik(θ)− hik(θ̃) + h(θ̃), (17)
where ik is a randomly sampled index from {0, . . . , n − 1}, and θ̃ is a reference variable that is updated periodically (to be explained below). The first term hi(θ) in (17) is an unbiased estimator of h(θ) and is generally known as the stochastic gradient when h(θ) is the gradient of a certain cost function. The last two terms in (17) construct a control variate that has zero mean and is negatively correlated with hi(θ), which keeps δk unbiased while significantly reducing its variance. The reference variable θ̃ is usually set to be a delayed version of θ: for example, after every M updates of θ, it can be reset to the most recent iterate of θ. Note that there is a trade-off in the choice of M : a smaller M further reduces the variance of δk since θ̃ will be closer to θ and the first two terms in (17) cancel more with each other; on the other hand, it will also require more frequent evaluations of the costly batch term h(θ̃), which has a complexity of O(n).
Algorithm 1 SVRPDA-I 1: Inputs: data {(xi, yij) : 0≤ i<nX , 0≤j<nYi}; step-sizes αθ and αw; # inner iterations M . 2: Initialization: θ̃0 ∈ Rd and w̃0 ∈ R`nX . 3: for s = 1, 2, . . . do 4: Set θ̃= θ̃s−1, θ(0)= θ̃, w̃= w̃s−1, w(0)= w̃s−1, and compute the batch quantities (for each 0≤ i<nX ):
U0 = nX−1∑ i=0 nYi−1∑ j=0 f ′ θ̃ (xi, yij)w (0) i nXnYi , f i(θ̃) , nYi−1∑ j=0 fθ̃(xi, yij) nYi , f ′ i(θ̃) = nYi−1∑ j=0 f ′ θ̃ (xi, yij) nYi . (11)
5: for k = 1 to M do 6: Randomly sample ik ∈ {0, . . . , nX−1} and then jk ∈ {0, . . . , nYik−1} at uniform. 7: Compute the stochastic variance reduced gradient for dual update:
δwk = fθ(k−1)(xik , yikjk )− fθ̃(xik , yikjk ) + f ik (θ̃). (12)
8: Update the dual variables:
w (k) i = argminwi [ − 〈δwk , wi〉+ φ∗i (wi) + 1 2αw ‖wi − w(k−1)i ‖ 2 ] if i = ik
w (k−1) i if i 6= ik
. (13)
9: Update Uk (primal batch gradient at θ̃ and w(k)) according to the following recursion:
Uk = Uk−1 + 1 nX f ′ ik (θ̃) ( w (k) ik − w(k−1)ik ) . (14)
10: Randomly sample i′k ∈ {0, . . . , nX − 1} and then j′k ∈ {0, . . . , nYi′ k − 1}, independent of ik and jk,
and compute the stochastic variance reduced gradient for primal update:
δθk = f ′ θ(k−1)(xi′k , yi ′ k j′ k )w (k) i′ k − f ′θ̃(xi′k , yi′kj′k )w (k) i′ k + Uk. (15)
11: Update the primal variable:
θ(k) = argmin θ
[ 〈δθk, θ〉+ g(θ) + 1
2αθ ‖θ − θ(k−1)‖2
] . (16)
12: end for 13: Option I: Set w̃s = w(M) and θ̃s = θ(M). 14: Option II: Set w̃s = w(M) and θ̃s = θ(t) for randomly sampled t ∈ {0, . . . ,M−1}. 15: end for 16: Output: θ̃s at the last outer-loop iteration.
Based on (17), we develop two stochastic variance reduced estimators, (12) and (15), to approximate the finite-sums in (9) and (10), respectively. The dual gradient estimator δwk in (12) is constructed in a standard manner using (17), where the reference variable θ̃ is a delayed version of θ(k)4. On the other hand, the primal gradient estimator δθk in (15) is constructed by using reference variables (θ̃, w
(k)); that is, we uses the most recent w(k) as the dual reference variable, without any delay. As discussed earlier, such a choice leads to a smaller variance in the stochastic estimator δkθ at a potentially higher computation cost (from more frequent evaluation of the batch term). Nevertheless, we are able to show that, with the dual coordinate ascent structure in our algorithm, the batch term Uk in (15), which is the summation in (10) evaluated at (θ̃, w(k)), can be computed efficiently. To see this, note that, after each dual update step in (13), only one term inside this summation in (10), has been changed, i.e., the one associated with i = ik. Therefore, we can correct Uk for this term by using recursion (14), which only requires an extra O(d`)-complexity per step (same complexity as (15)).
Note that SVRPDA-I (Algorithm 1) requires to compute and store all the f ′ i(θ̃) in (11), which is O(nXd`)-complexity in storage and could be expensive in some applications. To avoid the cost, we develop a variant of Algorithm 1, named as SVRPDA-II (see Algorithm 1 in the supplementary material), by approximating f ik(θ̃) in (14) with f ′ θ̃ (xik , yikj′′k ), where j ′′ k is another randomly sampled index from {0, . . . , nYi − 1}, independent of all other indexes. By doing this, we can significantly
4As in [7], we also consider Option II wherein θ̃ is randomly chosen from the previous M θ(k)’s.
reduce the memory requirement from O(nXd`) in SVRPDA-I to O(d+ nX`) in SVRPDA-II (see Section 4.2). In addition, experimental results in Section 5 will show that such an approximation only cause slight performance loss compared to that of SVRPDA-I algorithm.
4 Theoretical Analysis
4.1 Computation complexity
We now perform convergence analysis for the SVRPDA-I algorithm and also derive their complexities in computation and storage. To begin with, we first introduce the following assumptions. Assumption 4.1. The function g(θ) is µ-strongly convex in θ, and each φi is 1/γ-smooth. Assumption 4.2. The merit functions φi(u) are Lipschitz with a uniform constant Bw: |φi(u)− φi(u′)| ≤ Bw‖u− u′‖, ∀u, u′; ∀i = 0, . . . , nX − 1. Assumption 4.3. fθ(xi, yij) is Bθ-smooth in θ, and has bounded gradients with constant Bf :
‖f ′θ1(xi, yij)− f ′ θ2(xi, yij)‖ ≤ Bθ‖θ1 − θ2‖, ‖f ′ θ(xi, yij)‖ ≤ Bf , ∀θ, θ1, θ2, ∀i, j.
Assumption 4.4. For each given w in its domain, the function L(θ, w) defined in (8) is convex in θ: L(θ1, w)− L(θ2, w) ≥ 〈L′θ(θ2, w), θ1 − θ2〉, ∀θ1, θ2.
The above assumptions are commonly used in existing compositional optimization works [9, 10, 18, 19, 22]. Based on these assumptions, we establish the non-asymptotic error bounds for SVRPDAI (using either Option I or Option II in Algorithm 1). The main results are summarized in the following theorems, and their proofs can be found in Appendix E. Theorem 4.5. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option I) we choose
αθ = 1
nXµ(64κ+ 1) , αw =
nXµ
γ αθ, M =
⌈ 78.8nXκ+1.3nX+1.3 ⌉ where dxe denotes the roundup operation and κ = B2f/γµ+B2wB2θ/µ2, then the Lyapunov function Ps := E‖θ̃s − θ∗‖2 + γµ · 64κ+3 64nXκ+nX+1
E‖w̃s − w∗‖2 satisfies Ps ≤ (3/4)sP0. Furthermore, the overall computational cost (in number of oracle calls5) for reaching Ps ≤ is upper bounded by
O ( (nXnY + nXκ+ nX) ln(1/ ) ) . (18)
where, with a slight abuse of notation, nY is defined as nY = (nY0 + · · ·+ nYnX−1)/nX . Theorem 4.6. Suppose Assumptions 4.1–4.4 hold. If in Algorithm 1 (with Option II) we choose
αθ = (25B2f
γ +10BθBw+
80B2wB 2 θ
µ
)−1 , αw = µ
40B2f , M = max
( 10
αθµ , 2nX αwγ , 4nX
) ,
then Ps := E‖θ̃s−θ∗‖2+ γnXµE‖w̃s−w ∗‖2 ≤ (5/8)sP0. Furthermore, let κ = B2f γµ + B2wB 2 θ
µ2 . Then, the overall computational cost (in number of oracle calls) for reaching Ps ≤ is upper bounded by
O ( (nXnY + nXκ+ nX) ln(1/ ) ) . (19)
The above theorems show that the Lyapunov function Ps for SVRPDA-I converges to zero at a linear rate when either Option I or II is used. Since E‖θ̃s − θ∗‖2 ≤ Ps, they imply that the computational cost (in number of oracle calls) for reaching E‖θ̃s− θ∗‖2 ≤ is also upper bounded by (18) and (19).
5One oracle call is defined as querying fθ , f ′θ , or φi(u) for any 0 ≤ i < n and u ∈ R`.
Comparison with existing composition optimization algorithms Table 1 summarizes the complexity bounds for our SVRPDA-I algorithm and compares them with existing stochastic composition optimization algorithms. First, to our best knowledge, none of the existing methods consider the general objective function (1) as we did. Instead, they consider its special case (2), and even in this special case, our algorithm still has better (or comparable) complexity bound than other methods. For example, our bound is better than that of [9] since κ2 > nX generally holds, and it is better than that of ASCVRG, which does not achieve linear convergence rate (as no strong convexity is assumed). In addition, our method has better complexity than C-SAGA algorithm when nX = 1 (regardless of mini-batch size in C-SAGA), and it is better than C-SAGA for (2) when the mini-batch size is 1.6 However, since we have not derived our bound for mini-batch setting, it is unclear which one is better in this case, and is an interesting topic for future work. One notable fact from Table 1 is that in this special case (2), the complexity of SVRPDA-I is reduced from O((nXnY +nXκ) ln 1 ) to O((nX+nY +nXκ) ln 1 ). This is because the complexity for evaluating the batch quantities in (11) (Algorithm 1) can be reduced from O(nXnY ) in the general case (1) to O(nX + nY ) in the special case (2). To see this, note that fθ and nYi = nY become independent of i in (2) and (11), meaning that we can factor U0 in (11) as U0 = 1nXnY ∑nY −1 j=0 f ′ θ̃ (yj) ∑nX i=0 w (0) i , where the two sums can be evaluated independently with complexity O(nY ) and O(nX), respectively. The other two quantities in (11) need only O(nY ) due to their independence of i. Second, we consider the further special case of (2) with nX = 1, which simplifies the objective function (1) so that there is no empirical average outside φi(·). This takes the form of the unsupervised learning objective function that appears in [12]. Note that our results O((nY +κ) log 1 ) enjoys a linear convergence rate (i.e., log-dependency on ) due to the variance reduction technique. In contrast, stochastic primal-dual gradient (SPDG) method in [12], which does not use variance reduction, can only have sublinear convergence rate (i.e., O( 1 )).
Relation to SPDC [23] Lastly, we consider the case where nYi = 1 for all 1 ≤ i ≤ nX and fθ is a linear function of θ. This simplifies (1) to the problem considered in [23], known as regularized empirical risk minimization with linear predictors. It has applications in support vector machines, regularized logistic regression, and more, depending on how the merit function φi is defined. In this special case, the overall complexity of SVRPDA-I becomes (see Appendix F)
O((nX + κ) ln(1/ε)), (20)
where the condition number is κ = B²f/(µγ). In comparison, the authors in [23] propose a stochastic primal-dual coordinate (SPDC) algorithm for this special case and prove an overall complexity of O((nX + √(nXκ)) ln(1/ε)) to achieve an ε-error solution. It is interesting to note that the complexity result in (20) and the complexity result in [23] differ only in their dependency on κ. This difference is most likely due to the acceleration technique employed in the primal update of the SPDC algorithm. We conjecture that the dependency of SVRPDA-I on the condition number can be similarly improved using such an acceleration technique.
4.2 Storage complexity
We now briefly discuss and compare the storage complexities of both SVRPDA-I and SVRPDA-II. In Table 2, we report the itemized and total storage complexities for both algorithms, which shows that SVRPDA-II significantly reduces the memory footprint. We also observe that the batch quantities in (11), especially f′i(θ̃), dominate the storage complexity in SVRPDA-I. On the other hand, the memory usage in SVRPDA-II is more uniformly distributed over different quantities. Furthermore, although the total complexity of SVRPDA-II, O(d + nXℓ), grows with the number of samples nX, the nXℓ term is relatively small because the dimension ℓ is small in many practical problems (e.g., ℓ = 1 in (3) and (4)). This is similar to the storage requirement in SPDC [23] and SAGA [4].
⁶ In Appendix D, we also show that our algorithms outperform C-SAGA in experiments.
5 Experiments
In this section we consider the problem of risk-averse learning for portfolio management optimization [9, 10], introduced in Section 2.⁷ Specifically, we want to solve the optimization problem (4) for a given set of reward vectors {xi ∈ Rd : 0 ≤ i ≤ n − 1}. As we discussed in Section 2, we adopt the alternative formulation (6) for the second term so that it becomes a special case of our general problem (1). We then rewrite the cost function as a min-max problem by following the argument in Section 3.1 and apply our SVRPDA-I and SVRPDA-II algorithms (see Appendix C.1 for the details).
We evaluate our algorithms on 18 real-world US Research Returns datasets obtained from the Center for Research in Security Prices (CRSP) website,⁸ with the same setup as in [10]. In each of these datasets, we have d = 25 and n = 7240. We compare the performance of our proposed SVRPDA-I and SVRPDA-II algorithms⁹ with the following state-of-the-art algorithms designed to solve composition optimization problems: (i) Compositional-SVRG-1 (Algorithm 2 of [9]), (ii) Compositional-SVRG-2 (Algorithm 3 of [9]), (iii) full-batch gradient descent, and (iv) the ASCVRG algorithm [10]. For the compositional-SVRG algorithms, we follow [9] to formulate the problem as a special case of the form (2) by using the identification (5). Note that we cannot use the identification (6) for the compositional-SVRG algorithms because it would lead to the more general formulation (1) with fθ depending on both xi and yij ≡ xj. For further details, the reader is referred to [9]. As in previous works, we compare different algorithms based on the number of oracle calls required to achieve a certain objective gap (the difference between the objective function evaluated at the current iterate and at the optimal parameters). One oracle call is defined as accessing the function fθ, its derivative f′θ, or φi(u) for any 0 ≤ i < n and u ∈ Rℓ. The results, shown in Figure 1, demonstrate that our proposed algorithms significantly outperform the baseline methods on all datasets. In addition, we observe that SVRPDA-II also converges at a linear rate, and that the performance loss caused by its approximation is relatively small compared to SVRPDA-I.
⁷ Additional experiments on the application to policy evaluation in MDPs can be found in Appendix D.
⁸ The processed data in the form of a .mat file was obtained from https://github.com/tyDLin/SCVRG
⁹ The choice of the hyper-parameters can be found in Appendix C.2, and the code will be released publicly.
6 Related Works
Composition optimization has attracted significant attention in the optimization literature. The stochastic version of problem (2), where the empirical averages are replaced by expectations, is studied in [18]. The authors propose a two-timescale stochastic approximation algorithm known as SCGD and establish sublinear convergence rates. In [19], the authors propose the ASC-PG algorithm, using a proximal gradient method to deal with nonsmooth regularizations. The works most closely related to our setting are [9] and [10], which consider the finite-sum minimization problem (2) (a special case of our general formulation (1)). In [9], the authors propose the compositional-SVRG methods, which combine SCGD with the SVRG technique from [7] and obtain linear convergence rates. In [10], the authors propose the ASCVRG algorithms, which extend to convex but non-smooth objectives. Recently, the authors in [22] proposed a C-SAGA algorithm to solve the special case of (2) with nX = 1, and extended it to general nX. Different from these works, we take an efficient primal-dual approach that fully exploits the dual decomposition and the finite-sum structures.
On the other hand, problems similar to (1) (and its stochastic versions) have also been examined in specific machine learning problems. [16] considers the minimization of the mean squared projected Bellman error (MSPBE) for policy evaluation, which has an expectation inside a quadratic loss. The authors propose a two-timescale stochastic approximation algorithm, GTD2, and establish its asymptotic convergence. [11] and [13] independently showed that GTD2 is a stochastic gradient method for solving an equivalent saddle-point problem. In [2] and [3], the authors derived saddle-point formulations for two other variants of costs (MSBE and MSCBE) in the policy evaluation and control settings, and developed corresponding stochastic primal-dual algorithms. All these works consider the stochastic version of composition optimization, and the proposed algorithms have sublinear convergence rates. In [5], different variance reduction methods are developed to solve the finite-sum version of MSPBE, achieving a linear rate even without strongly convex regularization. The authors in [6] then extended this linear convergence result to general convex-concave problems with linear coupling and without strong convexity. In addition, problems of the form (1) were also studied in the context of unsupervised learning [12, 21] in the stochastic setting (with expectations in (1)).
Finally, our work is inspired by stochastic variance reduction techniques in optimization [8, 7, 4, 1, 23], which consider the minimization of a cost that is a finite sum of many component functions. Different versions of variance-reduced stochastic gradients are constructed in these works to achieve a linear convergence rate. In particular, our variance-reduced stochastic estimators are constructed based on the idea of SVRG [7] with a novel design of the control variates. Our work is also related to the SPDC algorithm [23], which likewise integrates dual coordinate ascent with variance-reduced primal gradients. However, our work differs from SPDC in the following aspects. First, we consider the more general composition optimization problem (1), while SPDC focuses on regularized empirical risk minimization with linear predictors, i.e., nYi ≡ 1 and fθ linear in θ. Second, because of the composition structure in the problem, our algorithms also need SVRG in the dual coordinate ascent update, while SPDC does not. Third, the primal update in SPDC is specifically designed for linear predictors; in contrast, our work is not restricted in that way, thanks to a novel variance-reduced gradient.
7 Conclusions and Future Work
We developed a stochastic primal-dual algorithm, SVRPDA-I, to efficiently solve the empirical composition optimization problem. This is achieved by fully exploiting the rich structure inherent in the reformulated min-max problem, including the dual decomposition and the finite-sum structures. The algorithm alternates between (i) a dual step of stochastic variance-reduced coordinate ascent and (ii) a primal step of stochastic variance-reduced gradient descent. In particular, we proposed a novel variance-reduced gradient for the primal update, which achieves better variance reduction with low complexity. We derived a non-asymptotic bound for the error sequence and showed that it converges at a linear rate when the problem is strongly convex. Moreover, we also developed an approximate version of the algorithm, named SVRPDA-II, which further reduces the storage complexity. Experimental results on several real-world benchmarks showed that both SVRPDA-I and SVRPDA-II significantly outperform existing techniques on all these tasks, and that the approximation in SVRPDA-II causes only a slight performance loss. Future extensions of our work include the theoretical analysis of SVRPDA-II, the generalization of our algorithms to Bregman divergences, and applications to large-scale machine learning problems with non-convex cost functions (e.g., unsupervised sequence classification). | 1. How does the reviewer assess the contribution and efficiency of the proposed algorithm?
2. What concerns did the reviewer have, and how were they addressed by the authors?
3. What is the main challenge the paper tackles, and how does the proposed algorithm fare against other algorithms in terms of theoretical complexity and empirical performance?
4. What questions does the reviewer raise regarding the gap between the proposed algorithm's theoretical complexity and another algorithm's lower theoretical complexity for large n_X, n_Y, and kappa^3?
5. What issue does the reviewer point out regarding strong convexity in the risk-averse learning example, and how does it relate to Assumption 4.1 required in the analysis?
6. Can you provide any additional information or context regarding the related reference mentioned by the reviewer? | Review | Review
The authors have satisfactorily answered my concerns and I am happy to raise my score. The complexity comparison table is interesting and should be included in the final version.

=========

This paper studies a variance-reduced primal-dual gradient algorithm for solving strongly convex composite optimization problems (with a finite-sum structure). The algorithm is shown to enjoy a linear rate of convergence, both theoretically and empirically, and the variance reduction structure allows for efficient implementation. The paper also provides a few convincing experiments on real datasets compared to state-of-the-art algorithms. Overall, the paper has tackled a challenging problem with an efficient algorithm, and the proof appears to be correct. Here are a few comments from the reviewer:

1. The complexity bound in Theorems 1, 2. The theoretical convergence rate of the algorithms grows with the order of O(n_X (n_Y + kappa) log(1/eps)) - since both n_X and n_Y are large, this seems undesirable. In particular, since the epoch size M has to be chosen at the same order as O(n_X), it seems that after 1 epoch, the algorithm would only improve its optimality by a factor of the order of (1 - O(1/n_Y)). May I know if this is the case? In light of the above comparison, it seems that the algorithm in [8] (that is also compared in the paper) has a lower theoretical complexity for large n_Y, n_X, i.e., the latter only has a complexity of O((n_X + n_Y + kappa^3) log(1/eps)). Even though the proposed algorithm is demonstrated to be faster in the experiments, explaining this gap from an analytical lens is also important.

2. Strong convexity in the risk-averse learning example. Risk-averse learning is given as an example in the paper and the numerical experiments. Yet it seems that the problem itself does not satisfy Assumption 4.1, which is required in the analysis. From (4) and the discussion that follows, we know that (4) is a special case of (2), where the latter is a special case of (1) with *g(theta)=0*. However, Assumption 4.1 requires g(theta) to be strongly convex, which is not the case here.

3. Related reference. The linear convergence rate result proven in this paper seems to be related to: S. Du, W. Hu, "Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity", AISTATS 2019, which studies the linear convergence of a variance-reduced primal-dual algorithm with only a non-strongly-convex + strongly-concave structure, whose primal-dual variables are linearly coupled. This is similar to a special case of the setting of (8), where (8) has a nonlinear coupling.
NIPS | Title
Online Meta-Learning via Learning with Layer-Distributed Memory
Abstract
We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying meta-learning – often cast as a bi-level optimization problem – to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memory-based meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label – a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
1 Introduction
Meta-learning or learning-to-learn is a paradigm that enables models to generalize to a distribution of tasks rather than specialize to just one task [1, 2]. When encountering examples from a new task, we would like the model to adapt to the new task after seeing just a few samples. This is commonly achieved via episodic training of deep neural networks, where, in each episode, the network is exposed to a variety of inputs from the same distribution [3, 4], and the distribution shifts over episodes. The ability of deep networks to adapt to a new task within just a few samples or iterations is central to the application of meta-learning methods in few-shot and online learning scenarios [5, 6].
A recent surge of interest directed towards meta-learning using neural networks has spurred development of a variety of methods [7–9]. In a standard episodic training framework, a network must adapt to a sampled task (or collection of tasks) and incurs a generalization loss for that task (or collection); this generalization loss is backpropagated to update the network weights. Methods differ in the underlying architecture and mechanisms they use to support adaptation. Strategies include using gradient descent in an inner loop, storing and updating prototypes, parameterizing update rules by another neural network, and employing neural memory [3, 10–12]. Section 2 provides an overview.
We focus on memory-based meta-learning, and specifically investigate the organization of neural memory for meta-learning. Motivating this focus is the generality and flexibility of memory-based approaches. Relying on memory for adaptation allows one to cast meta-learning as merely a learning problem using a straightforward loss formulation (viewing entire episodes as examples) and standard optimization techniques. The actual burden of adaptation becomes an implicit responsibility of the memory subsystem: the network must learn to use its persistent memory in a manner that facilitates task adaptation. This contrasts with explicit adaptation mechanisms such as stored prototypes.
In this implicit adaptation setting, memory architecture plays a crucial role in determining what kind of adaptation can be learned. We experimentally evaluate the effectiveness of alternative neural memory architectures for meta-learning and observe particular advantages to distributing memory throughout a network. More specifically, we view the generic LSTM update, Wx + W′ht−1, as adaptation induced by hidden states in activation space (see Figure 1). By distributing LSTM memory cells across the depth of the network, each layer is tasked with generating hidden states that are useful for adaptation. Such a memory organization is compatible with many standard networks, including CNNs, and can be achieved by merely swapping LSTM memory cells in place of existing filters.
Our simple approach also contrasts with several existing memory-based meta-learning methods used in both generative and classification tasks [13–17]. These methods view memory as a means to store and retrieve useful inductive biases for task adaptation, and hence focus on designing better read and write protocols. They typically have a feature extractor that feeds into a memory network that performs adaptation, whereas our architecture makes no such distinction between stages.
We test the efficacy of network architectures with distributed memory cells on online few-shot and continual learning tasks as in Santoro et al. [13], Ren et al. [18] and Javed and White [6]. The online setting is challenging for two reasons: 1) It is empirically observed that networks are not well suited for training/adaptation with a batch size of one [19]; 2) In this setting the model has to adapt to one image at a time step, thus having to deal with a prolonged adaptation phase. For these reasons, we see these tasks as suitable for evaluating the adaptation capabilities of the hidden states generated by the network.
We empirically observe that our method outperforms strong gradient-based and prototypical baselines, delineating the efficacy of the local adaptation rule learnt by each layer. Particularly important is the distributed nature of our memory, which allows every network layer to adapt when provided with label information; in comparison, restricting adaptability to only later network layers delivers far less compelling performance. These results suggest that co-design of memory architecture and meta-training strategies should be a primary consideration in the ongoing development of memory-based meta-learning. We further test our model in a harder online few-shot learning scenario, wherein the corresponding label to a sample arrives after a long delay [20]. Our method adapts seamlessly, without requiring any changes to the model, while, in this setting, other adaptation strategies struggle. These results highlight promising directions for advancing and simplifying meta-learning by relying upon distributed memory for adaptation.
2 Related Work
Early work on meta-learning introduces many relevant concepts. Schmidhuber [21] proposes using task-specific weights, called fast weights, and weights that are adapted across tasks, called slow weights. Bengio et al. [2] update the network via a learning rule which is parameterized by another neural network. Thrun [22] presents meta-learning in a life-long scenario, where the algorithm accrues information from past experiences to adapt effectively for the task at hand. Hochreiter et al. [23] train a memory network to learn its own adaptation rule via just its recurrent states. These high-level concepts can be seen in more recent methods. We group current meta-learning methods based on the nature of the adaptation strategy and discuss them below.
Gradient-based Adaptation Methods. Methods that adapt via gradients constitute a prominent class of meta-learning algorithms [9]. Model-agnostic meta-learning (MAML) [4] learns an initialization that can efficiently be adapted by gradient descent for a new task. Finn et al. [24] focus on learning a network that can use experience from previously seen tasks for current task adaptation. They adapt to the current task by using a network that is MAML pre-trained on the samples from the previous task. Nagabandi et al. [25], Caccia et al. [26] perform online adaptation under non-stationary distributions, either by using a mixture model or by spawning a MAML pre-trained network when the
input distribution changes. Javed and White [6], Beaulieu et al. [27] employ a bi-level optimization routine similar to MAML, except the outer loop loss is catastrophic forgetting. They thereby learn representations that are robust to forgetting and accelerate future learning under online updates.
Memory and Gradient-based Adaptation. Andrychowicz et al. [28], Ravi and Larochelle [10] learn an update rule for network weights by transforming gradients via a LSTM, which outperforms human-designed and fixed SGD update rules. Munkhdalai and Yu [29] learn a transform that maps gradients to fast (task specific) weights, which are stored and retrieved via attention during evaluation. They update slow weights (across task weights) at the end of each task.
Prototypical Methods. These methods learn an encoder which projects training data to a metric space, and obtain class-wise prototypes via averaging representations within the same class. Following this, test data is mapped to the same metric space, wherein classification is achieved via a simple rule (e.g., nearest neighbor prototype based on either euclidean distance or cosine similarity) [5, 30, 31]. These methods are naturally amenable for online learning as class-wise prototypes can be updated in an online manner as shown by Ren et al. [18].
Memory-based Adaptation. Santoro et al. [13] design efficient read and write protocols for a Neural Turing Machine [32] for the purposes of online few-shot learning. Rae et al. [33] design sparse read and write operations, thereby making them scalable in both time and space. Ramalho and Garnelo [11] use logits generated by the model to decide if a certain sample is written to neural memory. Mishra et al. [7] employ an attention-based mechanism to perform adaptation, and use a CNN to generate features for the attention mechanism. Their model requires explicitly storing samples across all time steps, thereby violating the online learning assumption of being able to access each sample only once. All of these methods mainly focus on designing better memory modules, either by using more recent attention mechanisms or by designing better read and write rules for neural memory. They typically use a CNN which is not adapted for the current task. Our approach differs from these methods in that we study efficient organization of memory for both online few-shot learning and meta-learning more generally, and show that, as a consequence of our distributed memory organization, the entire network is capable of effective adaptation when provided with relevant feedback.
Kirsch and Schmidhuber [34] introduce an interesting form of weight sharing wherein LSTM cells (with tied weights) are distributed throughout the width and depth of the network; however, each position has its own hidden state. Further, they have backward connections from the later layers to the earlier layers, enabling the network to implement its own learning algorithm or clone a human-designed learning algorithm such as backprop. Both our model and theirs implement an adaptation strategy purely using the recurrent states. The difference, however, is in the nature of the adaptation strategy implemented in the recurrent states. Similar to conventional learning algorithms, their backward connections help propagate error from the last layer to the earlier layers. In our architecture, the feedback signal is presented as another input, propagated from the first layer to the last layer.
In addition to being used in classification settings, Guez et al. [35] employ a memory-based meta-learning approach to perform adaptation in reinforcement learning tasks, indicating the generality of using memory as a means for adaptation.
Few-shot Semantic Segmentation. Few-shot segmentation methods commonly rely on using prototypes [36, 37], though recent approaches include gradient-based methods analogous to MAML [38]. The methods that use neural memory typically employ it in final network stages to fuse features of different formats for efficient segmentation: Li et al. [39] use ConvLSTMs [40] to fuse features from different stages of the network; Valipour et al. [41] fuse spatio-temporal features while segmenting videos; Hu et al. [42] use a ConvLSTM to fuse features of the query with the features of the support set; Azad et al. [43] use a bidirectional ConvLSTM to fuse segmentations derived from multiple scale-space representations. We differ from these works in the organization of, use of, and information provided to the memory module: 1) memory is distributed across the network as the sole driver of adaptation; 2) label information is provided to assist with adaptation.
Meta-learning Benchmarks. Caccia et al. [26] present benchmarks that measure the ability of a model to adapt to a new task, using the inductive biases that it has acquired over solving previously seen tasks. More specifically, the benchmark presents an online non-stationary stream of tasks, and the model’s ability to adapt to a new task at each time step is evaluated. Note that they do not measure
the model’s ability to remember earlier tasks; they only want the model to adapt well on a newly presented task.
Antoniou et al. [44] present benchmarks for continual few-shot learning. The network is presented a number of few-shot tasks, one after the other, and is then expected to generalize even to the previously seen tasks. This is a challenging and interesting setup, in that the network has to show robustness to catastrophic forgetting while learning from limited data. However, we are interested in evaluating the online adaptation ability of models, while Antoniou et al. [44] feed data in a batch setting. We follow the experimental setup of Javed and White [6], wherein the model is required to remember inductive biases acquired over a longer time frame than in the experimental setup used by Antoniou et al. [44].
3 Methodology
3.1 Problem: Online Few-shot Learning
This setting combines facets of online and few-shot learning: the model is expected to make predictions on a stream of input samples, while it sees only a few samples per class in the given input stream. In particular, we use a task protocol similar to Santoro et al. [13]. At time step i, an image xi is presented to the model and it makes a prediction for xi. In the following time step, the correct label yi is revealed to the model. The model's performance depends on the correctness of its prediction at each time step. The following ordered set constitutes a task: T = ( (x1, null), (x2, y1), · · · (xt, yt−1) ). Here null indicates that no label is passed at the first time step, and t is the total number of time steps (length) of the task. For a k-way N-shot task, t = k × N. The entire duration of the task is considered the adaptation phase, as with every time step the model gets a new sample and must adapt on it to improve its understanding of class concepts.
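A sketch of how such a task stream can be generated is given below; the dataset format (a mapping from class id to samples) and helper names are illustrative assumptions, not the paper's code:

```python
import random

def make_online_task(dataset, k, n_shots):
    """Build T = ((x1, null), (x2, y1), ..., (xt, y_{t-1})) for a k-way N-shot task.

    `dataset` is assumed to map class id -> list of samples (an illustrative format).
    """
    classes = random.sample(list(dataset.keys()), k)
    stream = [(x, c) for c in classes for x in random.sample(dataset[c], n_shots)]
    random.shuffle(stream)  # interleave classes over the t = k * n_shots time steps
    xs = [x for x, _ in stream]
    ys = [y for _, y in stream]
    labels = [None] + ys[:-1]  # labels offset by one time step; None is the null label
    return list(zip(xs, labels)), ys  # inputs with delayed labels, and the targets
```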
3.2 Memory as Adaptation in Activation Space
Consider modulating the output of a network F for input x with a persistent state h: u = F(x, h). Now, if adding h aids in realizing a better representation u than otherwise (F(x)), we can view this as adaptation in activation space. In Figure 1, model F∗ adapts to tasks using its persistent states h. Specifically, consider the generic LSTM update Wx + W′ht−1: we can view Wx as the original response and W′ht−1 as modulation by a persistent state (memory) in activation space. So, for the online learning task at hand, we seek to train an LSTM which learns to generate a hidden state hi at each time step i, such that it enables better adaptation in ensuing time steps. We note that adaptation in activation space has been discussed in earlier works. We use this perspective to organize memory better and to enable effective layer-wise adaptation across the network.
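As a minimal PyTorch sketch (module and variable names are ours, not the paper's), this view corresponds to a layer whose response to x is additively modulated by a persistent state carried across time steps:

```python
import torch
import torch.nn as nn

class ActivationAdapter(nn.Module):
    """Computes u = W x + W' h_prev: the persistent state h_prev shifts the
    layer's response in activation space, which is the adaptation mechanism."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.W = nn.Linear(d_in, d_hidden, bias=False)    # response to the input
        self.Wh = nn.Linear(d_hidden, d_hidden, bias=False)  # modulation by memory

    def forward(self, x, h_prev):
        return self.W(x) + self.Wh(h_prev)
```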
3.3 Model
Architecture. We distribute memory across the layers of the network in order to enable the layers to learn local, layer-wise adaptation rules. In particular, we use a model in which each layer of the feature extractor is a convolutional LSTM (CL) [40], followed by an LSTM [45] and a classifier, as shown in Figure 2.
Similar to the LSTM, each convolutional LSTM (CL) layer consists of its own input, forget, and output gates. The key difference is that convolution operations (denoted by ∗) replace matrix-vector multiplication. In this setup, we view the addition of Whi ∗ ht−1 as adaptation at time step t within the input gate; the same view extends to the other gates as well. The cell and hidden state generation are likewise similar to the LSTM, but use convolution operations:

it = σ(Wii ∗ xt + Whi ∗ ht−1) (1)
ft = σ(Wif ∗ xt + Whf ∗ ht−1) (2)
ot = σ(Wio ∗ xt + Who ∗ ht−1) (3)
ct = ft ⊙ ct−1 + it ⊙ tanh(Wig ∗ xt + Whg ∗ ht−1) (4)
ht = ot ⊙ tanh(ct) (5)

where ⊙ denotes element-wise multiplication.
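A compact PyTorch sketch of such a cell is given below. Fusing the four gate convolutions into a single convolution is an implementation convenience that matches Eqs. (1)–(5) up to bias terms; the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell implementing Eqs. (1)-(5); a sketch, not the paper's code."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution over [x_t, h_{t-1}] produces all four gate pre-activations.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x_t, state):
        h_prev, c_prev = state
        gates = self.conv(torch.cat([x_t, h_prev], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)  # Eqs. (1)-(3)
        c_t = f * c_prev + i * torch.tanh(g)                            # Eq. (4)
        h_t = o * torch.tanh(c_t)                                       # Eq. (5)
        return h_t, c_t
```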
In initial experiments, we observe that for tasks with 50 time steps these models did not train well. We hypothesize that this could be due to the same network being repeated 50 times, thereby inducing
an effectively very deep network. We resolve this issue by adding skip connections between the second layer and the fourth layer (omitted in Figure 2). Further discussion on this is in Appendix B.
Label Encoding. As label information is essential for learning an adaptation rule, we inject labels, offset by one time step, into the ConvLSTM feature extractor and the LSTM. This gives each layer the opportunity to learn an adaptation rule. For a k-way classification problem involving images of spatial resolution s, we feed the label information as a k × s² matrix with all ones in the c-th row if c is the true label. We reshape this matrix into a k × s × s tensor and concatenate it along the channel dimension of the image at the next time step. To the LSTM layer, we feed the label in its one-hot form by concatenating it with the flattened activations from the previous layer.
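A short sketch of this encoding (shapes as described above; the function name is ours):

```python
import torch

def encode_label_spatial(y, k, s):
    """Label as a k x s x s tensor: the c-th channel is all ones iff c == y.
    y = None encodes the null label at the first time step."""
    enc = torch.zeros(k, s, s)
    if y is not None:
        enc[y] = 1.0
    return enc

# Concatenate along the channel dimension of the next image x (shape [C, s, s]):
# x_with_label = torch.cat([x, encode_label_spatial(y_prev, k, s)], dim=0)
```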
3.4 Training and Evaluation
Following Santoro et al. [13], we perform episodic training by exposing the model to a variety of tasks from the training distribution P(Ttrain). For a given task, the model incurs a loss Li at every time step of the task; we sum these losses and backpropagate through the sum at the end of the task. This is detailed in Algorithm 1 in Appendix A. We evaluate the model using a partition of the dataset that is class-wise disjoint from the training partition. The model makes a prediction at every time step and adapts to the sequence by using its own hidden states, thereby not requiring any gradient information for adaptation. Algorithm 2 in Appendix A provides details.
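For concreteness, a minimal PyTorch sketch of this episodic loop follows; the model and task interfaces (e.g., init_state) are illustrative assumptions rather than the exact code of Algorithm 1:

```python
import torch

def train_episode(model, task, optimizer,
                  loss_fn=torch.nn.functional.cross_entropy):
    """One episode: sum the per-step losses over the task and backpropagate once
    at the end. `task` is assumed to be a list of ((x_t, y_prev), y_t) tuples."""
    state = model.init_state()  # persistent hidden/cell states (assumed API)
    total_loss = 0.0
    for (x_t, y_prev), y_t in task:
        logits, state = model(x_t, y_prev, state)  # predict, then adapt via states
        total_loss = total_loss + loss_fn(logits, y_t)
    optimizer.zero_grad()
    total_loss.backward()  # backprop through time over the whole episode
    optimizer.step()
    return total_loss.item()
```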
4 Experiments
4.1 Online Few-Shot Learning
We use CIFAR-FS [46] and Omniglot [47] datasets for our few-shot learning tasks; see Appendix A for details. We adopt the following methods to serve as baselines for comparison.
LSTM and NTM. Santoro et al. [13] use a LSTM and a NTM [32] with read and write protocols for the task of online few shot learning. Both aim to meta-learn tasks by employing a neural memory.
Adaptive Posterior Learning (APL). Ramalho and Garnelo [11] propose a memory-augmented model that stores data point embeddings based on a measure of surprise, which is computed by the loss incurred by each sample. During inference, they retrieve a fixed number of nearest-neighbor data embeddings, which are then fed to a classifier alongside the current sample.
Online Prototypical Networks (OPN). Ren et al. [18] extend prototypical networks to the online case, where they sequentially update the current class-wise prototypes using weighted averaging.
Contextual Prototypical Memory (CPM). Ren et al. [18] improve on OPN by learning a representation space that is conditioned on the current task. Furthermore, weights used to update prototypes are determined by a newly-introduced gating mechanism.
Table 1 shows that our model outperforms the baselines in most settings. These results suggest that the adaptation rules emergent from our design are more efficient than adaptation via prototypes or via other memory-based architectures. In the CIFAR-FS experiments, the prototypical methods outperform our method only in the 1-shot scenario. As the 5-shot and 8-shot scenarios have a longer fine-tuning or adaptation phase, this shows that our method is more adept at handling tasks with longer adaptation phases. One reason could be that the stored prototypes which form the persistent state of OPN and CPM are more rigid than the persistent state of our method. The rigidity stems from the predetermined representation size of each prototype, which prevents allocating representation capacity according to classification difficulty. In our architecture, the network has the freedom to allocate representation capacity for each class as it deems fit. Consequently, this may help the network learn more efficient adaptation strategies that improve with time.
We examine the importance of distributed adaptation through ablation experiments that vary the layer into which we inject label information. Table 2 shows that models whose feature extractors do not receive label information are outperformed by the model whose earlier layers do receive it (injection into CL-1); the latter is even better than pre-trained models. By distributing memory across each layer and allowing label information to flow to each memory module, we enable every layer to learn its own adaptation rule. Here, the CNN baselines are pre-trained with MAML; these pre-trained networks replace the ConvLSTM part and are jointly trained with the LSTM (which receives the labels) and the classifier. In these cases, we simply replace the ConvLSTM in Figure 2 with a CNN. During the meta-testing phase, the CNN is just a feature extractor and the burden of adaptation falls entirely on the LSTM. In CNN-F, we freeze the weights during meta-training. Our CL+LSTM, restricted to adapt only in the final layer (3rd row; label injection into the final LSTM layer only), performs comparably to the CNN baselines. The same model, with full adaptivity (last row), outperforms all of them.
4.2 Delayed Feedback
We consider a task similar to online few-shot classification (Section 3.1), except that, instead of being offset by one time step, labels are offset by a delay parameter. Supposing the label delay is 3, then the task T is presented to the model as the sequence T = ( (x1, null), (x2, null), (x3, null), (x4, y1), · · · (xt, yt−3) ), where t is the sequence length. The model must discern and account for the time delay.
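Constructing such a delayed-label stream is a one-line change to the task generator; a sketch (assuming xs and ys are the sample and label lists of Section 3.1):

```python
def delay_labels(xs, ys, delay):
    """Present labels `delay` steps late:
    T = ((x1, None), ..., (x_delay, None), (x_{delay+1}, y_1), ..., (x_t, y_{t-delay}))."""
    delayed = [None] * delay + ys[:-delay] if delay > 0 else ys
    return list(zip(xs, delayed))
```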
Table 3 shows that our network can learn under these conditions, though performance decreases as the delay increases. This can be attributed to the difficulty of associating the hidden representation of a sample with the correct label, which creates a noisy environment for learning adaptation rules. We see that pre-training helps: we take our network pre-trained with a label delay of 1 and meta-train it on tasks with a label delay of 5. This improves accuracy enough to outperform the model trained directly with a label delay of 4. This may be because the necessary adaptation rules are already learnt by the pre-trained model, which then only has to learn the quantum of delay. Furthermore, from Tables 1 and 3, even with a delay of 2 our method outperforms CPM with no delay.
In this setting, our model can be used seamlessly, without any adjustments. Gradient-based and prototypical methods cannot be used as is: they would require storing samples (violating the online assumption) for the duration of the delay, causing memory usage to grow linearly with delay, whereas it is constant for our method. Further, to use prototypical or gradient-based methods, we would have to know the delay parameter in advance; our network learns the delay.
4.3 Online Continual Learning
We address the problem of continual learning in the online setting. In this setup, the model sees a stream of samples from a non-stationary task distribution and is expected to generalize well even when encountering samples from a previously seen task distribution. Concretely, for a single continual learning task we construct n subtasks from an underlying dataset and present the model samples from the first subtask, then the second subtask, and so on until the n-th subtask, in that order. Once the model has been trained on all n subtasks sequentially, it is expected to classify images from any of the subtasks, thereby demonstrating robustness to catastrophic forgetting [48].
Task Details. We use the Omniglot dataset for our experiments. Following [6], we define each subtask as learning a single class concept. So in this protocol a single online 5-way 5-shot continual learning task is defined as the following ordered set: T = ( T1, T2, T3, T4, T5 ) . Here, subtask Ti contains 5 samples from 1 particular Omniglot class. After adaptation is done on these 5 subtasks (25 samples) we expect the model to classify samples from a query set consisting of samples from all of the subtasks. The performance of the model is the prediction accuracy on the query set. We experiment by varying the total number of subtasks from 5 to 20 as in Figure 3.
Training Details. We perform episodic training by exposing our model to a variety of continual learning tasks from the training partition. At the end of each continual learning task, the model incurs a loss on the query set. We update our model by backpropagating through this query-set loss. Note that during evaluation on the query set, we freeze the persistent states of our model in order to prevent any information leak across the query set. Since propagating gradients across long time spans renders training difficult, we train our model using a simple curriculum that increases task length every 5K episodes. This improves generalization and convergence; Appendix C presents more details. Further, we shuffle labels across tasks in order to prevent the model from memorizing the training classes. During evaluation, we sample tasks from classes the model has not encountered. The model adapts to the subtasks using just its hidden states and then acquires the ability to predict on the query set, which contains samples from all of the subtasks. We use the same class-wise disjoint train/test split as in Lake [47].
Baseline: Online Meta Learning (OML). Javed and White [6] adopt a meta-training strategy similar to MAML. They adapt deeper layers in the inner loop for the current task, while updating the entire network in the outer loop, based on a loss measuring forgetting. For our OML experiments we use a 4-layer CNN followed by two fully connected layers. Appendix C provides implementation details.
Baseline: A Neuromodulated Meta-Learning Algorithm (ANML). Beaulieu et al. [27] use a hypernetwork to modulate the output of the trunk network. In the inner loop, the trunk network is adapted via gradient descent. In the outer loop, they update both the hypernetwork and the trunk network on a loss measuring forgetting. For our ANML experiments, we use a 4-layer CNN followed by a linear layer as the trunk network, with a 3-layer hypernetwork modulating the activations of the CNN. They use 3 times as many parameters as our CL+LSTM model. Appendix C provides details.
Results. Figure 3 plots average accuracy on increasing the length of the continual learning task. Task length is the number of subtasks within each continual learning task, which ranges from 5 to 20 subtasks in our experiments. As expected, we observe that the average accuracy generally decreases with increased task length for all models. However, the CL+LSTM model’s performance degrades slower than the baselines, suggesting that the model has learnt an efficient way of storing inductive biases required to solve each of the subtasks effectively.
From Figure 4, we see that CL+LSTM is robust against forgetting, as the variance on performance across subtasks is low. This suggests that the CL+LSTM model learns adaptation rules that minimally interfere with other tasks.
Analysis of Computational Cost. During inference, our model does not require any gradient computation and relies fully on hidden states to perform adaptation. Consequently, it has lower computational requirements than gradient-based models – assuming adaptation is required at every time step. For a comparative case study, let us consider three models and their corresponding GFLOPs per forward pass: the OML baseline (1.46 GFLOPs); CL+LSTM (0.40 GFLOPs); and a 4-layer CNN (0.30 GFLOPs) with parameter count similar to CL+LSTM. Here, we employ the standard methodology for estimating compute cost [49], with a forward and backward pass together incurring three times the operations of a forward pass alone.
We can extend these estimates to compute GFLOPs for the entire adaptation phase. Suppose we are adapting/updating our network on a task of length t iterations. The OML baseline and the 4-layer CNN (adapting via gradient descent) would consume 4.38t GFLOPs and 0.9t GFLOPs, respectively. Our CL+LSTM model would consume only 0.40t GFLOPs; here, we drop the factor of three while computing GFLOPs for the CL+LSTM model, since we do not require any gradient computation for adaptation. During training, we lose this advantage since we perform backpropagation through time, making the computational cost similar to computing meta-gradients.
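The arithmetic above can be summarized in a few lines (the task length t is illustrative):

```python
# Adaptation-phase compute for a task of length t, using the per-forward-pass
# GFLOPs quoted above.
def adaptation_gflops(forward_gflops, t, uses_gradients):
    # A forward+backward pass is counted as ~3x a forward pass [49];
    # hidden-state adaptation needs forward passes only.
    return (3 if uses_gradients else 1) * forward_gflops * t

t = 100  # illustrative task length
print(adaptation_gflops(1.46, t, True))   # OML baseline: ~438 GFLOPs (4.38t)
print(adaptation_gflops(0.30, t, True))   # 4-layer CNN:   ~90 GFLOPs (0.9t)
print(adaptation_gflops(0.40, t, False))  # CL+LSTM:       ~40 GFLOPs (0.40t)
```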
4.4 Online Few-Shot Semantic Segmentation
These experiments investigate the efficacy and applicability of adaptation via persistent states to a challenging segmentation task and analyze the effectiveness of label injection for segmentation.
Task Details. We consider a binary segmentation task: we present the model a sequence of images, one at each time step (as in Figure 5), and the model must either segment or mask out each image depending on whether it is a distractor. As in the classification tasks, we augment the input with ground-truth segmentation information along the channel dimension, offset by one time step: at the first time step we concatenate an all −1 matrix as a null label; at the next time step we concatenate the actual ground truth of the image from time step 1. For an image to be segmented, we concatenate the ground-truth binary mask of object versus background; for a distractor image, we concatenate an all-zeros matrix, indicating that the entire image should be masked out. The k-shot segmentation score is the IoU of the predicted segmentation the (k+1)-th time the model sees the object to be segmented; the k-shot masking score is the fraction of the object that is masked when the model sees the distractor image for the (k+1)-th time. We sample our episodes from the FSS1000 dataset [50]; more dataset details are in Appendix D.
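A sketch of the per-step feedback channel described above (shapes and names are illustrative assumptions):

```python
import torch

def segmentation_feedback(prev_mask, s, prev_was_distractor=False):
    """Ground-truth channel concatenated at the next time step:
    all -1 at t=1 (null label), the binary object/background mask for
    segmented images, and all zeros for distractors (mask everything out)."""
    if prev_mask is None:            # first time step: null label
        return -torch.ones(1, s, s)
    if prev_was_distractor:          # previous image was a distractor
        return torch.zeros(1, s, s)
    return prev_mask.float().view(1, s, s)  # binary ground-truth mask

# x_with_feedback = torch.cat([x, segmentation_feedback(mask_prev, s)], dim=0)
```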
The construction of this task avoids zero-shot transfer of inductive biases required for segmentation and forces the model to rely on the task data to learn which objects are to be segmented.
Training Details. We augment a 10-layer U-Net-like CNN [51] with memory cells in each layer by converting each convolution into a convolutional LSTM; we refer to the result as CL U-Net (architecture details in Appendix D). We utilize episodic training, where each episode is an online few-shot segmentation task, as in Figure 5, with 18 time steps in total (9 segmentation images and 9 distractors). We follow a simple training curriculum: for the first 100k episodes we train without any distractors; for the next 100k episodes we train with distractors, as in Figure 5. Further training details are in Appendix D. The episodes presented during evaluation contain novel classes.
Baselines. We use a 10-layer U-Net-like CNN pre-trained with MAML for segmentation without any distractors (architecture details in Appendix D). This serves as our fine-tuning CNN baseline: we fine-tune the model on the online stream of images using gradient descent at each time step. From Table 4, we see that the model fails to mask out the distractors, indicating its inability to adapt to the online feed.
From Table 4, we see that the CL U-Net variants are capable of effective online adaptation; both models can segment and mask images. However, we observe that providing label information at the first layer significantly boosts performance, bolstering our claim that effective task adaptation can be achieved by providing relevant feedback to a network containing distributed memory.
4.5 Standard Supervised Learning
Finally, we assess whether our proposed model can be directly employed in a classic supervised learning setting, i.e., without requiring modifications to the architecture design. The central motivation behind these experiments is to see whether meta-learning methods can be applied to standard supervised learning tasks without any change in methodology. Hence, when it is not known a priori whether the task at hand is a standard supervised learning task or a meta-learning task, ConvLSTM models could be used. This is similar to the experiments in [11], which try to close the gap between standard supervised learning approaches and their meta-learning method applied to standard supervised learning tasks.
We use CIFAR [52] as our standard supervised learning benchmark; further dataset details are in Appendix E. We use standard networks such as VGG [53] and ResNet [54] as our baselines. In Table 5, we observe that the CL variants perform comparably in most cases. This affirms that the ConvLSTM model is capable of handling a conventional supervised learning scenario without any change in training procedure. Even in the absence of a temporal signal, ConvLSTMs can still operate well. This is noteworthy, since direct application of gradient-based meta-learners to the conventional supervised learning setting would require optimizing through a prohibitively long inner loop.
5 Conclusion
Our results highlight distributed memory architectures as a promising technical approach to recasting the problem of meta-learning as simply learning with memory-augmented models. This view has potential to eliminate the need for ad-hoc design of mechanisms or optimization procedures for task adaptation, replacing them with generic and general-purpose memory modules. Our ablation studies show the effectiveness of distributing memory throughout a deep neural network (resulting in an increased capacity for adaptation), rather than limiting it to a single layer or final classification stage.
We demonstrate that standard LSTM cells, when provided with relevant feedback, can act as a basic building block of a network designed for meta-learning. On a wide variety of tasks, a distributed memory architecture can learn adaptation strategies that outperform existing methods. The applicability of a purely memory-based network to online semantic segmentation points to the untapped versatility and efficacy of adaptation enabled by distributed persistent states.
Acknowledgments and Disclosure of Funding
We thank Greg Shakhnarovich and Tri Huynh for useful comments. This work was supported in part by the University of Chicago CERES Center. The authors have no competing interests. | 1. What is the main contribution of the paper regarding meta-learning?
2. What are the strengths and weaknesses of the proposed Deep Convolutional LSTM architecture?
3. How does the reviewer assess the clarity and readability of the paper's content?
4. What are the concerns regarding the tasks used for experimentation?
5. Are there any suggestions for improving the paper's title, abstract, and reference list?
6. Are there any questions regarding the training process, pre-training, and performance evaluation? | Summary Of The Paper
Review | Summary Of The Paper
The authors approach the solution to the problem of meta-learning as simply running a recurrent network through various tasks and back-propagating to train the RNN to adapt to new tasks (as in some previous works). The main advance of this work is proposing an architecture - a deep convolutional LSTM (with few architectural details) - and showing that it works very well. The simplicity of this advance is a plus. The authors should change the initial writing and clarify what they mean by distributed memory (see below in more detail), but otherwise the paper is easy to read. A good number of experiments are conducted, though it would be good if the method were tested on harder tasks.
Review
These are the basic cons of this work: The first one should be easy to fix: the description from the start is very mysterious - talking about distributed memory makes a reader wonder what kind of new paradigm has been devised, just to later find out it is just a different RNN architecture. Talking about this as "memory" is misleading because, while the RNN activations do store information, often one considers the weights of an RNN to be the memory, with learning being the adaptation of the weights - which is not what they are meta-learning (the weight update has a fixed algorithm - back-propagation). The authors should make it clear from the beginning what they mean by memory (= activations of an RNN) and what they mean by distributed (= across layers in a deep RNN). Once the reader knows that, the paper is well written and easy to read.
The second issue is that the tasks they use have quite short time spans - I am reluctant to call this meta-learning. Nevertheless, these types of problems have been studied before, and this paper makes a nice contribution with a number of experiments. However, it is not clear how well this learning to adapt the activations would work in problems with much larger time scales.
Details:
I would say in the abstract what the basic idea is: e.g., something like "We devise a variant of a deep convolutional LSTM architecture, the feature of which is a large-capacity state of activations, and use standard end-to-end training by back-propagating through sequences of tasks, ..."
I would even ideally change the title - distributed memory can mean many things, like adapting the weights of a network, and I do not feel this should even be referred to as meta-learning (due to the short time-scale nature of the problems).
This kind of architecture is indeed quite capable - a very similar one was used in RL: "An Investigation of Model-Free Planning", Guez et al. - you could add the reference.
134-142 - Maybe add here some of your previous relevant references to works that do this (train an RNN end-to-end) - at the end of the second-to-last sentence.
202: You mention pre-training, but what it is is only explained in the sentence after the next one.
243: In 240 you said that a task has 5 samples, but here you change to 5-20 - does it have more examples, or are the same 5 repeated more? If the former, shouldn't the performance of the model, Figure 3, theoretically increase, since you see more examples of a class?
247: Meaning that every query example is started with the same state? (The one after the training set of tasks?)
Figure 4: I don’t understand what is on the x axis. I thought we have a sequence of tasks, and then test the model on the query set containing samples from all the subtasks.
Section 4.5: How do you do the training here? Do you sample examples from the training set at random and treat them as a long sequence, back-propagating every n steps, or how is this done? What is the architecture of, say, CL-ResNet-20? A ResNet followed by a ConvLSTM followed by an LSTM, or something else?
I would be happy to raise my score if the issues are addressed and there are no other major issues I have missed.
NIPS | Title
Online Meta-Learning via Learning with Layer-Distributed Memory
Abstract
We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying metalearning – often cast as a bi-level optimization problem – to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memorybased meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label – a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
1 Introduction
Meta-learning or learning-to-learn is a paradigm that enables models to generalize to a distribution of tasks rather than specialize to just one task [1, 2]. When encountering examples from a new task, we would like the model to adapt to the new task after seeing just a few samples. This is commonly achieved via episodic training of deep neural networks, where, in each episode, the network is exposed to a variety of inputs from the same distribution [3, 4], and the distribution shifts over episodes. The ability of deep networks to adapt to a new task within just a few samples or iterations is central to the application of meta-learning methods in few-shot and online learning scenarios [5, 6].
A recent surge of interest directed towards meta-learning using neural networks has spurred development of a variety of methods [7–9]. In a standard episodic training framework, a network must adapt to a sampled task (or collection of tasks) and incurs a generalization loss for that task (or collection); this generalization loss is backpropagated to update the network weights. Methods differ in the underlying architecture and mechanisms they use to support adaptation. Strategies include using gradient descent in an inner loop, storing and updating prototypes, parameterizing update rules by another neural network, and employing neural memory [3, 10–12]. Section 2 provides an overview.
We focus on memory-based meta-learning, and specifically investigate the organization of neural memory for meta-learning. Motivating this focus is the generality and flexibility of memory-based approaches. Relying on memory for adaptation allows one to cast meta-learning as merely a learning problem using a straightforward loss formulation (viewing entire episodes as examples) and standard optimization techniques. The actual burden of adaptation becomes an implicit responsibility of the memory subsystem: the network must learn to use its persistent memory in a manner that facilitates task adaptation. This contrasts with explicit adaptation mechanisms such as stored prototypes.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this implicit adaptation setting, memory architecture plays a crucial role in determining what kind of adaptation can be learned. We experimentally evaluate the effectiveness of alternative neural memory architectures for meta-learning and observe particular advantages to distributing memory throughout a network. More specifically, we view the generic LSTM equations, Wx+Wh−1, as adaptation induced by hidden states in activation space (see Figure 1). By distributing LSTM memory cells across the depth of the network, each layer is tasked with generating hidden states that are useful for adaptation. Such a memory organization is compatible with many standard networks, including CNNs, and can be achieved by merely swapping LSTM memory cells in place of existing filters.
Our simple approach also contrasts with several existing memory-based meta-learning methods used in both generative and classification tasks [13–17]. These methods view memory as a means to store and retrieve useful inductive biases for task adaptation, and hence focus on designing better read and write protocols. They typically have a feature extractor that feeds into a memory network that performs adaptation, whereas our architecture makes no such distinction between stages.
We test the efficacy of network architectures with distributed memory cells on online few-shot and continual learning tasks as in Santoro et al. [13], Ren et al. [18] and Javed and White [6]. The online setting is challenging for two reasons: 1) It is empirically observed that networks are not well suited for training/adaptation with a batch size of one [19]; 2) In this setting the model has to adapt to one image per time step, and thus deal with a prolonged adaptation phase. For these reasons, we see these tasks as suitable for evaluating the adaptation capabilities of the hidden states generated by the network.
We empirically observe that our method outperforms strong gradient-based and prototypical baselines, delineating the efficacy of the local adaptation rule learnt by each layer. Particularly important is the distributed nature of our memory, which allows every network layer to adapt when provided with label information; in comparison, restricting adaptability to only later network layers delivers far less compelling performance. These results suggest that co-design of memory architecture and meta-training strategies should be a primary consideration in the ongoing development of memory-based meta-learning. We further test our model in a harder online few-shot learning scenario, wherein the label corresponding to a sample arrives after a long delay [20]. Our method adapts seamlessly, without requiring any changes to the model, while, in this setting, other adaptation strategies struggle. These results highlight promising directions for advancing and simplifying meta-learning by relying upon distributed memory for adaptation.
2 Related Work
Early work on meta-learning introduces many relevant concepts. Schmidhuber [21] proposes using task-specific weights, called fast weights, and weights that are adapted across tasks, called slow weights. Bengio et al. [2] update the network via a learning rule that is parameterized by another neural network. Thrun [22] presents meta-learning in a life-long scenario, where the algorithm accrues information from past experiences to adapt effectively for the task at hand. Hochreiter et al. [23] train a memory network to learn its own adaptation rule via just its recurrent states. These high-level concepts can be seen in more recent methods. We group current meta-learning methods based on the nature of their adaptation strategy and discuss them below.
Gradient-based Adaptation Methods. Methods that adapt via gradients constitute a prominent class of meta-learning algorithms [9]. Model-agnostic meta-learning (MAML) [4] learns an initialization that can efficiently be adapted by gradient descent for a new task. Finn et al. [24] focus on learning a network that can use experience from previously seen tasks for current task adaptation. They adapt to the current task by using a network that is MAML pre-trained on the samples from the previous task. Nagabandi et al. [25], Caccia et al. [26] perform online adaptation under non-stationary distributions, either by using a mixture model or by spawning a MAML pre-trained network when the
input distribution changes. Javed and White [6], Beaulieu et al. [27] employ a bi-level optimization routine similar to MAML, except the outer loop loss is catastrophic forgetting. They thereby learn representations that are robust to forgetting and accelerate future learning under online updates.
Memory and Gradient-based Adaptation. Andrychowicz et al. [28], Ravi and Larochelle [10] learn an update rule for network weights by transforming gradients via an LSTM, which outperforms human-designed, fixed SGD update rules. Munkhdalai and Yu [29] learn a transform that maps gradients to fast (task-specific) weights, which are stored and retrieved via attention during evaluation. They update slow weights (across-task weights) at the end of each task.
Prototypical Methods. These methods learn an encoder which projects training data to a metric space, and obtain class-wise prototypes via averaging representations within the same class. Following this, test data is mapped to the same metric space, wherein classification is achieved via a simple rule (e.g., nearest-neighbor prototype based on either Euclidean distance or cosine similarity) [5, 30, 31]. These methods are naturally amenable to online learning, as class-wise prototypes can be updated in an online manner, as shown by Ren et al. [18].
Memory-based Adaptation. Santoro et al. [13] design efficient read and write protocols for a Neural Turing Machine [32] for the purposes of online few-shot learning. Rae et al. [33] design sparse read and write operations, thereby making them scalable in both time and space. Ramalho and Garnelo [11] use logits generated by the model to decide if a certain sample is written to neural memory. Mishra et al. [7] employ an attention-based mechanism to perform adaptation, and use a CNN to generate features for the attention mechanism. Their model requires storing samples across all time steps explicitly, thereby violating the online learning assumption of being able to access each sample only once. All of these methods mainly focus on designing better memory modules, either by using more recent attention mechanisms or by designing better read and write rules for neural memory. These methods typically use a CNN which is not adapted for the current task. Our approach differs from these methods, in that we study efficient organization of memory for both online few-shot learning and meta-learning more generally, and show that, as a consequence of our distributed memory organization, the entire network is capable of effective adaptation when provided with relevant feedback.
Kirsch and Schmidhuber [34] introduce an interesting form of weight sharing wherein LSTM cells (with tied weights) are distributed throughout the width and depth of the network; however, each position has its own hidden state. Further, they have backward connections from the later layers to the earlier layers, enabling the network to implement its own learning algorithm or clone a human-designed learning algorithm such as backprop. Both our model and theirs implement an adaptation strategy purely using the recurrent states. The difference, however, is in the nature of the adaptation strategy implemented in the recurrent states. Similar to conventional learning algorithms, their backward connections help propagate error from the last layer to the earlier layers. In our architecture, the feedback signal is presented as another input, propagated from the first layer to the last layer.
In addition to being used in classification settings, Guez et al. [35] employ a memory-based meta-learning approach to perform adaptation for reinforcement learning tasks, indicating the generality of using memory as a means for adaptation.
Few-shot Semantic Segmentation. Few-shot segmentation methods commonly rely on using prototypes [36, 37], though recent approaches include gradient-based methods analogous to MAML [38]. The methods that use neural memory typically employ it in final network stages to fuse features of different formats for efficient segmentation: Li et al. [39] use ConvLSTMs [40] to fuse features from different stages of the network; Valipour et al. [41] use them to fuse spatio-temporal features while segmenting videos; Hu et al. [42] use a ConvLSTM to fuse features of the query with the features of the support set; Azad et al. [43] use a bidirectional ConvLSTM to fuse segmentations derived from multiple scale-space representations. We differ from these works in the organization and use of the memory module, and in the information provided to it: 1) Memory is distributed across the network as the sole driver of adaptation; 2) Label information is provided to assist with adaptation.
Meta-learning Benchmarks. Caccia et al. [26] present benchmarks that measure the ability of a model to adapt to a new task, using the inductive biases that it has acquired over solving previously seen tasks. More specifically, the benchmark presents an online non-stationary stream of tasks, and the model’s ability to adapt to a new task at each time step is evaluated. Note that they do not measure
the model’s ability to remember earlier tasks; they only want the model to adapt well on a newly presented task.
Antoniou et al. [44] present benchmarks for continual few-shot learning. The network is presented a number of few-shot tasks, one after the other, and is then expected to generalize even to the previously seen tasks. This is a challenging and interesting setup, in that the network has to show robustness to catastrophic forgetting while learning from limited data. However, we are interested in evaluating the online adaptation ability of models, while Antoniou et al. [44] feed data in a batch setting. We follow the experimental setup of Javed and White [6], wherein the model is required to remember inductive biases acquired over a longer time frame than in the setup used by Antoniou et al. [44].
3 Methodology
3.1 Problem: Online Few-shot Learning
This setting combines facets of online and few-shot learning: the model is expected to make predictions on a stream of input samples, while it sees only a few samples per class in the given input stream. In particular, we use a task protocol similar to Santoro et al. [13]. At time step i, an image xi is presented to the model and it makes a prediction for xi. In the following time step, the correct label yi is revealed to the model. The model’s performance depends on the correctness of its prediction at each time step. The following ordered set constitutes a task: T = ((x1, null), (x2, y1), . . . , (xt, yt−1)). Here null indicates that no label is passed at the first time step, and t is the total number of time steps (length) of the task. For a k-way N-shot task, t = k × N. The entire duration of the task is considered as the adaptation phase, as with every time step the model gets a new sample and must adapt on it to improve its understanding of class concepts.
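To make the protocol concrete, the sketch below shows one way such a task stream could be generated. It is an illustrative sketch under our own assumptions: `make_task`, its signature, and the dictionary `class_to_images` (mapping class names to lists of images) do not come from the paper.

```python
import random

def make_task(class_to_images, k, n, delay=1):
    """Build one online k-way n-shot task: a shuffled stream of t = k * n
    images, each paired with the label revealed `delay` (>= 1) steps later;
    None plays the role of the null label before any label is available.
    """
    classes = random.sample(sorted(class_to_images), k)
    stream = [(img, idx) for idx, cls in enumerate(classes)
              for img in random.sample(class_to_images[cls], n)]
    random.shuffle(stream)
    images = [x for x, _ in stream]
    labels = [y for _, y in stream]
    revealed = [None] * delay + labels[:-delay]   # label shown at step i
    return list(zip(images, revealed)), labels    # model inputs, targets
```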
3.2 Memory as Adaptation in Activation Space
Consider modulating the output of a network F for input x with a persistent state h: u = F(x, h). Now, if adding h aids in realizing a better representation u than F(x) alone, we can view this as adaptation in activation space. In Figure 1, model F∗ adapts to tasks using its persistent states h. Specifically, consider the generic LSTM update Wxt + W′ht−1: we can view Wxt as the original response and W′ht−1 as modulation by a persistent state (memory) in activation space. So, for the online learning task at hand, we seek to train an LSTM that learns to generate a hidden state hi at each time step i such that it enables better adaptation in ensuing time steps. We note that adaptation in activation space has been discussed in earlier works. We use this perspective to organize memory better and to enable effective layer-wise adaptation across the network.
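The following toy loop illustrates this view; the dimensions, the tanh state update, and all names are stand-ins of our own (the actual model uses full LSTM cells for the state update).

```python
import torch

d_in, d_hid = 32, 64
W = torch.randn(d_hid, d_in)         # feed-forward weights
W_prime = torch.randn(d_hid, d_hid)  # weights applied to the persistent state

h = torch.zeros(d_hid)               # persistent state carried across steps
for t in range(10):
    x_t = torch.randn(d_in)
    u_t = W @ x_t + W_prime @ h      # u = F(x, h): response plus modulation
    h = torch.tanh(u_t)              # simplified stand-in for the LSTM update
```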
3.3 Model
Architecture. We distribute memory across the layers of the network, in order to enable the layers to learn local layer-wise adaptation rules. In particular, we use a model in which each layer of the feature extractor is a convolutional LSTM (CL) [40]; the feature extractor is followed by an LSTM [45] and a classifier, as shown in Figure 2.
Similar to the LSTM, each convolutional LSTM (CL) layer consists of its own input, forget, and output gates. The key difference is that convolution operations (denoted by ∗) replace matrix-vector multiplication. In this setup, we view the addition of Whi ∗ ht−1 as adaptation at time step t within the input gate. The same view extends to the other gates as well. Cell and hidden state generation are likewise similar to the LSTM, but use convolution operations:
it = σ(Wii ∗ xt + Whi ∗ ht−1)    (1)
ft = σ(Wif ∗ xt + Whf ∗ ht−1)    (2)
ot = σ(Wio ∗ xt + Who ∗ ht−1)    (3)
ct = ft ⊙ ct−1 + it ⊙ tanh(Wig ∗ xt + Whg ∗ ht−1)    (4)
ht = ot ⊙ tanh(ct)    (5)

where ⊙ denotes elementwise multiplication.
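For reference, the cell defined by Eqs. (1)–(5) can be written compactly in PyTorch. The sketch below is a generic ConvLSTM cell, not the authors' released implementation; it fuses the four gate convolutions into one for brevity, and nn.Conv2d adds bias terms that Eqs. (1)–(5) omit.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Generic ConvLSTM cell: convolutions (*) replace the matrix-vector
    products of a standard LSTM, per Eqs. (1)-(5)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution over [x_t, h_{t-1}] yields all four gate
        # pre-activations at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x_t, state):
        h_prev, c_prev = state
        gates = self.conv(torch.cat([x_t, h_prev], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)  # (1)-(3)
        c_t = f * c_prev + i * torch.tanh(g)                            # (4)
        h_t = o * torch.tanh(c_t)                                       # (5)
        return h_t, (h_t, c_t)
```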
In initial experiments, we observe that for tasks with 50 time steps these models did not train well. We hypothesize that this could be due to the same network being repeated 50 times, thereby inducing
an effectively very deep network. We resolve this issue by adding skip connections between the second layer and the fourth layer (omitted in Figure 2). Further discussion on this is in Appendix B.
Label Encoding. As label information is essential for learning an adaptation rule, we inject labels, offset by one time step, into the ConvLSTM feature extractor and the LSTM. This provides the opportunity for each layer to learn an adaptation rule. For a k-way classification problem involving images of spatial resolution s, we feed the label information as a k × s² matrix with all ones in the c-th row if c is indeed the true label. We reshape this matrix into a k × s × s tensor and concatenate it along the channel dimension of the image at the next time step. To the LSTM layer, we feed the label in its one-hot form by concatenating it with the flattened activations from the previous layer.
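A minimal sketch of this encoding follows; the function names are ours, and the all-zeros plane for the null label is our assumption, since the paper does not spell out the null encoding for classification.

```python
import torch

def encode_label(y, k, s):
    """k x s x s tensor whose y-th channel is all ones (all zeros when
    y is None, standing in for the null label)."""
    enc = torch.zeros(k, s, s)
    if y is not None:
        enc[y] = 1.0
    return enc

def augment_input(x_t, y_prev, k):
    """Concatenate the previous step's label encoding to image x_t (C, s, s)
    along the channel dimension."""
    s = x_t.shape[-1]
    return torch.cat([x_t, encode_label(y_prev, k, s)], dim=0)
```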
3.4 Training and Evaluation
Following Santoro et al. [13], we perform episodic training by exposing the model to a variety of tasks from the training distribution P(Ttrain). For a given task, the model incurs a loss Li at every time step of the task; we sum these losses and backpropagate through the sum at the end of the task. This is detailed in Algorithm 1 in Appendix A. We evaluate the model using a partition of the dataset that is class-wise disjoint from the training partition. The model makes a prediction at every time step and adapts to the sequence by using its own hidden states, thereby not requiring any gradient information for adaptation. Algorithm 2 in Appendix A provides details.
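A sketch of one such training episode, under the assumptions of the earlier snippets (make_task, augment_input), follows; model.reset_state() is an assumed hook that clears the persistent states at the start of a task.

```python
import torch
import torch.nn.functional as F

def train_episode(model, optimizer, task, k):
    """Accumulate per-step losses over a task and backpropagate their sum
    once at the end, as described above. Illustrative sketch only."""
    inputs, targets = task
    model.reset_state()                 # assumed hook: clear h_0, c_0
    total_loss = 0.0
    for (x_t, y_prev), y_t in zip(inputs, targets):
        logits = model(augment_input(x_t, y_prev, k).unsqueeze(0))
        total_loss = total_loss + F.cross_entropy(logits, torch.tensor([y_t]))
    optimizer.zero_grad()
    total_loss.backward()               # backprop through time, once per task
    optimizer.step()
    return float(total_loss)
```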
4 Experiments
4.1 Online Few-Shot Learning
We use CIFAR-FS [46] and Omniglot [47] datasets for our few-shot learning tasks; see Appendix A for details. We adopt the following methods to serve as baselines for comparison.
LSTM and NTM. Santoro et al. [13] use an LSTM and an NTM [32] with read and write protocols for the task of online few-shot learning. Both aim to meta-learn tasks by employing a neural memory.
Adaptive Posterior Learning (APL). Ramalho and Garnelo [11] propose a memory-augmented model that stores data point embeddings based on a measure of surprise, which is computed by the loss incurred by each sample. During inference, they retrieve a fixed number of nearest-neighbor data embeddings, which are then fed to a classifier alongside the current sample.
Online Prototypical Networks (OPN). Ren et al. [18] extend prototypical networks to the online case, where they sequentially update the current class-wise prototypes using weighted averaging.
Contextual Prototypical Memory (CPM). Ren et al. [18] improve on OPN by learning a representation space that is conditioned on the current task. Furthermore, weights used to update prototypes are determined by a newly-introduced gating mechanism.
Table 1 shows that our model outperforms the baselines in most settings. These results suggest that the adaptation rules emergent from our design are more efficient than adaptation via prototypes, and adaptation via other memory-based architectures. In the CIFAR-FS experiments, the prototypical methods outperform our method only in the 1-shot scenario. As the 5-shot and 8-shot scenarios have a longer fine-tuning or adaptation phase, this shows that our method is more adept at handling tasks with longer adaptation phases. One reason could be that the stored prototypes which form the persistent state of OPN and CPM are more rigid than the persistent state of our method. The rigidity stems from the predetermined representation size of each prototype, which thereby prevents allocation of representation size depending upon classification difficulty. In our architecture, the network has the freedom to allocate representation size for each class as it deems fit. Consequently, this may help the network learn more efficient adaptation strategies that improve with time.
We examine the importance of distributed adaptation through ablation experiments that vary the layer into which we inject label information. Table 2 shows that models with feature extractors that do not receive label information are outperformed by the model whose earlier layers do receive label information (injecting into CL-1); the latter is even better than pre-trained models. By distributing memory across each layer and allowing label information to flow to each memory module, we enable every layer to learn its own adaptation rule. Here, the CNN baselines are pre-trained with MAML; these pre-trained networks replace the ConvLSTM part and are jointly trained with the LSTM (which receives the labels) and classifier. In these cases, we just replace the ConvLSTM in Figure 2 with a CNN. During the meta-testing phase, the CNN is just a feature extractor and the burden of adaptation falls entirely on the LSTM. In CNN-F, we freeze the weights during meta-training. Our CL+LSTM, restricted to adapt only in the final layer (3rd row; label injection into the final LSTM layer only), performs comparably to the CNN baselines. The same model, with full adaptivity (last row), outperforms them.
4.2 Delayed Feedback
We consider a task similar to online few-shot classification (Section 3.1), except that labels are offset by a delay parameter rather than by one time step. Supposing the label delay is 3, the task T is presented to the model as the sequence T = ((x1, null), (x2, null), (x3, null), (x4, y1), . . . , (xt, yt−3)), where t is the sequence length. The model must discern and account for the time delay.
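Under the assumptions of the make_task sketch from Section 3.1, this delayed stream requires no new machinery:

```python
# A label delay of 3 simply pads three null labels before the shifted labels.
inputs, targets = make_task(class_to_images, k=5, n=3, delay=3)
```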
Table 3 shows that our network can learn under these conditions, though performance decreases as the delay increases. This could be attributed to the difficulty of associating the hidden representation of a sample with the correct label, which creates a noisy environment for learning adaptation rules. We see that pre-training helps: we take our network pre-trained for a label delay of 1 and meta-train it on tasks with a label delay of 5. This improves accuracy enough to outperform the model trained directly with a label delay of 4. This could be because the necessary adaptation rules are already learnt by the pre-trained model, which only has to learn the quantum of delay. Furthermore, from Tables 1 and 3, even with a delay of 2 our method outperforms CPM with no delay.
In this setting, our model can be used in a seamless manner, without having to make any adjustments. Gradient-based and prototypical methods cannot be used as is, and would require storing the samples (violating the online assumption) for the time period of the delay, causing memory usage to grow linearly with delay; in contrast, it is constant for our method. Further, to use prototypical or gradient-based methods, we would have to know the delay parameter in advance; our network learns the delay.
4.3 Online Continual Learning
We address the problem of continual learning in the online setting. In this setup, the model sees a stream of samples from a non-stationary task distribution, and the model is expected to generalize well even while encountering samples from a previously seen task distribution. Concretely, for a single continual learning task we construct n subtasks from an underlying dataset and first present to the model samples from the first subtask, then the second subtask, and so on until the nth subtask, in that order. Once the model is trained on all n subtasks sequentially, it is expected to classify images from any of the subtasks, thereby demonstrating robustness to catastrophic forgetting [48].
Task Details. We use the Omniglot dataset for our experiments. Following [6], we define each subtask as learning a single class concept. So in this protocol a single online 5-way 5-shot continual learning task is defined as the following ordered set: T = ( T1, T2, T3, T4, T5 ) . Here, subtask Ti contains 5 samples from 1 particular Omniglot class. After adaptation is done on these 5 subtasks (25 samples) we expect the model to classify samples from a query set consisting of samples from all of the subtasks. The performance of the model is the prediction accuracy on the query set. We experiment by varying the total number of subtasks from 5 to 20 as in Figure 3.
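A sketch of how one such episode could be assembled follows; the names, arguments, and query-set construction are illustrative assumptions, not the authors' code.

```python
import random

def make_continual_task(class_to_images, n_subtasks, shots=5, n_query=1):
    """One continual-learning episode: subtasks (one class each) presented
    sequentially, followed by a query set drawn from all subtasks."""
    classes = random.sample(sorted(class_to_images), n_subtasks)
    support, query = [], []
    for label, cls in enumerate(classes):   # labels are shuffled across tasks
        imgs = random.sample(class_to_images[cls], shots + n_query)
        support += [(img, label) for img in imgs[:shots]]  # subtask T_label
        query += [(img, label) for img in imgs[shots:]]
    random.shuffle(query)
    return support, query  # adapt on support in order; evaluate on query
```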
Training Details. We perform episodic training by exposing our model to a variety of continual learning tasks from the training partition. At the end of each continual learning task, the model incurs a loss on the query set. We update our model by backpropagating through this query set loss. Note that during evaluation on the query set, we freeze the persistent states of our model in order to prevent any information leak across the query set. Since propagating gradients across long time steps renders training difficult, we train our model using a simple curriculum of increasing task length every 5K episodes. This improves generalization and convergence. Appendix C presents more details. Further, we shuffle the labels across tasks in order to prevent the model from memorizing the training classes. During evaluation, we sample tasks from classes the model has not encountered. The model adapts to the subtasks using just the hidden states and then acquires the ability to predict on the query set, which contains samples from all of the subtasks. We use the same class-wise disjoint train/test split as in Lake [47].
Baseline: Online Meta Learning (OML). Javed and White [6] adopt a meta-training strategy similar to MAML. They adapt deeper layers in the inner loop for the current task, while updating the entire network in the outer loop, based on a loss measuring forgetting. For our OML experiments we use a 4-layer CNN followed by two fully connected layers. Appendix C provides implementation details.
Baseline: A Neuromodulated Meta-Learning Algorithm (ANML). Beaulieu et al. [27] use a hypernetwork to modulate the output of the trunk network. In the inner loop, the trunk network is adapted via gradient descent. In the outer loop, they update both the hypernetwork and the trunk network on a loss measuring forgetting. For our ANML experiments, we use a 4-layer CNN followed by a linear layer as the trunk network, with a 3-layer hypernetwork modulating the activations of the CNN. They use 3 times as many parameters as our CL+LSTM model. Appendix C provides details.
Results. Figure 3 plots average accuracy on increasing the length of the continual learning task. Task length is the number of subtasks within each continual learning task, which ranges from 5 to 20 subtasks in our experiments. As expected, we observe that the average accuracy generally decreases with increased task length for all models. However, the CL+LSTM model’s performance degrades slower than the baselines, suggesting that the model has learnt an efficient way of storing inductive biases required to solve each of the subtasks effectively.
From Figure 4, we see that CL+LSTM is robust against forgetting, as the variance on performance across subtasks is low. This suggests that the CL+LSTM model learns adaptation rules that minimally interfere with other tasks.
Analysis of Computational Cost. During inference, our model does not require any gradient computation and fully relies on hidden states to perform adaptation. Consequently, it has lower computational requirements compared to gradient-based models – assuming adaptation is required at every time step. For a comparative case study, let us consider three models and their corresponding GFLOPs per forward pass: OML baseline (1.46 GFLOPs); CL+LSTM (0.40 GFLOPs); 4-layer CNN (0.30 GFLOPs) with parameter count similar to CL+LSTM. Here, we employ standard methodology for estimating compute cost [49], with a forward and backward pass together incurring three times the operations in a forward pass alone.
We can extend these estimates to compute GFLOPs for the entire adaptation phase. Suppose we are adapting/updating our network on a task of length t iterations. The OML baseline and the 4-layer CNN (adapting on gradient descent) would consume 4.38t GFLOPs and 0.9t GFLOPs, respectively. Our CL+LSTM model would consume only 0.40t GFLOPs; here, we drop the factor of three while computing GFLOPs for the CL+LSTM model, since we do not require any gradient computation for adaptation. During training, we lose this advantage since we perform backpropagation through time, making the computational cost similar to computing meta-gradients.
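The arithmetic behind these estimates can be checked directly; the factor of three for gradient-based adaptation follows [49], and the per-forward-pass costs are the numbers quoted above.

```python
def adaptation_gflops(fwd_gflops, t, needs_gradients):
    """Total cost of adapting for t steps: forward plus backward is assumed
    to cost 3x a forward pass; forward-only adaptation skips that factor."""
    return (3.0 if needs_gradients else 1.0) * fwd_gflops * t

t = 100                                       # example adaptation length
oml = adaptation_gflops(1.46, t, True)        # ~438 GFLOPs, i.e., 4.38t
cnn = adaptation_gflops(0.30, t, True)        # ~90 GFLOPs,  i.e., 0.90t
cl_lstm = adaptation_gflops(0.40, t, False)   # ~40 GFLOPs,  i.e., 0.40t
```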
4.4 Online few-shot Semantic Segmentation
These experiments investigate the efficacy and applicability of adaptation via persistent states to a challenging segmentation task and analyze the effectiveness of label injection for segmentation.
Task Details. We consider a binary segmentation task: we present the model a sequence of images, one at each time step (as in Figure 5), and the model must either segment or mask out each image, depending on whether it is a distractor. Similar to the classification tasks, we append the ground-truth segmentation information along the channel dimension, offset by 1 time step: at the first time step we concatenate an all −1 matrix as a null label, and at the next time step we concatenate the actual ground truth of the image from time step 1. For an image to be segmented, we concatenate the ground-truth binary mask of the object against the background, in the form of a binary matrix. For a distractor image, we concatenate an all-zeros matrix, indicating that the entire image should be masked out. The k-shot segmentation score is the IoU of the predicted segmentation the (k+1)-th time the model sees the object to be segmented. For the k-shot masking score, we compute the fraction of the object that has been masked when the model sees the distractor image for the (k+1)-th time. We sample our episodes from the FSS1000 dataset [50]; more dataset details are in Appendix D.
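A sketch of this ground-truth channel construction follows; the helper names are ours, and the shapes assume a single mask channel appended to a (C, h, w) image.

```python
import torch

def encode_gt(prev_gt, h, w):
    """Previous step's ground truth as one extra channel: the binary object
    mask for a segmentation image, all zeros for a distractor (prev_gt given
    as a zero mask), or an all -1 plane as the null label at the first step."""
    if prev_gt is None:
        return -torch.ones(1, h, w)
    return prev_gt.float().view(1, h, w)

def augment_frame(x_t, prev_gt):
    _, h, w = x_t.shape
    return torch.cat([x_t, encode_gt(prev_gt, h, w)], dim=0)
```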
The construction of this task avoids zero-shot transfer of inductive biases required for segmentation and forces the model to rely on the task data to learn which objects are to be segmented.
Training Details. We augment a 10-layer U-Net-like CNN [51] with memory cells in each layer by converting each convolution into a convolutional LSTM; we refer to the result as CL U-Net (architecture details in Appendix D). We utilize episodic training, where each episode is an online few-shot segmentation task, as in Figure 5, with 18 time steps in total (9 segmentation images and 9 distractors). We follow a simple training curriculum: for the first 100k episodes we train without any distractors; for the next 100k episodes we train with distractors, as in Figure 5. Further training details are in Appendix D. The episodes presented during evaluation contain novel classes.
Baselines. We use a 10-layer U-Net-like CNN pre-trained with MAML for segmentation without any distractors (architecture details in Appendix D). We use this model as our fine-tuning CNN baseline, in that we fine-tune it on the online stream of images using gradient descent at each time step. From Table 4, we see that the model fails to mask out the distractors, indicating its inability to adapt to the online feed.
From Table 4, we see that CL U-Net variants are capable of effective online adaptation; both models are capable of segmenting and masking images. However, we observe that providing label information at the first layer significantly boosts performance, thereby bolstering our claim that effective task adaptation can be achieved by providing relevant feedback to a network containing distributed memory.
4.5 Standard Supervised Learning
Finally, we assess whether our proposed model can be directly employed in a classic supervised learning setting, i.e., without requiring modifications to the architecture design. The central motivation behind these experiments is to see if meta-learning methods can be applied to standard supervised learning tasks without any change in methodology. Hence, when it is not known a priori whether the task at hand is a standard supervised learning task or a meta-learning task, we could use ConvLSTM models. This is similar to the experiments in [11], which try to close the gap between standard supervised learning approaches and their meta-learning method applied to standard supervised learning tasks.
We use CIFAR data as our standard supervised learning benchmark [52]; further dataset details are in Appendix E. We use standard networks such as VGG [53] and ResNet [54] as our baselines. In Table 5, we observe that CL variants perform comparably in most cases. This affirms that the ConvLSTM model is capable of handling a conventional supervised learning scenario without any change in training procedure. Even in the absence of a temporal signal, ConvLSTMs can still operate well. This is interesting since direct application of gradient-based meta-learners to the conventional supervised learning setting would require optimizing through a prohibitively long inner loop.
5 Conclusion
Our results highlight distributed memory architectures as a promising technical approach to recasting the problem of meta-learning as simply learning with memory-augmented models. This view has the potential to eliminate the need for ad-hoc design of mechanisms or optimization procedures for task adaptation, replacing them with generic and general-purpose memory modules. Our ablation studies show the effectiveness of distributing memory throughout a deep neural network (resulting in an increased capacity for adaptation), rather than limiting it to a single layer or final classification stage.
We demonstrate that standard LSTM cells, when provided with relevant feedback, can act as a basic building block of a network designed for meta-learning. On a wide variety of tasks, a distributed memory architecture can learn adaptation strategies that outperform existing methods. The applicability of a purely memory-based network to online semantic segmentation points to the untapped versatility and efficacy of adaptation enabled by distributed persistent states.
Acknowledgments and Disclosure of Funding
We thank Greg Shakhnarovich and Tri Huynh for useful comments. This work was supported in part by the University of Chicago CERES Center. The authors have no competing interests.

1. What is the novel network architecture proposed by the authors for online few-shot learning?
2. What are the issues with the organization and readability of the paper according to the reviewer?
3. What are the questions raised by the reviewer regarding the motivation for using memory-based methods?
4. How does the reviewer assess the claim made by the authors about the performance of their method compared to other meta-learning strategies?
5. What are the concerns regarding the choice of datasets used in the experiments?
6. Is there any detail missing in the method section that makes it hard to understand the implementation?
Summary Of The Paper
In this paper, the authors propose a novel network architecture for online few-shot learning. They propose to use an LSTM framework and store previous information in the hidden states of different layers. They also evaluate their method on other tasks, such as online continual learning, online few-shot semantic segmentation, and standard supervised learning.
Review
This paper is poorly organized and very hard to read. I have been working on few-shot learning for several years and have read lots of papers in this field. However, after reading the abstract and introduction, I totally don’t understand what they want to do. For example, many concepts occur before they are defined. In Line 2, the authors mention “the persistent state”. What is it? It is not defined. In Line 134, the authors also use “h”. I cannot find the definition for h before this paragraph. Is that a value, a vector, or a matrix?
The motivation for using memory-based methods is unclear. I think the authors try to motivate their method in Lines 30-36. However, I have lots of questions about this motivation, e.g., (1) what are the definitions of generality and flexibility in few-shot learning, and (2) why is using a straightforward loss formulation and standard optimization techniques better? After reading the introduction, I still don’t know what the main challenge in this paper is and how the authors address it.
The claim “their method outperforms gradient-based, prototype-based, and other memory-based meta-learning strategies” seems incorrect. Most meta-learning methods, such as MAML and ProtoNets, are evaluated in the standard few-shot classification setting. However, the authors evaluate their method in the online few-shot learning setting. Besides, the compared methods (LSTM, NTM, APL, OPN, and CPM) do not include a gradient-based method.
The experiments are conducted on very small datasets. The authors use CIFAR-FS and Omniglot, which are two small datasets. In standard few-shot classification, we usually use miniImageNet and tieredImageNet. In [18], they use RoamingImageNet, which is based on tieredImageNet. So it is not reasonable to run experiments on such small datasets.
The method section is too simple. The authors only use one page for the method section. Many details about the implementation are unclear. |
NIPS | Title
Online Meta-Learning via Learning with Layer-Distributed Memory
Abstract
We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying metalearning – often cast as a bi-level optimization problem – to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memorybased meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label – a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
1 Introduction
Meta-learning or learning-to-learn is a paradigm that enables models to generalize to a distribution of tasks rather than specialize to just one task [1, 2]. When encountering examples from a new task, we would like the model to adapt to the new task after seeing just a few samples. This is commonly achieved via episodic training of deep neural networks, where, in each episode, the network is exposed to a variety of inputs from the same distribution [3, 4], and the distribution shifts over episodes. The ability of deep networks to adapt to a new task within just a few samples or iterations is central to the application of meta-learning methods in few-shot and online learning scenarios [5, 6].
A recent surge of interest directed towards meta-learning using neural networks has spurred development of a variety of methods [7–9]. In a standard episodic training framework, a network must adapt to a sampled task (or collection of tasks) and incurs a generalization loss for that task (or collection); this generalization loss is backpropagated to update the network weights. Methods differ in the underlying architecture and mechanisms they use to support adaptation. Strategies include using gradient descent in an inner loop, storing and updating prototypes, parameterizing update rules by another neural network, and employing neural memory [3, 10–12]. Section 2 provides an overview.
We focus on memory-based meta-learning, and specifically investigate the organization of neural memory for meta-learning. Motivating this focus is the generality and flexibility of memory-based approaches. Relying on memory for adaptation allows one to cast meta-learning as merely a learning problem using a straightforward loss formulation (viewing entire episodes as examples) and standard optimization techniques. The actual burden of adaptation becomes an implicit responsibility of the memory subsystem: the network must learn to use its persistent memory in a manner that facilitates task adaptation. This contrasts with explicit adaptation mechanisms such as stored prototypes.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this implicit adaptation setting, memory architecture plays a crucial role in determining what kind of adaptation can be learned. We experimentally evaluate the effectiveness of alternative neural memory architectures for meta-learning and observe particular advantages to distributing memory throughout a network. More specifically, we view the generic LSTM equations, Wx+Wh−1, as adaptation induced by hidden states in activation space (see Figure 1). By distributing LSTM memory cells across the depth of the network, each layer is tasked with generating hidden states that are useful for adaptation. Such a memory organization is compatible with many standard networks, including CNNs, and can be achieved by merely swapping LSTM memory cells in place of existing filters.
Our simple approach also contrasts with several existing memory-based meta-learning methods used in both generative and classification tasks [13–17]. These methods view memory as a means to store and retrieve useful inductive biases for task adaptation, and hence focus on designing better read and write protocols. They typically have a feature extractor that feeds into a memory network that performs adaptation, whereas our architecture makes no such distinction between stages.
We test the efficacy of network architectures with distributed memory cells on online few-shot and continual learning tasks as in Santoro et al. [13], Ren et al. [18] and Javed and White [6]. The online setting is challenging for two reasons: 1) It is empirically observed that networks are not well suited for training/adaptation with a batch size of one [19]; 2) In this setting the model has to adapt to one image at a time step, thus having to deal with a prolonged adaptation phase. For these reasons, we see these tasks as suitable for evaluating the adaptation capabilities of the hidden states generated by the network.
We empirically observe that our method outperforms strong gradient-based and prototypical baselines, delineating the efficacy of the local adaptation rule learnt by each layer. Particularly important is the distributed nature of our memory, which allows every network layer to adapt when provided with label information; in comparison, restricting adaptability to only later network layers delivers far less compelling performance. These results suggest that co-design of memory architecture and metatraining strategies should be a primary consideration in the ongoing development of memory-based meta-learning. We further test our model in a harder online few-shot learning scenario, wherein the corresponding label to a sample arrives after a long delay [20]. Our method adapts seamlessly, without requiring any changes to the model, while, in this setting, other adaptation strategies struggle. These results highlight promising directions for advancing and simplifying meta-learning by relying upon distributed memory for adaptation.
2 Related Work
Early work on meta-learning introduces many relevant concepts. Schmidhuber [21] proposes using task specific weights, called fast weights, and weights that are adapted across tasks, called slow weights. Bengio et al. [2] updates the network via a learning rule which is parameterized by another neural network. Thrun [22] presents meta-learning in a life-long scenario, where the algorithm accrues information from the past experiences to adapt effectively for the task at hand. Hochreiter et al. [23] train a memory network to learn its own adaptation rule via just its recurrent states. These high level concepts can be seen in more recent methods. We group current meta-learning methods based on the nature of adaptation strategy and discuss them below.
Gradient-based Adaptation Methods. Methods that adapt via gradients constitute a prominent class of meta-learning algorithms [9]. Model-agnostic meta-learning (MAML) [4] learns an initialization that can efficiently be adapted by gradient descent for a new task. Finn et al. [24] focus on learning a network that can use experience from previously seen tasks for current task adaptation. They adapt to the current task by using a network that is MAML pre-trained on the samples from the previous task. Nagabandi et al. [25], Caccia et al. [26] perform online adaptation under non-stationary distributions, either by using a mixture model or by spawning a MAML pre-trained network when the
input distribution changes. Javed and White [6], Beaulieu et al. [27] employ a bi-level optimization routine similar to MAML, except the outer loop loss is catastrophic forgetting. They thereby learn representations that are robust to forgetting and accelerate future learning under online updates.
Memory and Gradient-based Adaptation. Andrychowicz et al. [28], Ravi and Larochelle [10] learn an update rule for network weights by transforming gradients via a LSTM, which outperforms human-designed and fixed SGD update rules. Munkhdalai and Yu [29] learn a transform that maps gradients to fast (task specific) weights, which are stored and retrieved via attention during evaluation. They update slow weights (across task weights) at the end of each task.
Prototypical Methods. These methods learn an encoder which projects training data to a metric space, and obtain class-wise prototypes via averaging representations within the same class. Following this, test data is mapped to the same metric space, wherein classification is achieved via a simple rule (e.g., nearest neighbor prototype based on either euclidean distance or cosine similarity) [5, 30, 31]. These methods are naturally amenable for online learning as class-wise prototypes can be updated in an online manner as shown by Ren et al. [18].
Memory-based Adaptation. Santoro et al. [13] design efficient read and write protocols for a Neural Turning Machine [32] for the purposes of online few-shot learning. Rae et al. [33] design sparse read and write operations, thereby making them scalable in both time and space. Ramalho and Garnelo [11] use logits generated by the model to decide if a certain sample is written to neural memory. Mishra et al. [7] employ an attention-based mechanism to perform adaptation, and use a CNN to generate features for the attention mechanism. Their model requires storing samples across all time steps explicitly, thereby violating the online learning assumption of being able to access each sample only once. All of these methods mainly focus on designing better memory modules either via using more recent attention mechanisms or by designing better read and write rules to neural memory. These methods typically use a CNN which is not adapted for the current task. Our approach differs from these methods, in that we study efficient organization of memory for both online few-shot learning and meta-learning more generally, and show that as a consequence of our distributed memory organization, the entire network is capable of effective adaptation when provided with relevant feedback.
Kirsch and Schmidhuber [34] introduce an interesting form of weight sharing wherein LSTM cells (with tied weights) are distributed throughout the width and depth of the network, however each position has its own hidden state. Further, they have backward connections from the later layers to the earlier layers, enabling the network to implement its own learning algorithm or clone a humandesigned learning algorithm such as backprop. Both our model and theirs implement an adaptation strategy purely using the recurrent states. The difference, however, is in the nature of the adaptation strategy implemented in the recurrent states. Similar to conventional learning algorithms, their backward connections help propagate error from the last layer to the earlier layers. In our architecture, the feedback signal is presented as another input, propagated from the first layer to the last layer.
In addition to being used in classification settings, Guez et al. [35] employ memory-based metalearning approach to perform adaptation for reinforcement learning tasks indicating the generality of using memory as a means for adaptation.
Few-shot Semantic Segmentation. Few-shot segmentation methods commonly rely on using prototypes [36, 37], though recent approaches include gradient-based methods analogous to MAML [38]. The methods that use neural memory typically employ it in final network stages to fuse features of different formats for efficient segmentation: Li et al. [39] use ConvLSTMs [40] to fuse features from different stages of the network; Valipour et al. [41] to fuse spatio-temporal features while segmenting videos; Hu et al. [42] use a ConvLSTM to fuse features of query with the features of support set; Azad et al. [43] use a bidirectional ConvLSTM to fuse segmentation derived from multiple scale space representations. We differ from these works in organization, use of, and information provided to memory module: 1) Memory is distributed across the network as the sole driver of adaptation; 2) Label information is provided to assist with adaptation.
Meta-learning Benchmarks. Caccia et al. [26] present benchmarks that measure the ability of a model to adapt to a new task, using the inductive biases that it has acquired over solving previously seen tasks. More specifically, the benchmark presents an online non-stationary stream of tasks, and the model’s ability to adapt to a new task at each time step is evaluated. Note that they do not measure
the model’s ability to remember earlier tasks; they only want the model to adapt well on a newly presented task.
Antoniou et al. [44] present benchmarks for continual few-shot learning. The network is presented a number of few-shot tasks, one after the other, and then is expected to generalize even to the previously seen tasks. This is a challenging and interesting setup, in that, the network has to show robustness to catastrophic forgetting while learning from limited data. However, we are interested in evaluating the online adaptation ability of models, while Antoniou et al. [44] feed data in a batch setting. We follow experimental setup as in Javed and White [6], where in, the model is required to remember inductive biases acquired over a longer time frame when compared to the experimental setup used by Antoniou et al. [44].
3 Methodology
3.1 Problem: Online Few-shot Learning
This setting combines facets of online and few-shot learning: the model is expected to make predictions on a stream of input samples, while it sees only a few samples per class in the given input stream. In particular, we use a task protocol similar to Santoro et al. [13]. At time step i, an image xi is presented to the model and it makes a prediction for xi. In the following time step, the correct label yi is revealed to the model. The model’s performance depends on the correctness of its prediction at each time step. The following ordered set constitutes a task: T = ( (x1, null), (x2, y1), · · · (xt, yt−1) ) . Here null indicates that no label is passed at the first time step, and t is the total number of time steps (length) of the task. For a k-way N-shot task t = k×N . The entire duration of the task is considered as the adaptation phase, as with every time step the model gets a new sample and must adapt on it to improve its understanding of class concepts.
3.2 Memory as Adaptation in Activation Space
Consider modulating the output of a network F for input x with a persistent state h: u = F (x, h). Now, if adding h aids in realizing a better representation u than otherwise (F (x)), we could view this as adaptation in activation space. In Figure 1, model F ∗ adapts to tasks using its persistent states h. Specifically let us consider the generic LSTM equations Wx+W ′ h−1, we could view Wx as the original response and W ′ h−1 as modulation by a persistent state (memory) in the activation space. So, for the online learning task at hand, we seek to train a LSTM which learns to generate hidden state hi at each time step i, such that it could enable better adaptation in ensuing time steps. We note that adaptation in activation space has been discussed in earlier works. We use this perspective to organize memory better and to enable effective layer-wise adaptation across the network.
3.3 Model
Architecture. We distribute memory across the layers of the network, in order to enable the layers to learn local layer-wise adaptation rules. In particular, we use a model in which each layer of the feature extractor is a convolutional LSTM (CL) [40] followed by a LSTM [45] and a classifier, as shown in Figure 2.
Similar to the LSTM, each convolutional LSTM (CL) layer consists of its own input, forget, and output gates. The key difference is that convolution operations (denoted by ∗) replace matrixvector multiplication. In this setup, we view the addition by Whi ∗ ht−1 as adaptation in the ith time step within the input gate. The same view could be extended to other gates as well. The cell and hidden state generation are likewise similar to LSTM, but use convolution operations:
it = σ(Wii ∗ xt +Whi ∗ ht−1) (1) ft = σ(Wif ∗ xt +Whf ∗ ht−1) (2) ot = σ(Wio ∗ xt +Who ∗ ht−1) (3)
ct = ft ct−1 + it tanh(Wig ∗ xt +Whg ∗ ht−1) (4)
ht = ot tanh(ct) (5)
In initial experiments, we observe that for tasks with 50 time steps these models did not train well. We hypothesize that this could be due to the same network being repeated 50 times, thereby inducing
an effectively very deep network. We resolve this issue by adding skip connections between the second layer and the fourth layer (omitted in Figure 2). Further discussion on this is in Appendix B.
Label Encoding. As label information is essential for learning an adaptation rule, we inject labels offset by one time step to the ConvLSTM feature extractor and the LSTM. This provides the opportunity for each layer to learn an adaptation rule. For a k-way classification problem involving images of spatial resolution s, we feed the label information as a k × s2 matrix with all ones in the cth row if c is indeed the true label. We reshape this matrix as a k × s× s tensor and concatenate it along the channel dimension of image at the next time step. To the LSTM layer, we feed the label in its one-hot form by concatenating it with the flattened activations from the previous layer.
3.4 Training and Evaluation
Following Santoro et al. [13], we perform episodic training by exposing the model to a variety of tasks from the training distribution P(Ttrain). For a given task, the model incurs a loss Li at every time step of the task; we sum these losses and backpropagate through the sum at the end of the task. This is detailed in Algorithm 1 in Appendix A. We evaluate the model using a partition of the dataset that is class-wise disjoint from the training partition. The model makes a prediction at every time step and adapts to the sequence by using its own hidden states, thereby not requiring any gradient information for adaptation. Algorithm 2 in Appendix A provides details.
4 Experiments
4.1 Online Few-Shot Learning
We use CIFAR-FS [46] and Omniglot [47] datasets for our few-shot learning tasks; see Appendix A for details. We adopt the following methods to serve as baselines for comparison.
LSTM and NTM. Santoro et al. [13] use a LSTM and a NTM [32] with read and write protocols for the task of online few shot learning. Both aim to meta-learn tasks by employing a neural memory.
Adaptive Posterior Learning (APL). Ramalho and Garnelo [11] propose a memory-augmented model that stores data point embeddings based on a measure of surprise, which is computed by the loss incurred by each sample. During inference, they retrieve a fixed number of nearest-neighbor data embeddings, which are then fed to a classifier alongside the current sample.
Online Prototypical Networks (OPN). Ren et al. [18] extend prototypical networks to the online case, where they sequentially update the current class-wise prototypes using weighted averaging.
Contextual Prototypical Memory (CPM). Ren et al. [18] improve on OPN by learning a representation space that is conditioned on the current task. Furthermore, weights used to update prototypes are determined by a newly-introduced gating mechanism.
Table 1 shows that our model outperforms the baselines in most settings. These results suggest that the adaptation rules emergent from our design are more efficient than adaptation via prototypes, and adaptation via other memory-based architectures. In the CIFAR-FS experiments, the prototypical methods outperform our method only in the 1-shot scenario. As the 5-shot and 8-shot scenarios have a longer fine-tuning or adaptation phase, this shows that our method is more adept at handling tasks with longer adaptation phases. One reason could be that the stored prototypes which form the persistent state of OPN and CPM are more rigid than the persistent state of our method. The rigidity stems from the predetermined representation size of each prototype, which thereby prevents allocation of representation size depending upon classification difficulty. In our architecture, the network has the freedom to allocate representation size for each class as it deems fit. Consequently, this may help the network learn more efficient adaptation strategies that improve with time.
We examine the importance of distributed adaptation through ablation experiments that vary the layer into which we inject label information. Table 2 shows that models with feature extractors that do not receive label information are outperformed by the model whose earlier layers do receive label information (injecting into CL-1); the latter is even better than pre-trained models. By distributing memory across each layer and allowing label information to flow to each memory module, we enable every layer to learn its own adaptation rule. Here, the CNN baselines are pre-trained with MAML; these pre-trained networks replace the ConvLSTM part and are jointly trained with the LSTM (which receives the labels) and classifier. In these cases, we just replace the ConvLSTM in Figure 2 with a CNN. During the meta-testing phase, the CNN is a just feature extractor and the burden of adaptation falls entirely on the LSTM. In CNN-F, we freeze the weights during meta-training. Our CL+LSTM, restricted to adapt only in the final layer (3rd row; label injection into final LSTM layer only) performs comparably to the CNN baselines. The same model, with full adaptivity (last row) outperforms.
4.2 Delayed Feedback
We consider a task similar to online few-shot classification (Section 3.1), except instead of offset by one timestep, labels are offset by a delay parameter. Supposing the label delay is 3, then the task T is presented to the model as the sequence: T = ( (x1, null), (x2, null), (x3, null), (x4, y1), · · · (xt, yt−3) ) , where t is the sequence length. The model must discern and account for the time delay.
Table 3 shows that our network can learn under these conditions, though performance decreases with increase in delay. This could be imputed to difficulty in associating the hidden representation of a sample with the correct label, consequently creating a noisy environment for learning adaptation rules. We see that pre-training helps: we take our network pre-trained for label delay of 1 and meta-train for tasks with label delay of 5. This improves the model accuracy to outperform the model directly trained with label delay of 4. This could be because the necessary adaptation rules are already learnt by the pre-trained model, and it only has to learn the quantum of delay. Furthermore, from Tables 1 and 3, even with a delay of 2 our method outperforms CPM with no delay.
In this setting, our model can be used seamlessly, without any adjustments. Gradient-based and prototypical methods cannot be used as is: they would require storing the samples for the duration of the delay (violating the online assumption), causing memory usage to grow linearly with the delay; in contrast, it remains constant for our method. Further, to use prototypical or gradient-based methods, we would have to know the delay parameter in advance; our network learns the delay.
4.3 Online Continual Learning
We address the problem of continual learning in the online setting. In this setup, the model sees a stream of samples from a non-stationary task distribution, and it is expected to generalize well even while encountering samples from a previously seen task distribution. Concretely, for a single continual learning task we construct n subtasks from an underlying dataset and present samples to the model from the first subtask, then the second subtask, and so on until the n-th subtask, in that order. Once the model is trained on all n subtasks sequentially, it is expected to classify images from any of the subtasks, thereby demonstrating robustness to catastrophic forgetting [48].
Task Details. We use the Omniglot dataset for our experiments. Following [6], we define each subtask as learning a single class concept. In this protocol, a single online 5-way 5-shot continual learning task is defined as the ordered set $T = (T_1, T_2, T_3, T_4, T_5)$, where subtask $T_i$ contains 5 samples from one particular Omniglot class. After adaptation on these 5 subtasks (25 samples), we expect the model to classify samples from a query set consisting of samples from all of the subtasks. The performance of the model is the prediction accuracy on the query set. We experiment by varying the total number of subtasks from 5 to 20, as in Figure 3.
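As an illustration of this task construction (our own hypothetical helper, not the authors' code), one continual learning task and its query set could be assembled as follows:

```python
import random

# Build one online n-way k-shot continual learning task: an ordered stream of
# single-class subtasks T_1, ..., T_n, plus a query set drawn from all subtasks.
def build_continual_task(dataset_by_class, n_subtasks=5, k_shot=5, n_query=1):
    classes = random.sample(list(dataset_by_class), n_subtasks)
    stream, query = [], []
    for label, cls in enumerate(classes):
        samples = random.sample(dataset_by_class[cls], k_shot + n_query)
        stream += [(x, label) for x in samples[:k_shot]]  # subtask T_i, presented in order
        query += [(x, label) for x in samples[k_shot:]]   # evaluated after adaptation
    return stream, query
```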
Training Details. We perform episodic training by exposing our model to a variety of continual learning tasks from the training partition. At the end of each continual learning task, the model incurs a loss on the query set. We update our model by backpropagating through this query-set loss. Note that during evaluation on the query set, we freeze the persistent states of our model in order to prevent any information leak across the query set. Since propagating gradients across long time steps renders training difficult, we train our model using a simple curriculum that increases the task length every 5K episodes. This improves generalization and convergence; Appendix C presents more details. Further, we shuffle the labels across tasks in order to prevent the model from memorizing the training classes. During evaluation, we sample tasks from classes the model has not encountered. The model adapts to the subtasks using just its hidden states and then predicts on the query set, which contains samples from all of the subtasks. We use the same class-wise disjoint train/test split as in Lake [47]. A sketch of this procedure follows.
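A compact sketch of this update loop, with placeholder names (model, criterion, optimizer, and the length schedule) standing in for the actual implementation:

```python
# Episodic training sketch for continual learning tasks. Adaptation happens
# purely through the recurrent state; one gradient step per episode on the
# query loss, backpropagated through the whole unrolled sequence.
for episode in range(num_episodes):
    n_subtasks = 5 + episode // 5000            # simple length curriculum (placeholder)
    stream, query = build_continual_task(train_set, n_subtasks=n_subtasks)
    state = model.init_state()
    prev_label = None                           # labels are offset by one step
    for x, y in stream:
        _, state = model(x, prev_label, state)  # adapt via hidden states only
        prev_label = y
    # "Freezing" the state: every query sample sees the same post-adaptation
    # state, so no information leaks between query samples.
    loss = sum(criterion(model(x, None, state)[0], y) for x, y in query)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```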
Baseline: Online Meta Learning (OML). Javed and White [6] adopt a meta-training strategy similar to MAML. They adapt deeper layers in the inner loop for the current task, while updating the entire network in the outer loop, based on a loss measuring forgetting. For our OML experiments we use a 4-layer CNN followed by two fully connected layers. Appendix C provides implementation details.
Baseline: A Neuromodulated Meta-Learning Algorithm (ANML). Beaulieu et al. [27] use a hypernetwork to modulate the output of the trunk network. In the inner loop, the trunk network is adapted via gradient descent. In the outer loop, they update both the hypernetwork and the trunk network on a loss measuring forgetting. For our ANML experiments, we use a 4-layer CNN followed by a linear layer as the trunk network, with a 3-layer hypernetwork modulating the activations of the CNN. They use 3 times as many parameters as our CL+LSTM model. Appendix C provides details.
Results. Figure 3 plots average accuracy as the length of the continual learning task increases. Task length is the number of subtasks within each continual learning task, ranging from 5 to 20 subtasks in our experiments. As expected, we observe that the average accuracy generally decreases with increased task length for all models. However, the CL+LSTM model's performance degrades more slowly than the baselines', suggesting that the model has learnt an efficient way of storing the inductive biases required to solve each of the subtasks effectively.
From Figure 4, we see that CL+LSTM is robust against forgetting, as the variance in performance across subtasks is low. This suggests that the CL+LSTM model learns adaptation rules that minimally interfere with other tasks.
Analysis of Computational Cost. During inference, our model does not require any gradient computation and relies fully on hidden states to perform adaptation. Consequently, it has lower computational requirements than gradient-based models, assuming adaptation is required at every time step. For a comparative case study, consider three models and their corresponding GFLOPs per forward pass: the OML baseline (1.46 GFLOPs); CL+LSTM (0.40 GFLOPs); and a 4-layer CNN (0.30 GFLOPs) with a parameter count similar to CL+LSTM. Here, we employ the standard methodology for estimating compute cost [49], with a forward and backward pass together incurring three times the operations of a forward pass alone.
We can extend these estimates to compute GFLOPs for the entire adaptation phase. Suppose we are adapting/updating our network on a task of length t iterations. The OML baseline and the 4-layer CNN (adapting via gradient descent) would consume 4.38t GFLOPs and 0.9t GFLOPs, respectively. Our CL+LSTM model would consume only 0.40t GFLOPs; here, we drop the factor of three while computing GFLOPs for the CL+LSTM model, since it requires no gradient computation for adaptation. During training, we lose this advantage, since we perform backpropagation through time, making the computational cost similar to computing meta-gradients.
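The arithmetic can be checked in a few lines; the per-pass figures below are exactly those quoted above:

```python
# Adaptation-phase cost: gradient-based models pay ~3x a forward pass per step
# (forward + backward together [49]); CL+LSTM pays one forward pass per step.
def adaptation_gflops(fwd_gflops, steps, needs_gradients):
    per_step = 3 * fwd_gflops if needs_gradients else fwd_gflops
    return per_step * steps

t = 25  # e.g., a 5-way 5-shot task
print(adaptation_gflops(1.46, t, True))    # OML baseline: 4.38t -> 109.5
print(adaptation_gflops(0.30, t, True))    # 4-layer CNN:  0.90t -> 22.5
print(adaptation_gflops(0.40, t, False))   # CL+LSTM:      0.40t -> 10.0
```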
4.4 Online few-shot Semantic Segmentation
These experiments investigate the efficacy and applicability of adaptation via persistent states to a challenging segmentation task and analyze the effectiveness of label injection for segmentation.
Task Details. We consider a binary segmentation task: we present the model a sequence of images, one at each time step (as in Figure 5), and the model must either segment or mask out each image depending on whether it is a distractor. As in the classification tasks, we append the ground-truth segmentation information along the channel dimension, offset by 1 time step: at the first time step we concatenate an all −1 matrix as a null label; at the next time step we concatenate the actual ground truth of the image from time step 1. If the image is to be segmented, we concatenate the ground-truth binary mask of the object and the background in the form of a binary matrix. If it is a distractor image, we concatenate an all-zeros matrix, indicating that the entire image should be masked out. The k-shot segmentation score is the IoU of the predicted segmentation the (k+1)-th time the model sees the object to be segmented. For the k-shot masking score, we compute the fraction of the object that has been masked when the model sees the distractor image for the (k+1)-th time. We sample our episodes from the FSS1000 dataset [50]; more dataset details are in Appendix D.
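The label-channel construction can be sketched as follows (our own code, following the description above; the shapes are assumptions):

```python
import torch

# Append the one-step-delayed ground truth as an extra channel.
# image: (3, H, W); prev_mask: (H, W) binary mask from the previous step,
# None at the first step, all zeros for a distractor image.
def encode_segmentation_input(image, prev_mask):
    _, h, w = image.shape
    if prev_mask is None:
        label = -torch.ones(1, h, w)              # all -1 null label at t = 0
    else:
        label = prev_mask.float().view(1, h, w)   # object/background binary matrix
    return torch.cat([image, label], dim=0)       # (4, H, W) input at this step
```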
The construction of this task avoids zero-shot transfer of inductive biases required for segmentation and forces the model to rely on the task data to learn which objects are to be segmented.
Training Details. We augment a 10-layer U-Net-like CNN [51] with memory cells in each layer by converting each convolution into a convolutional LSTM; we refer to the result as CL U-Net (architecture details in Appendix D). We utilize episodic training, where each episode is an online few-shot segmentation task as in Figure 5, with 18 time steps in total (9 segmentation images and 9 distractors). We follow a simple training curriculum: for the first 100k episodes we train without any distractors; for the next 100k episodes we train with distractors as in Figure 5. Further training details are in Appendix D. The episodes presented during evaluation contain novel classes.
Baselines. We use a 10-layer U-Net-like CNN pre-trained with MAML for segmentation without any distractors (architecture details in Appendix D). We use this model as our fine-tuning CNN baseline, in that we fine-tune it on the online stream of images using gradient descent at each time step. From Table 4, we see that the model fails to mask out the distractors, indicating its inability to adapt to the online feed.
From Table 4, we see that the CL U-Net variants are capable of effective online adaptation; both models can segment and mask images. However, we observe that providing label information at the first layer significantly boosts performance, thereby bolstering our claim that effective task adaptation can be achieved by providing relevant feedback to a network containing distributed memory.
4.5 Standard Supervised Learning
Finally, we assess whether our proposed model can be directly employed in a classic supervised learning setting, i.e., without requiring modifications to the architecture design. The central motivation behind these experiments is to see whether meta-learning methods can be applied to standard supervised learning tasks without any change in methodology. Hence, in a setting where a priori knowledge of whether the task at hand is a standard supervised learning task or a meta-learning task is unavailable, we could use ConvLSTM models. This is similar to the experiments done in [11], which try to close the gap between standard supervised learning approaches and their meta-learning method applied to standard supervised learning tasks.
We use CIFAR data as our standard supervised learning benchmark [52]; further dataset details are in Appendix E. We use standard networks such as VGG [53] and ResNet [54] as our baselines. In Table 5, we observe that CL variants perform comparably in most cases. This affirms that the ConvLSTM model is capable of handling a conventional supervised learning scenario without any change in training procedure. Even in the absence of temporal signal, ConvLSTMs can still operate well. This is interesting since direct application of gradient-based meta-learners to the conventional supervised learning setting would require optimizing through a prohibitively long inner loop.
5 Conclusion
Our results highlight distributed memory architectures as a promising technical approach to recasting the problem of meta-learning as simply learning with memory-augmented models. This view has potential to eliminate the need for ad-hoc design of mechanisms or optimization procedures for task adaptation, replacing them with generic and general-purpose memory modules. Our ablation studies show the effectiveness of distributing memory throughout a deep neural network (resulting in an increased capacity for adaptation), rather than limiting it to a single layer or final classification stage.
We demonstrate that standard LSTM cells, when provided with relevant feedback, can act as a basic building block of a network designed for meta-learning. On a wide variety of tasks, a distributed memory architecture can learn adaptation strategies that outperform existing methods. The applicability of a purely memory-based network to online semantic segmentation points to the untapped versatility and efficacy of adaptation enabled by distributed persistent states.
Acknowledgments and Disclosure of Funding
We thank Greg Shakhnarovich and Tri Huynh for useful comments. This work was supported in part by the University of Chicago CERES Center. The authors have no competing interests.

1. What is the main contribution of the paper regarding Meta RNNs?
2. How does the proposed architecture differ from previous works, particularly Meta RNNs?
3. Why do the authors suggest using multiple layers of LSTMs and making some convolutional?
4. What are the strengths and weaknesses of the paper's demonstration of performance in few-shot learning and continual learning?
5. Are there any concerns or questions about the connections between the proposed work and recent work on distributed memory/fast weights?

Summary Of The Paper
The paper proposes to adapt Meta RNNs (memory-based meta-learning) by instantiating multiple layers of LSTMs where some of them are convolutional. The authors refer to this as distributed memory due to each layer having its own LSTM-based memory. They demonstrate good performance in few-shot learning and continual learning.
Review
Originality and Significance
The authors suggest that instead of using a single LSTM in MetaRNNs (memory-based meta-learning) multiple layers should be added and some should be convolutional. While this seems like a fairly incremental change, they demonstrate convincingly that this helps in few-shot learning and continual learning. In particular, the improvement / similar performance over ANML and OML is interesting as no inner gradients are required.
MetaRNNs (memory-based meta-learning) not correctly cited
The proposed architecture is very similar to Meta RNNs (supervised [1], RL [2,3]), which the authors do not cite. They only discuss other forms of external memory in the related work section. The authors' contribution of adding multiple layers of LSTMs and making some of them convolutional should be highlighted more clearly; it tends to get lost in the paper amid discussions that also apply to previous work.
[1] Hochreiter, S., Younger, A. S., & Conwell, P. R. (2001). Learning to learn using gradient descent. International Conference on Artificial Neural Networks.
[2] Duan, Y., Schulman, J., Chen, X., Bartlett, P. L., Sutskever, I., & Abbeel, P. (2016). RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning. ArXiv Preprint ArXiv:1611.02779.
[3] Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., & Botvinick, M. (2016). Learning to reinforcement learn. ArXiv Preprint ArXiv:1611.05763.
Connections to other recent work on distributed memory / fast weights should be added
Recent work [4] also introduced a form of distributed memory for meta-learning online adaptation strategies / learning algorithms. In that case, each weight in a neural network is replaced by an LSTM, which is a different but related architectural choice compared to the proposed work. The connection should be discussed and cited.
[4] Kirsch, L., & Schmidhuber, J. (2020). Meta Learning Backpropagation And Improving It. ArXiv Preprint ArXiv:2012.14905.
Conclusion
Overall I think it is a good paper with interesting results on meta-learning with distributed memory. Unfortunately, it is missing some crucial relationships to previous work. I would like to give the paper a good score based on appropriate adjustments in the rebuttal. |
NIPS | Title
Online Meta-Learning via Learning with Layer-Distributed Memory
Abstract
We demonstrate that efficient meta-learning can be achieved via end-to-end training of deep neural networks with memory distributed across layers. The persistent state of this memory assumes the entire burden of guiding task adaptation. Moreover, its distributed nature is instrumental in orchestrating adaptation. Ablation experiments demonstrate that providing relevant feedback to memory units distributed across the depth of the network enables them to guide adaptation throughout the entire network. Our results show that this is a successful strategy for simplifying meta-learning – often cast as a bi-level optimization problem – to standard end-to-end training, while outperforming gradient-based, prototype-based, and other memory-based meta-learning strategies. Additionally, our adaptation strategy naturally handles online learning scenarios with a significant delay between observing a sample and its corresponding label – a setting in which other approaches struggle. Adaptation via distributed memory is effective across a wide range of learning tasks, ranging from classification to online few-shot semantic segmentation.
1 Introduction
Meta-learning or learning-to-learn is a paradigm that enables models to generalize to a distribution of tasks rather than specialize to just one task [1, 2]. When encountering examples from a new task, we would like the model to adapt to the new task after seeing just a few samples. This is commonly achieved via episodic training of deep neural networks, where, in each episode, the network is exposed to a variety of inputs from the same distribution [3, 4], and the distribution shifts over episodes. The ability of deep networks to adapt to a new task within just a few samples or iterations is central to the application of meta-learning methods in few-shot and online learning scenarios [5, 6].
A recent surge of interest directed towards meta-learning using neural networks has spurred development of a variety of methods [7–9]. In a standard episodic training framework, a network must adapt to a sampled task (or collection of tasks) and incurs a generalization loss for that task (or collection); this generalization loss is backpropagated to update the network weights. Methods differ in the underlying architecture and mechanisms they use to support adaptation. Strategies include using gradient descent in an inner loop, storing and updating prototypes, parameterizing update rules by another neural network, and employing neural memory [3, 10–12]. Section 2 provides an overview.
We focus on memory-based meta-learning, and specifically investigate the organization of neural memory for meta-learning. Motivating this focus is the generality and flexibility of memory-based approaches. Relying on memory for adaptation allows one to cast meta-learning as merely a learning problem using a straightforward loss formulation (viewing entire episodes as examples) and standard optimization techniques. The actual burden of adaptation becomes an implicit responsibility of the memory subsystem: the network must learn to use its persistent memory in a manner that facilitates task adaptation. This contrasts with explicit adaptation mechanisms such as stored prototypes.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
In this implicit adaptation setting, memory architecture plays a crucial role in determining what kind of adaptation can be learned. We experimentally evaluate the effectiveness of alternative neural memory architectures for meta-learning and observe particular advantages to distributing memory throughout a network. More specifically, we view the generic LSTM update, $Wx + W'h_{t-1}$, as adaptation induced by hidden states in activation space (see Figure 1). By distributing LSTM memory cells across the depth of the network, each layer is tasked with generating hidden states that are useful for adaptation. Such a memory organization is compatible with many standard networks, including CNNs, and can be achieved by merely swapping LSTM memory cells in place of existing filters.
Our simple approach also contrasts with several existing memory-based meta-learning methods used in both generative and classification tasks [13–17]. These methods view memory as a means to store and retrieve useful inductive biases for task adaptation, and hence focus on designing better read and write protocols. They typically have a feature extractor that feeds into a memory network that performs adaptation, whereas our architecture makes no such distinction between stages.
We test the efficacy of network architectures with distributed memory cells on online few-shot and continual learning tasks as in Santoro et al. [13], Ren et al. [18] and Javed and White [6]. The online setting is challenging for two reasons: 1) It is empirically observed that networks are not well suited for training/adaptation with a batch size of one [19]; 2) In this setting the model has to adapt to one image at a time step, thus having to deal with a prolonged adaptation phase. For these reasons, we see these tasks as suitable for evaluating the adaptation capabilities of the hidden states generated by the network.
We empirically observe that our method outperforms strong gradient-based and prototypical baselines, delineating the efficacy of the local adaptation rule learnt by each layer. Particularly important is the distributed nature of our memory, which allows every network layer to adapt when provided with label information; in comparison, restricting adaptability to only later network layers delivers far less compelling performance. These results suggest that co-design of memory architecture and meta-training strategies should be a primary consideration in the ongoing development of memory-based meta-learning. We further test our model in a harder online few-shot learning scenario, wherein the corresponding label for a sample arrives after a long delay [20]. Our method adapts seamlessly, without requiring any changes to the model, while other adaptation strategies struggle in this setting. These results highlight promising directions for advancing and simplifying meta-learning by relying upon distributed memory for adaptation.
2 Related Work
Early work on meta-learning introduces many relevant concepts. Schmidhuber [21] proposes using task-specific weights, called fast weights, and weights that are adapted across tasks, called slow weights. Bengio et al. [2] update the network via a learning rule which is parameterized by another neural network. Thrun [22] presents meta-learning in a life-long scenario, where the algorithm accrues information from past experiences to adapt effectively for the task at hand. Hochreiter et al. [23] train a memory network to learn its own adaptation rule via just its recurrent states. These high-level concepts can be seen in more recent methods. We group current meta-learning methods based on the nature of the adaptation strategy and discuss them below.
Gradient-based Adaptation Methods. Methods that adapt via gradients constitute a prominent class of meta-learning algorithms [9]. Model-agnostic meta-learning (MAML) [4] learns an initialization that can efficiently be adapted by gradient descent for a new task. Finn et al. [24] focus on learning a network that can use experience from previously seen tasks for current-task adaptation. They adapt to the current task by using a network that is MAML pre-trained on the samples from the previous task. Nagabandi et al. [25] and Caccia et al. [26] perform online adaptation under non-stationary distributions, either by using a mixture model or by spawning a MAML pre-trained network when the input distribution changes. Javed and White [6] and Beaulieu et al. [27] employ a bi-level optimization routine similar to MAML, except the outer-loop loss is catastrophic forgetting. They thereby learn representations that are robust to forgetting and accelerate future learning under online updates.
Memory and Gradient-based Adaptation. Andrychowicz et al. [28], Ravi and Larochelle [10] learn an update rule for network weights by transforming gradients via a LSTM, which outperforms human-designed and fixed SGD update rules. Munkhdalai and Yu [29] learn a transform that maps gradients to fast (task specific) weights, which are stored and retrieved via attention during evaluation. They update slow weights (across task weights) at the end of each task.
Prototypical Methods. These methods learn an encoder which projects training data to a metric space, and obtain class-wise prototypes via averaging representations within the same class. Following this, test data is mapped to the same metric space, wherein classification is achieved via a simple rule (e.g., nearest neighbor prototype based on either euclidean distance or cosine similarity) [5, 30, 31]. These methods are naturally amenable for online learning as class-wise prototypes can be updated in an online manner as shown by Ren et al. [18].
Memory-based Adaptation. Santoro et al. [13] design efficient read and write protocols for a Neural Turing Machine [32] for the purposes of online few-shot learning. Rae et al. [33] design sparse read and write operations, making them scalable in both time and space. Ramalho and Garnelo [11] use logits generated by the model to decide if a certain sample is written to neural memory. Mishra et al. [7] employ an attention-based mechanism to perform adaptation, and use a CNN to generate features for the attention mechanism. Their model requires storing samples across all time steps explicitly, thereby violating the online-learning assumption of being able to access each sample only once. All of these methods mainly focus on designing better memory modules, either via more recent attention mechanisms or via better read and write rules for neural memory. These methods typically use a CNN which is not adapted for the current task. Our approach differs from these methods in that we study efficient organization of memory for both online few-shot learning and meta-learning more generally, and show that, as a consequence of our distributed memory organization, the entire network is capable of effective adaptation when provided with relevant feedback.
Kirsch and Schmidhuber [34] introduce an interesting form of weight sharing wherein LSTM cells (with tied weights) are distributed throughout the width and depth of the network; however, each position has its own hidden state. Further, they have backward connections from the later layers to the earlier layers, enabling the network to implement its own learning algorithm or clone a human-designed learning algorithm such as backprop. Both our model and theirs implement an adaptation strategy purely using the recurrent states. The difference, however, is in the nature of the adaptation strategy implemented in the recurrent states. Similar to conventional learning algorithms, their backward connections help propagate error from the last layer to the earlier layers. In our architecture, the feedback signal is presented as another input, propagated from the first layer to the last layer.
In addition to being used in classification settings, Guez et al. [35] employ a memory-based meta-learning approach to perform adaptation for reinforcement learning tasks, indicating the generality of using memory as a means for adaptation.
Few-shot Semantic Segmentation. Few-shot segmentation methods commonly rely on prototypes [36, 37], though recent approaches include gradient-based methods analogous to MAML [38]. Methods that use neural memory typically employ it in the final network stages to fuse features of different formats for efficient segmentation: Li et al. [39] use ConvLSTMs [40] to fuse features from different stages of the network; Valipour et al. [41] fuse spatio-temporal features while segmenting videos; Hu et al. [42] use a ConvLSTM to fuse features of the query with features of the support set; Azad et al. [43] use a bidirectional ConvLSTM to fuse segmentations derived from multiple scale-space representations. We differ from these works in the organization of, use of, and information provided to the memory module: 1) memory is distributed across the network as the sole driver of adaptation; 2) label information is provided to assist with adaptation.
Meta-learning Benchmarks. Caccia et al. [26] present benchmarks that measure the ability of a model to adapt to a new task using the inductive biases it has acquired from solving previously seen tasks. More specifically, the benchmark presents an online non-stationary stream of tasks, and the model's ability to adapt to a new task at each time step is evaluated. Note that they do not measure the model's ability to remember earlier tasks; they only want the model to adapt well on a newly presented task.
Antoniou et al. [44] present benchmarks for continual few-shot learning. The network is presented a number of few-shot tasks, one after the other, and is then expected to generalize even to the previously seen tasks. This is a challenging and interesting setup, in that the network has to show robustness to catastrophic forgetting while learning from limited data. However, we are interested in evaluating the online adaptation ability of models, while Antoniou et al. [44] feed data in a batch setting. We follow the experimental setup of Javed and White [6], wherein the model is required to remember inductive biases acquired over a longer time frame compared to the setup used by Antoniou et al. [44].
3 Methodology
3.1 Problem: Online Few-shot Learning
This setting combines facets of online and few-shot learning: the model is expected to make predictions on a stream of input samples, while it sees only a few samples per class in the given input stream. In particular, we use a task protocol similar to Santoro et al. [13]. At time step $i$, an image $x_i$ is presented to the model and it makes a prediction for $x_i$. At the following time step, the correct label $y_i$ is revealed to the model. The model's performance depends on the correctness of its prediction at each time step. The following ordered set constitutes a task: $T = \big( (x_1, \text{null}), (x_2, y_1), \cdots, (x_t, y_{t-1}) \big)$. Here, null indicates that no label is passed at the first time step, and $t$ is the total number of time steps (length) of the task. For a $k$-way $N$-shot task, $t = k \times N$. The entire duration of the task is considered the adaptation phase, since at every time step the model receives a new sample and must adapt to it to improve its understanding of the class concepts.
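For concreteness, one such task could be sampled as below; the helper and its interface are our own illustrative choices:

```python
import random

# Sample one k-way N-shot online task under this protocol: images are shuffled,
# labels trail the images by exactly one time step, and None stands for `null`.
def sample_online_task(dataset_by_class, k_way=5, n_shot=10):
    classes = random.sample(list(dataset_by_class), k_way)
    pool = [(x, y) for y, c in enumerate(classes)
            for x in random.sample(dataset_by_class[c], n_shot)]
    random.shuffle(pool)                          # t = k_way * n_shot time steps
    images, labels = zip(*pool)
    return [(images[i], labels[i - 1] if i > 0 else None)
            for i in range(len(pool))]
```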
3.2 Memory as Adaptation in Activation Space
Consider modulating the output of a network $F$ for input $x$ with a persistent state $h$: $u = F(x, h)$. If adding $h$ aids in realizing a better representation $u$ than otherwise ($F(x)$), we can view this as adaptation in activation space. In Figure 1, model $F^*$ adapts to tasks using its persistent states $h$. Specifically, consider the generic LSTM update $Wx + W'h_{t-1}$: we can view $Wx$ as the original response and $W'h_{t-1}$ as modulation by a persistent state (memory) in activation space. So, for the online learning task at hand, we seek to train an LSTM which learns to generate a hidden state $h_i$ at each time step $i$, such that it enables better adaptation in ensuing time steps. We note that adaptation in activation space has been discussed in earlier works; we use this perspective to organize memory better and to enable effective layer-wise adaptation across the network.
3.3 Model
Architecture. We distribute memory across the layers of the network, in order to enable the layers to learn local layer-wise adaptation rules. In particular, we use a model in which each layer of the feature extractor is a convolutional LSTM (CL) [40] followed by a LSTM [45] and a classifier, as shown in Figure 2.
Similar to the LSTM, each convolutional LSTM (CL) layer consists of its own input, forget, and output gates. The key difference is that convolution operations (denoted by $*$) replace matrix-vector multiplication. In this setup, we view the addition of $W_{hi} * h_{t-1}$ as adaptation at time step $t$ within the input gate; the same view extends to the other gates as well. The cell and hidden state generation are likewise similar to the LSTM, but use convolution operations:
$i_t = \sigma(W_{ii} * x_t + W_{hi} * h_{t-1})$   (1)
$f_t = \sigma(W_{if} * x_t + W_{hf} * h_{t-1})$   (2)
$o_t = \sigma(W_{io} * x_t + W_{ho} * h_{t-1})$   (3)
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{ig} * x_t + W_{hg} * h_{t-1})$   (4)
$h_t = o_t \odot \tanh(c_t)$   (5)
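A minimal PyTorch rendering of such a cell is sketched below for illustration; the fused four-gate convolution and the (default) bias terms are our implementation choices, not specified in the text:

```python
import torch
import torch.nn as nn

# Minimal ConvLSTM cell implementing Eqs. (1)-(5), with all four gates computed
# by one fused convolution over the concatenated input and hidden state.
class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                   # h_{t-1}, c_{t-1}
        gates = self.conv(torch.cat([x, h], dim=1))    # W_i* x_t + W_h* h_{t-1}
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)  # Eq. (4)
        h = torch.sigmoid(o) * torch.tanh(c)                         # Eq. (5)
        return h, (h, c)
```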
In initial experiments, we observe that these models did not train well on tasks with 50 time steps. We hypothesize that this could be due to the same network being repeated 50 times, thereby inducing an effectively very deep network. We resolve this issue by adding skip connections between the second and fourth layers (omitted in Figure 2). Further discussion is in Appendix B.
Label Encoding. As label information is essential for learning an adaptation rule, we inject labels, offset by one time step, into the ConvLSTM feature extractor and the LSTM. This gives each layer the opportunity to learn an adaptation rule. For a $k$-way classification problem involving images of spatial resolution $s$, we feed the label information as a $k \times s^2$ matrix with all ones in the $c$-th row if $c$ is the true label. We reshape this matrix into a $k \times s \times s$ tensor and concatenate it along the channel dimension of the image at the next time step. To the LSTM layer, we feed the label in its one-hot form by concatenating it with the flattened activations from the previous layer.
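In code, this label injection amounts to the following (a sketch with our own helper name):

```python
import torch

# Expand the previous step's class index into a k x s x s tensor (plane c set
# to all ones) and concatenate it to the current image along the channels.
def inject_label(image, prev_class, k_way):
    _, s, _ = image.shape                        # assumes s x s spatial size
    label = torch.zeros(k_way, s, s)
    if prev_class is not None:                   # null label at the first step
        label[prev_class] = 1.0
    return torch.cat([image, label], dim=0)      # (C + k, s, s)
```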
3.4 Training and Evaluation
Following Santoro et al. [13], we perform episodic training by exposing the model to a variety of tasks from the training distribution P(Ttrain). For a given task, the model incurs a loss Li at every time step of the task; we sum these losses and backpropagate through the sum at the end of the task. This is detailed in Algorithm 1 in Appendix A. We evaluate the model using a partition of the dataset that is class-wise disjoint from the training partition. The model makes a prediction at every time step and adapts to the sequence by using its own hidden states, thereby not requiring any gradient information for adaptation. Algorithm 2 in Appendix A provides details.
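A compact sketch of this procedure, with placeholder names (see Algorithms 1 and 2 in Appendix A for the full versions):

```python
# Episodic training: accumulate the per-step losses over a task and
# backpropagate once at the end of the task.
for task in sample_tasks(train_distribution):
    state, loss = model.init_state(), 0.0
    for x, y_prev, y_true in task:              # y_prev is the one-step-delayed label
        logits, state = model(x, y_prev, state)
        loss = loss + criterion(logits, y_true) # sum of L_i over time steps
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```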
4 Experiments
4.1 Online Few-Shot Learning
We use CIFAR-FS [46] and Omniglot [47] datasets for our few-shot learning tasks; see Appendix A for details. We adopt the following methods to serve as baselines for comparison.
LSTM and NTM. Santoro et al. [13] use an LSTM and an NTM [32] with read and write protocols for the task of online few-shot learning. Both aim to meta-learn tasks by employing a neural memory.
Adaptive Posterior Learning (APL). Ramalho and Garnelo [11] propose a memory-augmented model that stores data point embeddings based on a measure of surprise, which is computed by the loss incurred by each sample. During inference, they retrieve a fixed number of nearest-neighbor data embeddings, which are then fed to a classifier alongside the current sample.
Online Prototypical Networks (OPN). Ren et al. [18] extend prototypical networks to the online case, where they sequentially update the current class-wise prototypes using weighted averaging.
Contextual Prototypical Memory (CPM). Ren et al. [18] improve on OPN by learning a representation space that is conditioned on the current task. Furthermore, weights used to update prototypes are determined by a newly-introduced gating mechanism.
Table 1 shows that our model outperforms the baselines in most settings. These results suggest that the adaptation rules emergent from our design are more efficient than adaptation via prototypes, and adaptation via other memory-based architectures. In the CIFAR-FS experiments, the prototypical methods outperform our method only in the 1-shot scenario. As the 5-shot and 8-shot scenarios have a longer fine-tuning or adaptation phase, this shows that our method is more adept at handling tasks with longer adaptation phases. One reason could be that the stored prototypes which form the persistent state of OPN and CPM are more rigid than the persistent state of our method. The rigidity stems from the predetermined representation size of each prototype, which thereby prevents allocation of representation size depending upon classification difficulty. In our architecture, the network has the freedom to allocate representation size for each class as it deems fit. Consequently, this may help the network learn more efficient adaptation strategies that improve with time.
We examine the importance of distributed adaptation through ablation experiments that vary the layer into which we inject label information. Table 2 shows that models with feature extractors that do not receive label information are outperformed by the model whose earlier layers do receive label information (injecting into CL-1); the latter is even better than pre-trained models. By distributing memory across each layer and allowing label information to flow to each memory module, we enable every layer to learn its own adaptation rule. Here, the CNN baselines are pre-trained with MAML; these pre-trained networks replace the ConvLSTM part and are jointly trained with the LSTM (which receives the labels) and classifier. In these cases, we simply replace the ConvLSTM in Figure 2 with a CNN. During the meta-testing phase, the CNN is just a feature extractor and the burden of adaptation falls entirely on the LSTM. In CNN-F, we freeze the weights during meta-training. Our CL+LSTM, restricted to adapt only in the final layer (3rd row; label injection into the final LSTM layer only), performs comparably to the CNN baselines. The same model with full adaptivity (last row) outperforms them all.
4.2 Delayed Feedback
We consider a task similar to online few-shot classification (Section 3.1), except that labels are offset by a delay parameter rather than by one time step. Supposing the label delay is 3, the task $T$ is presented to the model as the sequence $T = \big( (x_1, \text{null}), (x_2, \text{null}), (x_3, \text{null}), (x_4, y_1), \cdots, (x_t, y_{t-3}) \big)$, where $t$ is the sequence length. The model must discern and account for the time delay.
Table 3 shows that our network can learn under these conditions, though performance decreases as the delay increases. This could be attributed to the difficulty of associating the hidden representation of a sample with the correct label, which creates a noisy environment for learning adaptation rules. We see that pre-training helps: we take our network pre-trained for a label delay of 1 and meta-train it on tasks with a label delay of 5. This improves accuracy enough to outperform the model trained directly with a label delay of 4. This could be because the necessary adaptation rules are already learnt by the pre-trained model, which then only has to learn the magnitude of the delay. Furthermore, comparing Tables 1 and 3, even with a delay of 2 our method outperforms CPM with no delay.
In this setting, our model can be used seamlessly, without any adjustments. Gradient-based and prototypical methods cannot be used as is: they would require storing the samples for the duration of the delay (violating the online assumption), causing memory usage to grow linearly with the delay; in contrast, it remains constant for our method. Further, to use prototypical or gradient-based methods, we would have to know the delay parameter in advance; our network learns the delay.
4.3 Online Continual Learning
We address the problem of continual learning in the online setting. In this setup, the model sees a stream of samples from a non-stationary task distribution, and it is expected to generalize well even while encountering samples from a previously seen task distribution. Concretely, for a single continual learning task we construct n subtasks from an underlying dataset and present samples to the model from the first subtask, then the second subtask, and so on until the n-th subtask, in that order. Once the model is trained on all n subtasks sequentially, it is expected to classify images from any of the subtasks, thereby demonstrating robustness to catastrophic forgetting [48].
Task Details. We use the Omniglot dataset for our experiments. Following [6], we define each subtask as learning a single class concept. In this protocol, a single online 5-way 5-shot continual learning task is defined as the ordered set $T = (T_1, T_2, T_3, T_4, T_5)$, where subtask $T_i$ contains 5 samples from one particular Omniglot class. After adaptation on these 5 subtasks (25 samples), we expect the model to classify samples from a query set consisting of samples from all of the subtasks. The performance of the model is the prediction accuracy on the query set. We experiment by varying the total number of subtasks from 5 to 20, as in Figure 3.
Training Details. We perform episodic training by exposing our model to a variety of continual learning tasks from the training partition. At the end of each continual learning task, the model incurs a loss on the query set. We update our model by backpropagating through this query-set loss. Note that during evaluation on the query set, we freeze the persistent states of our model in order to prevent any information leak across the query set. Since propagating gradients across long time steps renders training difficult, we train our model using a simple curriculum that increases the task length every 5K episodes. This improves generalization and convergence; Appendix C presents more details. Further, we shuffle the labels across tasks in order to prevent the model from memorizing the training classes. During evaluation, we sample tasks from classes the model has not encountered. The model adapts to the subtasks using just its hidden states and then predicts on the query set, which contains samples from all of the subtasks. We use the same class-wise disjoint train/test split as in Lake [47].
Baseline: Online Meta Learning (OML). Javed and White [6] adopt a meta-training strategy similar to MAML. They adapt deeper layers in the inner loop for the current task, while updating the entire network in the outer loop, based on a loss measuring forgetting. For our OML experiments we use a 4-layer CNN followed by two fully connected layers. Appendix C provides implementation details.
Baseline: A Neuromodulated Meta-Learning Algorithm (ANML). Beaulieu et al. [27] use a hypernetwork to modulate the output of the trunk network. In the inner loop, the trunk network is adapted via gradient descent. In the outer loop, they update both the hypernetwork and the trunk network on a loss measuring forgetting. For our ANML experiments, we use a 4-layer CNN followed by a linear layer as the trunk network, with a 3-layer hypernetwork modulating the activations of the CNN. They use 3 times as many parameters as our CL+LSTM model. Appendix C provides details.
Results. Figure 3 plots average accuracy as the length of the continual learning task increases. Task length is the number of subtasks within each continual learning task, ranging from 5 to 20 subtasks in our experiments. As expected, we observe that the average accuracy generally decreases with increased task length for all models. However, the CL+LSTM model's performance degrades more slowly than the baselines', suggesting that the model has learnt an efficient way of storing the inductive biases required to solve each of the subtasks effectively.
From Figure 4, we see that CL+LSTM is robust against forgetting, as the variance in performance across subtasks is low. This suggests that the CL+LSTM model learns adaptation rules that minimally interfere with other tasks.
Analysis of Computational Cost. During inference, our model does not require any gradient computation and relies fully on hidden states to perform adaptation. Consequently, it has lower computational requirements than gradient-based models, assuming adaptation is required at every time step. For a comparative case study, consider three models and their corresponding GFLOPs per forward pass: the OML baseline (1.46 GFLOPs); CL+LSTM (0.40 GFLOPs); and a 4-layer CNN (0.30 GFLOPs) with a parameter count similar to CL+LSTM. Here, we employ the standard methodology for estimating compute cost [49], with a forward and backward pass together incurring three times the operations of a forward pass alone.
We can extend these estimates to compute GFLOPs for the entire adaptation phase. Suppose we are adapting/updating our network on a task of length t iterations. The OML baseline and the 4-layer CNN (adapting via gradient descent) would consume 4.38t GFLOPs and 0.9t GFLOPs, respectively. Our CL+LSTM model would consume only 0.40t GFLOPs; here, we drop the factor of three while computing GFLOPs for the CL+LSTM model, since it requires no gradient computation for adaptation. During training, we lose this advantage, since we perform backpropagation through time, making the computational cost similar to computing meta-gradients.
4.4 Online few-shot Semantic Segmentation
These experiments investigate the efficacy and applicability of adaptation via persistent states to a challenging segmentation task and analyze the effectiveness of label injection for segmentation.
Task Details. We consider a binary segmentation task: we present the model a sequence of images, one at each time step (as in Figure 5), and the model must either segment or mask out each image depending on whether it is a distractor. As in the classification tasks, we append the ground-truth segmentation information along the channel dimension, offset by 1 time step: at the first time step we concatenate an all −1 matrix as a null label; at the next time step we concatenate the actual ground truth of the image from time step 1. If the image is to be segmented, we concatenate the ground-truth binary mask of the object and the background in the form of a binary matrix. If it is a distractor image, we concatenate an all-zeros matrix, indicating that the entire image should be masked out. The k-shot segmentation score is the IoU of the predicted segmentation the (k+1)-th time the model sees the object to be segmented. For the k-shot masking score, we compute the fraction of the object that has been masked when the model sees the distractor image for the (k+1)-th time. We sample our episodes from the FSS1000 dataset [50]; more dataset details are in Appendix D.
The construction of this task avoids zero-shot transfer of inductive biases required for segmentation and forces the model to rely on the task data to learn which objects are to be segmented.
Training Details. We augment a 10-layer U-Net-like CNN [51] with memory cells in each layer by converting each convolution into a convolutional LSTM; we refer to the result as CL U-Net (architecture details in Appendix D). We utilize episodic training, where each episode is an online few-shot segmentation task as in Figure 5, with 18 time steps in total (9 segmentation images and 9 distractors). We follow a simple training curriculum: for the first 100k episodes we train without any distractors; for the next 100k episodes we train with distractors as in Figure 5. Further training details are in Appendix D. The episodes presented during evaluation contain novel classes.
Baselines. We use a 10-layer U-Net-like CNN pre-trained with MAML for segmentation without any distractors (architecture details in Appendix D). We use this model as our fine-tuning CNN baseline, in that we fine-tune it on the online stream of images using gradient descent at each time step. From Table 4, we see that the model fails to mask out the distractors, indicating its inability to adapt to the online feed.
From Table 4, we see that the CL U-Net variants are capable of effective online adaptation; both models can segment and mask images. However, we observe that providing label information at the first layer significantly boosts performance, thereby bolstering our claim that effective task adaptation can be achieved by providing relevant feedback to a network containing distributed memory.
4.5 Standard Supervised Learning
Finally, we assess whether our proposed model can be directly employed in a classic supervised learning setting, i.e., without requiring modifications to the architecture design. The central motivation behind these experiments is to see whether meta-learning methods can be applied to standard supervised learning tasks without any change in methodology. Hence, in a setting where a priori knowledge of whether the task at hand is a standard supervised learning task or a meta-learning task is unavailable, we could use ConvLSTM models. This is similar to the experiments done in [11], which try to close the gap between standard supervised learning approaches and their meta-learning method applied to standard supervised learning tasks.
We use CIFAR data as our standard supervised learning benchmark [52]; further dataset details are in Appendix E. We use standard networks such as VGG [53] and ResNet [54] as our baselines. In Table 5, we observe that CL variants perform comparably in most cases. This affirms that the ConvLSTM model is capable of handling a conventional supervised learning scenario without any change in training procedure. Even in the absence of temporal signal, ConvLSTMs can still operate well. This is interesting since direct application of gradient-based meta-learners to the conventional supervised learning setting would require optimizing through a prohibitively long inner loop.
5 Conclusion
Our results highlight distributed memory architectures as a promising technical approach to recasting the problem of meta-learning as simply learning with memory-augmented models. This view has potential to eliminate the need for ad-hoc design of mechanisms or optimization procedures for task adaptation, replacing them with generic and general-purpose memory modules. Our ablation studies show the effectiveness of distributing memory throughout a deep neural network (resulting in an increased capacity for adaptation), rather than limiting it to a single layer or final classification stage.
We demonstrate that standard LSTM cells, when provided with relevant feedback, can act as a basic building block of a network designed for meta-learning. On a wide variety of tasks, a distributed memory architecture can learn adaptation strategies that outperform existing methods. The applicability of a purely memory-based network to online semantic segmentation points to the untapped versatility and efficacy of adaptation enabled by distributed persistent states.
Acknowledgments and Disclosure of Funding
We thank Greg Shakhnarovich and Tri Huynh for useful comments. This work was supported in part by the University of Chicago CERES Center. The authors have no competing interests.

1. What is the main contribution of the paper, and how does it differ from previous works in the online few-shot setting?
2. How does the proposed method manage the problem of delayed labels, and what are the limitations of this approach?
3. What are the computational costs and memory issues associated with the proposed system, and how do they impact its scalability?
4. Are there simpler ways to manage the online few-shot setting without relying on complex memory mechanisms? If so, why are these approaches not as competitive as the proposed method?
5. How does the proposed method compare to recent works in online few-shot learning that have investigated different formulations and settings?
6. What are the advantages and disadvantages of using Omniglot and CIFAR-FS as evaluation benchmarks for the proposed method?
7. How would the authors address the issue of outdated benchmarks and limited diversity in their evaluation?
8. Can the authors provide empirical evidence to justify the use of a memory mechanism in their proposed system?

Summary Of The Paper
In this paper the authors propose a distributed memory system to tackle the online few-shot setting. The system is based on layers of conv-LSTMs that take as input an image and the labels at previous steps to generate activations and store information.
Overall, I think the technical contribution is marginal. Both the system structure and the problem formulation make the method impractical and difficult to scale to more complex settings. Experiments are limited to small datasets and do not show the potential of the proposed solution. Limitations are not appropriately discussed, and a better framing w.r.t. more recent online few-shot learning work is missing. In its current form, the paper is not ready to be accepted. I am open to changing my mind and increasing my score if the authors provide a strong rebuttal and a satisfactory answer to the concerns I have outlined below.
Review
Clarification on the importance of having a memory. I have checked Algorithm 1 in the appendix and the details of how tasks are sampled and aggregated in Appendix A.1. The authors wrote: "we sample 10-shot 5-way online few-shot tasks by sampling 5 classes and 10 samples per class (a total of 50 images), which are fed to the model at a rate of one image per time step". My understanding is that those 50 images are shuffled and presented in a random order. I am wondering if there are simpler ways to manage this setting, for instance by just accumulating the classes seen so far into bins and then passing the histogram instead of the one-hot label. This could be done by simply summing the one-hot vectors seen so far (and optionally normalizing by the total number of images in the sequence). One could have a standard neural network that makes a prediction by using the image at time t and the accumulated histogram. If I am correct in my assumptions, then this seems to be a trivial baseline to try in order to justify the use of a memory. In other words, why do we need a complex memory mechanism if the only information that matters is the current image and the labels accumulated so far in the sequence? The authors should clarify this point and provide empirical evidence that this trivial baseline is not as competitive as the proposed method. A sketch of such a baseline follows.
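For clarity, a minimal rendering of the baseline the reviewer describes (purely illustrative code, not from the paper):

```python
import torch

# Accumulate the one-hot labels seen so far into a histogram and feed it,
# together with the current image, to a plain feedforward network.
def histogram_baseline_step(net, image, label_hist, prev_one_hot):
    if prev_one_hot is not None:
        label_hist = label_hist + prev_one_hot        # labels observed so far
    hist_in = label_hist / max(float(label_hist.sum()), 1.0)
    logits = net(image, hist_in)                      # no recurrent memory involved
    return logits, label_hist
```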
Problem setting. The particular formulation used in this work has been used previously, for instance in [1], and it is based on the key assumption that labels are delayed by a given factor. However, this formulation is just one of the possible ways meta-learning can be applied to the online setting, and it may suffer from several shortcomings when used in practice. For instance, in many real-world settings the feedback can be delayed by a variable factor, it can change between train and evaluation stages, and it can even be overwritten by inputs belonging to different classes. Recent work has investigated online few-shot learning in some of those challenging settings, but the authors did not mention or discuss it. I suggest including these papers [2,3,4] with an explanation of the advantages/disadvantages of the formulation used by the authors.
Computational cost. A satisfactory discussion of computational costs and memory issues has not been provided. The system seems quite expensive in terms of computational resources. The authors pointed out in lines 154-157 some stability issues that they assume are due to: "the same network being repeated 50 times". If backpropagation is performed from the last step to the first one, it will require the storage of all the activations along the way. This is a serious limitation, the model may not scale well to large images and deeper backbones due to this bottleneck. A discussion of all these issues is expected.
Datasets. The datasets used for the evaluation of the proposed method are Omniglot and CIFAR-FS. I think that Omniglot is starting to be outdated and has been replaced by more sophisticated benchmarks in recent publications (e.g., Meta-Dataset [5], SlimImageNet [2]). CIFAR-FS also has some serious limitations (limited diversity, small resolution, etc.) when compared to recent benchmarks. The authors are encouraged to move towards more recent benchmarks if they want to fully showcase the potential of the proposed solution.
References
[1] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., & Lillicrap, T. (2016). One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065.
[2] Antoniou, A., Patacchiola, M., Ochal, M., & Storkey, A. (2020). Defining benchmarks for continual few-shot learning. arXiv preprint arXiv:2004.11967.
[3] Caccia, M., Rodriguez, P., Ostapenko, O., Normandin, F., Lin, M., Caccia, L., ... & Charlin, L. (2020). Online fast adaptation and knowledge accumulation: a new approach to continual learning. arXiv preprint arXiv:2003.05856.
[4] Ren, M., Iuzzolino, M. L., Mozer, M. C., & Zemel, R. S. (2020). Wandering within a world: Online contextualized few-shot learning. arXiv preprint arXiv:2007.04546.
[5] Triantafillou, E., Zhu, T., Dumoulin, V., Lamblin, P., Evci, U., Xu, K., ... & Larochelle, H. (2019). Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096. |
NIPS | Title
Look More but Care Less in Video Recognition
Abstract
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize fewer frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. Our code is available at https://github.com/BeSpontaneous/AFNet-pytorch.
1 Introduction
Online videos have grown rapidly in recent years, and video analysis is necessary for many applications such as recommendation [6], surveillance [4, 5] and autonomous driving [31, 17]. These applications require not only accurate but also efficient video understanding algorithms. With the introduction of deep learning networks [3] in video recognition, there has been rapid advancement in the performance of methods in this area. Though successful, these deep learning methods often incur huge computational cost, making them hard to deploy in the real world.
In video recognition, we need to sample multiple frames to represent each video, which makes the computational cost scale proportionally to the number of sampled frames. In most cases, only a small proportion of all the frames is sampled for each input, which contains limited information about the original video. A straightforward remedy is to feed more frames to the network, but the computation expands proportionally to the number of sampled frames.
Some recent works dynamically sample salient frames [29, 16] for higher efficiency. The selection step of these methods is made before the frames are sent to the classification network, which means the information of the unimportant frames is totally lost, and the selection procedure consumes considerable time. Other methods address the spatial redundancy in action recognition by adaptively resizing the resolution based on the importance of each frame [23], or by cropping the most salient patch for every frame [28]. However, these methods still completely abandon the information that the network recognizes as unimportant, and they introduce a policy network to make decisions for each sample, which leads to extra computation and complicates the training strategy.
∗Corresponding Author: markcheung9248@gmail.com.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In our work, we take a different perspective from previous works: we propose a method that makes frame selection within the classification network. As shown in Figure 1, we design an architecture called Ample and Focal Network (AFNet), which is composed of two branches: the ample branch takes a glimpse of all the input features with lightweight computation, as we downsample the features to a smaller resolution and further reduce the channel size; the focal branch receives guidance from the proposed navigation module to squeeze the temporal size by computing only on the selected frames to save cost; in the end, we adaptively fuse the features of these two branches to prevent the loss of information from the unselected frames.
In this manner, both branches are very lightweight, and we enable AFNet to look broadly by sampling more frames while staying focused on the important information for less computation. Considering these two branches in a uniform manner, on the one hand, we can avoid the loss of information compared to other dynamic methods, as the ample branch preserves the information of all the input; on the other hand, we can restrain the noise from the unimportant frames by deactivating them in each convolutional block. Further, we demonstrate that the dynamic selection strategy at intermediate features is beneficial for temporal modeling, as it implicitly implements frame-wise attention, which enables our network to utilize fewer frames while obtaining higher accuracy. In addition, instead of introducing a policy network to select frames, we design a lightweight navigation module that can be plugged into the network, so our method can easily be trained in an end-to-end fashion. Furthermore, AFNet is compatible with spatially adaptive works, which can help to further reduce the computation of our method.
We summarize the main contributions as follows:
• We propose an adaptive two-branch framework which enables 2D-CNNs to process more frames with less computational cost. With this design, we not only prevent the loss of information but also strengthen the representation of essential frames.
• We propose a lightweight navigation module to dynamically select salient frames at each convolution block which can easily be trained in an end-to-end fashion.
• The selection strategy at intermediate features not only empowers the model with strong flexibility as different frames will be selected at different layers, but also enforces implicit temporal modeling which enables AFNet to obtain higher accuracy with fewer frames.
• We have conducted comprehensive experiments on five video recognition datasets. The results show the superiority of AFNet compared to other competitive methods.
2 Related Work
2.1 Video Recognition
The development of deep learning in recent years has served as a huge boost to research on video recognition. A straightforward method for this task is to use 2D-CNNs to extract the features of sampled frames and use specific aggregation methods to model the temporal relationships across frames. For instance, TSN [27] proposes to average the temporal information between frames, while TSM [20] shifts channels between adjacent frames to allow information exchange along the temporal dimension. Another approach is to build 3D-CNNs for spatiotemporal learning, such as C3D [26], I3D [3] and SlowFast [8]. Though shown to be effective, methods based on 3D-CNNs are computationally expensive, which brings great difficulty to real-world deployment.
While the two-branch design has been explored by SlowFast, our motivation and detailed structure are different from it in the following ways: 1) network category: SlowFast is a static 3D model, but
AFNet is a dynamic 2D network; 2) motivation: SlowFast aims to collect semantic information and changing motion with branches at different temporal speeds for better performance, while AFNet aims to dynamically skip frames to save computation, and the two-branch structure is designed to prevent information loss; 3) specific design: AFNet downsamples features for efficiency in the ample branch, while SlowFast processes features at the original resolution; 4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, whereas AFNet is a 2D model endowed with implicit temporal modeling by the designed navigation module.
2.2 Redundancy in Data
The efficiency of 2D-CNNs has been broadly studied in recent years. While some works aim at designing efficient network structures [13], another line of research focuses on reducing the intrinsic redundancy in image-based data [32, 11]. In video recognition, people usually sample a limited number of frames to represent each video to avoid excessive computational cost. Even so, the computation for video recognition remains a heavy burden, and a common strategy to address this problem is reducing the temporal redundancy in videos, as not all frames are essential to the final prediction. [33] proposes to use reinforcement learning to skip frames for action detection. Other works [29, 16] dynamically sample salient frames to save computational cost. As spatial redundancy widely exists in image-based data, [23] adaptively processes frames at different resolutions, and [28] crops the most salient patch for each frame. However, the unselected regions or frames in these works are completely abandoned, so some information is lost in their designed procedures. Moreover, most of these works adopt a policy network to make dynamic decisions, which introduces additional computation and splits the training into several stages. In contrast, our method adopts a two-branch design, allocating different computational resources based on the importance of each frame and preventing the loss of information. Besides, we design a lightweight navigation module to guide the network where to look, which can be incorporated into the backbone network and trained in an end-to-end way. Moreover, we validate that dynamic frame selection at intermediate features not only empowers the model with strong flexibility, as different frames are selected at different layers, but also results in learned frame-wise weights which enforce implicit temporal modeling.
3 Methodology
Intuitively, considering more frames enhances temporal modeling but results in higher computational cost. To achieve competitive performance efficiently, we propose AFNet to involve more frames but wisely extract information from them to keep the computational cost low. Specifically, we design a two-branch structure to treat frames differently based on their importance and to process the features in an adaptive manner, which provides our method with strong flexibility. Besides, we demonstrate that the dynamic selection of frames in the intermediate features results in learned frame-wise weights, which can be regarded as implicit temporal modeling.
3.1 Architecture Design
As shown in Figure 2, we design our Ample and Focal (AF) module as a two-branch structure: the ample branch (top) processes abundant features of all the frames at a lower resolution and with a squeezed channel size; the focal branch (bottom) receives guidance from the ample branch, generated by the navigation module, and computes only on the selected frames. This design can be conveniently applied to existing CNN structures to build the AF module.
Ample Branch. The ample branch is designed to involve all frames with cheap computation, serving as 1) guidance to select salient frames, helping the focal branch concentrate on important information; 2) a complementary stream to the focal branch, preventing information loss via a carefully designed fusion strategy.
Formally, we denote video sample $i$ as $v_i$, containing $T$ frames: $v_i = \{f_1^i, f_2^i, \ldots, f_T^i\}$. For convenience, we omit the superscript $i$ in the following sections if no confusion arises. We denote the input of the ample branch as $v_x \in \mathbb{R}^{T \times C \times H \times W}$, where $C$ represents the channel size and $H \times W$ is the spatial size. The features generated by the ample branch can be written as:

$$v_{y^a} = \mathcal{F}^a(v_x), \qquad (1)$$

where $v_{y^a} \in \mathbb{R}^{T \times (C_o/2) \times (H_o/2) \times (W_o/2)}$ represents the output of the ample branch and $\mathcal{F}^a$ stands for a series of convolution blocks, while the channel, height and width at the focal branch are denoted as $C_o$, $H_o$, $W_o$ correspondingly. We set the stride of the first convolution block to 2 to downsample the resolution of this branch, and we upsample the feature at the end of this branch by nearest interpolation.
Navigation Module. The proposed navigation module is designed to guide the focal branch where to look by adaptively selecting the most salient frames for video $v_i$.

Specifically, the navigation module generates a binary temporal mask $L_n$ using the output of the $n$-th convolution block in the ample branch, $v_{y^a_n}$. First, average pooling is applied to $v_{y^a_n}$ to resize the spatial dimension to $1 \times 1$; then a convolution transforms the channel size to 2:

$$\tilde{v}_{y^a_n} = \mathrm{ReLU}\left(\mathrm{BN}\left(W_1 * \mathrm{Pool}\left(v_{y^a_n}\right)\right)\right), \qquad (2)$$

where $*$ stands for convolution and $W_1$ denotes the weights of the $1 \times 1$ convolution. After that, we reshape the feature $\tilde{v}_{y^a_n}$ from $T \times 2 \times 1 \times 1$ to $1 \times (2 \times T) \times 1 \times 1$ so that we can model the temporal relations of each video along the channel dimension:

$$p_n^t = W_2 * \tilde{v}_{y^a_n}, \qquad (3)$$

where $W_2$ represents the weights of the second $1 \times 1$ convolution, which generates a two-way logit $p_n^t \in \mathbb{R}^2$ for each frame $t$ denoting whether to select it. However, directly sampling from such a discrete distribution is non-differentiable. In this work, we apply Gumbel-Softmax [14] to resolve this non-differentiability. Specifically, we generate a normalized categorical distribution using Softmax:
$$\pi = \left\{ l_j \;\middle|\; l_j = \frac{\exp\left(p_n^{t_j}\right)}{\exp\left(p_n^{t_0}\right) + \exp\left(p_n^{t_1}\right)} \right\}, \qquad (4)$$

and we draw discrete samples from the distribution $\pi$ as:

$$L = \arg\max_j \left(\log l_j + G_j\right), \qquad (5)$$

where $G_j = -\log(-\log U_j)$ is sampled from a Gumbel distribution and $U_j$ is sampled from $\mathrm{Unif}(0,1)$, the uniform distribution. As $\arg\max$ cannot be differentiated, we relax the discrete sample $L$ in backpropagation via Softmax:

$$\hat{l}_j = \frac{\exp\left(\left(\log l_j + G_j\right)/\tau\right)}{\sum_{k=1}^{2} \exp\left(\left(\log l_k + G_k\right)/\tau\right)}, \qquad (6)$$

The distribution $\hat{l}$ becomes a one-hot vector as the temperature factor $\tau \to 0$, and we let $\tau$ decrease from 1 to 0.01 during training.
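To make this procedure concrete, the following is a minimal PyTorch sketch of such a navigation module. It is our own illustration rather than the authors' released code, so the layer sizes, the (B·T, C, H, W) batch layout, and the module name are assumptions; `torch.nn.functional.gumbel_softmax` is used as a stand-in for Equations 4-6 (with `hard=True` it returns straight-through one-hot samples).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NavigationModule(nn.Module):
    """Illustrative sketch of the temporal navigation module (Eqs. 2-6)."""

    def __init__(self, in_channels: int, num_frames: int):
        super().__init__()
        # 1x1 convolution reducing the channel size to 2 (Eq. 2)
        self.conv1 = nn.Conv2d(in_channels, 2, kernel_size=1)
        self.bn = nn.BatchNorm2d(2)
        # 1x1 convolution over the reshaped (2*T)-channel tensor (Eq. 3)
        self.conv2 = nn.Conv2d(2 * num_frames, 2 * num_frames, kernel_size=1)
        self.num_frames = num_frames

    def forward(self, v: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # v: (B*T, C, H, W) features from the ample branch
        b = v.shape[0] // self.num_frames
        x = F.adaptive_avg_pool2d(v, 1)                     # spatial size -> 1x1
        x = F.relu(self.bn(self.conv1(x)))                  # channel size -> 2
        x = x.view(b, 2 * self.num_frames, 1, 1)            # T x 2 -> 1 x (2T)
        logits = self.conv2(x).view(b, self.num_frames, 2)  # 2-way logit per frame
        # Straight-through Gumbel-Softmax sample (Eqs. 4-6)
        mask = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
        return mask[..., 1]                                 # (B, T): 1 = keep frame
```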
Focal Branch. The focal branch is guided by the navigation module to only compute the selected frames, which diminishes the computational cost and potential noise from redundant frames.
The features at the $n$-th convolution block in this branch are denoted as $v_{y^f_n} \in \mathbb{R}^{T \times C_o \times H_o \times W_o}$. Based on the temporal mask $L_n$ generated by the navigation module, for each video we select the frames with corresponding non-zero values in the binary mask and apply convolutional operations only on these extracted frames $v'_{y^f_n} \in \mathbb{R}^{T_l \times C_o \times H_o \times W_o}$:

$$v'_{y^f_n} = \mathcal{F}^f_n\left(v'_{y^f_{n-1}}\right), \qquad (7)$$

where $\mathcal{F}^f_n$ is the $n$-th convolution block of this branch, and we set the group number of the convolutions to 2 in order to further reduce computation. After the convolution operation at the $n$-th block, we generate a zero tensor that shares the same shape as $v_{y^f_n}$ and fill it by adding $v'_{y^f_n}$ and $v_{y^f_{n-1}}$ with the residual design following [12].
At the end of the two branches, inspired by [1, 11], we generate a weighting factor $\theta$ by pooling and linear layers to fuse the features from the two branches:

$$v_y = \theta \odot v_{y^a} + (1-\theta) \odot v_{y^f}, \qquad (8)$$

where $\odot$ denotes channel-wise multiplication.
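As a rough sketch of Equation 8, the fusion could look as follows in PyTorch; the exact pooling/linear design used to produce $\theta$ is not fully specified above, so the sigmoid gate below is an assumption on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Illustrative sketch of Eq. 8: channel-wise fusion of the two branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, v_a: torch.Tensor, v_f: torch.Tensor) -> torch.Tensor:
        # v_a: upsampled ample-branch features, v_f: focal-branch features,
        # both of shape (B*T, C, H, W)
        pooled = F.adaptive_avg_pool2d(v_a + v_f, 1).flatten(1)   # (B*T, C)
        theta = torch.sigmoid(self.fc(pooled))[:, :, None, None]  # (B*T, C, 1, 1)
        return theta * v_a + (1.0 - theta) * v_f                  # Eq. 8
```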
3.2 Implicit Temporal Modeling
While our work is mainly designed to reduce computation in video recognition like [28, 24], we demonstrate that AFNet enforces implicit temporal modeling through the dynamic selection of frames in the intermediate features. Consider a TSN [27] network adopting the vanilla ResNet [12] structure; the feature at the $n$-th convolutional block in each stage can be written as $v_n \in \mathbb{R}^{T \times C \times H \times W}$. Thus, the feature at the $(n+1)$-th block can be represented as:

$$v_{n+1} = v_n + \mathcal{F}_{n+1}(v_n) = (1 + \Delta v_{n+1})\, v_n, \qquad (9)$$

$$\Delta v_{n+1} = \frac{\mathcal{F}_{n+1}(v_n)}{v_n}, \qquad (10)$$

where $\mathcal{F}_{n+1}$ is the $(n+1)$-th convolutional block and we define $\Delta v_{n+1}$ as the coefficient learned by this block. We can then write the output of this stage, $v_N$, as:

$$v_N = \left[\prod_{n=2}^{N}(1 + \Delta v_n)\right] * v_1. \qquad (11)$$
Similarly, we define the features in the ample and focal branches as:

$$v_{y^a_N} = \left[\prod_{n=2}^{N}\left(1 + \Delta v_{y^a_n}\right)\right] * v_{y_1}, \qquad (12)$$

$$v_{y^f_N} = \left[\prod_{n=2}^{N}\left(1 + L_n * \Delta v_{y^f_n}\right)\right] * v_{y_1}, \qquad (13)$$

where $L_n$ is the binary temporal mask generated by Equation 5 and $v_{y_1}$ denotes the input of this stage. Based on Equation 8, we can get the output of this stage as:

$$v_{y_N} = \theta \odot v_{y^a_N} + (1-\theta) \odot v_{y^f_N} = \left\{\theta \odot \left[\prod_{n=2}^{N}\left(1 + \Delta v_{y^a_n}\right)\right] + (1-\theta) \odot \left[\prod_{n=2}^{N}\left(1 + L_n * \Delta v_{y^f_n}\right)\right]\right\} * v_{y_1}. \qquad (14)$$
As $L_n$ is a temporal-wise binary mask, it decides whether the coefficient $\Delta v_{y^f_n}$ is applied to each frame at every convolutional block. Since the whole stage is made up of multiple convolutional blocks, the repeated multiplication of the focal branch's output with the binary masks $L_n$ approximates soft weights. This results in learned frame-wise weights for each video, which we regard as implicit temporal modeling. Although we do not explicitly build any temporal modeling module, the generation of $L_n$ in Equation 3 already takes the temporal information into account, so the learned temporal weights amount to performing implicit temporal modeling at each stage.
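The effect can be checked numerically with a toy sketch (all quantities below are random stand-ins for the learned coefficients and masks, not values from the model): accumulating Equation 13 over the blocks of a stage yields a different multiplicative weight per frame, i.e., soft frame-wise weights.

```python
import torch

torch.manual_seed(0)
T, N = 4, 5                               # frames, blocks in a stage
delta = 0.1 * torch.rand(N, T)            # stand-ins for the coefficients Δv_n
L = torch.randint(0, 2, (N, T)).float()   # binary temporal masks L_n per block

# Accumulated focal-branch coefficient per frame: prod_n (1 + L_n * Δv_n), Eq. 13
weights = torch.prod(1.0 + L * delta, dim=0)
print(weights)  # differs across frames: effectively soft frame-wise weights
```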
3.3 Spatial Redundancy Reduction
In this part, we show that our approach is compatible with methods that aim to address spatial redundancy. We extend the navigation module by applying a procedure similar to the temporal mask generation and to the work [11], generating a spatial logit for the $n$-th convolution block, as shown in Figure 3:
$$q_n^t = W_4 * \left(\mathrm{Pool}\left(\mathrm{ReLU}\left(\mathrm{BN}\left(W_3 * v_{y^a_n}\right)\right)\right)\right), \qquad (15)$$

where $W_3$ denotes the weights of the $3 \times 3$ convolution and $W_4$ stands for the weights of the convolution with kernel size $1 \times 1$. After that, we again use Gumbel-Softmax to sample from the discrete distribution to generate a spatial mask $M_n$, navigating the focal branch to focus only on the salient regions of the selected frames to further reduce the cost.
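A hedged PyTorch sketch of this extension is given below; since the exact form of the pooling in Equation 15 is not detailed here, we substitute a stride-1 average pooling that keeps the spatial resolution, and the hidden channel size is our own choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialNavigation(nn.Module):
    """Illustrative sketch of Eq. 15: a 2-way logit per spatial location."""

    def __init__(self, in_channels: int, hidden: int = 16):
        super().__init__()
        self.conv3x3 = nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(hidden)
        self.conv1x1 = nn.Conv2d(hidden, 2, kernel_size=1)

    def forward(self, v: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # v: (B*T, C, H, W) ample-branch features
        x = F.relu(self.bn(self.conv3x3(v)))
        x = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)  # stand-in Pool
        logits = self.conv1x1(x)                                 # (B*T, 2, H, W)
        mask = F.gumbel_softmax(logits, tau=tau, hard=True, dim=1)
        return mask[:, 1]                                        # spatial mask M_n
```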
3.4 Loss functions
Inspired by [27], we take the average of each frame's prediction as the final output of the corresponding video, and our optimization objective is to minimize:
$$\mathcal{L} = \sum_{(v,y)}\left[-y \log\left(P(v)\right) + \lambda \cdot \sum_{n=1}^{N}\left(r - R_T\right)^2\right]. \qquad (16)$$
The first term is the cross-entropy between the predictions $P(v)$ for input video $v$ and the corresponding one-hot label $y$. In the second term, $r$ denotes the ratio of selected frames in every mini-batch and $R_T$ the target ratio set before training ($R_S$ is the target ratio when the navigation module is extended to reduce spatial redundancy). Adding the second loss term drives $r$ towards $R_T$, and we manage the trade-off between efficiency and accuracy through the factor $\lambda$, which balances the two terms.
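A compact PyTorch sketch of this objective is shown below; the function name, the λ value, and averaging logits as a stand-in for averaging per-frame predictions are our assumptions.

```python
import torch
import torch.nn.functional as F

def afnet_loss(frame_logits: torch.Tensor, labels: torch.Tensor,
               select_ratios, target_ratio: float = 0.5, lam: float = 2.0):
    """Illustrative sketch of Eq. 16.

    frame_logits:  (B, T, num_classes) per-frame predictions
    labels:        (B,) ground-truth class indices
    select_ratios: iterable of scalars r, one per navigation module
    """
    video_logits = frame_logits.mean(dim=1)     # average the frame predictions
    ce = F.cross_entropy(video_logits, labels)  # -y log P(v)
    penalty = sum((r - target_ratio) ** 2 for r in select_ratios)
    return ce + lam * penalty                   # trade-off controlled by lam
```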
4 Empirical Validation
In this section, we conduct comprehensive experiments to validate the proposed method. We first compare our method with plain 2D CNNs to demonstrate that our AF module implicitly implements temporal-wise attention which is beneficial for temporal modeling. Then, we validate AFNet’s efficiency by introducing more frames but costing less computation compared with other methods. Further, we show AFNet’s strong performance compared with other efficient action recognition frameworks. Finally, we provide qualitative analysis and extensive ablation results to demonstrate the effectiveness of the proposed navigation module and two-branch design.
Datasets. Our method is evaluated on five video recognition datasets: (1) Mini-Kinetics [23, 24] is a subset of Kinetics [15] which selects 200 classes from Kinetics, containing 121k training videos and 10k validation videos; (2) ActivityNet-v1.3 [2] is an untrimmed dataset with 200 action categories and average duration of 117 seconds. It contains 10,024 video samples for training and 4,926 for validation; (3) Jester is a hand gesture recognition dataset introduced by [22]. The dataset consists of 27 classes, with 119k training videos and 15k validation videos; (4) Something-Something V1&V2 [10] are two human action datasets with strong temporal information, including 98k and 194k videos for training and validation respectively.
Data pre-processing. We sample 8 frames uniformly to represent every video on Jester, MiniKinetics, and 12 frames on ActivityNet and Something-Something to compare with existing works unless specified. During training, the training data is randomly cropped to 224 × 224 following [35], and we perform random flipping except for Something-Something. At inference stage, all frames are center-cropped to 224 × 224 and we use one-crop one-clip per video for efficiency.
Implementation details. Our method is built on ResNet50 [12] by default, and we replace the first three stages of the network with our proposed AF module. We first train our two-branch network from scratch on ImageNet for fair comparison with other methods. Then we add the proposed navigation module and train it along with the backbone network on the video recognition datasets. In our implementation, RT denotes the ratio of selected frames, while RS represents the ratio of selected regions, which decreases step-wise from 1 to the preset value during training. We let the temperature τ in the navigation module decay exponentially from 1 to 0.01 during training. Due to limited space, we include further implementation details in the supplementary material.
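For instance, the exponential temperature decay could be scheduled per epoch as in the short sketch below (the per-epoch granularity is our assumption; a per-iteration schedule works the same way).

```python
def gumbel_temperature(epoch: int, total_epochs: int,
                       tau_start: float = 1.0, tau_end: float = 0.01) -> float:
    """Exponential decay of the Gumbel-Softmax temperature from 1.0 to 0.01."""
    return tau_start * (tau_end / tau_start) ** (epoch / max(total_epochs - 1, 1))
```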
4.1 Comparisons with Existing Methods
Less is more. We first implement AFNet on the Something-Something V1 and Jester datasets with 8 sampled frames. We compare it with the baseline method TSN, as both methods do not explicitly build a temporal modeling module and are built on ResNet50. In Table 1, our method AFNet(RT=1.00) shows performance similar to TSN when selecting all the frames. Nonetheless, when we select fewer frames, AFNet exhibits much higher accuracy than TSN and AFNet(RT=1.00), achieving Less is More: utilizing fewer frames but obtaining higher accuracy. The results may seem counterintuitive, as seeing more frames is usually beneficial for video recognition. The explanation is that the two-branch design of AFNet can preserve the information of all input frames, and the selection of salient frames at intermediate features implements implicit temporal modeling, as analyzed in Section 3.2. Since the binary mask learned by the navigation module decides whether the coefficient is calculated for each frame at every convolutional block, it results in learned temporal weights for each video. To better illustrate this point, we conduct an experiment in which we remove the Gumbel-Softmax [14] from our navigation module and modify it to learn soft temporal weights for the features at the focal branch. We observe that AFNet(soft-weights) performs similarly to AFNet(RT=0.25) and AFNet(RT=0.50) and outperforms AFNet(RT=1.00) significantly, which indicates that learning soft frame-wise weights causes a similar effect.
More is less. We incorporate our method with temporal shift module (TSM [20]) to validate that AFNet can further reduce the redundancy of such competing methods and achieve More is Less by seeing more frames with less computation. We implement our method on Something-Something V1&V2 datasets which contain strong temporal information and relevant results are shown in Table 2.
Table 3: Comparisons with competitive efficient video recognition methods on Mini-Kinetics. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.

Method            Top-1 Acc.   GFLOPs
LiteEval [30]     61.0%        99.0
SCSampler [16]    70.8%        42.0
AR-Net [23]       71.7%        32.0
AdaFuse [24]      72.3%        23.0
AdaFocus [28]     72.2%        26.6
VideoIQ [25]      72.3%        20.4
AFNet (RT=0.4)    72.8%        19.4
AFNet (RT=0.8)    73.5%        22.0
Table 4: Comparisons with competitive efficient video recognition methods on ActivityNet. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.

Method                     mAP      GFLOPs
AdaFrame [29]              71.5%    79.0
LiteEval [30]              72.7%    95.1
ListenToLook [9]           72.3%    81.4
SCSampler [16]             72.9%    42.0
AR-Net [23]                73.8%    33.5
VideoIQ [25]               74.8%    28.1
AdaFocus [28]              75.0%    26.6
AFNet (RS=0.4, RT=0.8)     75.6%    24.6
Compared to TSM, which samples 8 frames, our method shows significant performance advantages, as we introduce more frames and the two-branch structure preserves the information of all frames. Yet our computational cost is much smaller than TSM's, because we allocate different computational resources to frames through this two-branch design and adaptively skip the unimportant frames with the proposed navigation module. Moreover, AFNet outperforms many static methods, whose structures are carefully designed for better temporal modeling, in both accuracy and efficiency. This can be explained by the navigation module restraining the noise of unimportant frames and enforcing frame-wise attention, which is beneficial for temporal modeling. As for other competitive dynamic methods such as AdaFuse and AdaFocus, our method shows obviously better performance in both accuracy and computation. At similar computational cost, AFNet outperforms AdaFuse and AdaFocus by 3.1% and 1.8%, respectively, on Something-Something V1. Furthermore, we implement our method on other backbones for even higher accuracy and efficiency. When we build AFNet on the efficient MobileNetV3 structure, we obtain performance similar to TSM with a computation of only 2.3 GFLOPs. Besides, AFNet-TSM(RT=0.8) with a ResNet101 backbone achieves accuracies of 50.1% and 63.2% on Something-Something V1 and V2, respectively, which further validates the effectiveness and generalization ability of our framework.
Comparisons with competitive dynamic methods. We then implement our method on Mini-Kinetics and ActivityNet and compare AFNet with other efficient video recognition approaches. First, we validate our method on Mini-Kinetics: AFNet shows the best performance in both accuracy and computation compared with other efficient approaches in Table 3. To demonstrate that AFNet can further reduce spatial redundancy, we extend the navigation module to select salient regions of important frames on ActivityNet. We move the temporal navigation module to the first layer of the network to avoid large variance in features when incorporating the spatial navigation module; note that we apply this procedure only in this part. We can see from Table 4 that our method obtains the best performance while costing the least computation compared to other works. Moreover, we vary the ratio of selected frames and plot the mean Average Precision and computational cost of various methods in Figure 4. We can conclude that AFNet exhibits a better trade-off between accuracy and efficiency than other works.
4.2 Visualizations
We show the distribution of RT among different convolution blocks under different selection ratios in Figure 5 and use 3rd-order polynomials to display the trend of the distribution (shown as dashed lines). One can see a decreasing trend in RT for all the curves as the convolution block index increases. This can be explained by the fact that earlier layers mostly capture low-level information, which has relatively large divergence among different frames, while high-level semantics are more similar between frames; therefore AFNet tends to skip more at later convolution blocks. In Figure 6, we visualize the selected frames in the 3rd block of our AFNet with RT=0.5 on the validation set of Something-Something V1, where we uniformly sample 8 frames. Our navigation module effectively guides the focal branch to concentrate on frames that are more task-relevant and to deactivate frames that contain similar information.
4.3 Ablation Study
In this part, we implement our method on ActivityNet with 12 sampled frames and conduct a comprehensive ablation study to verify the effectiveness of our design.
Effect of two-branch design. We first incorporate our navigation module into ResNet50 and compare it with AFNet to prove the strength of our two-branch architecture. From Table 5, AFNet shows substantial advantages in accuracy under different ratios of selected frames. Aside from this, models that adopt our structure but with a fixed sampling policy also show significantly better performance compared with the single-branch network, which further demonstrates the effectiveness of our two-branch structure and the necessity of preserving the information of all frames.
Effect of navigation module. In this part, we further compare our proposed navigation module with three alternative sampling strategies under different selection ratios: (1) random sampling; (2) uniform sampling, i.e., sampling frames at equal steps; (3) normal sampling, i.e., sampling frames from a standard Gaussian distribution. As shown in Table 5, our proposed strategy consistently outperforms the other, fixed sampling policies under different selection ratios, which validates the effectiveness of the navigation module. Moreover, the advantage of our method is more obvious when the ratio of selected frames is small, which demonstrates that our selected frames are more task-relevant and contain essential information for recognition. Further, we evaluate the extension of the navigation module that reduces spatial redundancy and compare it with: (1) random sampling; (2) center cropping. Our method shows better performance than these fixed sampling strategies under various selection ratios, which verifies the effectiveness of this design.
5 Conclusion
This paper proposes the adaptive Ample and Focal Network (AFNet) to reduce temporal redundancy in videos, considering both architecture design and the intrinsic redundancy in data. Our method enables 2D-CNNs to access more frames to look broadly, but with less computation, by staying focused on the salient information. AFNet exhibits promising performance, as our two-branch design preserves the information of all input frames instead of discarding part of the knowledge at the beginning of the network. Moreover, the dynamic temporal selection within the network not only restrains the noise of unimportant frames but also enforces implicit temporal modeling. This enables AFNet to obtain even higher accuracy with fewer frames compared to a static method without a temporal modeling module. We further show that our method can be extended to reduce spatial redundancy by computing only the important regions of the selected frames. Comprehensive experiments show that our method outperforms competing efficient approaches in both accuracy and computational efficiency.
Acknowledgments and Disclosure of Funding
Research was sponsored by the DEVCOM Analysis Center and was accomplished under Cooperative Agreement Number W911NF-22-2-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

1. What is the main contribution of the paper, and how does it differ from prior works?
2. What are the strengths and weaknesses of the proposed AFNet architecture?
3. How does the reviewer assess the novelty of individual components of AFNet?
4. What are some potential improvements or extensions suggested by the reviewer?
5. Can you provide more information about the computational cost reduction during training, and how it might be addressed?
6. What are your thoughts on using alternative "hard-sampling" algorithms for the selection module?
7. Would it be beneficial to include more experimental results with stronger backbones and more frames?
8. Could you explain the counterintuitive result regarding GFLOPs for RT=0.8, and how it compares to the backbone model?
9. Is there any merit to learning a linear function that transforms original T frames to T' steps instead of a sampling strategy?
10. Are there any limitations to the approach proposed in the paper?
Summary Of The Paper
This paper proposes a two-branch network for efficient video recognition. In particular, the Ample Branch takes densely sampled input frames and processes them with reduced channel sizes. On the other hand, the Focal Branch only processes salient frames selected by a navigation module. Extensive experiments on multiple video benchmarks show that the proposed AFNet achieves state-of-the-art results with lower computational cost.
Strengths And Weaknesses
Strength
The proposed architecture, AFNet, is simple yet effective for efficient video action recognition. As shown in the experiment section, AFNet achieves even better results than its baseline with lower computational cost. The idea of leveraging more input frames for avoiding information loss and salient frame selection is well-motivated, and the encouraging results provided in this paper are potentially insightful for the research community.
The paper is overall well organized and well written.
Weakness
The novelty of individual components of AFNet is limited. For example, the two-branch design with downsampled "fast branch" is explored in SlowFast network (without dynamic selection of salient frames though); the navigation module along with the Gumbel softmax optimization technique is also used in prior work for dynamic selection [1]. However, I believe that the contribution of the overall architecture design is sufficient and the proposed method achieves good results.
Since Gumbel softmax is used for dynamic sampling of frames, the computational cost cannot be reduced during training. Although we usually care more about the computational cost of a model at inference, the cost of training will become a bottleneck if (1) the model is too large to fit into GPU memory (e.g., when using more than the 12 frames used in the paper), or (2) the model training takes too much time. I understand it's out of the scope of this paper, but I'd suggest the authors try other "hard-sampling" algorithms for the selection module, for example, the perturbed maximum method [2, 3].
Because the proposed two-branch design is generic to different backbone models, it's always better to see more experimental results with stronger backbones (e.g., ResNet-101 or even Transformers) and using more frames (12 frames are still a small number for long videos such as those in ActivityNet).
[1] Rao, Y., Zhao, W., Liu, B., Lu, J., Zhou, J., Hsieh, C.J.: Dynamicvit: Efficient vision transformers with dynamic token sparsification. In: NIPS (2021)
[2] Berthet, Quentin, et al. "Learning with differentiable perturbed optimizers." Advances in Neural Information Processing Systems 33 (2020): 9508-9519.
[3] Cordonnier, Jean-Baptiste, et al. "Differentiable patch selection for image recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Questions
The GFLOPs for RT=0.8 are a bit counterintuitive. Considering (1) the introduction of the additional Ample Branch and navigation module and (2) that the last stage of the network remains unchanged, the GFLOPs of the model with RT=0.8 should be larger than 80% of those of the backbone model. This trend can be observed for the setting RT=0.4. However, for RT=0.8, GFLOPs = 31.7 is only 64.6% of TSM (GFLOPs = 49.1), which is just slightly larger than RT=0.4 (GFLOPs = 27.8). Please clarify this in the rebuttal.
For the navigation module, instead of learning a sampling strategy, what if we learn a linear function that transforms the original T frames into T' steps? In other words, we aim to learn a T x T' weight matrix that computes T' weighted averages of the T frames. It's differentiable and easy to optimize with the rest of the model, and I'm curious whether it would give worse results than frame sampling.
Limitations
Yes. |
NIPS | Title
Look More but Care Less in Video Recognition
Abstract
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize fewer frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. Our code is available at https://github.com/BeSpontaneous/AFNet-pytorch.
1 Introduction
Online videos have grown wildly in recent years and video analysis is necessary for many applications such as recommendation [6], surveillance [4, 5] and autonomous driving [31, 17]. These applications require not only accurate but also efficient video understanding algorithms. With the introduction of deep learning networks [3] in video recognition, there has been rapid advancement in the performance of the methods in this area. Though successful, these deep learning methods often cost huge computation, making them hard to be deployed in the real world.
In video recognition, we need to sample multiple frames to represent each video which makes the computational cost scale proportionally to the number of sampled frames. In most cases, a small proportion of all the frames is sampled for each input, which only contains limited information of the original video. A straightforward solution is to sample more frames to the network but the computation expands proportionally to the number of sampled frames.
There are some works proposed recently to dynamically sample salient frames [29, 16] for higher efficiency. The selection step of these methods is made before the frames are sent to the classification network, which means the information of those unimportant frames is totally lost and it consumes a considerable time for the selection procedure. Some other methods proposed to address the spatial redundancy in action recognition by adaptively resizing the resolution based on the importance of each frame [23], or cropping the most salient patch for every frame [28]. However, these methods still completely abandon the information that the network recognizes as unimportant and introduce a policy network to make decisions for each sample which leads to extra computation and complicates the training strategies.
∗Corresponding Author: markcheung9248@gmail.com.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In our work, we go from another perspective compared with previous works. We propose a method which makes frame selection within the classification network. Shown in Figure 1, we design an architecture called Ample and Focal Network (AFNet) which is composed of two branches: the ample branch takes a glimpse of all the input features with lightweight computation as we downsample the features for smaller resolution and further reduce the channel size; the focal branch receives the guidance from the proposed navigation module to squeeze the temporal size by only computing on the selected frames to save cost; in the end, we adaptively fuse the features of these two branches to prevent the information loss of the unselected frames.
In this manner, the two branches are both very lightweight and we enable AFNet to look broadly by sampling more frames and stay focused on the important information for less computation. Considering these two branches in a uniform manner, on the one hand, we can avoid the loss of information compared to other dynamic methods as the ample branch preserves the information of all the input; on the other hand, we can restrain the noise from the unimportant frames by deactivating them in each convolutional block. Further, we have demonstrated that the dynamic selection strategy at intermediate features is beneficial for temporal modeling as it implicitly implements frame-wise attention which can enable our network to utilize fewer frames while obtaining higher accuracy. In addition, instead of introducing a policy network to select frames, we design a lightweight navigation module which can be plugged into the network so that our method can easily be trained in an end-toend fashion. Furthermore, AFNet is compatible with spatial adaptive works which can help to further reduce the computations of our method.
We summarize the main contributions as follows:
• We propose an adaptive two-branch framework which enables 2D-CNNs to process more frames with less computational cost. With this design, we not only prevent the loss of information but strengthen the representation of essential frames.
• We propose a lightweight navigation module to dynamically select salient frames at each convolution block which can easily be trained in an end-to-end fashion.
• The selection strategy at intermediate features not only empowers the model with strong flexibility as different frames will be selected at different layers, but also enforces implicit temporal modeling which enables AFNet to obtain higher accuracy with fewer frames.
• We have conducted comprehensive experiments on five video recognition datasets. The results show the superiority of AFNet compared to other competitive methods.
2 Related Work
2.1 Video Recognition
The development of deep learning in recent years serves as a huge boost to the research of video recognition. A straightforward method for this task is using 2D-CNNs to extract the features of sampled frames and use specific aggregation methods to model the temporal relationships across frames. For instance, TSN [27] proposes to average the temporal information between frames. While TSM [20] shifts channels with adjacent frames to allow information exchange at temporal dimension. Another approach is to build 3D-CNNs to for spatiotemporal learning, such as C3D [26], I3D [3] and SlowFast [8]. Though being shown effective, methods based on 3D-CNNs are computationally expensive, which brings great difficulty in real-world deployment.
While the two-branch design has been explored by SlowFast, our motivation and detailed structure are different from it in the following ways: 1) network category: SlowFast is a static 3D model, but
AFNet is a dynamic 2D network; 2) motivation: SlowFast aims to collect semantic information and changing motion with branches at different temporal speeds for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss; 3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution; 4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module.
2.2 Redundancy in Data
The efficiency of 2D-CNNs has been broadly studied in recent years. While some of the works aim at designing efficient network structure [13], there is another line of research focusing on reducing the intrinsic redundancy in image-based data [32, 11]. In video recognition, people usually sample limited number of frames to represent each video to prevent numerous computational costs. Even though, the computation for video recognition is still a heavy burden for researchers and a common strategy to address this problem is reducing the temporal redundancy in videos as not all frames are essential to the final prediction. [33] proposes to use reinforcement learning to skip frames for action detection. There are other works [29, 16] dynamically sampling salient frames to save computational cost. As spatial redundancy widely exists in image-based data, [23] adaptively processes frames with different resolutions. [28] provides the solution as cropping the most salient patch for each frame. However, the unselected regions or frames of these works are completely abandoned. Hence, there will be some information lost in their designed procedures. Moreover, most of these works adopt a policy network to make dynamic decisions, which introduces additional computation somehow and splits the training into several stages. In contrast, our method adopts a two-branch design, allocating different computational resources based on the importance of each frame and preventing the loss of information. Besides, we design a lightweight navigation module to guide the network where to look, which can be incorporated into the backbone network and trained in an end-to-end way. Moreover, we validate that the dynamic frame selection at intermediate features will not only empower the model with strong flexibility as different frames will be selected at different layers, but result in learned frame-wise weights which enforce implicit temporal modeling.
3 Methodology
Intuitively, considering more frames enhances the temporal modeling but results in higher computational cost. To efficiently achieve the competitive performance, we propose AFNet to involve more frames but wisely extract information from them to keep the low computational cost. Specifically, we design a two-branch structure to treat frames differently based on their importance and process the features in an adaptive manner which can provide our method with strong flexibility. Besides, we demonstrate that the dynamic selection of frames in the intermediate features results in learned frame-wise weights which can be regarded as implicit temporal modeling.
3.1 Architecture Design
As is shown in Figure 2, we design our Ample and Focal (AF) module as a two-branch structure: the ample branch (top) processes abundant features of all the frames in a lower resolution and a squeezed channel size; the focal branch (bottom) receives the guidance from ample branch generated by the navigation module and makes computation only on the selected frames. Such design can be conveniently applied to existing CNN structures to build AF module.
Ample Branch. The ample branch is designed to involve all frames with cheap computation, which serves as 1) guidance to select salient frames to help focal branch to concentrate on important information; 2) a complementary stream with focal branch to prevent the information loss via a carefully designed fusion strategy.
Formally, we denote video sample i as vi, containing T frames as vi = { f i1, f i 2, ..., f i T } . For convenience, we omit the superscript i in the following sections if no confusion arises. We denote the input of ample branch as vx ∈ RT×C×H×W , where C represents the channel size and H ×W is the spatial size. The features generated by the ample branch can be written as:
vya = F a (vx) , (1)
where vya ∈ RT×(Co/2)×(Ho/2)×(Wo/2) represents the output of ample branch and F a stands for a series of convolution blocks. While the channel, height, width at focal branch are denoted as Co, Ho, Wo correspondingly. We set the stride of the first convolution block to 2 to downsample the resolution of this branch and we upsample the feature at the end of this branch by nearest interpolation.
Navigation Module. The proposed navigation module is designed to guide the focal branch where to look by adaptively selecting the most salient frames for video vi.
Specifically, the navigation module generates a binary temporal mask Ln using the output from the n-th convolution block in ample branch vyan . At first, average pooling is applied to vyan to resize the spatial dimension to 1× 1, then we perform convolution to transform the channel size to 2:
ṽyan = ReLU ( BN ( W1 ∗ Pool ( vyan ))) , (2)
where ∗ stands for convolution and W1 denotes the weights of the 1× 1 convolution. After that, we reshape the dimension of feature ṽyan from T × 2 × 1 × 1 to 1 × (2× T ) × 1 × 1 so that we can model the temporal relations for each video from channel dimension by:
ptn = W2 ∗ ṽyan , (3) where W2 represents the weights of the second 1× 1 convolution and it will generate a binary logit ptn ∈ R2 for each frame t which denotes whether to select it. However, directly sampling from such discrete distribution is non-differentiable. In this work, we apply Gumbel-Softmax [14] to resolve this non-differentiability. Specifically, we generate a normalized categorical distribution by using Softmax:
π = lj | lj = exp ( p tj n ) exp ( pt0n ) + exp ( pt1n ) , (4)
and we draw discrete samples from the distribution π as:
L = arg max j (log lj +Gj) , (5)
where Gj = − log(− log Uj) is sampled from a Gumbel distribution and Uj is sampled from Unif(0,1) which is a uniform distribution. As argmax cannot be differentiated, we relax the discrete sample L in backpropagation via Softmax:
l̂j = exp ((log lj +Gj) /τ)∑2
k=1 exp ((log lk +Gk) /τ) , (6)
the distribution l̂ will become a one-hot vector when the temperature factor τ → 0 and we let τ decrease from 1 to 0.01 during training.
Focal Branch. The focal branch is guided by the navigation module to only compute the selected frames, which diminishes the computational cost and potential noise from redundant frames.
The features at the n-th convolution block in this branch can be denoted as vyfn ∈ R T×Co×Ho×Wo . Based on the temporal mask Ln generated from the navigation module, we select frames which have corresponding non-zero values in the binary mask for each video and apply convolutional operations only on these extracted frames v′
yfn ∈ RTl×Co×Ho×Wo :
v′ yfn = F fn ( v′ yfn−1 ) , (7)
where F fn is the n-th convolution blocks at this branch and we set the group number of convolutions to 2 in order to further reduce the computations. After the convolution operation at n-th block, we generate a zero-tensor which shares the same shape with vyfn and fill the value by adding v ′ yfn
and vyfn−1 with the residual design following [12].
At the end of these two branches, inspired by [1, 11], we generate a weighting factor θ by pooling and linear layers to fuse the features from two branches:
vy = θ ⊙ vya + (1− θ)⊙ vyf , (8) where ⊙ denotes the channel-wise multiplication.
3.2 Implicit Temporal Modeling
While our work is mainly designed to reduce the computation in video recognition like [28, 24], we demonstrate that AFNet enforces implicit temporal modeling by the dynamic selection of frames in the intermediate features. Considering a TSN[27] network which adapts vanilla ResNet[12] structure, the feature at the n-th convolutional block in each stage can be written as vn ∈ RT×C×H×W . Thus, the feature at n+ 1-th block can be represented as:
vn+1 = vn + Fn+1 (vn)
= (1 + ∆vn+1) vn, (9)
∆vn+1 = Fn+1 (vn)
vn , (10)
where Fn+1 is the n+ 1-th convolutional block and we define ∆vn+1 as the coefficient learned from this block. By that we can write the output of this stage vN as:
vN =
[ N∏
n=2
(1 + ∆vn) ] ∗ v1. (11)
Similarly, we define the features in ample and focal branch as:
vyaN =
[ N∏
n=2
( 1 + ∆vyan )] ∗ vy1 , (12)
vyfN =
[ N∏
n=2
( 1 + Ln ∗∆vyfn )] ∗ vy1 , (13)
where Ln is the binary temporal mask generated by Equation 5 and vy1 denotes the input of this stage. Based on Equation 8, we can get the output of this stage as:
vyN = θ ⊙ vyaN + (1− θ)⊙ vyfN
= { θ ⊙ [ N∏
n=2
( 1 + ∆vyan )] + (1− θ)⊙ [ N∏
n=2
( 1 + Ln ∗∆vyfn )]} ∗ vy1 .
(14)
As Ln is a temporal-wise binary mask, it will decide whether the coefficient ∆vyfn will be calculated in each frame at every convolutional block. Considering the whole stage is made up of multiple convolutional blocks, the series multiplication of focal branch’s output with the binary mask Ln will approximate soft weights. This results in learned frame-wise weights in each video which we regard as implicit temporal modeling. Although we do not explicitly build any temporal modeling module, the generation of Ln in Equation 3 has already taken the temporal information into account so that the learned temporal weights equal performing implicit temporal modeling at each stage.
3.3 Spatial Redundancy Reduction
In this part, we show that our approach is compatible with methods that aim to solve the problem of spatial redundancy. We extend the navigation module by applying similar procedures with the temporal mask generation and the work [11] to generate a spatial logit for the n-th convolution block which is shown in Figure 3:
qtn = W4 ∗ ( Pool ( ReLU ( BN ( W3 ∗ vyan )))) , (15)
where W3 denotes the weights of the 3× 3 convolution and W4 stands for the weights of convolution with kernel size 1× 1. After that, we still use Gumbel-Softmax to sample from discrete distribution to generate spatial mask Mn and navigate the focal branch to merely focus on the salient regions of the selected frames to further reduce the cost.
3.4 Loss functions
Inspired by [27], we take the average of each frame’s prediction to represent the final output of the corresponding video and our optimization objective is minimizing:
L = ∑ (v,y)
[ −y log (P (v)) + λ ·
N∑ n=1
(r −RT )2 ] . (16)
The first term is the cross-entropy between predictions P (v) for input video v and the corresponding one-hot label y. We denote r in the second term as the ratio of selected frames in every mini-batch and RT as the target ratio which is set before the training (RS is the target ratio when extending navigation module to reduce spatial redundancy). We let r approximate RT by adding the second loss term and manage the trade-off between efficiency and accuracy by introducing a factor λ which balances these two terms.
4 Empirical Validation
In this section, we conduct comprehensive experiments to validate the proposed method. We first compare our method with plain 2D CNNs to demonstrate that our AF module implicitly implements temporal-wise attention which is beneficial for temporal modeling. Then, we validate AFNet’s efficiency by introducing more frames but costing less computation compared with other methods. Further, we show AFNet’s strong performance compared with other efficient action recognition frameworks. Finally, we provide qualitative analysis and extensive ablation results to demonstrate the effectiveness of the proposed navigation module and two-branch design.
Datasets. Our method is evaluated on five video recognition datasets: (1) Mini-Kinetics [23, 24] is a subset of Kinetics [15] which selects 200 classes from Kinetics, containing 121k training videos and 10k validation videos; (2) ActivityNet-v1.3 [2] is an untrimmed dataset with 200 action categories and average duration of 117 seconds. It contains 10,024 video samples for training and 4,926 for validation; (3) Jester is a hand gesture recognition dataset introduced by [22]. The dataset consists of 27 classes, with 119k training videos and 15k validation videos; (4) Something-Something V1&V2 [10] are two human action datasets with strong temporal information, including 98k and 194k videos for training and validation respectively.
Data pre-processing. We sample 8 frames uniformly to represent every video on Jester, MiniKinetics, and 12 frames on ActivityNet and Something-Something to compare with existing works unless specified. During training, the training data is randomly cropped to 224 × 224 following [35], and we perform random flipping except for Something-Something. At inference stage, all frames are center-cropped to 224 × 224 and we use one-crop one-clip per video for efficiency.
Implementation details. Our method is bulit on ResNet50 [12] in default and we replace the first three stages of the network with our proposed AF module. We first train our two-branch network from scratch on ImageNet for fair comparisons with other methods. Then we add the proposed navigation module and train it along with the backbone network on video recognition datasets. In our implementations, RT denotes the ratio of selected frames while RS represents the ratio of select regions which will decrease from 1 to the number we set before training by steps. We let the temperature τ in navigation module decay from 1 to 0.01 exponentially during training. Due to limited space, we include more details of implementation in supplementary material.
4.1 Comparisons with Existing Methods
Less is more. At first, we implement AFNet on Something-Something V1 and Jester datasets with 8 sampled frames. We compare it with the baseline method TSN as both methods do not explicitly build temporal modeling module and are built on ResNet50. In Table 1, our method AFNet(RT=1.00) shows similar performance with TSN when selecting all the frames. Nonetheless, when we select fewer frames in AFNet, it exhibits much higher accuracy compared to TSN and AFNet(RT=1.00) which achieves Less is
More by utilizing less frames but obtaining higher accuracy. The results may seem counterintuitive as seeing more frames is usually beneficial for video recognition. The explanation is that the two-branch design of AFNet can preserve the information of all input frames and the selection of salient frames at intermediate features implements implicit temporal modeling as we have analyzed in Section 3.2. As the binary mask learned by the navigation module will decide whether the coefficient will be calculated for each frame at every convolutional block, it will result in learned temporal weights in each video. To better illustrate this point, we conduct the experiment by removing Gumbel-Softmax [14] in our navigation module and modifying it to learn soft temporal weights for the features at focal branch. We can observe that AFNet(soft-weights) has a similar performance with AFNet(RT=0.25), AFNet(RT=0.50) and outperforms AFNet(RT=1.00) significantly which indicates that learning soft frame-wise weights causes the similar effect.
More is less. We incorporate our method with temporal shift module (TSM [20]) to validate that AFNet can further reduce the redundancy of such competing methods and achieve More is Less by seeing more frames with less computation. We implement our method on Something-Something V1&V2 datasets which contain strong temporal information and relevant results are shown in Table 2.
Table 3: Comparisons with competitive efficient video recognition methods on Mini-Kinetics. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.
Method | Top-1 Acc. | GFLOPs
LiteEval [30] | 61.0% | 99.0
SCSampler [16] | 70.8% | 42.0
AR-Net [23] | 71.7% | 32.0
AdaFuse [24] | 72.3% | 23.0
AdaFocus [28] | 72.2% | 26.6
VideoIQ [25] | 72.3% | 20.4
AFNet (RT=0.4) | 72.8% | 19.4
AFNet (RT=0.8) | 73.5% | 22.0
Table 4: Comparisons with competitive efficient video recognition methods on ActivityNet. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.
Method | mAP | GFLOPs
AdaFrame [29] | 71.5% | 79.0
LiteEval [30] | 72.7% | 95.1
ListenToLook [9] | 72.3% | 81.4
SCSampler [16] | 72.9% | 42.0
AR-Net [23] | 73.8% | 33.5
VideoIQ [25] | 74.8% | 28.1
AdaFocus [28] | 75.0% | 26.6
AFNet (RS=0.4, RT=0.8) | 75.6% | 24.6
Compared to TSM, which samples 8 frames, our method shows significant advantages in performance, as we introduce more frames and the two-branch structure preserves the information of all frames. Yet our computational cost is much smaller than TSM's, as we allocate different computation resources to frames via this two-branch design and adaptively skip unimportant frames with the proposed navigation module. Moreover, AFNet outperforms many static methods, which carefully design their structures for better temporal modeling, in both accuracy and efficiency. This can be explained by the fact that the navigation module restrains the noise of unimportant frames and enforces frame-wise attention, which is beneficial for temporal modeling. As for other competitive dynamic methods like AdaFuse and AdaFocus, our method shows clearly better performance in both accuracy and computation. At similar computational cost, AFNet outperforms AdaFuse and AdaFocus by 3.1% and 1.8%, respectively, on Something-Something V1. Furthermore, we implement our method on other backbones for even higher accuracy and efficiency. When we build AFNet on the efficient MobileNetV3 backbone, we obtain performance similar to TSM at a computation cost of only 2.3 GFLOPs. Besides, AFNet-TSM(RT=0.8) with a ResNet101 backbone achieves an accuracy of 50.1% and 63.2% on Something-Something V1 and V2, respectively, which further validates the effectiveness and generalization ability of our framework.
Comparisons with competitive dynamic methods. Then, we implement our method on Mini-Kinetics and ActivityNet, and compare AFNet with other efficient video recognition approaches. We first validate our method on Mini-Kinetics, where AFNet shows the best performance in both accuracy and computation compared with other efficient approaches in Table 3. To demonstrate that AFNet can further reduce spatial redundancy, we extend the navigation module to select salient regions of important frames on ActivityNet. We move the temporal navigation module to the first layer of the network to avoid large variance in features when incorporating the spatial navigation module, and we note that we only apply this procedure in this part. We can see from Table 4 that our method obtains the best performance while costing the least computation compared to other works. Moreover, we vary the ratio of selected frames and plot the mean Average Precision and computational cost of various methods in Figure 4. We can conclude that AFNet exhibits a better trade-off between accuracy and efficiency compared to other works.
4.2 Visualizations
We show the distribution of RT among different convolution blocks under different selection ratios in Figure 5 and fit 3rd-order polynomials to display the trend (shown as dashed lines). One can see a decreasing trend in RT for all curves as the convolution block index increases. This can be explained by the fact that earlier layers mostly capture low-level information, which has relatively large divergence among different frames, while high-level semantics of different frames are more similar; therefore AFNet tends to skip more frames at later convolution blocks. In Figure 6, we visualize the selected frames in the 3rd block of our AFNet with RT=0.5 on the validation set of Something-Something V1, where we uniformly sample 8 frames. Our navigation module effectively guides the focal branch to concentrate on frames that are more task-relevant and deactivates frames that contain similar information.
4.3 Ablation Study
In this part, we implement our method on ActivityNet with 12 sampled frames to conduct a comprehensive ablation study verifying the effectiveness of our design.
Effect of two-branch design. We first incorporate our navigation module into ResNet50 and compare it with AFNet to prove the strength of our two-branch architecture. From Table 5, AFNet shows substantial advantages in accuracy under different ratios of selected frames. Aside from that, models which adopt our structure but use a fixed sampling policy also show significantly better performance than the single-branch network, which further demonstrates the effectiveness of our two-branch structure and the necessity of preserving the information of all frames.
Effect of navigation module. In this part, we further compare our proposed navigation module with three alternative sampling strategies under different selection ratios: (1) random sampling; (2) uniform sampling: sample frames with equal step; (3) normal sampling: sample frames from a standard Gaussian distribution. As shown in Table 5, our proposed strategy consistently outperforms these fixed sampling policies under different selection ratios, which validates the effectiveness of the navigation module. Moreover, the advantage of our method is more obvious when the ratio of selected frames is small, which demonstrates that our selected frames are more task-relevant and contain essential information for recognition. Further, we evaluate the extension of the navigation module that reduces spatial redundancy and compare it with: (1) random sampling; (2) center cropping. Our method shows better performance than these fixed sampling strategies under various selection ratios, which verifies the effectiveness of this design.
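For reference, the three fixed temporal-sampling baselines could be generated as in the following sketch (an illustrative reconstruction, not the authors' code; the clamping in normal sampling is our assumption):

```python
import torch

def fixed_sampling_mask(strategy: str, t: int, keep: int) -> torch.Tensor:
    """Return a binary mask of shape (t,) marking `keep` selected frames."""
    mask = torch.zeros(t)
    if strategy == "random":
        idx = torch.randperm(t)[:keep]
    elif strategy == "uniform":
        idx = torch.linspace(0, t - 1, keep).long()
    elif strategy == "normal":
        # Draw indices around the clip center from a Gaussian; duplicates
        # may collapse, so fewer than `keep` frames can end up selected.
        idx = (torch.randn(keep) * (t / 4) + (t - 1) / 2).clamp(0, t - 1).long()
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    mask[idx] = 1.0
    return mask
```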
5 Conclusion
This paper proposes an adaptive Ample and Focal Network (AFNet) to reduce temporal redundancy in videos, considering both architecture design and the intrinsic redundancy in data. Our method enables 2D-CNNs to access more frames to look broadly, but with less computation, by staying focused on the salient information. AFNet exhibits promising performance, as our two-branch design preserves the information of all the input frames instead of discarding part of the knowledge at the beginning of the network. Moreover, the dynamic temporal selection within the network not only restrains the noise of unimportant frames but enforces implicit temporal modeling as well. This enables AFNet to obtain even higher accuracy when using fewer frames compared with static methods without a temporal modeling module. We further show that our method can be extended to reduce spatial redundancy by computing only the important regions of the selected frames. Comprehensive experiments have shown that our method outperforms competing efficient approaches in both accuracy and computational efficiency.
Acknowledgments and Disclosure of Funding
Research was sponsored by the DEVCOM Analysis Center and was accomplished under Cooperative Agreement Number W911NF-22-2-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

1. What is the main contribution of the paper regarding action recognition?
2. What are the strengths and weaknesses of the proposed method, particularly in its architecture and experimental support?
3. Do you have any concerns or questions about the method's ability to handle complex actions, temporal redundancy, and feature alignment?
4. How does the reviewer assess the novelty and significance of the paper's contributions compared to prior works like SlowFast?
5. Are there any potential issues or limitations in the paper's approach to frame selection and feature fusion?

Summary Of The Paper
This paper mainly targets action recognition. The authors advocate the importance of utilizing as many frames from each video as possible while preserving a reasonable computational cost. To this end, they propose a novel architecture with two branches, one of which handles the whole video snippet at lower resolution and samples informative frames for fine-grained inference in the other branch.
Strengths And Weaknesses
Strengths:
The idea of end-to-end frame selection is interesting.
The authors provide abundant experiments to verify the effectiveness of their method.
Weaknesses:
The description is not clear enough. For example, the authors mention that the motivation of their method is to avoid 'the loss of information compared to other dynamic methods'. However, it is not shown in this paper what kind of information loss exists in previous methods and how such loss harms performance. It would be better if more pilot studies were provided.
The proposed two-branch structure with different temporal scales is similar to SlowFast [1], which is one of the most famous action recognition models and also builds lateral connections between a slow branch and a fast branch with different frame rates. The authors should consider discussing this paper when introducing the proposed method.
[1] Feichtenhofer, C., Fan, H., Malik, J., et al. SlowFast Networks for Video Recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6202–6211.
Questions
In fact, the claim that information loss leads to inferior performance is somewhat contrary to prior findings. Many previous studies have shown that for simple actions even one frame is enough for inference. On the other hand, for complicated actions, temporal redundancy still exists. If the authors want to show the merit of utilizing all frames, it may be a good choice to directly train a model with all frames, regardless of computational cost, to show whether it can significantly outperform existing methods.
The authors mention that outputs from the two branches are merged using a 'specially designed fusion strategy' (L120). However, the fusion strategy in Eq. 8 is simply a weighted average with a learnable weight. I am afraid the authors overclaim their contribution for this module.
The authors utilize residual connections between layers in the focal branch. This is questionable, since different layers select potentially different frames due to the designed navigation module, which means features from different layers are not aligned in the temporal dimension. I wonder whether it is proper to directly add these features together.
The authors claim that the proposed method does not build a temporal modeling module. I am not sure whether the $W_2$ in Eq. (3) plays such a role by interacting among frames.
Limitations
Several limitations are mentioned, which is comprehensive and suggests promising future work for the current method.
Look More but Care Less in Video Recognition
Abstract
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize fewer frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. Our code is available at https://github.com/BeSpontaneous/AFNet-pytorch.
1 Introduction
Online videos have grown wildly in recent years, and video analysis is necessary for many applications such as recommendation [6], surveillance [4, 5] and autonomous driving [31, 17]. These applications require not only accurate but also efficient video understanding algorithms. With the introduction of deep learning networks [3] in video recognition, there has been rapid advancement in the performance of methods in this area. Though successful, these deep learning methods often incur huge computational cost, making them hard to deploy in the real world.
In video recognition, we need to sample multiple frames to represent each video, which makes the computational cost scale proportionally to the number of sampled frames. In most cases, a small proportion of all the frames is sampled for each input, which contains only limited information from the original video. A straightforward solution is to feed more frames to the network, but the computation expands proportionally to the number of sampled frames.
Some works have been proposed recently to dynamically sample salient frames [29, 16] for higher efficiency. The selection step of these methods is made before the frames are sent to the classification network, which means the information of those unimportant frames is totally lost, and the selection procedure consumes considerable time. Other methods address the spatial redundancy in action recognition by adaptively resizing the resolution based on the importance of each frame [23], or by cropping the most salient patch for every frame [28]. However, these methods still completely abandon the information that the network recognizes as unimportant, and they introduce a policy network to make decisions for each sample, which leads to extra computation and complicates the training strategy.
∗Corresponding Author: markcheung9248@gmail.com.
In our work, we take a different perspective from previous works. We propose a method which performs frame selection within the classification network. As shown in Figure 1, we design an architecture called Ample and Focal Network (AFNet) which is composed of two branches: the ample branch takes a glimpse of all the input features with lightweight computation, as we downsample the features to a smaller resolution and further reduce the channel size; the focal branch receives guidance from the proposed navigation module to squeeze the temporal size by computing only on the selected frames to save cost; in the end, we adaptively fuse the features of these two branches to prevent the loss of information from the unselected frames.
In this manner, the two branches are both very lightweight, and we enable AFNet to look broadly by sampling more frames and stay focused on the important information for less computation. Considering these two branches in a uniform manner, on the one hand, we can avoid the loss of information compared to other dynamic methods, as the ample branch preserves the information of all the input; on the other hand, we can restrain the noise from the unimportant frames by deactivating them in each convolutional block. Further, we demonstrate that the dynamic selection strategy at intermediate features is beneficial for temporal modeling, as it implicitly implements frame-wise attention, which enables our network to utilize fewer frames while obtaining higher accuracy. In addition, instead of introducing a policy network to select frames, we design a lightweight navigation module which can be plugged into the network, so our method can easily be trained in an end-to-end fashion. Furthermore, AFNet is compatible with spatial adaptive works, which can help to further reduce the computation of our method.
We summarize the main contributions as follows:
• We propose an adaptive two-branch framework which enables 2D-CNNs to process more frames with less computational cost. With this design, we not only prevent the loss of information but also strengthen the representation of essential frames.
• We propose a lightweight navigation module to dynamically select salient frames at each convolution block which can easily be trained in an end-to-end fashion.
• The selection strategy at intermediate features not only empowers the model with strong flexibility as different frames will be selected at different layers, but also enforces implicit temporal modeling which enables AFNet to obtain higher accuracy with fewer frames.
• We have conducted comprehensive experiments on five video recognition datasets. The results show the superiority of AFNet compared to other competitive methods.
2 Related Work
2.1 Video Recognition
The development of deep learning in recent years has served as a huge boost to research on video recognition. A straightforward method for this task is to use 2D-CNNs to extract features of sampled frames and use specific aggregation methods to model the temporal relationships across frames. For instance, TSN [27] proposes to average the temporal information between frames, while TSM [20] shifts channels between adjacent frames to allow information exchange along the temporal dimension. Another approach is to build 3D-CNNs for spatiotemporal learning, such as C3D [26], I3D [3] and SlowFast [8]. Though shown effective, methods based on 3D-CNNs are computationally expensive, which brings great difficulty to real-world deployment.
While the two-branch design has been explored by SlowFast, our motivation and detailed structure are different from it in the following ways: 1) network category: SlowFast is a static 3D model, but
AFNet is a dynamic 2D network; 2) motivation: SlowFast aims to collect semantic information and changing motion with branches at different temporal speeds for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss; 3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution; 4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module.
2.2 Redundancy in Data
The efficiency of 2D-CNNs has been broadly studied in recent years. While some works aim at designing efficient network structures [13], another line of research focuses on reducing the intrinsic redundancy in image-based data [32, 11]. In video recognition, people usually sample a limited number of frames to represent each video to avoid enormous computational costs. Even so, the computation for video recognition remains a heavy burden, and a common strategy to address this problem is reducing the temporal redundancy in videos, as not all frames are essential to the final prediction. [33] proposes to use reinforcement learning to skip frames for action detection. Other works [29, 16] dynamically sample salient frames to save computational cost. As spatial redundancy widely exists in image-based data, [23] adaptively processes frames at different resolutions, and [28] crops the most salient patch for each frame. However, the unselected regions or frames of these works are completely abandoned, so some information is lost in their designed procedures. Moreover, most of these works adopt a policy network to make dynamic decisions, which introduces additional computation and splits the training into several stages. In contrast, our method adopts a two-branch design, allocating different computational resources based on the importance of each frame and preventing the loss of information. Besides, we design a lightweight navigation module to guide the network where to look, which can be incorporated into the backbone network and trained in an end-to-end way. Moreover, we validate that dynamic frame selection at intermediate features not only empowers the model with strong flexibility, as different frames are selected at different layers, but also results in learned frame-wise weights which enforce implicit temporal modeling.
3 Methodology
Intuitively, considering more frames enhances temporal modeling but results in higher computational cost. To achieve competitive performance efficiently, we propose AFNet to involve more frames but wisely extract information from them to keep the computational cost low. Specifically, we design a two-branch structure that treats frames differently based on their importance and processes the features in an adaptive manner, which provides our method with strong flexibility. Besides, we demonstrate that the dynamic selection of frames in the intermediate features results in learned frame-wise weights, which can be regarded as implicit temporal modeling.
3.1 Architecture Design
As shown in Figure 2, we design our Ample and Focal (AF) module as a two-branch structure: the ample branch (top) processes abundant features of all the frames at a lower resolution and with a squeezed channel size; the focal branch (bottom) receives guidance from the ample branch, generated by the navigation module, and computes only on the selected frames. Such a design can be conveniently applied to existing CNN structures to build the AF module.
Ample Branch. The ample branch is designed to involve all frames with cheap computation, serving as 1) guidance to select salient frames, helping the focal branch concentrate on important information; 2) a complementary stream to the focal branch, preventing information loss via a carefully designed fusion strategy.
Formally, we denote video sample $i$ as $v_i$, containing $T$ frames as $v_i = \{f_1^i, f_2^i, \ldots, f_T^i\}$. For convenience, we omit the superscript $i$ in the following sections if no confusion arises. We denote the input of the ample branch as $v_x \in \mathbb{R}^{T \times C \times H \times W}$, where $C$ represents the channel size and $H \times W$ is the spatial size. The features generated by the ample branch can be written as:
$v_{y^a} = \mathcal{F}^a(v_x), \quad (1)$
where $v_{y^a} \in \mathbb{R}^{T \times (C_o/2) \times (H_o/2) \times (W_o/2)}$ represents the output of the ample branch and $\mathcal{F}^a$ stands for a series of convolution blocks, while the channel, height, and width at the focal branch are denoted as $C_o$, $H_o$, $W_o$ correspondingly. We set the stride of the first convolution block to 2 to downsample the resolution of this branch, and we upsample the feature at the end of this branch by nearest interpolation.
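A minimal PyTorch sketch of this bookkeeping is shown below; the concrete layer stack is an assumption for illustration, while the stride-2 downsampling, halved channels, and nearest-neighbor upsampling follow the description above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AmpleBranch(nn.Module):
    """Processes all T frames at half resolution with half the channels."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.blocks = nn.Sequential(
            # Stride-2 first convolution halves the spatial resolution.
            nn.Conv2d(c_in, c_out // 2, 3, stride=2, padding=1),
            nn.BatchNorm2d(c_out // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out // 2, c_out // 2, 3, padding=1),
            nn.BatchNorm2d(c_out // 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, v_x: torch.Tensor) -> torch.Tensor:
        # v_x: (T, C, H, W) -> (T, C_out/2, H/2, W/2), upsampled at the end.
        return F.interpolate(self.blocks(v_x), scale_factor=2, mode="nearest")
```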
Navigation Module. The proposed navigation module is designed to guide the focal branch where to look by adaptively selecting the most salient frames for video $v_i$.
Specifically, the navigation module generates a binary temporal mask $L_n$ using the output of the $n$-th convolution block in the ample branch, $v_{y_n^a}$. At first, average pooling is applied to $v_{y_n^a}$ to resize the spatial dimension to $1 \times 1$; then we perform convolution to transform the channel size to 2:
$\tilde{v}_{y_n^a} = \mathrm{ReLU}\left(\mathrm{BN}\left(W_1 * \mathrm{Pool}\left(v_{y_n^a}\right)\right)\right), \quad (2)$
where $*$ stands for convolution and $W_1$ denotes the weights of the $1 \times 1$ convolution. After that, we reshape the feature $\tilde{v}_{y_n^a}$ from $T \times 2 \times 1 \times 1$ to $1 \times (2 \times T) \times 1 \times 1$ so that we can model the temporal relations for each video along the channel dimension by:
$p_n^t = W_2 * \tilde{v}_{y_n^a}, \quad (3)$

where $W_2$ represents the weights of the second $1 \times 1$ convolution, which generates a binary logit $p_n^t \in \mathbb{R}^2$ for each frame $t$ denoting whether to select it. However, directly sampling from such a discrete distribution is non-differentiable. In this work, we apply Gumbel-Softmax [14] to resolve this non-differentiability. Specifically, we generate a normalized categorical distribution by using Softmax:
$\pi = \left\{ l_j \;\middle|\; l_j = \frac{\exp\left(p_n^{t_j}\right)}{\exp\left(p_n^{t_0}\right) + \exp\left(p_n^{t_1}\right)} \right\}, \quad (4)$
and we draw discrete samples from the distribution $\pi$ as:
$L = \arg\max_j \left(\log l_j + G_j\right), \quad (5)$
where $G_j = -\log(-\log U_j)$ is sampled from a Gumbel distribution and $U_j$ is sampled from $\mathrm{Unif}(0,1)$, a uniform distribution. As $\arg\max$ cannot be differentiated, we relax the discrete sample $L$ in backpropagation via Softmax:
$\hat{l}_j = \frac{\exp\left(\left(\log l_j + G_j\right)/\tau\right)}{\sum_{k=1}^{2} \exp\left(\left(\log l_k + G_k\right)/\tau\right)}, \quad (6)$
The distribution $\hat{l}$ becomes a one-hot vector as the temperature factor $\tau \to 0$, and we let $\tau$ decrease from 1 to 0.01 during training.
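A compact sketch of the navigation module following Eqs. (2)-(6) is given below; it operates on a single video for clarity, and the module wiring is our reconstruction rather than the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NavigationModule(nn.Module):
    """Produces a hard but differentiable keep/skip decision per frame."""
    def __init__(self, channels: int, num_frames: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 2, kernel_size=1)         # Eq. (2)
        self.bn = nn.BatchNorm2d(2)
        self.conv2 = nn.Conv2d(2 * num_frames, 2 * num_frames, 1)  # Eq. (3)

    def forward(self, feat: torch.Tensor, tau: float) -> torch.Tensor:
        t = feat.shape[0]
        x = F.adaptive_avg_pool2d(feat, 1)              # (T, C, 1, 1)
        x = F.relu(self.bn(self.conv1(x)))              # (T, 2, 1, 1)
        x = x.reshape(1, 2 * t, 1, 1)                   # mix frames via channels
        logits = self.conv2(x).reshape(t, 2)            # per-frame binary logits
        # Eqs. (4)-(6): hard Gumbel-Softmax with straight-through gradients.
        onehot = F.gumbel_softmax(logits, tau=tau, hard=True)
        return onehot[:, 1]                             # 1 = keep the frame
```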
Focal Branch. The focal branch is guided by the navigation module to only compute the selected frames, which diminishes the computational cost and potential noise from redundant frames.
The features at the $n$-th convolution block in this branch can be denoted as $v_{y_n^f} \in \mathbb{R}^{T \times C_o \times H_o \times W_o}$. Based on the temporal mask $L_n$ generated by the navigation module, we select the frames which have corresponding non-zero values in the binary mask for each video and apply convolutional operations only on these extracted frames $v'_{y_n^f} \in \mathbb{R}^{T_l \times C_o \times H_o \times W_o}$:

$v'_{y_n^f} = \mathcal{F}_n^f\left(v'_{y_{n-1}^f}\right), \quad (7)$
where $\mathcal{F}_n^f$ is the $n$-th convolution block of this branch, and we set the group number of the convolutions to 2 in order to further reduce the computation. After the convolution operation at the $n$-th block, we generate a zero tensor of the same shape as $v_{y_n^f}$ and fill its values by adding $v'_{y_n^f}$ and $v_{y_{n-1}^f}$ with the residual design following [12].
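The masked gather/scatter computation of the focal branch can be sketched as follows (assuming `block` preserves the feature shape; the residual wiring follows the description above):

```python
import torch
import torch.nn as nn

def focal_block(block: nn.Module, prev_feat: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
    """Apply `block` only to frames with mask == 1 and scatter results back.

    prev_feat: (T, C, H, W) features from the previous focal block.
    mask:      (T,) binary decisions from the navigation module.
    """
    keep = mask.bool()
    out = torch.zeros_like(prev_feat)
    if keep.any():
        out[keep] = block(prev_feat[keep])  # convolve only selected frames
    return out + prev_feat                  # skipped frames pass through
```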
At the end of these two branches, inspired by [1, 11], we generate a weighting factor θ by pooling and linear layers to fuse the features from two branches:
$v_y = \theta \odot v_{y^a} + (1 - \theta) \odot v_{y^f}, \quad (8)$

where $\odot$ denotes channel-wise multiplication.
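The adaptive fusion of Eq. (8) could be implemented as in the sketch below; the pooling-plus-linear gate follows the description, but the layer sizes and the sigmoid squashing are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Channel-wise gate between ample and focal features, Eq. (8)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(2 * channels, channels)

    def forward(self, v_a: torch.Tensor, v_f: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([F.adaptive_avg_pool2d(v_a, 1).flatten(1),
                            F.adaptive_avg_pool2d(v_f, 1).flatten(1)], dim=1)
        theta = torch.sigmoid(self.fc(pooled))[..., None, None]  # (T, C, 1, 1)
        return theta * v_a + (1 - theta) * v_f
```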
3.2 Implicit Temporal Modeling
While our work is mainly designed to reduce the computation of video recognition like [28, 24], we demonstrate that AFNet enforces implicit temporal modeling through the dynamic selection of frames in the intermediate features. Consider a TSN [27] network which adopts the vanilla ResNet [12] structure; the feature at the $n$-th convolutional block in each stage can be written as $v_n \in \mathbb{R}^{T \times C \times H \times W}$. Thus, the feature at the $(n+1)$-th block can be represented as:
$v_{n+1} = v_n + \mathcal{F}_{n+1}(v_n) = (1 + \Delta v_{n+1})\, v_n, \quad (9)$

$\Delta v_{n+1} = \frac{\mathcal{F}_{n+1}(v_n)}{v_n}, \quad (10)$
where $\mathcal{F}_{n+1}$ is the $(n+1)$-th convolutional block and we define $\Delta v_{n+1}$ as the coefficient learned by this block. With that, we can write the output of this stage, $v_N$, as:
$v_N = \left[\prod_{n=2}^{N} (1 + \Delta v_n)\right] * v_1. \quad (11)$
Similarly, we define the features in the ample and focal branches as:

$v_{y_N^a} = \left[\prod_{n=2}^{N} \left(1 + \Delta v_{y_n^a}\right)\right] * v_{y_1}, \quad (12)$

$v_{y_N^f} = \left[\prod_{n=2}^{N} \left(1 + L_n * \Delta v_{y_n^f}\right)\right] * v_{y_1}, \quad (13)$
where $L_n$ is the binary temporal mask generated by Equation 5 and $v_{y_1}$ denotes the input of this stage. Based on Equation 8, we can get the output of this stage as:
$v_{y_N} = \theta \odot v_{y_N^a} + (1 - \theta) \odot v_{y_N^f} = \left\{ \theta \odot \left[\prod_{n=2}^{N} \left(1 + \Delta v_{y_n^a}\right)\right] + (1 - \theta) \odot \left[\prod_{n=2}^{N} \left(1 + L_n * \Delta v_{y_n^f}\right)\right] \right\} * v_{y_1}. \quad (14)$
As $L_n$ is a temporal-wise binary mask, it decides whether the coefficient $\Delta v_{y_n^f}$ is calculated for each frame at every convolutional block. Considering that the whole stage is made up of multiple convolutional blocks, the series multiplication of the focal branch's output with the binary masks $L_n$ approximates soft weights. This results in learned frame-wise weights in each video, which we regard as implicit temporal modeling. Although we do not explicitly build any temporal modeling module, the generation of $L_n$ in Equation 3 has already taken the temporal information into account, so the learned temporal weights amount to performing implicit temporal modeling at each stage.
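The toy computation below makes this concrete (illustrative numbers, not taken from the paper): accumulating per-block binary masks through Eq. (13) yields soft frame-wise weights.

```python
import torch

torch.manual_seed(0)
T, N = 4, 6                                   # frames, blocks in one stage
delta = 0.1 * torch.ones(N, T)                # assume constant coefficients
masks = torch.randint(0, 2, (N, T)).float()   # per-block keep/skip decisions

# Eq. (13): product over blocks of (1 + L_n * delta_n), evaluated per frame.
weights = torch.prod(1 + masks * delta, dim=0)
print(weights)  # values in [1.0, 1.1**6]; frames kept more often weigh more
```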
3.3 Spatial Redundancy Reduction
In this part, we show that our approach is compatible with methods that aim to reduce spatial redundancy. We extend the navigation module by applying a procedure similar to the temporal mask generation and the work [11] to generate a spatial logit for the $n$-th convolution block, as shown in Figure 3:
$q_n^t = W_4 * \left(\mathrm{Pool}\left(\mathrm{ReLU}\left(\mathrm{BN}\left(W_3 * v_{y_n^a}\right)\right)\right)\right), \quad (15)$
where $W_3$ denotes the weights of the $3 \times 3$ convolution and $W_4$ stands for the weights of the convolution with kernel size $1 \times 1$. After that, we again use Gumbel-Softmax to sample from the discrete distribution to generate a spatial mask $M_n$, and we navigate the focal branch to focus only on the salient regions of the selected frames to further reduce the cost.
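A sketch of this spatial extension is given below (our reconstruction; the hidden width and the coarse decision grid are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialNavigation(nn.Module):
    """Per-location keep/skip mask for the focal branch, Eq. (15)."""
    def __init__(self, channels: int, hidden: int = 16):
        super().__init__()
        self.conv3x3 = nn.Conv2d(channels, hidden, 3, padding=1)  # W_3
        self.bn = nn.BatchNorm2d(hidden)
        self.pool = nn.AvgPool2d(2)             # decide on a coarser grid
        self.conv1x1 = nn.Conv2d(hidden, 2, 1)                    # W_4

    def forward(self, feat: torch.Tensor, tau: float) -> torch.Tensor:
        x = self.pool(F.relu(self.bn(self.conv3x3(feat))))
        logits = self.conv1x1(x).permute(0, 2, 3, 1)   # (T, h, w, 2)
        mask = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
        # Upsample the coarse mask back to the feature resolution.
        return F.interpolate(mask.unsqueeze(1), size=feat.shape[-2:],
                             mode="nearest").squeeze(1)  # (T, H, W)
```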
3.4 Loss functions
Inspired by [27], we take the average of each frame's prediction as the final output of the corresponding video, and our optimization objective is to minimize:
$\mathcal{L} = \sum_{(v,y)} \left[ -y \log(P(v)) + \lambda \cdot \sum_{n=1}^{N} (r - R_T)^2 \right]. \quad (16)$
The first term is the cross-entropy between the prediction $P(v)$ for input video $v$ and the corresponding one-hot label $y$. We denote by $r$ in the second term the ratio of selected frames in every mini-batch and by $R_T$ the target ratio set before training ($R_S$ is the target ratio when extending the navigation module to reduce spatial redundancy). We let $r$ approximate $R_T$ by adding the second loss term, and we manage the trade-off between efficiency and accuracy through the factor $\lambda$ which balances the two terms.
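A sketch of this objective is shown below, assuming the selection ratios are gathered from all N navigation modules; the function and argument names are ours:

```python
import torch
import torch.nn.functional as F

def afnet_loss(logits: torch.Tensor, labels: torch.Tensor,
               masks: list, target_ratio: float, lam: float = 0.1):
    """Cross-entropy plus the per-block (r - R_T)^2 penalty of Eq. (16)."""
    ce = F.cross_entropy(logits, labels)
    ratio_penalty = sum((m.float().mean() - target_ratio) ** 2 for m in masks)
    return ce + lam * ratio_penalty  # lam balances accuracy vs. efficiency
```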
1. What is the main contribution of the paper on video recognition using Ample and Focal Network (AFNet)?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical contribution and comparisons with prior works?
3. Do you have any concerns regarding the method's efficiency or the selection of frames?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any other relevant works that should be considered or discussed in the paper?

Summary Of The Paper
This paper presents an efficient framework called Ample and Focal Network (AFNet) for video recognition. This framework consists of two branches: the ample branch preserves all input features by lightweight computation; the focal branch extracts features only from selected frames to save cost. By fusing the features of these two branches, AFNet can keep its focus on the crucial information while requiring less computation. Experiments on five datasets demonstrate the superiority of AFNet compared to state-of-the-art methods. However, I have some concerns about this paper. My detailed comments are as follows.
Strengths And Weaknesses
Strengths:
This paper presents an efficient framework called Ample and Focal Network for video recognition, which uses more frames but reduces computation.
The authors use a two-branch framework, in which two branches are complementary to each other, to prevent information loss when selecting fewer frames.
The authors propose a navigation module that can select informative frames to save computational cost and is compatible with spatial adaptive works.
Weaknesses:
My biggest concern lies in the technical contribution of this paper. The method uses two streams: one deals with frames at high spatial resolution but low temporal resolution, while the other processes frames at low spatial resolution and high temporal resolution. Such an idea seems similar to the SlowFast [a] network.
As for adaptive frame selection, SCSampler [b] also uses a tiny network to select frames and a larger network to perform action recognition. I suggest adding more discussion of the differences between them.
In the experiments, the authors use only 12 frames (compared with 8-frame settings in previous methods), which is not convincing enough to verify the efficiency of the proposed method.
[a] SlowFast Networks for Video Recognition. ICCV 2019.
[b] Compressing Videos to One Clip With Single-Step Sampling. CVPR 2022.
Questions
The proposed method uses several navigation modules in a recurrent manner. What did the model learn to select in different stages?
In Section 3.1, the logit $p_n^t$ for frame $t$ is generated with Eq. (3). Then for $p_n = \{p_n^t\}_{t=1}^{T}$, all $p_n^t$ have the same values, as they are generated with the same convolution weights $W_2$ and feature $\tilde{v}_{y_n^a}$. More discussion is required.
The second term in Eq. (16) is used to constrain the ratio of selected frames via the squared difference between $r$ and $R_T$. However, this introduces an additional hyper-parameter $R_T$ and restricts the model from selecting fewer frames for lower computation cost.
In Tab. 1, AFNet achieves higher accuracy with fewer frames, which is counterintuitive. The explanation given by the authors only covers why AFNet achieves higher accuracy than TSN, not why selecting fewer frames yields higher accuracy for AFNet. Clearer explanations are required.
Why is AFNet not compared to other efficient frameworks like MoViNet [3]? More explanation is needed.
Some action recognition methods are missing, such as Two-Stream Network [1] and T-C3D [2].
[1] “Convolutional Two-Stream Network Fusion for Video Action Recognition.” CVPR (2016) [2] “T-C3D: Temporal Convolutional 3D Network for Real-Time Action Recognition.” AAAI (2018). [3] "Movinets: Mobile video networks for efficient video recognition." CVPR (2021)
Limitations
The authors adequately addressed the limitations and potential negative societal impact of their work.
NIPS | Title
Look More but Care Less in Video Recognition
Abstract
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize fewer frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. Our code is available at https://github.com/BeSpontaneous/AFNet-pytorch.
1 Introduction
Online videos have grown wildly in recent years and video analysis is necessary for many applications such as recommendation [6], surveillance [4, 5] and autonomous driving [31, 17]. These applications require not only accurate but also efficient video understanding algorithms. With the introduction of deep learning networks [3] in video recognition, there has been rapid advancement in the performance of the methods in this area. Though successful, these deep learning methods often cost huge computation, making them hard to be deployed in the real world.
In video recognition, we need to sample multiple frames to represent each video which makes the computational cost scale proportionally to the number of sampled frames. In most cases, a small proportion of all the frames is sampled for each input, which only contains limited information of the original video. A straightforward solution is to sample more frames to the network but the computation expands proportionally to the number of sampled frames.
There are some works proposed recently to dynamically sample salient frames [29, 16] for higher efficiency. The selection step of these methods is made before the frames are sent to the classification network, which means the information of those unimportant frames is totally lost and it consumes a considerable time for the selection procedure. Some other methods proposed to address the spatial redundancy in action recognition by adaptively resizing the resolution based on the importance of each frame [23], or cropping the most salient patch for every frame [28]. However, these methods still completely abandon the information that the network recognizes as unimportant and introduce a policy network to make decisions for each sample which leads to extra computation and complicates the training strategies.
∗Corresponding Author: markcheung9248@gmail.com.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In our work, we go from another perspective compared with previous works. We propose a method which makes frame selection within the classification network. Shown in Figure 1, we design an architecture called Ample and Focal Network (AFNet) which is composed of two branches: the ample branch takes a glimpse of all the input features with lightweight computation as we downsample the features for smaller resolution and further reduce the channel size; the focal branch receives the guidance from the proposed navigation module to squeeze the temporal size by only computing on the selected frames to save cost; in the end, we adaptively fuse the features of these two branches to prevent the information loss of the unselected frames.
In this manner, the two branches are both very lightweight and we enable AFNet to look broadly by sampling more frames and stay focused on the important information for less computation. Considering these two branches in a uniform manner, on the one hand, we can avoid the loss of information compared to other dynamic methods as the ample branch preserves the information of all the input; on the other hand, we can restrain the noise from the unimportant frames by deactivating them in each convolutional block. Further, we have demonstrated that the dynamic selection strategy at intermediate features is beneficial for temporal modeling as it implicitly implements frame-wise attention which can enable our network to utilize fewer frames while obtaining higher accuracy. In addition, instead of introducing a policy network to select frames, we design a lightweight navigation module which can be plugged into the network so that our method can easily be trained in an end-toend fashion. Furthermore, AFNet is compatible with spatial adaptive works which can help to further reduce the computations of our method.
We summarize the main contributions as follows:
• We propose an adaptive two-branch framework which enables 2D-CNNs to process more frames with less computational cost. With this design, we not only prevent the loss of information but strengthen the representation of essential frames.
• We propose a lightweight navigation module to dynamically select salient frames at each convolution block which can easily be trained in an end-to-end fashion.
• The selection strategy at intermediate features not only empowers the model with strong flexibility as different frames will be selected at different layers, but also enforces implicit temporal modeling which enables AFNet to obtain higher accuracy with fewer frames.
• We have conducted comprehensive experiments on five video recognition datasets. The results show the superiority of AFNet compared to other competitive methods.
2 Related Work
2.1 Video Recognition
The development of deep learning in recent years serves as a huge boost to the research of video recognition. A straightforward method for this task is using 2D-CNNs to extract the features of sampled frames and use specific aggregation methods to model the temporal relationships across frames. For instance, TSN [27] proposes to average the temporal information between frames. While TSM [20] shifts channels with adjacent frames to allow information exchange at temporal dimension. Another approach is to build 3D-CNNs to for spatiotemporal learning, such as C3D [26], I3D [3] and SlowFast [8]. Though being shown effective, methods based on 3D-CNNs are computationally expensive, which brings great difficulty in real-world deployment.
While the two-branch design has been explored by SlowFast, our motivation and detailed structure are different from it in the following ways: 1) network category: SlowFast is a static 3D model, but
AFNet is a dynamic 2D network; 2) motivation: SlowFast aims to collect semantic information and changing motion with branches at different temporal speeds for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss; 3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution; 4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module.
2.2 Redundancy in Data
The efficiency of 2D-CNNs has been broadly studied in recent years. While some of the works aim at designing efficient network structure [13], there is another line of research focusing on reducing the intrinsic redundancy in image-based data [32, 11]. In video recognition, people usually sample limited number of frames to represent each video to prevent numerous computational costs. Even though, the computation for video recognition is still a heavy burden for researchers and a common strategy to address this problem is reducing the temporal redundancy in videos as not all frames are essential to the final prediction. [33] proposes to use reinforcement learning to skip frames for action detection. There are other works [29, 16] dynamically sampling salient frames to save computational cost. As spatial redundancy widely exists in image-based data, [23] adaptively processes frames with different resolutions. [28] provides the solution as cropping the most salient patch for each frame. However, the unselected regions or frames of these works are completely abandoned. Hence, there will be some information lost in their designed procedures. Moreover, most of these works adopt a policy network to make dynamic decisions, which introduces additional computation somehow and splits the training into several stages. In contrast, our method adopts a two-branch design, allocating different computational resources based on the importance of each frame and preventing the loss of information. Besides, we design a lightweight navigation module to guide the network where to look, which can be incorporated into the backbone network and trained in an end-to-end way. Moreover, we validate that the dynamic frame selection at intermediate features will not only empower the model with strong flexibility as different frames will be selected at different layers, but result in learned frame-wise weights which enforce implicit temporal modeling.
3 Methodology
Intuitively, considering more frames enhances the temporal modeling but results in higher computational cost. To efficiently achieve the competitive performance, we propose AFNet to involve more frames but wisely extract information from them to keep the low computational cost. Specifically, we design a two-branch structure to treat frames differently based on their importance and process the features in an adaptive manner which can provide our method with strong flexibility. Besides, we demonstrate that the dynamic selection of frames in the intermediate features results in learned frame-wise weights which can be regarded as implicit temporal modeling.
3.1 Architecture Design
As is shown in Figure 2, we design our Ample and Focal (AF) module as a two-branch structure: the ample branch (top) processes abundant features of all the frames in a lower resolution and a squeezed channel size; the focal branch (bottom) receives the guidance from ample branch generated by the navigation module and makes computation only on the selected frames. Such design can be conveniently applied to existing CNN structures to build AF module.
Ample Branch. The ample branch is designed to involve all frames with cheap computation, serving as 1) guidance for selecting salient frames, helping the focal branch concentrate on important information; and 2) a complementary stream to the focal branch that prevents information loss via a carefully designed fusion strategy.
Formally, we denote video sample $i$ as $v_i$, containing $T$ frames: $v_i = \{f_1^i, f_2^i, \ldots, f_T^i\}$. For convenience, we omit the superscript $i$ in the following sections if no confusion arises. We denote the input of the ample branch as $v_x \in \mathbb{R}^{T \times C \times H \times W}$, where $C$ represents the channel size and $H \times W$ is the spatial size. The features generated by the ample branch can be written as:
$$v_{y_a} = F^a(v_x), \quad (1)$$

where $v_{y_a} \in \mathbb{R}^{T \times (C_o/2) \times (H_o/2) \times (W_o/2)}$ represents the output of the ample branch and $F^a$ stands for a series of convolution blocks; the channel size, height, and width at the focal branch are denoted as $C_o$, $H_o$, and $W_o$, respectively. We set the stride of the first convolution block to 2 to downsample the resolution of this branch, and we upsample the feature at the end of this branch by nearest interpolation.
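As a rough illustration, the ample branch can be realized as ordinary convolution blocks with halved channels, a stride-2 first block, and nearest-neighbor upsampling at the end. Below is a minimal PyTorch sketch under those assumptions; the class and variable names are ours, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AmpleBranch(nn.Module):
    """Sketch: processes all T frames at half channels and half resolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        reduced = out_channels // 2  # squeezed channel size C_o / 2
        # First block uses stride 2 to halve the spatial resolution.
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels, reduced, 3, stride=2, padding=1),
            nn.BatchNorm2d(reduced), nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(
            nn.Conv2d(reduced, reduced, 3, padding=1),
            nn.BatchNorm2d(reduced), nn.ReLU(inplace=True))

    def forward(self, v_x):
        # v_x: (T, C, H, W) -- all frames are processed cheaply.
        y = self.block2(self.block1(v_x))
        # Upsample back to the focal branch's spatial size at the end.
        return F.interpolate(y, scale_factor=2, mode="nearest")
```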
Navigation Module. The proposed navigation module is designed to guide the focal branch where to look by adaptively selecting the most salient frames of video $v_i$.
Specifically, the navigation module generates a binary temporal mask $L_n$ from the output of the $n$-th convolution block of the ample branch, $v_{y_a^n}$. First, average pooling is applied to $v_{y_a^n}$ to resize the spatial dimension to $1 \times 1$; then we perform a convolution to reduce the channel size to 2:

$$\tilde{v}_{y_a^n} = \mathrm{ReLU}\big(\mathrm{BN}\big(W_1 * \mathrm{Pool}(v_{y_a^n})\big)\big), \quad (2)$$

where $*$ stands for convolution and $W_1$ denotes the weights of the $1 \times 1$ convolution. After that, we reshape the feature $\tilde{v}_{y_a^n}$ from $T \times 2 \times 1 \times 1$ to $1 \times (2 \times T) \times 1 \times 1$ so that we can model the temporal relations within each video along the channel dimension:

$$p_n^t = W_2 * \tilde{v}_{y_a^n}, \quad (3)$$

where $W_2$ represents the weights of the second $1 \times 1$ convolution, which generates a two-class logit $p_n^t \in \mathbb{R}^2$ for each frame $t$, indicating whether to select it. However, directly sampling from such a discrete distribution is non-differentiable. In this work, we apply Gumbel-Softmax [14] to resolve this non-differentiability. Specifically, we generate a normalized categorical distribution using Softmax:
$$\pi = \left\{ l_j \;\middle|\; l_j = \frac{\exp\big(p_n^{t,j}\big)}{\exp\big(p_n^{t,0}\big) + \exp\big(p_n^{t,1}\big)} \right\}, \quad (4)$$

where $p_n^{t,j}$ denotes the $j$-th component of the logit for frame $t$, and we draw discrete samples from the distribution $\pi$ as:

$$L = \arg\max_j \,(\log l_j + G_j), \quad (5)$$

where $G_j = -\log(-\log U_j)$ is sampled from a Gumbel distribution, with $U_j$ drawn from the uniform distribution $\mathrm{Unif}(0, 1)$. As argmax cannot be differentiated, we relax the discrete sample $L$ in backpropagation via Softmax:

$$\hat{l}_j = \frac{\exp\big((\log l_j + G_j)/\tau\big)}{\sum_{k=1}^{2} \exp\big((\log l_k + G_k)/\tau\big)}, \quad (6)$$

where the distribution $\hat{l}$ becomes a one-hot vector as the temperature factor $\tau \to 0$; we let $\tau$ decrease from 1 to 0.01 during training.
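To make Equations 2–6 concrete, here is a minimal PyTorch sketch of the navigation module. The module name, the channel sizes, and the choice of the second logit column as the "select" class are our assumptions; we use `F.gumbel_softmax`, which implements the same straight-through relaxation described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Navigation(nn.Module):
    """Sketch of the navigation module (Eqs. 2-6): produces a binary
    temporal mask over T frames from ample-branch features."""
    def __init__(self, in_channels, num_frames):
        super().__init__()
        # Eq. 2: 1x1 conv squeezes the channel size to 2 per frame.
        self.conv1 = nn.Conv2d(in_channels, 2, kernel_size=1)
        self.bn = nn.BatchNorm2d(2)
        # Eq. 3: after reshaping to (1, 2T, 1, 1), a second 1x1 conv
        # mixes information across frames, giving 2 logits per frame.
        self.conv2 = nn.Conv2d(2 * num_frames, 2 * num_frames, kernel_size=1)
        self.num_frames = num_frames

    def forward(self, v_ya, tau=1.0):
        # v_ya: (T, C, H, W) features of one video from the ample branch.
        x = F.adaptive_avg_pool2d(v_ya, 1)             # (T, C, 1, 1)
        x = F.relu(self.bn(self.conv1(x)))             # (T, 2, 1, 1), Eq. 2
        x = x.reshape(1, 2 * self.num_frames, 1, 1)    # temporal-mixing view
        logits = self.conv2(x).reshape(self.num_frames, 2)  # Eq. 3
        # Eqs. 4-6: Gumbel-Softmax with straight-through (hard) samples.
        mask = F.gumbel_softmax(logits, tau=tau, hard=True)[:, 1]  # (T,)
        return mask  # binary per-frame selection mask L_n

# Usage: mask = Navigation(64, 8)(torch.randn(8, 64, 14, 14), tau=0.5)
```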
Focal Branch. The focal branch is guided by the navigation module to compute only on the selected frames, which reduces the computational cost and the potential noise from redundant frames.
The features at the $n$-th convolution block of this branch are denoted as $v_{y_f^n} \in \mathbb{R}^{T \times C_o \times H_o \times W_o}$. Based on the temporal mask $L_n$ generated by the navigation module, we select, for each video, the frames with non-zero values in the binary mask and apply convolutions only on these extracted frames $v'_{y_f^n} \in \mathbb{R}^{T_l \times C_o \times H_o \times W_o}$:

$$v'_{y_f^n} = F_f^n\big(v'_{y_f^{n-1}}\big), \quad (7)$$

where $F_f^n$ is the $n$-th convolution block of this branch; we set the group number of the convolutions to 2 to further reduce computation. After the convolution at the $n$-th block, we generate a zero tensor with the same shape as $v_{y_f^n}$ and fill it by adding $v'_{y_f^n}$ and $v_{y_f^{n-1}}$, following the residual design of [12].
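The selective computation of Eq. 7 can be sketched as a gather-compute-scatter step with a residual fill. This is our own simplification, processing one video at a time, not the authors' released code:

```python
import torch
import torch.nn as nn

def focal_block(conv_block: nn.Module, v_prev: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
    """v_prev: (T, C_o, H_o, W_o) focal features from the previous block;
    mask: (T,) binary temporal mask L_n from the navigation module."""
    idx = mask.nonzero(as_tuple=True)[0]      # indices of selected frames
    out = torch.zeros_like(v_prev)            # zero tensor, same shape
    if idx.numel() > 0:
        out[idx] = conv_block(v_prev[idx])    # compute only T_l frames (Eq. 7)
    return out + v_prev                       # residual fill, following [12]

# Usage with grouped convolutions (groups=2, as in the paper):
block = nn.Conv2d(64, 64, 3, padding=1, groups=2)
y = focal_block(block, torch.randn(8, 64, 28, 28),
                torch.tensor([1, 0, 1, 0, 1, 0, 0, 1]))
```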
At the end of the two branches, inspired by [1, 11], we generate a weighting factor $\theta$ via pooling and linear layers to fuse the features from the two branches:

$$v_y = \theta \odot v_{y_a} + (1 - \theta) \odot v_{y_f}, \quad (8)$$

where $\odot$ denotes channel-wise multiplication.
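A possible realization of the fusion in Eq. 8 is sketched below. The text does not fully specify how $\theta$ is produced, so the squeeze-style gate (average pooling, a linear layer, and a sigmoid over the summed features) is an assumption on our part:

```python
import torch
import torch.nn as nn

class Fusion(nn.Module):
    """Sketch of Eq. 8: channel-wise gate blending ample and focal features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, v_ya, v_yf):
        # theta: (T, C) -> broadcast to (T, C, 1, 1) for channel-wise product.
        theta = self.gate(v_ya + v_yf)[..., None, None]
        return theta * v_ya + (1.0 - theta) * v_yf
```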
3.2 Implicit Temporal Modeling
While our work is mainly designed to reduce the computation in video recognition, like [28, 24], we demonstrate that AFNet enforces implicit temporal modeling through the dynamic selection of frames in the intermediate features. Consider a TSN [27] network that adopts the vanilla ResNet [12] structure; the feature at the $n$-th convolutional block in each stage can be written as $v_n \in \mathbb{R}^{T \times C \times H \times W}$. Thus, the feature at the $(n{+}1)$-th block can be represented as:

$$v_{n+1} = v_n + F_{n+1}(v_n) = (1 + \Delta v_{n+1})\, v_n, \quad (9)$$

$$\Delta v_{n+1} = \frac{F_{n+1}(v_n)}{v_n}, \quad (10)$$
where $F_{n+1}$ is the $(n{+}1)$-th convolutional block and we define $\Delta v_{n+1}$ as the coefficient learned by this block. We can then write the output of this stage, $v_N$, as:

$$v_N = \left[\prod_{n=2}^{N} (1 + \Delta v_n)\right] * v_1. \quad (11)$$
Similarly, we define the features in the ample and focal branches as:

$$v_{y_a^N} = \left[\prod_{n=2}^{N} \big(1 + \Delta v_{y_a^n}\big)\right] * v_{y_1}, \quad (12)$$

$$v_{y_f^N} = \left[\prod_{n=2}^{N} \big(1 + L_n * \Delta v_{y_f^n}\big)\right] * v_{y_1}, \quad (13)$$
where $L_n$ is the binary temporal mask generated by Equation 5 and $v_{y_1}$ denotes the input of this stage. Based on Equation 8, the output of this stage is:

$$v_{y_N} = \theta \odot v_{y_a^N} + (1 - \theta) \odot v_{y_f^N} = \left\{\theta \odot \left[\prod_{n=2}^{N} \big(1 + \Delta v_{y_a^n}\big)\right] + (1 - \theta) \odot \left[\prod_{n=2}^{N} \big(1 + L_n * \Delta v_{y_f^n}\big)\right]\right\} * v_{y_1}. \quad (14)$$
Since $L_n$ is a temporal-wise binary mask, it decides whether the coefficient $\Delta v_{y_f^n}$ is applied to each frame at every convolutional block. As the whole stage is made up of multiple convolutional blocks, the serial multiplication of the focal branch's coefficients with the binary masks $L_n$ approximates soft weights. This yields learned frame-wise weights for each video, which we regard as implicit temporal modeling. Although we do not explicitly build any temporal modeling module, the generation of $L_n$ in Equation 3 already takes temporal information into account, so the learned temporal weights amount to implicit temporal modeling at each stage.
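A toy computation illustrates how repeated binary masking across blocks yields soft frame-wise weights (all numbers below are synthetic):

```python
import torch

T, N = 4, 6                        # 4 frames, a stage with 6 blocks
torch.manual_seed(0)
delta = 0.1 * torch.rand(N, T)     # toy per-block, per-frame coefficients
L = torch.randint(0, 2, (N, T))    # binary temporal masks from navigation

# Focal-branch accumulation of Eq. 13: product over blocks of (1 + L_n * delta_n).
weights = torch.prod(1.0 + L * delta, dim=0)
print(weights)  # a distinct soft weight per frame -- implicit frame-wise attention
```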
3.3 Spatial Redundancy Reduction
In this part, we show that our approach is compatible with methods that address spatial redundancy. We extend the navigation module by applying a procedure similar to the temporal mask generation, following [11], to generate a spatial logit for the $n$-th convolution block, as shown in Figure 3:
$$q_n^t = W_4 * \Big(\mathrm{Pool}\big(\mathrm{ReLU}\big(\mathrm{BN}(W_3 * v_{y_a^n})\big)\big)\Big), \quad (15)$$
where $W_3$ denotes the weights of the $3 \times 3$ convolution and $W_4$ stands for the weights of the convolution with kernel size $1 \times 1$. After that, we again use Gumbel-Softmax to sample from the discrete distribution, generating a spatial mask $M_n$ that guides the focal branch to focus only on the salient regions of the selected frames, further reducing the cost.
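A sketch of the spatial extension in Eq. 15 follows; the hidden width and the pooled grid size are our assumptions, since the text leaves them unspecified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialNavigation(nn.Module):
    """Sketch of Eq. 15: per-location logits for the spatial mask M_n."""
    def __init__(self, in_channels, hidden=32):
        super().__init__()
        self.conv3x3 = nn.Conv2d(in_channels, hidden, 3, padding=1)  # W_3
        self.bn = nn.BatchNorm2d(hidden)
        self.conv1x1 = nn.Conv2d(hidden, 2, 1)                       # W_4

    def forward(self, v_ya, tau=1.0, pool_size=7):
        x = F.relu(self.bn(self.conv3x3(v_ya)))
        x = F.adaptive_avg_pool2d(x, pool_size)   # coarse spatial grid
        logits = self.conv1x1(x)                  # (T, 2, S, S)
        # Gumbel-Softmax over the 2 classes at every location -> binary mask.
        mask = F.gumbel_softmax(logits, tau=tau, hard=True, dim=1)[:, 1]
        return mask                               # (T, S, S)
```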
3.4 Loss Functions
Inspired by [27], we take the average of the per-frame predictions as the final output for the corresponding video, and our optimization objective is to minimize:

$$\mathcal{L} = \sum_{(v, y)} \left[ -y \log\big(P(v)\big) + \lambda \cdot \sum_{n=1}^{N} (r - R_T)^2 \right]. \quad (16)$$
The first term is the cross-entropy between the prediction $P(v)$ for input video $v$ and the corresponding one-hot label $y$. In the second term, $r$ denotes the ratio of selected frames in each mini-batch and $R_T$ is the target ratio set before training ($R_S$ is the corresponding target ratio when the navigation module is extended to reduce spatial redundancy). The second term drives $r$ toward $R_T$, and the factor $\lambda$ balances the two terms, managing the trade-off between efficiency and accuracy.
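A minimal sketch of the objective in Eq. 16; the function and variable names are ours, and the value of `lam` is an arbitrary placeholder:

```python
import torch
import torch.nn.functional as F

def afnet_loss(logits, labels, masks, target_ratio, lam=0.5):
    """logits: (B, num_classes) averaged per-frame predictions;
    masks: list of (B, T) binary temporal masks, one per navigated block;
    target_ratio: the preset R_T; lam: the balancing factor lambda."""
    ce = F.cross_entropy(logits, labels)
    # Penalize deviation of the selected-frame ratio r from R_T at each block.
    penalty = sum((m.float().mean() - target_ratio) ** 2 for m in masks)
    return ce + lam * penalty

# Toy usage:
# loss = afnet_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)),
#                   [torch.randint(0, 2, (4, 8)) for _ in range(3)], 0.5)
```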
4 Empirical Validation
In this section, we conduct comprehensive experiments to validate the proposed method. We first compare our method with plain 2D-CNNs to demonstrate that our AF module implicitly implements temporal-wise attention, which is beneficial for temporal modeling. Then, we validate AFNet's efficiency: it introduces more frames yet costs less computation than other methods. Further, we show AFNet's strong performance compared with other efficient action recognition frameworks. Finally, we provide qualitative analysis and extensive ablation results to demonstrate the effectiveness of the proposed navigation module and two-branch design.
Datasets. Our method is evaluated on five video recognition datasets: (1) Mini-Kinetics [23, 24] is a subset of Kinetics [15] that selects 200 classes from Kinetics, containing 121k training videos and 10k validation videos; (2) ActivityNet-v1.3 [2] is an untrimmed dataset with 200 action categories and an average duration of 117 seconds, containing 10,024 video samples for training and 4,926 for validation; (3) Jester is a hand gesture recognition dataset introduced by [22], consisting of 27 classes with 119k training videos and 15k validation videos; (4) Something-Something V1 & V2 [10] are two human action datasets with strong temporal information, including 98k and 194k videos for training and validation, respectively.
Data pre-processing. We sample 8 frames uniformly to represent every video on Jester and Mini-Kinetics, and 12 frames on ActivityNet and Something-Something, to compare with existing works unless otherwise specified. During training, the data is randomly cropped to 224 × 224 following [35], and we perform random flipping except on Something-Something. At the inference stage, all frames are center-cropped to 224 × 224 and we use one crop and one clip per video for efficiency.
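For concreteness, the described per-frame pipelines might be written with torchvision as below; resizing the short side to 256 before center cropping is our assumption, and the exact augmentation parameters of [35] may differ:

```python
import torchvision.transforms as T

# Illustrative per-frame pipelines matching the described setup.
train_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),   # omitted for Something-Something
    T.ToTensor(),
])
val_transform = T.Compose([     # one-crop, one-clip evaluation
    T.Resize(256),              # short-side resize (our assumption)
    T.CenterCrop(224),
    T.ToTensor(),
])
```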
Implementation details. Our method is built on ResNet50 [12] by default, and we replace the first three stages of the network with our proposed AF modules. We first train our two-branch network from scratch on ImageNet for fair comparison with other methods. Then we add the proposed navigation module and train it along with the backbone network on the video recognition datasets. In our implementation, $R_T$ denotes the ratio of selected frames, while $R_S$ represents the ratio of selected regions, which decreases stepwise from 1 to the value set before training. We let the temperature $\tau$ in the navigation module decay exponentially from 1 to 0.01 during training. Due to limited space, we include more implementation details in the supplementary material.
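The exponential temperature decay can be implemented as follows; whether the schedule is stepped per epoch or per iteration is our assumption:

```python
def gumbel_temperature(step, total_steps, tau_start=1.0, tau_end=0.01):
    """Exponentially decay the Gumbel-Softmax temperature from 1 to 0.01."""
    frac = step / max(total_steps - 1, 1)
    return tau_start * (tau_end / tau_start) ** frac

# gumbel_temperature(0, 100) == 1.0; gumbel_temperature(99, 100) == 0.01
```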
4.1 Comparisons with Existing Methods
Less is more. First, we implement AFNet on the Something-Something V1 and Jester datasets with 8 sampled frames. We compare it with the baseline method TSN, as both methods do not explicitly build a temporal modeling module and are built on ResNet50. In Table 1, our method AFNet(RT=1.00) shows performance similar to TSN when selecting all frames. Nonetheless, when we select fewer frames, AFNet exhibits much higher accuracy than both TSN and AFNet(RT=1.00), achieving Less is More: utilizing fewer frames yet obtaining higher accuracy. The results may seem counterintuitive, as seeing more frames is usually beneficial for video recognition. The explanation is that the two-branch design of AFNet preserves the information of all input frames, and the selection of salient frames at intermediate features implements implicit temporal modeling, as analyzed in Section 3.2. Since the binary mask learned by the navigation module decides whether the coefficient is calculated for each frame at every convolutional block, it results in learned temporal weights for each video. To better illustrate this point, we conduct an experiment in which we remove Gumbel-Softmax [14] from our navigation module and modify it to learn soft temporal weights for the features of the focal branch. We observe that AFNet(soft-weights) performs similarly to AFNet(RT=0.25) and AFNet(RT=0.50) and outperforms AFNet(RT=1.00) significantly, which indicates that learning soft frame-wise weights has a similar effect.
More is less. We combine our method with the temporal shift module (TSM [20]) to validate that AFNet can further reduce the redundancy of such competitive methods and achieve More is Less: seeing more frames with less computation. We implement our method on the Something-Something V1 & V2 datasets, which contain strong temporal information; the results are shown in Table 2.
Table 3: Comparison with competitive efficient video recognition methods on Mini-Kinetics. AFNet achieves the best trade-off among existing works. GFLOPs denotes the average computation to process one video.

Method           | Top-1 Acc. | GFLOPs
LiteEval [30]    | 61.0%      | 99.0
SCSampler [16]   | 70.8%      | 42.0
AR-Net [23]      | 71.7%      | 32.0
AdaFuse [24]     | 72.3%      | 23.0
AdaFocus [28]    | 72.2%      | 26.6
VideoIQ [25]     | 72.3%      | 20.4
AFNet (RT=0.4)   | 72.8%      | 19.4
AFNet (RT=0.8)   | 73.5%      | 22.0
Table 4: Comparison with competitive efficient video recognition methods on ActivityNet. AFNet achieves the best trade-off among existing works. GFLOPs denotes the average computation to process one video.

Method                  | mAP   | GFLOPs
AdaFrame [29]           | 71.5% | 79.0
LiteEval [30]           | 72.7% | 95.1
ListenToLook [9]        | 72.3% | 81.4
SCSampler [16]          | 72.9% | 42.0
AR-Net [23]             | 73.8% | 33.5
VideoIQ [25]            | 74.8% | 28.1
AdaFocus [28]           | 75.0% | 26.6
AFNet (RS=0.4, RT=0.8)  | 75.6% | 24.6
Compared to TSM, which samples 8 frames, our method shows significant advantages in performance, as we introduce more frames and the two-branch structure preserves the information of all frames. Yet our computational cost is much smaller than TSM's, since the two-branch design allocates different computational resources to frames and the proposed navigation module adaptively skips unimportant frames. Moreover, AFNet outperforms many static methods, which carefully design their structures for better temporal modeling, in both accuracy and efficiency. This can be explained by the fact that the navigation module restrains the noise of unimportant frames and enforces frame-wise attention, which is beneficial for temporal modeling. Compared with other competitive dynamic methods such as AdaFuse and AdaFocus, our method shows clearly better performance in both accuracy and computation: at similar computational cost, AFNet outperforms AdaFuse and AdaFocus by 3.1% and 1.8%, respectively, on Something-Something V1. Furthermore, we implement our method on other backbones for even higher accuracy and efficiency. When we build AFNet on the efficient MobileNetV3 structure, we obtain performance similar to TSM with a computation of only 2.3 GFLOPs. Besides, AFNet-TSM(RT=0.8) with a ResNet101 backbone achieves accuracies of 50.1% and 63.2% on Something-Something V1 and V2, respectively, which further validates the effectiveness and generalization ability of our framework.
Comparisons with competitive dynamic methods. We then implement our method on Mini-Kinetics and ActivityNet and compare AFNet with other efficient video recognition approaches. First, we validate our method on Mini-Kinetics: AFNet shows the best performance in both accuracy and computation compared with other efficient approaches in Table 3. To demonstrate that AFNet can further reduce spatial redundancy, we extend the navigation module to select salient regions of important frames on ActivityNet. We move the temporal navigation module to the first layer of the network to avoid large variance in the features when incorporating the spatial navigation module; note that we apply this procedure only in this part. As Table 4 shows, our method obtains the best performance while costing the least computation compared to other works. Moreover, we vary the ratio of selected frames and plot the mean Average Precision and computational cost of various methods in Figure 4. We conclude that AFNet exhibits a better trade-off between accuracy and efficiency than other works.
4.2 Visualizations
We show the distribution of $R_T$ across convolution blocks under different selection ratios in Figure 5 and fit 3rd-order polynomials to display the trend (shown as dashed lines). One can see a decreasing trend in $R_T$ for all curves as the convolution block index increases. This can be explained by the fact that earlier layers mostly capture low-level information, which diverges relatively strongly across frames, whereas high-level semantics are more similar between frames; AFNet therefore tends to skip more at later convolution blocks. In Figure 6, we visualize the frames selected in the 3rd block of AFNet with RT=0.5 on the validation set of Something-Something V1, where we uniformly sample 8 frames. Our navigation module effectively guides the focal branch to concentrate on frames that are more task-relevant and deactivates frames that contain similar information.
4.3 Ablation Study
In this part, we implement our method on ActivityNet with 12 sampled frames and conduct a comprehensive ablation study to verify the effectiveness of our design.
Effect of the two-branch design. We first incorporate our navigation module into ResNet50 and compare it with AFNet to demonstrate the strength of our two-branch architecture. As shown in Table 5, AFNet has substantial accuracy advantages under different ratios of selected frames. Aside from this, models that adopt our structure but use a fixed sampling policy also perform significantly better than the single-branch network, which further demonstrates the effectiveness of our two-branch structure and the necessity of preserving the information of all frames.
Effect of the navigation module. We further compare our proposed navigation module with three alternative sampling strategies at different selection ratios: (1) random sampling; (2) uniform sampling, i.e., sampling frames at equal steps; (3) normal sampling, i.e., sampling frames from a standard Gaussian distribution. As shown in Table 5, our proposed strategy consistently outperforms these fixed sampling policies under different selection ratios, which validates the effectiveness of the navigation module. Moreover, the advantage of our method is more pronounced when the ratio of selected frames is small, demonstrating that our selected frames are more task-relevant and contain the information essential for recognition. Further, we evaluate the extension of the navigation module that reduces spatial redundancy and compare it with: (1) random sampling; (2) center cropping. Our method performs better than these fixed sampling strategies under various selection ratios, which verifies the effectiveness of this design.
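For reference, the three fixed baselines can be sketched as data-independent index samplers. This is our interpretation; in particular, "normal sampling" is read here as drawing frame indices from a Gaussian centered on the clip, which may differ from the authors' exact procedure:

```python
import numpy as np

def sample_indices(T, k, strategy="uniform", seed=0):
    """Pick k of T frame indices with a fixed, data-independent policy."""
    rng = np.random.default_rng(seed)
    if strategy == "random":
        return np.sort(rng.choice(T, size=k, replace=False))
    if strategy == "uniform":
        return np.linspace(0, T - 1, num=k).round().astype(int)  # equal step
    if strategy == "normal":
        # Gaussian centered on the middle of the clip, clipped to range;
        # may yield fewer than k unique indices, which is fine for a sketch.
        idx = rng.normal(loc=(T - 1) / 2, scale=T / 4, size=4 * k)
        return np.unique(np.clip(idx.round(), 0, T - 1).astype(int))[:k]
    raise ValueError(strategy)
```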
5 Conclusion
This paper proposes the adaptive Ample and Focal Network (AFNet) to reduce temporal redundancy in videos, considering both architecture design and the intrinsic redundancy in data. Our method enables 2D-CNNs to access more frames to look broadly, but with less computation, by staying focused on the salient information. AFNet exhibits promising performance, as our two-branch design preserves the information of all input frames instead of discarding part of the knowledge at the beginning of the network. Moreover, the dynamic temporal selection within the network not only restrains the noise of unimportant frames but also enforces implicit temporal modeling. This enables AFNet to obtain even higher accuracy with fewer frames compared with static methods without a temporal modeling module. We further show that our method can be extended to reduce spatial redundancy by computing only the important regions of the selected frames. Comprehensive experiments show that our method outperforms competing efficient approaches in both accuracy and computational efficiency.
Acknowledgments and Disclosure of Funding
Research was sponsored by the DEVCOM Analysis Center and was accomplished under Cooperative Agreement Number W911NF-22-2-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

1. What is the main contribution of the paper, and how does it improve upon previous works in video recognition?
2. What are the strengths and weaknesses of the proposed Ample and Focal Network (AFNet) approach?
3. Are there any concerns regarding the efficiency and effectiveness of the frame selection process, and how does it impact the overall performance of the method?
4. How does the proposed method compare to other recent works in video recognition, such as SlowFast Networks and "End-to-end Learning of Action Detection from Frame Glimpses in Videos"?
5. Are there any typos or unclear points in the paper that need to be addressed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes the Ample and Focal Network (AFNet) for video recognition. Specifically, the network has an ample and a focal branch. The ample branch operates on a set of neighboring (with strides) frames with reduced-size feature maps (in height, width, and channels), whereas the focal network then takes in both input frames and intermediate guidance from the ample network to selectively process selected frames, with a higher computation budget. The resulting network is claimed to have a better accuracy-computation trade-off than previous work. The authors conducted experiments on Something-Something v1/v2, Mini-Kinetics, Jester and ActivityNet and demonstrated that their method yields better accuracy compared to several baselines (e.g. TSM, AdaFuse-TSM, bLVNet etc) on these datasets.
Strengths And Weaknesses
Strengths
The two-branch design makes intuitive sense, with one branch focusing on lightweight processing of dense inputs, while the other processes sparsely selected inputs with heavier computation.
The paper is comprehensive in structure --- in addition to the qualitative description of the approach, the authors also included a section on the theoretical analysis of implicit temporal modeling.
The result section includes comparison to several baseline methods across several datasets. It seems the proposed method has an edge on accuracy-computation trade-off across these comparisons. There are also some qualitative visualizations and ablations to dissect the approach.
Weaknesses
As shown in Figure 2, the frame selection process is implemented as sparse convolutions, for which from the texts I cannot tell how efficient they are. This becomes more of an issue since in all tables the authors report FLOPs rather than actual inference latency.
Across Sth-Sth (Table 2), Mini-Kinetics (Table 3), and ActivityNet (Table 4), the accuracy/computation gains over the competitors are definitely not at a level that I'd consider significant.
There is no promise on code release, which might make it hard to reproduce the reported results.
For related work, the paper misses an important citation: SlowFast Networks from Feichtenhofer et al., as it also builds on a two-branch idea for video recognition, where one lightweight branch focuses on motion and another heavy branch focuses on semantics. This should be added and discussed. Another related paper is "End-to-end Learning of Action Detection from Frame Glimpses in Videos" from Yeung et al., as it first proposed to selectively focus on a subset of frames for video recognition.
Writing/Typos
L58, "but strength the representation" --> "but strengthen the representation"
L125, C_o, H_o and W_o are not introduced anywhere up until this point.
L273, "to analysis the results" --> "to analyze the results"
In L136, t in p^t_n is used to index frames, whereas in Eq 4, the superscript is overloaded to denote the frame selection flag. This will cause some confusions.
Questions
L150, isn't L_n a continuous vector computed using Eq 6? Is there some thresholding used here before selecting non-zero values?
Eq 1, is it necessary to keep the v notation on the left hand side of the equation? Since it represents input video, keeping it here might cause some confusion.
L145, "we let tau decrease from 1 to 0.01 during train", what's the decay schedule used? Do different schedulings make a difference?
Table 1, the improvement from TSN to AFNet is quite significant, and honestly a bit surprising. Does the author investigate the possibility of overfitting for the TSN results?
Limitations
Yes |
NIPS | Title
Look More but Care Less in Video Recognition
Abstract
Existing action recognition methods typically sample a few frames to represent each video to avoid the enormous computation, which often limits the recognition performance. To tackle this problem, we propose Ample and Focal Network (AFNet), which is composed of two branches to utilize more frames but with less computation. Specifically, the Ample Branch takes all input frames to obtain abundant information with condensed computation and provides the guidance for Focal Branch by the proposed Navigation Module; the Focal Branch squeezes the temporal size to only focus on the salient frames at each convolution block; in the end, the results of two branches are adaptively fused to prevent the loss of information. With this design, we can introduce more frames to the network but cost less computation. Besides, we demonstrate AFNet can utilize fewer frames while achieving higher accuracy as the dynamic selection in intermediate features enforces implicit temporal modeling. Further, we show that our method can be extended to reduce spatial redundancy with even less cost. Extensive experiments on five datasets demonstrate the effectiveness and efficiency of our method. Our code is available at https://github.com/BeSpontaneous/AFNet-pytorch.
1 Introduction
Online videos have grown wildly in recent years and video analysis is necessary for many applications such as recommendation [6], surveillance [4, 5] and autonomous driving [31, 17]. These applications require not only accurate but also efficient video understanding algorithms. With the introduction of deep learning networks [3] in video recognition, there has been rapid advancement in the performance of the methods in this area. Though successful, these deep learning methods often cost huge computation, making them hard to be deployed in the real world.
In video recognition, we need to sample multiple frames to represent each video which makes the computational cost scale proportionally to the number of sampled frames. In most cases, a small proportion of all the frames is sampled for each input, which only contains limited information of the original video. A straightforward solution is to sample more frames to the network but the computation expands proportionally to the number of sampled frames.
There are some works proposed recently to dynamically sample salient frames [29, 16] for higher efficiency. The selection step of these methods is made before the frames are sent to the classification network, which means the information of those unimportant frames is totally lost and it consumes a considerable time for the selection procedure. Some other methods proposed to address the spatial redundancy in action recognition by adaptively resizing the resolution based on the importance of each frame [23], or cropping the most salient patch for every frame [28]. However, these methods still completely abandon the information that the network recognizes as unimportant and introduce a policy network to make decisions for each sample which leads to extra computation and complicates the training strategies.
∗Corresponding Author: markcheung9248@gmail.com.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
In our work, we go from another perspective compared with previous works. We propose a method which makes frame selection within the classification network. Shown in Figure 1, we design an architecture called Ample and Focal Network (AFNet) which is composed of two branches: the ample branch takes a glimpse of all the input features with lightweight computation as we downsample the features for smaller resolution and further reduce the channel size; the focal branch receives the guidance from the proposed navigation module to squeeze the temporal size by only computing on the selected frames to save cost; in the end, we adaptively fuse the features of these two branches to prevent the information loss of the unselected frames.
In this manner, the two branches are both very lightweight and we enable AFNet to look broadly by sampling more frames and stay focused on the important information for less computation. Considering these two branches in a uniform manner, on the one hand, we can avoid the loss of information compared to other dynamic methods as the ample branch preserves the information of all the input; on the other hand, we can restrain the noise from the unimportant frames by deactivating them in each convolutional block. Further, we have demonstrated that the dynamic selection strategy at intermediate features is beneficial for temporal modeling as it implicitly implements frame-wise attention which can enable our network to utilize fewer frames while obtaining higher accuracy. In addition, instead of introducing a policy network to select frames, we design a lightweight navigation module which can be plugged into the network so that our method can easily be trained in an end-toend fashion. Furthermore, AFNet is compatible with spatial adaptive works which can help to further reduce the computations of our method.
We summarize the main contributions as follows:
• We propose an adaptive two-branch framework which enables 2D-CNNs to process more frames with less computational cost. With this design, we not only prevent the loss of information but strengthen the representation of essential frames.
• We propose a lightweight navigation module to dynamically select salient frames at each convolution block which can easily be trained in an end-to-end fashion.
• The selection strategy at intermediate features not only empowers the model with strong flexibility as different frames will be selected at different layers, but also enforces implicit temporal modeling which enables AFNet to obtain higher accuracy with fewer frames.
• We have conducted comprehensive experiments on five video recognition datasets. The results show the superiority of AFNet compared to other competitive methods.
2 Related Work
2.1 Video Recognition
The development of deep learning in recent years serves as a huge boost to the research of video recognition. A straightforward method for this task is using 2D-CNNs to extract the features of sampled frames and use specific aggregation methods to model the temporal relationships across frames. For instance, TSN [27] proposes to average the temporal information between frames. While TSM [20] shifts channels with adjacent frames to allow information exchange at temporal dimension. Another approach is to build 3D-CNNs to for spatiotemporal learning, such as C3D [26], I3D [3] and SlowFast [8]. Though being shown effective, methods based on 3D-CNNs are computationally expensive, which brings great difficulty in real-world deployment.
While the two-branch design has been explored by SlowFast, our motivation and detailed structure are different from it in the following ways: 1) network category: SlowFast is a static 3D model, but
AFNet is a dynamic 2D network; 2) motivation: SlowFast aims to collect semantic information and changing motion with branches at different temporal speeds for better performance, while AFNet is aimed to dynamically skip frames to save computation and the design of two-branch structure is to prevent the information loss; 3) specific design: AFNet is designed to downsample features for efficiency at ample branch while SlowFast processes features in the original resolution; 4) temporal modeling: SlowFast applies 3D convolutions for temporal modeling, AFNet is a 2D model which is enforced with implicit temporal modeling by the designed navigation module.
2.2 Redundancy in Data
The efficiency of 2D-CNNs has been broadly studied in recent years. While some of the works aim at designing efficient network structure [13], there is another line of research focusing on reducing the intrinsic redundancy in image-based data [32, 11]. In video recognition, people usually sample limited number of frames to represent each video to prevent numerous computational costs. Even though, the computation for video recognition is still a heavy burden for researchers and a common strategy to address this problem is reducing the temporal redundancy in videos as not all frames are essential to the final prediction. [33] proposes to use reinforcement learning to skip frames for action detection. There are other works [29, 16] dynamically sampling salient frames to save computational cost. As spatial redundancy widely exists in image-based data, [23] adaptively processes frames with different resolutions. [28] provides the solution as cropping the most salient patch for each frame. However, the unselected regions or frames of these works are completely abandoned. Hence, there will be some information lost in their designed procedures. Moreover, most of these works adopt a policy network to make dynamic decisions, which introduces additional computation somehow and splits the training into several stages. In contrast, our method adopts a two-branch design, allocating different computational resources based on the importance of each frame and preventing the loss of information. Besides, we design a lightweight navigation module to guide the network where to look, which can be incorporated into the backbone network and trained in an end-to-end way. Moreover, we validate that the dynamic frame selection at intermediate features will not only empower the model with strong flexibility as different frames will be selected at different layers, but result in learned frame-wise weights which enforce implicit temporal modeling.
3 Methodology
Intuitively, considering more frames enhances the temporal modeling but results in higher computational cost. To efficiently achieve the competitive performance, we propose AFNet to involve more frames but wisely extract information from them to keep the low computational cost. Specifically, we design a two-branch structure to treat frames differently based on their importance and process the features in an adaptive manner which can provide our method with strong flexibility. Besides, we demonstrate that the dynamic selection of frames in the intermediate features results in learned frame-wise weights which can be regarded as implicit temporal modeling.
3.1 Architecture Design
As is shown in Figure 2, we design our Ample and Focal (AF) module as a two-branch structure: the ample branch (top) processes abundant features of all the frames in a lower resolution and a squeezed channel size; the focal branch (bottom) receives the guidance from ample branch generated by the navigation module and makes computation only on the selected frames. Such design can be conveniently applied to existing CNN structures to build AF module.
Ample Branch. The ample branch is designed to involve all frames with cheap computation, which serves as 1) guidance to select salient frames to help focal branch to concentrate on important information; 2) a complementary stream with focal branch to prevent the information loss via a carefully designed fusion strategy.
Formally, we denote video sample i as vi, containing T frames as vi = { f i1, f i 2, ..., f i T } . For convenience, we omit the superscript i in the following sections if no confusion arises. We denote the input of ample branch as vx ∈ RT×C×H×W , where C represents the channel size and H ×W is the spatial size. The features generated by the ample branch can be written as:
vya = F a (vx) , (1)
where vya ∈ RT×(Co/2)×(Ho/2)×(Wo/2) represents the output of ample branch and F a stands for a series of convolution blocks. While the channel, height, width at focal branch are denoted as Co, Ho, Wo correspondingly. We set the stride of the first convolution block to 2 to downsample the resolution of this branch and we upsample the feature at the end of this branch by nearest interpolation.
Navigation Module. The proposed navigation module is designed to guide the focal branch where to look by adaptively selecting the most salient frames for video vi.
Specifically, the navigation module generates a binary temporal mask Ln using the output from the n-th convolution block in ample branch vyan . At first, average pooling is applied to vyan to resize the spatial dimension to 1× 1, then we perform convolution to transform the channel size to 2:
ṽyan = ReLU ( BN ( W1 ∗ Pool ( vyan ))) , (2)
where ∗ stands for convolution and W1 denotes the weights of the 1× 1 convolution. After that, we reshape the dimension of feature ṽyan from T × 2 × 1 × 1 to 1 × (2× T ) × 1 × 1 so that we can model the temporal relations for each video from channel dimension by:
ptn = W2 ∗ ṽyan , (3) where W2 represents the weights of the second 1× 1 convolution and it will generate a binary logit ptn ∈ R2 for each frame t which denotes whether to select it. However, directly sampling from such discrete distribution is non-differentiable. In this work, we apply Gumbel-Softmax [14] to resolve this non-differentiability. Specifically, we generate a normalized categorical distribution by using Softmax:
π = lj | lj = exp ( p tj n ) exp ( pt0n ) + exp ( pt1n ) , (4)
and we draw discrete samples from the distribution π as:
L = arg max j (log lj +Gj) , (5)
where Gj = − log(− log Uj) is sampled from a Gumbel distribution and Uj is sampled from Unif(0,1) which is a uniform distribution. As argmax cannot be differentiated, we relax the discrete sample L in backpropagation via Softmax:
l̂j = exp ((log lj +Gj) /τ)∑2
k=1 exp ((log lk +Gk) /τ) , (6)
the distribution l̂ will become a one-hot vector when the temperature factor τ → 0 and we let τ decrease from 1 to 0.01 during training.
Focal Branch. The focal branch is guided by the navigation module to only compute the selected frames, which diminishes the computational cost and potential noise from redundant frames.
The features at the n-th convolution block in this branch can be denoted as vyfn ∈ R T×Co×Ho×Wo . Based on the temporal mask Ln generated from the navigation module, we select frames which have corresponding non-zero values in the binary mask for each video and apply convolutional operations only on these extracted frames v′
yfn ∈ RTl×Co×Ho×Wo :
v′ yfn = F fn ( v′ yfn−1 ) , (7)
where F fn is the n-th convolution blocks at this branch and we set the group number of convolutions to 2 in order to further reduce the computations. After the convolution operation at n-th block, we generate a zero-tensor which shares the same shape with vyfn and fill the value by adding v ′ yfn
and vyfn−1 with the residual design following [12].
At the end of these two branches, inspired by [1, 11], we generate a weighting factor θ by pooling and linear layers to fuse the features from two branches:
vy = θ ⊙ vya + (1− θ)⊙ vyf , (8) where ⊙ denotes the channel-wise multiplication.
3.2 Implicit Temporal Modeling
While our work is mainly designed to reduce the computation in video recognition like [28, 24], we demonstrate that AFNet enforces implicit temporal modeling by the dynamic selection of frames in the intermediate features. Considering a TSN[27] network which adapts vanilla ResNet[12] structure, the feature at the n-th convolutional block in each stage can be written as vn ∈ RT×C×H×W . Thus, the feature at n+ 1-th block can be represented as:
vn+1 = vn + Fn+1 (vn)
= (1 + ∆vn+1) vn, (9)
∆vn+1 = Fn+1 (vn)
vn , (10)
where Fn+1 is the n+ 1-th convolutional block and we define ∆vn+1 as the coefficient learned from this block. By that we can write the output of this stage vN as:
vN =
[ N∏
n=2
(1 + ∆vn) ] ∗ v1. (11)
Similarly, we define the features in ample and focal branch as:
vyaN =
[ N∏
n=2
( 1 + ∆vyan )] ∗ vy1 , (12)
vyfN =
[ N∏
n=2
( 1 + Ln ∗∆vyfn )] ∗ vy1 , (13)
where Ln is the binary temporal mask generated by Equation 5 and vy1 denotes the input of this stage. Based on Equation 8, we can get the output of this stage as:
vyN = θ ⊙ vyaN + (1− θ)⊙ vyfN
= { θ ⊙ [ N∏
n=2
( 1 + ∆vyan )] + (1− θ)⊙ [ N∏
n=2
( 1 + Ln ∗∆vyfn )]} ∗ vy1 .
(14)
As Ln is a temporal-wise binary mask, it will decide whether the coefficient ∆vyfn will be calculated in each frame at every convolutional block. Considering the whole stage is made up of multiple convolutional blocks, the series multiplication of focal branch’s output with the binary mask Ln will approximate soft weights. This results in learned frame-wise weights in each video which we regard as implicit temporal modeling. Although we do not explicitly build any temporal modeling module, the generation of Ln in Equation 3 has already taken the temporal information into account so that the learned temporal weights equal performing implicit temporal modeling at each stage.
3.3 Spatial Redundancy Reduction
In this part, we show that our approach is compatible with methods that aim to solve the problem of spatial redundancy. We extend the navigation module by applying similar procedures with the temporal mask generation and the work [11] to generate a spatial logit for the n-th convolution block which is shown in Figure 3:
qtn = W4 ∗ ( Pool ( ReLU ( BN ( W3 ∗ vyan )))) , (15)
where W3 denotes the weights of the 3× 3 convolution and W4 stands for the weights of convolution with kernel size 1× 1. After that, we still use Gumbel-Softmax to sample from discrete distribution to generate spatial mask Mn and navigate the focal branch to merely focus on the salient regions of the selected frames to further reduce the cost.
3.4 Loss functions
Inspired by [27], we take the average of each frame’s prediction to represent the final output of the corresponding video and our optimization objective is minimizing:
L = ∑ (v,y)
[ −y log (P (v)) + λ ·
N∑ n=1
(r −RT )2 ] . (16)
The first term is the cross-entropy between predictions P (v) for input video v and the corresponding one-hot label y. We denote r in the second term as the ratio of selected frames in every mini-batch and RT as the target ratio which is set before the training (RS is the target ratio when extending navigation module to reduce spatial redundancy). We let r approximate RT by adding the second loss term and manage the trade-off between efficiency and accuracy by introducing a factor λ which balances these two terms.
4 Empirical Validation
In this section, we conduct comprehensive experiments to validate the proposed method. We first compare our method with plain 2D CNNs to demonstrate that our AF module implicitly implements temporal-wise attention which is beneficial for temporal modeling. Then, we validate AFNet’s efficiency by introducing more frames but costing less computation compared with other methods. Further, we show AFNet’s strong performance compared with other efficient action recognition frameworks. Finally, we provide qualitative analysis and extensive ablation results to demonstrate the effectiveness of the proposed navigation module and two-branch design.
Datasets. Our method is evaluated on five video recognition datasets: (1) Mini-Kinetics [23, 24] is a subset of Kinetics [15] which selects 200 classes from Kinetics, containing 121k training videos and 10k validation videos; (2) ActivityNet-v1.3 [2] is an untrimmed dataset with 200 action categories and average duration of 117 seconds. It contains 10,024 video samples for training and 4,926 for validation; (3) Jester is a hand gesture recognition dataset introduced by [22]. The dataset consists of 27 classes, with 119k training videos and 15k validation videos; (4) Something-Something V1&V2 [10] are two human action datasets with strong temporal information, including 98k and 194k videos for training and validation respectively.
Data pre-processing. We sample 8 frames uniformly to represent every video on Jester, MiniKinetics, and 12 frames on ActivityNet and Something-Something to compare with existing works unless specified. During training, the training data is randomly cropped to 224 × 224 following [35], and we perform random flipping except for Something-Something. At inference stage, all frames are center-cropped to 224 × 224 and we use one-crop one-clip per video for efficiency.
Implementation details. Our method is bulit on ResNet50 [12] in default and we replace the first three stages of the network with our proposed AF module. We first train our two-branch network from scratch on ImageNet for fair comparisons with other methods. Then we add the proposed navigation module and train it along with the backbone network on video recognition datasets. In our implementations, RT denotes the ratio of selected frames while RS represents the ratio of select regions which will decrease from 1 to the number we set before training by steps. We let the temperature τ in navigation module decay from 1 to 0.01 exponentially during training. Due to limited space, we include more details of implementation in supplementary material.
4.1 Comparisons with Existing Methods
Less is more. At first, we implement AFNet on Something-Something V1 and Jester datasets with 8 sampled frames. We compare it with the baseline method TSN as both methods do not explicitly build temporal modeling module and are built on ResNet50. In Table 1, our method AFNet(RT=1.00) shows similar performance with TSN when selecting all the frames. Nonetheless, when we select fewer frames in AFNet, it exhibits much higher accuracy compared to TSN and AFNet(RT=1.00) which achieves Less is
More by utilizing less frames but obtaining higher accuracy. The results may seem counterintuitive as seeing more frames is usually beneficial for video recognition. The explanation is that the two-branch design of AFNet can preserve the information of all input frames and the selection of salient frames at intermediate features implements implicit temporal modeling as we have analyzed in Section 3.2. As the binary mask learned by the navigation module will decide whether the coefficient will be calculated for each frame at every convolutional block, it will result in learned temporal weights in each video. To better illustrate this point, we conduct the experiment by removing Gumbel-Softmax [14] in our navigation module and modifying it to learn soft temporal weights for the features at focal branch. We can observe that AFNet(soft-weights) has a similar performance with AFNet(RT=0.25), AFNet(RT=0.50) and outperforms AFNet(RT=1.00) significantly which indicates that learning soft frame-wise weights causes the similar effect.
More is less. We incorporate our method with temporal shift module (TSM [20]) to validate that AFNet can further reduce the redundancy of such competing methods and achieve More is Less by seeing more frames with less computation. We implement our method on Something-Something V1&V2 datasets which contain strong temporal information and relevant results are shown in Table 2.
Table 3: Comparisons with competitive efficient video recognition methods on Mini-Kinetics. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.
Method Mini-Kinetics
Top-1 Acc. GFLOPs
LiteEval [30] 61.0% 99.0 SCSampler [16] 70.8% 42.0 AR-Net [23] 71.7% 32.0 AdaFuse [24] 72.3% 23.0 AdaFocus [28] 72.2% 26.6 VideoIQ [25] 72.3% 20.4
AFNet (RT=0.4) 72.8% 19.4 AFNet (RT=0.8) 73.5% 22.0
Table 4: Comparisons with competitive efficient video recognition methods on ActivityNet. AFNet achieves the best trade-off compared to existing works. GFLOPs represents the average computation to process one video.
Method ActivityNet
mAP GFLOPs
AdaFrame [29] 71.5% 79.0 LiteEval [30] 72.7% 95.1 ListenToLook [9] 72.3% 81.4 SCSampler [16] 72.9% 42.0 AR-Net [23] 73.8% 33.5 VideoIQ [25] 74.8% 28.1 AdaFocus [28] 75.0% 26.6
AFNet (RS=0.4,RT=0.8) 75.6% 24.6
Compared to TSM which samples 8 frames, our method shows significant advantages in performance as we introduce more frames and the two-branch structure can preserve the information of all frames. Yet, our computational cost is much smaller than TSM as we allocate frames with different computation resources by this two-branch design and adaptively skip the unimportant frames with the proposed navigation module. Moreover, AFNet outperforms many static methods, which carefully design their structures for better temporal modeling, both in accuracy and efficiency. This can be explained by that the navigation module restrains the noise of unimportant frames and enforces frame-wise attention which is beneficial for temporal modeling. As for other competitive dynamic methods like AdaFuse and AdaFocus, our method shows an obviously better performance both in accuracy and computations. When costing similar computation, AFNet outperforms AdaFuse and AdaFocus by 3.1% and 1.8% respectively on Something-Something V1. Furthermore, we implement our method on other backbones for even higher accuracy and efficiency. When we build AFNet on efficient structure MobileNetV3, we can obtain similar performance with TSM but only with the computation of 2.3 GFLOPs. Besides, AFNet-TSM(RT=0.8) with the backbone of ResNet101 can achieve the accuracy of 50.1% and 63.2% on Something-Something V1 and V2, respectively, which further validate the effectiveness and generalization ability of our framework.
Comparisons with competitive dynamic methods. Then, we implement our method on MiniKinetics and ActivityNet, and compare AFNet with other efficient video recognition approaches. At first, we validate our method on Mini-Kinetics and AFNet shows the best performance both in accuracy and computations compared with other efficient approaches in Table 3. To demonstrate that AFNet can further reduce spatial redundancy, we extend the navigation module to select salient
regions of important frame on ActivityNet. We move the temporal navigation module to the first layer of the network to avoid huge variance in features when incorporating spatial navigation module and note that we only apply this procedure in this part. We can see from Table 4 that our method obtains the best performance while costing the least computation compared to other works. Moreover, we change the ratio of selected frames and plot the mean Average Precision and computational cost of various methods in Figure 4. We can conclude that AFNet exhibits a better trade-off between accuracy and efficiency compared to other works.
4.2 Visualizations
We show the distribution of RT among different convolution blocks under different selection ratios in Figure 5 and utilize 3rd-order polynomials to display the trend of distribution (shown in dash lines). One can see a decreased trend in RT for all the curves with the increased index in convolution blocks and this can be explained that earlier layers mostly capture low-level information which has relatively large divergence among different frames. While high-level semantics between different frames are more similar, therefore AFNet tends to skip more at later convolution blocks. In Figure 6, we visualize the selected frames in the 3rd-block of our AFNet with RT=0.5 on the validation set of Something-Something V1 where we uniformly sample 8 frames. Our navigation module effectively guides the focal branch to concentrate on frames which are more task-relevant and deactivate the frames that contain similar information.
4.3 Ablation Study
In this part, we implement our method on ActivityNet with 12 sampled frames to conduct comprehensive ablation study to verify the effectiveness of our design.
Effect of two branch design. We first incorporate our navigation module into ResNet50 and compare it with AFNet to prove the strength of our designed two-branch architecture. From Table 5, AFNet shows substantial advantages in accuracy under different ratios of select frames. Aside from it, models which adopted our structure but with a fixed sampling policy also show significantly better performance compared with the network based on single branch which can further demonstrate the effectiveness of our two-branch structure and the necessity to preserve the information of all frames.
Effect of navigation module. In this part, we further compare our proposed navigation module with three alternative sampling strategies in different selection ratios: (1) random sampling; (2) uniform sampling: sample frames in equal step; (3) normal sampling: sample frames from a standard gaussian distribution. Shown in Table 5, our proposed strategy continuously outperforms other fixed sampling policies under different selection ratios which validates the effectiveness of the navigation module. Moreover, the advantage of our method is more obvious when the ratio of selected frames is small which demonstrates that our selected frames are more task-relevant and contain essential information for the recognition. Further, we evaluate the extension of navigation module which can reduce spatial redundancy, and compare it with: (1) random sampling; (2) center cropping. Our method shows better performance compared with fixed sampling strategies under various selection ratios which verifies the effectiveness of this design.
5 Conclusion
This paper proposes an adaptive Ample and Focal Network (AFNet) to reduce temporal redundancy in videos with the consideration of architecture design and the intrinsic redundancy in data. Our method enables 2D-CNNs to have access to more frames to look broadly but with less computation by staying focused on the salient information. AFNet exhibits promising performance as our twobranch design preserves the information of all the input frames instead of discarding part of the knowledge at the beginning of the network. Moreover, the dynamic temporal selection within the network not only restrains the noise of unimportant frames but enforces implicit temporal modeling as well. This enables AFNet to obtain even higher accuracy when using fewer frames compared with static method without temporal modeling module. We further show that our method can be extended to reduce spatial redundancy by only computing important regions of the selected frames. Comprehensive experiments have shown that our method outperforms competing efficient approaches both in accuracy and computational efficiency.
Acknowledgments and Disclosure of Funding
Research was sponsored by the DEVCOM Analysis Center and was accomplished under Cooperative Agreement Number W911NF-22-2-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. | 1. What is the main contribution of the paper regarding video action recognition?
2. What are the strengths and weaknesses of the proposed two-stream approach?
3. How does the navigation module select frames for processing in the focal stream?
4. Can you elaborate on how the implicit temporal modeling works in the ample stream?
5. How do the results of the paper compare to other related works, particularly SlowFast Networks for Video Recognition?
6. Why was TSN used as a baseline in Table 1, despite being a six-year-old method?
7. Are there any limitations to the approach proposed in the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper proposed to reduce the cost and boost the accuracy of CNN models applied to video action recognition via a two-stream approach. The first, "ample" stream processes all of the frames, but cheaply, by using low spatial resolution and number of channels. The second "focal" stream processes frames at high-resolution, but only processes a few of the input frames. A navigation model uses the input from the ample stream to select which frames the focal stream should process, using Gumbel softmax.
The paper presents results on the ActivityNet, Something-Something, and Mini-Kinetics datasets, demonstrating strong accuracies at competitively-low flop counts.
Additional studies explore allowing the navigation module to make spatial, as well temporal, selections, and demonstrate that the learned navigation model achieves superior performance to other simpler sampling strategies.
Strengths And Weaknesses
Originality The main weakness is that the premise has already been explored thoroughly in other publications. Specifically in "SlowFast Networks for Video Recognition" https://arxiv.org/abs/1812.03982 (ICCV 2019, 1300+ citations). Both use the idea of a high frame rate low spatial resolution / channels stream, combined with a low frame rate high spatial resolution, with lateral connections from the fast pathway to the slow pathway. Both are targeted at limiting flops while boosting accuracy.
More recently in "Multiview Transformers for Video Recognition" (CVPR 2022) which extends the slow-fast premise to 2+ streams using transformer backbones, instead of CNNs.
The paper and accompanying analysis is incomplete without a direct comparison to SlowFast.
The primary difference is the adaptive selection of frames by the navigation module, vs. SlowFast's fixed temporal sampling rates plus lateral connections. Given that, at a quick glance, the numbers seem comparable between the two (*), additional comparison between the approaches is warranted.
Clarity: The paper is clearly written and generally easy to follow. Section 3.2 is a minor exception: I was eager to see the implicit temporal modeling, but found this explanation hard to follow. Perhaps it would benefit from more prose and fewer equations?
Line 293: Please elaborate on how to "sample frames from a gaussian distribution", which is not obvious since Gaussian variables are continuous, not discrete.
Quality: Generally good, except for the omission of highly-cited related work.
Significance: As mentioned, this paper's significance is diminished by its limited originality.
===
(*) The original SlowFast paper publishes numbers on different datasets than this work, but from what I can piece together, SlowFast seems better or comparable. If it's fair to compare numbers on MiniKinetics and Kinetics-400, SlowFast gets 74.2% top-1 at 28.6 GFLOPs (see SlowFast Table 2 (b)), while this paper reports 73.5% top-1 at 22.0 GFLOPs. On Something-Something v2, SlowFast-ResNet50 seems to get 61.7 (see "Multiview Transformers..." Table 2 (c)) which is comparable to this paper's 61.3/62.5.
I recognize the limitations of this analysis, and would love to see a proper apples-to-apples comparison.
Questions
Why is TSN used as a baseline in Table 1? Although it's a great paper, the method is 6 years old.
Limitations
yes |
NIPS | Title
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
Abstract
Machine learning is now being used to make crucial decisions about people’s lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal “world” is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
1 Introduction
Machine learning algorithms can do extraordinary things with data, from generating realistic images from noise [7] to predicting what you will look like when you become older [18]. Today, governments and other organizations make use of it in criminal sentencing [4], in predicting where to allocate police officers [3, 16], and in estimating an individual's risk of failing to pay back a loan [8]. However, in many of these settings, the data used to train machine learning algorithms contains biases against certain races, sexes, or other subgroups in the population [3, 6]. Unwittingly, this discrimination is then reflected in the predictions of such algorithms. Simply being born male or female can change an individual's opportunities that follow from automated decision making trained to reflect historical biases. The implication is that, without taking this into account, classifiers that maximize accuracy risk perpetuating biases present in society.
∗Equal contribution. †This work was done while JL was a Research Fellow at the Alan Turing Institute.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
For instance, consider the rise of ‘predictive policing’, described as “taking data from disparate sources, analyzing them, and then using the results to anticipate, prevent and respond more effectively to future crime” [16]. Today, 38% of U.S. police departments surveyed by the Police Executive Research Forum are using predictive policing and 70% plan to in the next 2 to 5 years. However, there have been significant doubts raised by researchers, journalists, and activists that if the data used by these algorithms is collected by departments that have been biased against minority groups, the predictions of these algorithms could reflect that bias [9, 12].
At the same time, fundamental mathematical results make it difficult to design fair classifiers. In criminal sentencing the COMPAS score [4] predicts if a prisoner will commit a crime upon release, and is widely used by judges to set bail and parole. While it has been shown that black and white defendants with the same COMPAS score commit a crime at similar rates after being released [1], it was also shown that black individuals were more often incorrectly predicted to commit crimes after release by COMPAS than white individuals were [2]. In fact, except for very specific cases, it is impossible to balance these measures of fairness [3, 10, 20].
The question becomes how to address the fact that the data itself may bias the learning algorithm, and even addressing this is theoretically difficult. One promising avenue is a recent approach, introduced by us in [11], called counterfactual fairness. In this work, we model how unfairness enters a dataset using techniques from causal modeling. Given such a model, we say an algorithm is fair if it would give the same predictions had an individual's race, sex, or other sensitive attributes been different. We show how to formalize this notion using counterfactuals, following a rich tradition of causal modeling in the artificial intelligence literature [15], and how it can be placed into a machine learning pipeline. The big challenge in applying this work is that evaluating a counterfactual, e.g., "What if I had been born a different sex?", requires a causal model which describes how your sex changes your predictions, other things being equal.
Using “world” to describe any causal model evaluated at a particular counterfactual configuration, we have dependent “worlds” within a same causal model that can never be jointly observed, and possibly incompatible “worlds” across different models. Questions requiring the joint distribution of counterfactuals are hard to answer, as they demand partially untestable “cross-world” assumptions [5, 17], and even many of the empirically testable assumptions cannot be falsified from observational data alone [14], requiring possibly infeasible randomized trials. Because of this, different experts as well as different algorithms may disagree about the right causal model. Further disputes may arise due to the conflict between accurately modeling unfair data and producing a fair result, or because some degrees of unfairness may be considered allowable while others are not.
To address these problems, we propose a method for ensuring fairness within multiple causal models. We do so by introducing continuous relaxations of counterfactual fairness. With these relaxations in hand, we frame learning a fair classifier as an optimization problem with fairness constraints. We give efficient algorithms for solving these optimization problems for different classes of causal models. We demonstrate on two real-world fair classification datasets how our model is able to simultaneously achieve fairness in multiple models while flexibly trading off classification accuracy.
2 Background
We begin by describing aspects of causal modeling and counterfactual inference relevant for modeling fairness in data. We then briefly review counterfactual fairness [11], but we recommend that the interested reader read the original paper in full. We describe how uncertainty may arise over the correct causal model and some difficulties with the original counterfactual fairness definition. We will use A to denote the set of protected attributes, a scalar in all of our examples but which without loss of generality can take the form of a set. Likewise, we denote as Y the outcome of interest that needs to be predicted using a predictor Ŷ. Finally, we will use X to denote the set of observed variables other than A and Y, and U to denote a set of hidden variables, which without loss of generality can be assumed to have no observable causes in a corresponding causal model.
2.1 Causal Modeling and Counterfactual Inference
We will use the causal framework of Pearl [15], which we describe using a simple example. Imagine we have a dataset of university students and we would like to model the causal relationships that
lead up to whether a student graduates on time. In our dataset, we have information about whether a student holds a job J, the number of hours they study per week S, and whether they graduate Y. Because we are interested in modeling any unfairness in our data, we also have information about a student's race A. Pearl's framework allows us to model causal relationships between these variables and any postulated unobserved latent variables, such as some U quantifying how motivated a student is to graduate. This uses a directed acyclic graph (DAG) with causal semantics, called a causal diagram. We show a possible causal diagram for this example in Figure 1 (Left). Each node corresponds to a variable and each set of edges into a node corresponds to a generative model specifying how the "parents" of that node causally generated it. In its most specific description, this generative model is a functional relationship deterministically generating its output given a set of observed and latent variables. For instance, one possible set of functions described by this model could be as follows:
S = g(J, U) + ε    Y = I[φ(h(S, U)) ≥ 0.5]    (1)

where g, h are arbitrary functions and I is the indicator function that evaluates to 1 if the condition holds and 0 otherwise. Additionally, φ is the logistic function φ(a) = 1/(1 + exp(−a)), and ε is drawn independently of all variables from the standard normal distribution N(0, 1). It is also possible to specify non-deterministic relationships:
U ∼ N(0, 1)    S ∼ N(g(J, U), σS)    Y ∼ Bernoulli(φ(h(S, U)))    (2)

where σS is a model parameter. The power of this causal modeling framework is that, given a fully-specified set of equations, we can compute what (the distribution of) any of the variables would have been had certain other variables been different, other things being equal. For instance, given the causal model we can ask "Would individual i have graduated (Y = 1) if they hadn't had a job?", even if they did not actually graduate in the dataset. Questions of this type are called counterfactuals.
For any observed variables V, W we denote the value of the counterfactual "What would V have been if W had been equal to w?" as VW←w. Pearl [15] describes how to compute these counterfactuals (or, for non-deterministic models, how to compute their distribution) using three steps:
1. Abduction: Given the set of observed variables X = {X1, . . . , Xd}, compute the values of the set of unobserved variables U = {U1, . . . , Up} given the model (for non-deterministic models, we compute the posterior distribution P(U|X)).
2. Action: Replace all occurrences of the variable W with the value w in the model equations.
3. Prediction: Using the new model equations and U (or P(U|X)), compute the value of V (or P(V |X)).
The final step provides the value or distribution of VW←w given the observed, factual, variables.
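To make these three steps concrete, here is a minimal Python sketch of abduction, action, and prediction in the deterministic toy model of eq. (1). The particular forms of g and h and all numeric values are hypothetical illustrations, not quantities taken from the paper.

```python
import numpy as np

# Hypothetical structural equations instantiating eq. (1):
# S = g(J, U) + eps,  Y = I[phi(h(S, U)) >= 0.5].
def g(j, u):
    return 20.0 - 8.0 * j + 5.0 * u      # weekly study hours

def h(s, u):
    return 0.2 * s + 1.0 * u - 5.0       # latent graduation score

def phi(a):
    return 1.0 / (1.0 + np.exp(-a))      # logistic function

def counterfactual_Y(j_obs, s_obs, u, j_cf):
    """Would this student have graduated had J been j_cf, other things equal?"""
    eps = s_obs - g(j_obs, u)            # 1. Abduction: recover the noise term
    s_cf = g(j_cf, u) + eps              # 2./3. Action + prediction for S
    return int(phi(h(s_cf, u)) >= 0.5)   # ... and then for Y

# A student with a job (J=1) who studied 18 hours, with motivation u = 0.3:
print(counterfactual_Y(j_obs=1, s_obs=18.0, u=0.3, j_cf=0))
```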
2.2 Counterfactual Fairness
In the above example, the university may wish to predict Y , whether a student will graduate, in order to determine if they should admit them into an honors program. While the university prefers to admit students who will graduate on time, it is willing to give a chance to some students without a confident graduation prediction in order to remedy unfairness associated with race in the honors
program. The university believes that whether a student needs a job J may be influenced by their race. As evidence they cite the National Center for Education Statistics, which reported3 that fewer (25%) Asian-American students were employed while attending university as full-time students relative to students of other races (at least 35%). We show the corresponding causal diagram for this in Figure 1 (Center). Since having a job J affects study S, which affects graduation likelihood Y, this may mean different races take longer to graduate and thus unfairly have a harder time getting into the honors program.
Counterfactual fairness aims to correct predictions of a label variable Y that are unfairly altered by an individual’s sensitive attribute A (race in this case). Fairness is defined in terms of counterfactuals:
Definition 1 (Counterfactual Fairness [11]). A predictor Ŷ of Y is counterfactually fair given the sensitive attribute A=a and any observed variables X if
P(ŶA←a = y | X = x, A = a) = P(ŶA←a′ = y | X = x, A = a)    (3)

for all y and a′ ≠ a.
In what follows, we will also refer to Ŷ as a function f(x, a) of hidden variables U , of (usually a subset of) an instantiation x of X , and of protected attribute A. We leave U implicit in this notation since, as we will see, this set might differ across different competing models. The notation implies
ŶA←a = f(xA←a, a). (4)
Notice that if counterfactual fairness holds exactly for Ŷ , then this predictor can only be a non-trivial function of X for those elements X ∈ X such that XA←a = XA←a′ . Moreover, by construction UA←a = UA←a′ , as each element of U is defined to have no causes in A ∪ X . The probabilities in eq. (3) are given by the posterior distribution over the unobserved variables P(U | X = x, A = a). Hence, a counterfactual ŶA←a may be deterministic if this distribution is degenerate, that is, if U is a deterministic function of X and A. One nice property of this definition is that it is easy to interpret: a decision is fair if it would have been the same had a person had a different A (e.g., a different race4), other things being equal. In [11], we give an efficient algorithm for designing a predictor that is counterfactually fair. In the university graduation example, a predictor constructed from the unobserved motivation variable U is counterfactually fair.
One difficulty of the definition of counterfactual fairness is that it requires one to postulate causal relationships between variables, including latent variables that may be impractical to measure directly. In general, different causal models will create different fair predictors Ŷ. But there are several reasons why it may be unrealistic to assume that any single, fixed causal model will be appropriate. There may not be a consensus among experts or previous literature about the existence, functional form, direction, or magnitude of a particular causal effect, and it may be impossible to determine these from the available data without untestable assumptions. And given the sensitive, even political nature of problems involving fairness, it is also possible that disputes may arise over the presence of a feature of the causal model, based on competing notions of dependencies and latent variables. Consider the following example, formulated as a dispute over the presence of edges. For the university graduation model, one may ask if differences in study are due only to differences in employment, or whether instead there is some other direct effect of A on study levels. Also, having a job may directly affect graduation likelihood. We show these changes to the model in Figure 1 (Right). There is also potential for disagreement over whether some causal paths from A to graduation should be excluded from the definition of fairness. For example, an adherent to strict meritocracy may argue the number of hours a student has studied should not be given a counterfactual value. This could be incorporated in a separate model by omitting chosen edges when propagating counterfactual information through the graph in the Prediction step of counterfactual inference5. To summarize, there may be disagreements about the right causal model due to:
1. Changing the structure of the DAG, e.g., adding an edge;
2. Changing the latent variables, e.g., changing the function generating a vertex to have a different signal vs. noise decomposition;
3. Preventing certain paths from propagating counterfactual values.
3 https://nces.ed.gov/programs/coe/indicator_ssa.asp
4 At the same time, the notion of a "counterfactual race," sex, etc. often raises debate. See [11] for our take on this.
5 In the Supplementary Material of [11], we explain how counterfactual fairness can be restricted to particular paths from A to Y, as opposed to all paths.
3 Fairness under Causal Uncertainty
In this section, we describe a technique for learning a fair predictor without knowing the true causal model. We first describe why, in general, counterfactual fairness will often not hold in multiple different models. We then describe a relaxation of the definition of counterfactual fairness for both deterministic and non-deterministic models. Finally, we show an efficient method for learning classifiers that are simultaneously accurate and fair in multiple worlds. In all that follows, we denote sets in calligraphic script X, random variables in uppercase X, scalars in lowercase x, matrices in bold uppercase X, and vectors in bold lowercase x.
3.1 Exact Counterfactual Fairness Across Worlds
We can imagine extending the definition of counterfactual fairness so that it holds for every plausible causal world. To see why this is inherently difficult, consider the setting of deterministic causal models. If each causal model of the world generates different counterfactuals, then each additional model induces a new set of constraints that the classifier must satisfy, and in the limit the only classifiers that are fair across all possible worlds are constant classifiers. For non-deterministic counterfactuals, these issues are magnified. To guarantee counterfactual fairness, Kusner et al. [11] assumed access to latent variables that hold the same value in an original datapoint and in its corresponding counterfactuals. While the latent variables of one world can remain constant under the generation of counterfactuals from its corresponding model, there is no guarantee that they remain constant under the counterfactuals generated from different models. Even in a two-model case, if the P.D.F. of one model's counterfactual has non-zero density everywhere (as is the case under Gaussian noise assumptions), it may be the case that the only classifiers that satisfy counterfactual fairness for both worlds are the constant classifiers. If we are to achieve some measure of fairness from informative classifiers, and over a family of different worlds, we need a more robust alternative to counterfactual fairness.
3.2 Approximate Counterfactual Fairness
We define two approximations to counterfactual fairness to solve the problem of learning a fair classifier across multiple causal worlds.
Definition 2 ((ε, δ)-Approximate Counterfactual Fairness). A predictor f(X, A) satisfies (ε, 0)-approximate counterfactual fairness ((ε, 0)-ACF) if, given the sensitive attribute A = a and any instantiation x of the other observed variables X, we have that:

|f(xA←a, a) − f(xA←a′, a′)| ≤ ε    (5)

for all a′ ≠ a, if the system deterministically implies the counterfactual values of X. For a non-deterministic causal system, f satisfies (ε, δ)-approximate counterfactual fairness ((ε, δ)-ACF) if:

P( |f(XA←a, a) − f(XA←a′, a′)| ≤ ε | X = x, A = a ) ≥ 1 − δ    (6)

for all a′ ≠ a.
Both definitions must hold uniformly over the sample space of X × A. The probability measures used are with respect to the conditional distribution of background latent variables U given the observations. We leave a discussion of the statistical asymptotic properties of such plug-in estimators for future work. These definitions relax counterfactual fairness to ensure that, for deterministic systems, predictions f change by at most ε when an input is replaced by its counterfactual. For non-deterministic systems, the condition in (6) means that this change must occur with high probability, where the probability is again given by the posterior distribution P(U|X) computed in the Abduction step of counterfactual inference. If ε = 0, the deterministic definition eq. (5) is equivalent to the original counterfactual fairness definition. If also δ = 0, the non-deterministic definition eq. (6) is actually a stronger condition than the counterfactual fairness definition eq. (3), as it guarantees equality in probability instead of equality in distribution6.
6In the Supplementary Material of [11], we describe in more detail the implications of the stronger condition.
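As an illustration, the following sketch checks the (ε, δ)-ACF condition of eq. (6) empirically for a single individual from matched posterior samples of the factual and counterfactual features; the predictor f and the sampling routine are assumptions supplied by the surrounding pipeline, not code from the paper.

```python
import numpy as np

def satisfies_acf(f, X_cf_a, X_cf_a_prime, a, a_prime, eps, delta):
    """Monte-Carlo check of (eps, delta)-ACF, eq. (6), for one individual.

    X_cf_a, X_cf_a_prime: (S, d) arrays holding S posterior samples of the
    features X_{A<-a} and X_{A<-a'}; row s of each shares the same draw of U.
    f: predictor taking a batch of features and an attribute value.
    """
    gaps = np.abs(f(X_cf_a, a) - f(X_cf_a_prime, a_prime))   # shape (S,)
    return np.mean(gaps <= eps) >= 1.0 - delta
```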
Algorithm 1 Multi-World Fairness
1: Input: features X = [x1, . . . , xn], labels y = [y1, . . . , yn], sensitive attributes a = [a1, . . . , an], fairness parameters (ε, δ), trade-off parameters L = [λ1, . . . , λl].
2: Fit causal models M1, . . . , Mm using X, a (and possibly y).
3: Sample counterfactuals XA1←a′, . . . , XAm←a′ for all unobserved values a′.
4: for λ ∈ L do
5:   Initialize classifier fλ.
6:   while not converged do
7:     Select a random batch Xb of inputs and the corresponding batch of counterfactuals XA1←a′, . . . , XAm←a′.
8:     Compute the gradient of equation (7).
9:     Update fλ using any stochastic gradient optimization method.
10:  end while
11: end for
12: Select model fλ: for deterministic models, select the smallest λ such that equation (5) holds for fλ; for non-deterministic models, select the λ that corresponds to δ given fλ.
3.3 Learning a Fair Classifier
Assume we are given a dataset of n observations a = [a1, . . . , an] of the sensitive attribute A and of other features X = [x1, . . . ,xn] drawn from X . We wish to accurately predict a label Y given observations y=[y1, . . . , yn] while also satisfying ( , δ)-approximate counterfactual fairness. We learn a classifier f(x, a) by minimizing a loss function `(f(x, a), y). At the same time, we incorporate an unfairness term µj(f,x, a, a′) for each causal model j to reduce the unfairness in f . We formulate this as a penalized optimization problem:
min_f  (1/n) ∑_{i=1}^{n} ℓ(f(xi, ai), yi) + λ ∑_{j=1}^{m} (1/n) ∑_{i=1}^{n} ∑_{a′ ≠ ai} µj(f, xi, ai, a′)    (7)
where λ trades off classification accuracy for multi-world fair predictions. We show how to naturally define the unfairness function µj for deterministic and non-deterministic counterfactuals.
Deterministic counterfactuals. To enforce (ε, 0)-approximate counterfactual fairness, a natural penalty for unfairness is an indicator function which is one whenever (ε, 0)-ACF does not hold, and zero otherwise:

µj(f, xi, ai, a′) := I[ |f(xi,Aj←ai, ai) − f(xi,Aj←a′, a′)| ≥ ε ]    (8)

Unfortunately, the indicator function I is non-convex, discontinuous, and difficult to optimize. Instead, we propose to use the tightest convex relaxation of the indicator function:

µj(f, xi, ai, a′) := max{0, |f(xi,Aj←ai, ai) − f(xi,Aj←a′, a′)| − ε}    (9)

Note that when (ε, 0)-approximate counterfactual fairness is not satisfied, µj is non-zero and thus the optimization problem will penalize f for this unfairness. Where (ε, 0)-approximate counterfactual fairness is satisfied, µj evaluates to 0 and does not affect the objective. For sufficiently large λ, the value of µj will dominate the training loss (1/n) ∑_{i=1}^{n} ℓ(f(xi, ai), yi) and any solution will satisfy (ε, 0)-approximate counterfactual fairness. However, an overly large choice of λ causes numeric instability and will decrease the accuracy of the classifier found. Thus, to find the most accurate classifier that satisfies the fairness condition one can simply perform a grid or binary search for the smallest λ such that the condition holds.
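A minimal sketch of the relaxed penalty in eq. (9), assuming the counterfactual feature vectors for model j have already been computed; the predictor f is any function mapping (features, attribute) to a scalar score, and all names are illustrative.

```python
import numpy as np

def unfairness_deterministic(f, X_cf_a, X_cf_a_prime, a, a_prime, eps):
    """Hinge relaxation of the (eps, 0)-ACF indicator, eq. (9), for a batch.

    X_cf_a, X_cf_a_prime: (n, d) arrays of features under A <- a_i and A <- a'.
    Returns an (n,) array of per-individual unfairness penalties.
    """
    gaps = np.abs(f(X_cf_a, a) - f(X_cf_a_prime, a_prime))
    return np.maximum(0.0, gaps - eps)
```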
Non-deterministic counterfactuals. For non-deterministic counterfactuals we begin by writing a Monte-Carlo approximation to (ε, δ)-ACF, eq. (6), as follows:

(1/S) ∑_{s=1}^{S} I( |f(x^s_{Aj←ai}, ai) − f(x^s_{Aj←a′}, a′)| ≥ ε ) ≤ δ    (10)

where x^s is sampled from the posterior distribution P(U|X). We can again form the tightest convex relaxation of the left-hand side of the expression to yield our unfairness function:

µj(f, xi, ai, a′) := (1/S) ∑_{s=1}^{S} max{0, |f(x^s_{i,Aj←ai}, ai) − f(x^s_{i,Aj←a′}, a′)| − ε}    (11)
Note that different choices of λ in eq. (7) correspond to different values of δ. Indeed, by choosing λ = 0 we have the (ε, δ)-fair classifier corresponding to an unfair classifier7, while a sufficiently large, but finite, λ will correspond to an (ε, 0)-approximately counterfactually fair classifier. By varying λ between these two extremes, we induce classifiers that satisfy (ε, δ)-ACF for different values of δ.
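A corresponding sketch of the Monte-Carlo penalty in eq. (11), again assuming S matched posterior draws of the counterfactual features have been pre-sampled by the abduction step of causal model j:

```python
import numpy as np

def unfairness_mc(f, X_cf_a, X_cf_a_prime, a, a_prime, eps):
    """Monte-Carlo hinge relaxation of (eps, delta)-ACF, eq. (11), one individual.

    X_cf_a, X_cf_a_prime: (S, d) arrays; row s holds the counterfactuals
    generated from the s-th posterior draw of the latent variables U.
    """
    gaps = np.abs(f(X_cf_a, a) - f(X_cf_a_prime, a_prime))   # shape (S,)
    return np.mean(np.maximum(0.0, gaps - eps))
```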
With these unfairness functions, we have a differentiable optimization problem, eq. (7), which can be solved with gradient-based methods. Thus, our method allows practitioners to smoothly trade off accuracy with multi-world fairness. We call our method Multi-World Fairness (MWF). We give a complete method for learning an MWF classifier in Algorithm 1.
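Putting the pieces together, here is a compact PyTorch sketch of the inner loop of Algorithm 1 for a single value of λ. The linear logit model, the binary sensitive attribute, and the pre-sampled counterfactual tensors are simplifying assumptions made for illustration only.

```python
import torch

def train_mwf(x, a, y, cf_pairs, eps, lam, epochs=200, lr=1e-2):
    """x: (n, d) float features; a: (n,) binary attribute; y: (n,) binary labels.
    cf_pairs: list over causal models j of tensors (X_cf_a, X_cf_ap), each (n, d),
    holding features under A <- a_i and under the flipped attribute a'."""
    n, d = x.shape
    model = torch.nn.Linear(d + 1, 1)            # f(x, a) = w . [x, a] + b
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()

    def f(feats, attrs):
        return model(torch.cat([feats, attrs[:, None]], dim=1)).squeeze(1)

    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(f(x, a), y)                   # empirical risk term of eq. (7)
        for X_cf_a, X_cf_ap in cf_pairs:         # one hinge penalty per world
            gap = (f(X_cf_a, a) - f(X_cf_ap, 1.0 - a)).abs()
            loss = loss + lam * torch.clamp(gap - eps, min=0.0).mean()
        loss.backward()
        opt.step()
    return model
```

Following Algorithm 1, this loop would then be repeated over a grid of λ values, keeping the smallest λ for which the chosen fairness condition holds.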
For both deterministic and non-deterministic models, this convex approximation essentially describes an expected unfairness that is allowed by the classifier:

Definition 3 (Expected ε-Unfairness). For any counterfactual a′ ≠ a, the Expected ε-Unfairness of a classifier f, or Eε[f], is

E[ max{0, |f(XA←a, a) − f(XA←a′, a′)| − ε} | X = x, A = a ]    (12)

where the expectation is over any unobserved U (and is degenerate for deterministic counterfactuals). We note that the term max{0, |f(XA←a, a) − f(XA←a′, a′)| − ε} is strictly non-negative and therefore the expected ε-unfairness is zero if and only if f satisfies (ε, 0)-approximate counterfactual fairness almost everywhere.
Linear Classifiers and Convexity. Although we have presented these results in their most general form, it is worth noting that for linear classifiers, convexity guarantees are preserved. The family of linear classifiers we consider is relatively broad and consists of those linear in their learned weights w; as such, it includes both SVMs and a variety of regression methods used in conjunction with kernels or finite polynomial bases.
Consider any classifier whose output is linear in the learned parameters, i.e., the family of classifiers f all have the form f(X, A) = ∑_l wl gl(X, a), for a set of fixed kernels gl. Then the expected ε-unfairness is a convex function of w, taking the form:

Eε[ max{0, |f(XA←a, a) − f(XA←a′, a′)| − ε} ]    (13)
  = E[ max{0, | ∑_l wl ( gl(XA←a, a) − gl(XA←a′, a′) ) | − ε} ]
The expression inside the expectation is piecewise-linear and convex in w; therefore, if the classification loss is also convex (as is the case for most regression tasks), a global optimum can be readily found via convex programming. In particular, globally optimal linear classifiers satisfying (ε, 0)-ACF or (ε, δ)-ACF can be found efficiently.
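To illustrate this convexity concretely, here is a sketch of the penalized linear problem of eq. (7) written as a convex program in CVXPY; the squared loss, the variable names, and the pre-computed counterfactual feature matrices are assumptions made for the example.

```python
import cvxpy as cp

def linear_mwf(Phi, y, cf_pairs, eps, lam):
    """Phi: (n, d) matrix of kernel features g_l(x_i, a_i); y: (n,) targets.
    cf_pairs: list over worlds of (n, d) matrices (Phi_cf_a, Phi_cf_ap) of
    counterfactual kernel features under A <- a_i and A <- a'."""
    n, d = Phi.shape
    w = cp.Variable(d)
    objective = cp.sum_squares(Phi @ w - y) / n
    for Phi_cf_a, Phi_cf_ap in cf_pairs:
        gap = cp.abs((Phi_cf_a - Phi_cf_ap) @ w)     # |sum_l w_l (g_l - g_l')|
        objective = objective + lam * cp.sum(cp.pos(gap - eps)) / n
    prob = cp.Problem(cp.Minimize(objective))
    prob.solve()                                     # DCP problem: global optimum
    return w.value
```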
Bayesian alternatives and their shortcomings. One may argue that a more direct alternative is to provide probabilities associated with each world and to marginalize the set of the optimal counterfactually fair classifiers over all possible worlds. We argue this is undesirable for two reasons: first, the averaged prediction for any particular individual may violate (3) by an undesirable margin for one, more, or even all considered worlds; second, a practitioner may be restricted by regulations to show that, to the best of their knowledge, the worst-case violation is bounded across all viable worlds with high probability. However, if the number of possible models is extremely large (for example, if the causal structure of the world is known, but the associated parameters are not) and we have a probability associated with each world, then one natural extension is to adapt Expected ε-Unfairness (Definition 3) to marginalize over the space of possible worlds. However, we leave this extension to future work.
4 Experiments
We demonstrate the flexibility of our method on two real-world fair classification problems: 1. fair predictions of student performance in law schools; and 2. predicting whether criminals will re-offend upon being released. For each dataset we begin by giving details of the fair prediction problem. We then introduce multiple causal models that each possibly describe how unfairness plays a role in the data. Finally, we give results of Multi-World Fairness (MWF) and show how it changes for different settings of the fairness parameters (ε, δ).
7 In the worst case, δ may equal 1.
4.1 Fairly predicting law grades
We begin by investigating a dataset of survey results across 163 U.S. law schools conducted by the Law School Admission Council [19]. It contains information on over 20,000 students including their race A (here we look at just black and white students as this difference had the largest effect in counterfactuals in [11]), their grade-point average G obtained prior to law school, law school entrance exam scores L, and their first year average grade Y. Consider that law schools may be interested in predicting Y for all applicants to law school using G and L in order to decide
whether to accept or deny them entrance. However, due to societal inequalities, an individual's race may have affected their access to educational opportunities, and thus affected G and L. Accordingly, we model this possibility using the causal graphs in Figure 2 (Left). In this graph we also model the fact that G, L may have been affected by other unobserved quantities. However, we may be uncertain about the right way to model these unobserved quantities. Thus we propose to model this dataset with the two worlds described in Figure 2 (Left). Note that these are the same models as used in Kusner et al. [11] (except here we consider race as the sensitive variable). The corresponding equations for these two worlds are as follows:
G = bG + w^A_G A + εG        G ∼ N(bG + w^A_G A + w^U_G U, σG)    (14)
L = bL + w^A_L A + εL        L ∼ Poisson(exp(bL + w^A_L A + w^U_L U))
Y = bY + w^A_Y A + εY        Y ∼ N(w^A_Y A + w^U_Y U, 1)
εG, εL, εY ∼ N(0, 1)         U ∼ N(0, 1)
where variables b, w are parameters of the causal model.
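For the deterministic world (the left-hand column of eq. (14)), counterfactuals follow in closed form by abduction of the residual noise terms. A minimal numpy sketch, assuming the coefficients b, w have already been fit (for example by least squares):

```python
import numpy as np

def law_school_counterfactuals(G, L, A, params, a_prime):
    """Deterministic counterfactuals for the left-hand model of eq. (14).

    G, L, A: (n,) arrays of observed GPA, entrance exam score, and race;
    params: dict of fitted coefficients (names here are illustrative);
    a_prime: the counterfactual attribute value."""
    # Abduction: recover the noise terms implied by the observations.
    eps_G = G - (params["b_G"] + params["w_G"] * A)
    eps_L = L - (params["b_L"] + params["w_L"] * A)
    # Action + prediction: regenerate the features under A <- a_prime.
    G_cf = params["b_G"] + params["w_G"] * a_prime + eps_G
    L_cf = params["b_L"] + params["w_L"] * a_prime + eps_L
    return G_cf, L_cf
```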
Results. Figure 3 shows the result of learning a linear MWF classifier on the deterministic law school models. We split the law school data into a random 80/20 train/test split, fit causal models and classifiers on the training set, and evaluate performance on the test set. We plot the test RMSE of the constant predictor satisfying counterfactual fairness in red, the unfair predictor with λ = 0, and MWF, averaged across 5 runs. Here, as we have one deterministic and one non-deterministic model, we evaluate MWF for different ε and δ (with the knowledge that the only change in the MWF classifier for different δ is due to the non-deterministic model). For each ε, δ, we selected the smallest λ across a grid (λ ∈ {10^−5, 10^−4, . . . , 10^10}) such that the constraint in eq. (6) held across 95% of the individuals in both models. We see that MWF is able to reliably sacrifice accuracy for fairness as ε is reduced. Note that as we change δ we can further alter the accuracy/fairness trade-off.
4.2 Fair recidivism prediction (COMPAS)
We next turn our attention to predicting whether a criminal will re-offend, or 'recidivate', after being released from prison. ProPublica [13] released data on prisoners in Broward County, Florida who were awaiting a sentencing hearing. For each of the prisoners we have information on their race A (as above we only consider black versus white individuals), their age E, their number of juvenile felonies JF, juvenile misdemeanors JM, the type of crime they committed T, the number of prior offenses they have P, and whether they recidivated Y. There is also a proprietary COMPAS score [13] C designed to indicate the likelihood a prisoner recidivates.
We model this dataset with two different non-deterministic causal models, shown in Figure 2 (Right). The first model includes the dotted edges, the second omits them. In both models we believe that two unobserved latent factors, juvenile criminality UJ and adult criminality UD, also contribute to JF, JM, C, T, P. We show the equations for both of our causal models below, where the first causal model includes the blue terms and the second does not:
T ∼ Bernoulli(φ(bT + w^{UD}_T UD + w^E_T E + w^A_T A))    (15)
C ∼ N(bC + w^{UD}_C UD + w^E_C E + w^A_C A + w^T_C T + w^P_C P + w^{JF}_C JF + w^{JM}_C JM, σC)
P ∼ Poisson(exp(bP + w^{UD}_P UD + w^E_P E + w^A_P A))
JF ∼ Poisson(exp(bJF + w^{UJ}_{JF} UJ + w^E_{JF} E + w^A_{JF} A))
JM ∼ Poisson(exp(bJM + w^{UJ}_{JM} UJ + w^E_{JM} E + w^A_{JM} A))
[UJ, UD] ∼ N(0, Σ)
Results. Figure 4 shows how classification accuracy using both logistic regression (linear) and a 3-layer neural network (deep) changes as both ε and δ change. We split the COMPAS dataset randomly into an 80/20 train/test split, and report all results on the test set. As in the law school experiment, we grid-search over λ to find the smallest value such that, for any ε and δ, the (ε, δ)-ACF constraint in eq. (6) is satisfied for at least 95% of the individuals in the dataset, across both worlds. We average all results except the constant classifier over 5 runs and plot the mean and standard deviations. We see that for small δ (high fairness) both linear and deep MWF classifiers significantly outperform the constant classifier and begin to approach the accuracy of the unfair classifier as ε increases. As we increase δ (lowered fairness), the deep classifier is better able to learn a decision boundary that trades off accuracy for fairness. But if ε, δ are increased enough (e.g., ε ≥ 0.13, δ = 0.5), the linear MWF classifier matches the performance of the deep classifier.
5 Conclusion
This paper has presented a natural extension to counterfactual fairness that allows us to guarantee fair properties of algorithms, even when we are unsure of the causal model that describes the world.
As the use of machine learning becomes widespread across many domains, it becomes more important to take algorithmic fairness out of the hands of experts and make it available to everybody. The conceptual simplicity of our method, our robust use of counterfactuals, and the ease of implementing our method mean that it can be directly applied to many interesting problems. A further benefit of our approach over previous work on counterfactual fairness is that our approach only requires the estimation of counterfactuals at training time, and no knowledge of latent variables during testing. As such, our classifiers offer a fair drop-in replacement for other existing classifiers.
6 Acknowledgments
This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. CR acknowledges additional support under the EPSRC Platform Grant EP/P022529/1.

1. What is the primary contribution of the paper regarding fairness in causal graphical models?
2. What are the strengths and weaknesses of the proposed method for constructing counterfactual estimates?
3. How does the reviewer assess the limitation of the specific approach, and what suggestions do they provide for improving it?
4. What is the potential benefit of comparing multiple causal models in this context?
5. Are there any concerns or suggestions regarding the computational feasibility of extending the analysis to many more causal worlds?

Review
This paper tackles the primary criticism aimed at applications of causal graphical models for fairness: one needs to completely believe an assumed causal model for the results to be valid. Instead, it presents a definition of fairness where we can assume many plausible causal models and requires fairness violations to be bounded below a threshold for all such plausible models.
The authors present a simple way to formally express this idea: by defining an approximate notion of counterfactual fairness and using the amount of fairness violation as a regularizer for a supervised learner. This is an important theoretical advance and I think can lead to promising work.
The key part, then, is to develop a method to construct counterfactual estimates. This is a hard problem because even for a single causal model, there might be unknown and unobserved confounders that affect relationships between observed variables. The authors use a method from past work where they first estimate the distribution for the unobserved confounders and then construct counterfactuals assuming perfect knowledge of the confounders.
I find this method problematic because confounders can be a combination of many variables and can take many levels. It is unclear whether an arbitrary parameterization for them can account for all of their effects in a causal model. Further, if it was possible to model confounders and estimate counterfactuals from observed data in this way, then we could use it for every causal inference application (which is unlikely). It seems, therefore, that the estimated counterfactuals will depend heavily on the exact parameterization used for the confounders. I suggest that the authors discuss this limitation of their specific approach. It might also be useful to separate out counterfactual estimation as simply a pluggable component of their main contribution, which is to propose a learning algorithm robust to multiple causal models.
That said, this is exactly where the concept of comparing multiple causal models can shine. To decrease dependence on specific parameterizations, one could imagine optimizing over many possible parameterized causal models. In the results section, the authors do test their method on 2 or 3 different worlds, but I think it will be useful if they can extend their analysis to many more causal worlds for each application. Not sure if there are constraints in doing so (computational or otherwise), but if so, will be good to mention them explicitly. |
NIPS | Title
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
Abstract
Machine learning is now being used to make crucial decisions about people’s lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal “world” is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
1 Introduction
Machine learning algorithms can do extraordinary things with data. From generating realistic images from noise [7], to predicting what you will look like when you become older [18]. Today, governments and other organizations make use of it in criminal sentencing [4], predicting where to allocate police officers [3, 16], and to estimate an individual’s risk of failing to pay back a loan [8]. However, in many of these settings, the data used to train machine learning algorithms contains biases against certain races, sexes, or other subgroups in the population [3, 6]. Unwittingly, this discrimination is then reflected in the predictions of such algorithms. Simply being born male or female can change an individual’s opportunities that follow from automated decision making trained to reflect historical biases. The implication is that, without taking this into account, classifiers that maximize accuracy risk perpetuating biases present in society.
∗Equal contribution. †This work was done while JL was a Research Fellow at the Alan Turing Institute.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
For instance, consider the rise of ‘predictive policing’, described as “taking data from disparate sources, analyzing them, and then using the results to anticipate, prevent and respond more effectively to future crime” [16]. Today, 38% of U.S. police departments surveyed by the Police Executive Research Forum are using predictive policing and 70% plan to in the next 2 to 5 years. However, there have been significant doubts raised by researchers, journalists, and activists that if the data used by these algorithms is collected by departments that have been biased against minority groups, the predictions of these algorithms could reflect that bias [9, 12].
At the same time, fundamental mathematical results make it difficult to design fair classifiers. In criminal sentencing the COMPAS score [4] predicts if a prisoner will commit a crime upon release, and is widely used by judges to set bail and parole. While it has been shown that black and white defendants with the same COMPAS score commit a crime at similar rates after being released [1], it was also shown that black individuals were more often incorrectly predicted to commit crimes after release by COMPAS than white individuals were [2]. In fact, except for very specific cases, it is impossible to balance these measures of fairness [3, 10, 20].
The question becomes how to address the fact that the data itself may bias the learning algorithm and even addressing this is theoretically difficult. One promising avenue is a recent approach, introduced by us in [11], called counterfactual fairness. In this work, we model how unfairness enters a dataset using techniques from causal modeling. Given such a model, we state whether an algorithm is fair if it would give the same predictions had an individual’s race, sex, or other sensitive attributes been different. We show how to formalize this notion using counterfactuals, following a rich tradition of causal modeling in the artificial intelligence literature [15], and how it can be placed into a machine learning pipeline. The big challenge in applying this work is that evaluating a counterfactual e.g., “What if I had been born a different sex?”, requires a causal model which describes how your sex changes your predictions, other things being equal.
Using “world” to describe any causal model evaluated at a particular counterfactual configuration, we have dependent “worlds” within a same causal model that can never be jointly observed, and possibly incompatible “worlds” across different models. Questions requiring the joint distribution of counterfactuals are hard to answer, as they demand partially untestable “cross-world” assumptions [5, 17], and even many of the empirically testable assumptions cannot be falsified from observational data alone [14], requiring possibly infeasible randomized trials. Because of this, different experts as well as different algorithms may disagree about the right causal model. Further disputes may arise due to the conflict between accurately modeling unfair data and producing a fair result, or because some degrees of unfairness may be considered allowable while others are not.
To address these problems, we propose a method for ensuring fairness within multiple causal models. We do so by introducing continuous relaxations of counterfactual fairness. With these relaxations in hand, we frame learning a fair classifier as an optimization problem with fairness constraints. We give efficient algorithms for solving these optimization problems for different classes of causal models. We demonstrate on three real-world fair classification datasets how our model is able to simultaneously achieve fairness in multiple models while flexibly trading off classification accuracy.
2 Background
We begin by describing aspects causal modeling and counterfactual inference relevant for modeling fairness in data. We then briefly review counterfactual fairness [11], but we recommend that the interested reader should read the original paper in full. We describe how uncertainty may arise over the correct causal model and some difficulties with the original counterfactual fairness definition. We will use A to denote the set of protected attributes, a scalar in all of our examples but which without loss of generality can take the form of a set. Likewise, we denote as Y the outcome of interest that needs to be predicted using a predictor Ŷ . Finally, we will use X to denote the set of observed variables other than A and Y , and U to denote a set of hidden variables, which without loss of generality can be assumed to have no observable causes in a corresponding causal model.
2.1 Causal Modeling and Counterfactual Inference
We will use the causal framework of Pearl [15], which we describe using a simple example. Imagine we have a dataset of university students and we would like to model the causal relationships that
lead up to whether a student graduates on time. In our dataset, we have information about whether a student holds a job J , the number of hours they study per week S, and whether they graduate Y . Because we are interested in modeling any unfairness in our data, we also have information about a student’s race A. Pearl’s framework allows us to model causal relationships between these variables and any postulated unobserved latent variables, such as some U quantifying how motivated a student is to graduate. This uses a directed acyclic graph (DAG) with causal semantics, called a causal diagram. We show a possible causal diagram for this example in Figure 1, (Left). Each node corresponds to a variable and each set of edges into a node corresponds to a generative model specifying how the “parents” of that node causally generated it. In its most specific description, this generative model is a functional relationship deterministically generating its output given a set of observed and latent variables. For instance, one possible set of functions described by this model could be as follows:
S = g(J, U) + Y = I[φ(h(S,U)) ≥ 0.5] (1) where g, h are arbitrary functions and I is the indicator function that evaluates to 1 if the condition holds and 0 otherwise. Additionally, φ is the logistic function φ(a) = 1/(1 + exp(−a)) and is drawn independently of all variables from the standard normal distributionN (0, 1). It is also possible to specify non-deterministic relationships:
U ∼ N (0, 1) S ∼ N (g(J, U), σS) Y ∼ Bernoulli(φ(h(S,U)) (2) where σS is a model parameter. The power of this causal modeling framework is that, given a fully-specified set of equations, we can compute what (the distribution of) any of the variables would have been had certain other variables been different, other things being equal. For instance, given the causal model we can ask “Would individual i have graduated (Y =1) if they hadn’t had a job?”, even if they did not actually graduate in the dataset. Questions of this type are called counterfactuals.
For any observed variables V,W we denote the value of the counterfactual “What would V have been if W had been equal to w?” as VW←w. Pearl et al. [15] describe how to compute these counterfactuals (or, for non-deterministic models, how to compute their distribution) using three steps: 1. Abduction: Given the set of observed variables X ={X1, . . . , Xd} compute the values of the set of unobserved variables U = {U1, . . . , Up} given the model (for non-deterministic models, we compute the posterior distribution P(U|X )); 2. Action: Replace all occurrences of the variable W with value w in the model equations; 3. Prediction: Using the new model equations, and U (or P(U|X )) compute the value of V (or P (V |X )). This final step provides the value or distribution of VW←w given the observed, factual, variables.
2.2 Counterfactual Fairness
In the above example, the university may wish to predict Y , whether a student will graduate, in order to determine if they should admit them into an honors program. While the university prefers to admit students who will graduate on time, it is willing to give a chance to some students without a confident graduation prediction in order to remedy unfairness associated with race in the honors
program. The university believes that whether a student needs a job J may be influenced by their race. As evidence they cite the National Center for Education Statistics, which reported3 that fewer (25%) Asian-American students were employed while attending university as full-time students relative to students of other races (at least 35%). We show the corresponding casual diagram for this in Figure 1 (Center). As having a job J affects study which affects graduation likelihood Y this may mean different races take longer to graduate and thus unfairly have a harder time getting into the honors program.
Counterfactual fairness aims to correct predictions of a label variable Y that are unfairly altered by an individual’s sensitive attribute A (race in this case). Fairness is defined in terms of counterfactuals:
Definition 1 (Counterfactual Fairness [11]). A predictor Ŷ of Y is counterfactually fair given the sensitive attribute A=a and any observed variables X if
P(ŶA←a=y | X = x, A = a) = P(ŶA←a′ =y | X = x, A = a) (3) for all y and a′ 6=a.
In what follows, we will also refer to Ŷ as a function f(x, a) of hidden variables U , of (usually a subset of) an instantiation x of X , and of protected attribute A. We leave U implicit in this notation since, as we will see, this set might differ across different competing models. The notation implies
ŶA←a = f(xA←a, a). (4)
Notice that if counterfactual fairness holds exactly for Ŷ , then this predictor can only be a non-trivial function of X for those elements X ∈ X such that XA←a = XA←a′ . Moreover, by construction UA←a = UA←a′ , as each element of U is defined to have no causes in A ∪ X . The probabilities in eq. (3) are given by the posterior distribution over the unobserved variables P(U | X = x, A = a). Hence, a counterfactual ŶA←a may be deterministic if this distribution is degenerate, that is, if U is a deterministic function of X and A. One nice property of this definition is that it is easy to interpret: a decision is fair if it would have been the same had a person had a different A (e.g., a different race4), other things being equal. In [11], we give an efficient algorithm for designing a predictor that is counterfactually fair. In the university graduation example, a predictor constructed from the unobserved motivation variable U is counterfactually fair.
One difficulty of the definition of counterfactual fairness is it requires one to postulate causal relationships between variables, including latent variables that may be impractical to measure directly. In general, different causal models will create different fair predictors Ŷ . But there are several reasons why it may be unrealistic to assume that any single, fixed causal model will be appropriate. There may not be a consensus among experts or previous literature about the existence, functional form, direction, or magnitude of a particular causal effect, and it may be impossible to determine these from the available data without untestable assumptions. And given the sensitive, even political nature of problems involving fairness, it is also possible that disputes may arise over the presence of a feature of the causal model, based on competing notions of dependencies and latent variables. Consider the following example, formulated as a dispute over the presence of edges. For the university graduation model, one may ask if differences in study are due only to differences in employment, or whether instead there is some other direct effect of A on study levels. Also, having a job may directly affect graduation likelihood. We show these changes to the model in Figure 1 (Right). There is also potential for disagreement over whether some causal paths from A to graduation should be excluded from the definition of fairness. For example, an adherent to strict meritocracy may argue the numbers of hours a student has studied should not be given a counterfactual value. This could be incorporated in a separate model by omitting chosen edges when propagating counterfactual information through the graph in the Prediction step of counterfactual inference5. To summarize, there may be disagreements about the right causal model due to: 1. Changing the structure of the DAG, e.g. adding an edge; 2. Changing the latent variables, e.g. changing the function generating a vertex to have a different signal vs. noise decomposition; 3. Preventing certain paths from propagating counterfactual values.
3https://nces.ed.gov/programs/coe/indicator_ssa.asp 4At the same time, the notion of a “counterfactual race,” sex, etc. often raises debate. See [11] for our take on this. 5In the Supplementary Material of [11], we explain how counterfactual fairness can be restricted to particular paths from A to Y , as opposed to all paths.
3 Fairness under Causal Uncertainty
In this section, we describe a technique for learning a fair predictor without knowing the true casual model. We first describe why in general counterfactual fairness will often not hold in multiple different models. We then describe a relaxation of the definition of counterfactual fairness for both deterministic and non-deterministic models. Finally we show an efficient method for learning classifiers that are simultaneously accurate and fair in multiple worlds. In all that follows we denote sets in calligraphic script X , random variables in uppercase X , scalars in lowercase x, matrices in bold uppercase X, and vectors in bold lowercase x.
3.1 Exact Counterfactual Fairness Across Worlds
We can imagine extending the definition of counterfactual fairness so that it holds for every plausible causal world. To see why this is inherently difficult consider the setting of deterministic causal models. If each causal model of the world generates different counterfactuals then each additional model induces a new set of constraints that the classifier must satisfy, and in the limit the only classifiers that are fair across all possible worlds are constant classifiers. For non-deterministic counterfactuals, these issues are magnified. To guarantee counterfactual fairness, Kusner et al. [11] assumed access to latent variables that hold the same value in an original datapoint and in its corresponding counterfactuals. While the latent variables of one world can remain constant under the generation of counterfactuals from its corresponding model, there is no guarantee that they remain constant under the counterfactuals generated from different models. Even in a two model case, if the P.D.F. of one model’s counterfactual has non-zero density everywhere (as is the case under Gaussian noise assumptions) it may be the case that the only classifiers that satisfy counterfactual fairness for both worlds are the constant classifiers. If we are to achieve some measure of fairness from informative classifiers, and over a family of different worlds, we need a more robust alternative to counterfactual fairness.
3.2 Approximate Counterfactual Fairness
We define two approximations to counterfactual fairness to solve the problem of learning a fair classifier across multiple causal worlds.
Definition 2 ((ε, δ)-Approximate Counterfactual Fairness). A predictor f(X, A) satisfies (ε, 0)-approximate counterfactual fairness ((ε, 0)-ACF) if, given the sensitive attribute A = a and any instantiation x of the other observed variables X, we have that:

|f(x_{A←a}, a) − f(x_{A←a'}, a')| ≤ ε    (5)

for all a' ≠ a, if the system deterministically implies the counterfactual values of X. For a non-deterministic causal system, f satisfies (ε, δ)-approximate counterfactual fairness ((ε, δ)-ACF) if:

P( |f(X_{A←a}, a) − f(X_{A←a'}, a')| ≤ ε | X = x, A = a ) ≥ 1 − δ    (6)

for all a' ≠ a.
Both definitions must hold uniformly over the sample space of X × A. The probability measures used are with respect to the conditional distribution of background latent variables U given the observations. We leave a discussion of the statistical asymptotic properties of such a plug-in estimator for future work. These definitions relax counterfactual fairness to ensure that, for deterministic systems, predictions f change by at most ε when an input is replaced by its counterfactual. For non-deterministic systems, the condition in (6) means that this change must occur with high probability, where the probability is again given by the posterior distribution P(U|X) computed in the Abduction step of counterfactual inference. If ε = 0, the deterministic definition in eq. (5) is equivalent to the original counterfactual fairness definition. If also δ = 0, the non-deterministic definition in eq. (6) is actually a stronger condition than the counterfactual fairness definition in eq. (3), as it guarantees equality in probability instead of equality in distribution6.
6 In the Supplementary Material of [11], we describe in more detail the implications of the stronger condition.
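To make Definition 2 operational, the violation probability in eq. (6) can be estimated empirically from posterior samples. The sketch below is our own illustration, not code from the paper; `f` is any fitted predictor, and the counterfactual samples are assumed to have been generated beforehand by the abduction-action-prediction procedure of Section 2.1.

```python
import numpy as np

def acf_violation_rate(f, x, a, x_cf_samples, a_cf, eps):
    """Monte-Carlo estimate of the (eps, delta)-ACF condition, eq. (6).

    f            : fitted predictor, callable as f(features, attribute)
    x, a         : factual features and sensitive attribute of one individual
    x_cf_samples : S counterfactual feature vectors, one per posterior draw
                   of U with A set to a_cf
    Returns the fraction of draws with |f(x, a) - f(x_cf, a_cf)| > eps;
    (eps, delta)-ACF holds for this individual if the fraction is <= delta.
    """
    y = f(x, a)
    y_cf = np.array([f(x_s, a_cf) for x_s in x_cf_samples])
    return float(np.mean(np.abs(y - y_cf) > eps))
```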
Algorithm 1 Multi-World Fairness
1: Input: features X = [x1, . . . , xn], labels y = [y1, . . . , yn], sensitive attributes a = [a1, . . . , an], fairness parameters (ε, δ), trade-off parameters L = [λ1, . . . , λl].
2: Fit causal models M1, . . . , Mm using X, a (and possibly y).
3: Sample counterfactuals X_{A1←a'}, . . . , X_{Am←a'} for all unobserved values a'.
4: for λ ∈ L do
5:   Initialize classifier fλ.
6:   while not converged do
7:     Select a random batch Xb of inputs and the corresponding batch of counterfactuals X_{A1←a'}, . . . , X_{Am←a'}.
8:     Compute the gradient of equation (7).
9:     Update fλ using any stochastic gradient optimization method.
10:  end while
11: end for
12: Select model fλ: for deterministic models, select the smallest λ such that equation (5) holds for fλ; for non-deterministic models, select the λ that corresponds to δ given fλ.
3.3 Learning a Fair Classifier
Assume we are given a dataset of n observations a = [a1, . . . , an] of the sensitive attribute A and of other features X = [x1, . . . , xn] drawn from X. We wish to accurately predict a label Y given observations y = [y1, . . . , yn] while also satisfying (ε, δ)-approximate counterfactual fairness. We learn a classifier f(x, a) by minimizing a loss function ℓ(f(x, a), y). At the same time, we incorporate an unfairness term µj(f, x, a, a') for each causal model j to reduce the unfairness in f. We formulate this as a penalized optimization problem:
min_f (1/n) Σ_{i=1}^{n} ℓ(f(x_i, a_i), y_i) + λ Σ_{j=1}^{m} (1/n) Σ_{i=1}^{n} Σ_{a'≠a_i} µ_j(f, x_i, a_i, a')    (7)
where λ trades off classification accuracy against multi-world fair predictions. We show how to naturally define the unfairness function µj for deterministic and non-deterministic counterfactuals.
Deterministic counterfactuals. To enforce (ε, 0)-approximate counterfactual fairness, a natural penalty for unfairness is an indicator function which is one whenever (ε, 0)-ACF does not hold, and zero otherwise:

µ_j(f, x_i, a_i, a') := I[ |f(x_{i,A_j←a_i}, a_i) − f(x_{i,A_j←a'}, a')| ≥ ε ]    (8)

Unfortunately, the indicator function I is non-convex, discontinuous, and difficult to optimize. Instead, we propose to use the tightest convex relaxation of the indicator function:
µ_j(f, x_i, a_i, a') := max{0, |f(x_{i,A_j←a_i}, a_i) − f(x_{i,A_j←a'}, a')| − ε}    (9)

Note that when (ε, 0)-approximate counterfactual fairness is not satisfied, µj is non-zero, and thus the optimization problem will penalize f for this unfairness. Where (ε, 0)-approximate counterfactual fairness is satisfied, µj evaluates to 0 and does not affect the objective. For sufficiently large λ, the value of µj will dominate the training loss (1/n) Σ_{i=1}^{n} ℓ(f(x_i, a_i), y_i), and any solution will satisfy (ε, 0)-approximate counterfactual fairness. However, an overly large choice of λ causes numerical instability and will decrease the accuracy of the classifier found. Thus, to find the most accurate classifier that satisfies the fairness condition, one can simply perform a grid or binary search for the smallest λ such that the condition holds.
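For concreteness, the hinge relaxation in eq. (9) is a one-liner in an autodiff framework. The PyTorch sketch below is our own rendering, not the authors' code, and assumes the counterfactual tensors were precomputed in step 3 of Algorithm 1:

```python
import torch

def hinge_unfairness(f, x, a, x_cf, a_cf, eps):
    """Convex relaxation of the (eps, 0)-ACF indicator, eq. (9), averaged
    over a batch. x, a are factual inputs; x_cf, a_cf their counterfactuals."""
    gap = torch.abs(f(x, a) - f(x_cf, a_cf))
    return torch.clamp(gap - eps, min=0.0).mean()
```

Because the penalty is zero exactly where eq. (5) is satisfied, gradients flow only through individuals that currently violate the constraint.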
Non-deterministic counterfactuals. For non-deterministic counterfactuals we begin by writing a Monte Carlo approximation to (ε, δ)-ACF, eq. (6), as follows:

(1/S) Σ_{s=1}^{S} I( |f(x^s_{A_j←a_i}, a_i) − f(x^s_{A_j←a'}, a')| ≥ ε ) ≤ δ    (10)
where each counterfactual sample x^s is generated by drawing the latent variables from the posterior distribution P(U|X). We can again form the tightest convex relaxation of the left-hand side of the expression to yield our unfairness function:
µ_j(f, x_i, a_i, a') := (1/S) Σ_{s=1}^{S} max{0, |f(x^s_{i,A_j←a_i}, a_i) − f(x^s_{i,A_j←a'}, a')| − ε}    (11)
Note that different choices of λ in eq. (7) correspond to different values of δ. Indeed, choosing λ = 0 yields an unfair classifier7, while a sufficiently large, but finite, λ will correspond to an (ε, 0)-approximately counterfactually fair classifier. By varying λ between these two extremes, we induce classifiers that satisfy (ε, δ)-ACF for different values of δ.
With these unfairness functions we have a differentiable optimization problem, eq. (7), which can be solved with gradient-based methods. Thus, our method allows practitioners to smoothly trade off accuracy with multi-world fairness. We call our method Multi-World Fairness (MWF). We give a complete method for learning a MWF classifier in Algorithm 1.
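One training iteration of Algorithm 1 might then look as follows. This is a hedged sketch under our own naming conventions: `cf_batches` holds, for each causal world j, a list of S counterfactual batches (S = 1 for a deterministic world), so the inner average realizes eq. (9) or its Monte-Carlo analogue eq. (11).

```python
import torch

def mwf_step(f, optimizer, loss_fn, x, a, y, cf_batches, lam, eps):
    """One stochastic-gradient step on the penalized objective of eq. (7)."""
    optimizer.zero_grad()
    loss = loss_fn(f(x, a), y)                  # prediction loss
    for samples in cf_batches:                  # sum over causal worlds j
        for x_cf, a_cf in samples:              # S posterior samples
            gap = torch.abs(f(x, a) - f(x_cf, a_cf))
            penalty = torch.clamp(gap - eps, min=0.0).mean()
            loss = loss + (lam / len(samples)) * penalty
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```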
For both deterministic and non-deterministic models, this convex approximation essentially describes an expected unfairness that is allowed by the classifier:

Definition 3 (Expected ε-Unfairness). For any counterfactual a' ≠ a, the Expected ε-Unfairness of a classifier f, or E_ε[f], is

E[ max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} | X = x, A = a ]    (12)

where the expectation is over any unobserved U (and is degenerate for deterministic counterfactuals). We note that the term max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} is non-negative, and therefore the expected ε-unfairness is zero if and only if f satisfies (ε, 0)-approximate counterfactual fairness almost everywhere.
Linear Classifiers and Convexity. Although we have presented these results in their most general form, it is worth noting that for linear classifiers, convexity guarantees are preserved. The family of linear classifiers we consider is relatively broad, consisting of those classifiers that are linear in their learned weights w; as such, it includes both SVMs and a variety of regression methods used in conjunction with kernels or finite polynomial bases.
Consider any classifier whose output is linear in the learned parameters, i.e., the family of classifiers f of the form f(X, A) = Σ_l w_l g_l(X, a), for a set of fixed kernels g_l. Then the expected ε-unfairness is a convex function of w, taking the form:

E[ max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} ]
  = E[ max{0, |Σ_l w_l (g_l(X_{A←a}, a) − g_l(X_{A←a'}, a'))| − ε} ]    (13)

This expression is convex in w and therefore, if the classification loss is also convex (as is the case for most regression tasks), a global optimum can be readily found via convex programming. In particular, globally optimal linear classifiers satisfying (ε, 0)-ACF or (ε, δ)-ACF can be found efficiently.
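As an illustration, with squared loss the globally optimal fair linear predictor can be obtained from a small convex program, here written with cvxpy. The sketch is ours and enforces the empirical (ε, 0)-ACF condition as hard constraints rather than via the penalty of eq. (7); `Phi` and `Phi_cf` are assumed to hold the precomputed kernel features g_l of factual and counterfactual inputs.

```python
import cvxpy as cp

def fair_linear_regression(Phi, Phi_cf, y, eps):
    """Globally optimal weights w under |sum_l w_l (g_l - g_l')| <= eps
    for every training individual (the sample analogue of eq. (5))."""
    n, k = Phi.shape
    w = cp.Variable(k)
    objective = cp.Minimize(cp.sum_squares(Phi @ w - y) / n)
    constraints = [cp.abs((Phi - Phi_cf) @ w) <= eps]  # elementwise, convex
    cp.Problem(objective, constraints).solve()
    return w.value
```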
Bayesian alternatives and their shortcomings. One may argue that a more direct alternative is to provide probabilities associated with each world and to marginalize the set of optimal counterfactually fair classifiers over all possible worlds. We argue this is undesirable for two reasons: first, the averaged prediction for any particular individual may violate (3) by an undesirable margin for one, several, or even all considered worlds; second, a practitioner may be restricted by regulations to show that, to the best of their knowledge, the worst-case violation is bounded across all viable worlds with high probability. However, if the number of possible models is extremely large (for example, if the causal structure of the world is known, but the associated parameters are not) and we have a probability associated with each world, then one natural extension is to adapt Expected ε-Unfairness (eq. (12)) to marginalize over the space of possible worlds. However, we leave this extension to future work.
4 Experiments
We demonstrate the flexibility of our method on two real-world fair classification problems: 1. fair predictions of student performance in law schools; and 2. predicting whether criminals will re-offend upon being released. For each dataset we begin by giving details of the fair prediction problem. We then introduce multiple causal models that each possibly describe how unfairness plays a role in the data. Finally, we give results of Multi-World Fairness (MWF) and show how it changes for different settings of the fairness parameters (ε, δ).
7 In the worst case, δ may equal 1.
4.1 Fairly predicting law grades
We begin by investigating a dataset of survey results across 163 U.S. law schools conducted by the Law School Admission Council [19]. It contains information on over 20,000 students including their race A (here we look at just black and white students as this difference had the largest effect in counterfactuals in [11]), their grade-point average G obtained prior to law school, law school entrance exam scores L, and their first year average grade Y. Consider that law schools may be interested in predicting Y for all applicants to law school using G and L in order to decide
whether to accept or deny them entrance. However, due to societal inequalities, an individual's race may have affected their access to educational opportunities, and thus affected G and L. Accordingly, we model this possibility using the causal graphs in Figure 2 (Left). In this graph we also model the fact that G, L may have been affected by other unobserved quantities. However, we may be uncertain about the right way to model these unobserved quantities. Thus we propose to model this dataset with the two worlds described in Figure 2 (Left). Note that these are the same models as used in Kusner et al. [11] (except here we consider race as the sensitive variable). The corresponding equations for these two worlds are as follows:
World 1 (deterministic):
G = b_G + w^A_G A + ε_G,   L = b_L + w^A_L A + ε_L,   Y = b_Y + w^A_Y A + ε_Y,   ε_G, ε_L, ε_Y ∼ N(0, 1)

World 2 (non-deterministic):
G ∼ N(b_G + w^A_G A + w^U_G U, σ_G),   L ∼ Poisson(exp(b_L + w^A_L A + w^U_L U)),   Y ∼ N(w^A_Y A + w^U_Y U, 1),   U ∼ N(0, 1)    (14)
where variables b, w are parameters of the causal model.
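For the deterministic World 1 above, counterfactuals are available in closed form: abduction recovers the residual noise, which is then replayed under the counterfactual race. A minimal scikit-learn sketch (our own; the function name is hypothetical and A is assumed to be binary-encoded):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def counterfactual_feature(V, A, a_cf):
    """Closed-form counterfactual of V = b + w*A + eps under A <- a_cf.

    Abduction: eps = V - (b + w*A); Action/Prediction: V' = b + w*a_cf + eps.
    Works for either G or L in World 1 of eq. (14).
    """
    reg = LinearRegression().fit(A.reshape(-1, 1), V)
    eps = V - reg.predict(A.reshape(-1, 1))               # abduction
    return reg.predict(np.full((len(V), 1), a_cf)) + eps  # action + prediction
```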
Results. Figure 3 shows the result of learning a linear MWF classifier on the law school models. We split the law school data into a random 80/20 train/test split, and we fit causal models and classifiers on the training set and evaluate performance on the test set. We plot the test RMSE of the constant predictor satisfying counterfactual fairness in red, the unfair predictor with λ = 0, and MWF, averaged across 5 runs. Here, as we have one deterministic and one non-deterministic model, we evaluate MWF for different ε and δ (with the knowledge that the only change in the MWF classifier for different δ is due to the non-deterministic model). For each ε, δ, we selected the smallest λ across a grid (λ ∈ {10^{-5}, 10^{-4}, . . . , 10^{10}}) such that the constraint in eq. (6) held across 95% of the individuals in both models. We see that MWF is able to reliably sacrifice accuracy for fairness as ε is reduced. Note that as we change δ we can further alter the accuracy/fairness trade-off.
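The λ selection described here is a simple search over the grid; a sketch in our own notation, where `train_mwf` and `satisfied_fraction` are hypothetical helpers (the latter checks eq. (6) per individual, as in the ACF estimator of Section 3.2):

```python
def select_lambda(train_mwf, satisfied_fraction, lambdas, worlds, target=0.95):
    """Return the smallest lambda whose classifier satisfies the (eps, delta)-ACF
    constraint for at least `target` of individuals in every causal world."""
    for lam in sorted(lambdas):
        f = train_mwf(lam)  # steps 5-11 of Algorithm 1
        if all(satisfied_fraction(f, w) >= target for w in worlds):
            return lam, f
    return None, None       # no lambda on the grid met the constraint
```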
4.2 Fair recidivism prediction (COMPAS)
We next turn our attention to predicting whether a criminal will re-offend, or ‘recidivate’ after being released from prison. ProPublica [13] released data on prisoners in Broward County, Florida who were awaiting a sentencing hearing. For each of the prisoners we have information on their race A (as above we only consider black versus white individuals), their age E, their number of juvenile felonies JF , juvenile misdemeanors JM , the type of crime they committed T , the number of prior offenses they have P , and whether they recidivated Y . There is also a proprietary COMPAS score [13] C designed to indicate the likelihood a prisoner recidivates.
We model this dataset with two different non-deterministic causal models, shown in Figure 2 (Right). The first model includes the dotted edges, the second omits them. In both models we believe that two unobserved latent factors, juvenile criminality UJ and adult criminality UD, also contribute to JF, JM, C, T, P. We show the equations for both of our causal models below, where the first causal model includes the blue terms and the second does not:
T ∼ Bernoulli(φ(b_T + w^{U_D}_T U_D + w^E_T E + w^A_T A))    (15)
C ∼ N(b_C + w^{U_D}_C U_D + w^E_C E + w^A_C A + w^T_C T + w^P_C P + w^{J_F}_C J_F + w^{J_M}_C J_M, σ_C)
P ∼ Poisson(exp(b_P + w^{U_D}_P U_D + w^E_P E + w^A_P A))
J_F ∼ Poisson(exp(b_{J_F} + w^{U_J}_{J_F} U_J + w^E_{J_F} E + w^A_{J_F} A))
J_M ∼ Poisson(exp(b_{J_M} + w^{U_J}_{J_M} U_J + w^E_{J_M} E + w^A_{J_M} A))
[U_J, U_D] ∼ N(0, Σ)
Results. Figure 4 shows how classification accuracy using both logistic regression (linear) and a 3-layer neural network (deep) changes as both ε and δ change. We split the COMPAS dataset randomly into an 80/20 train/test split, and report all results on the test set. As in the law school experiment, we grid-search over λ to find the smallest value such that for any ε and δ the (ε, δ)-ACF constraint in eq. (6) is satisfied for at least 95% of the individuals in the dataset, across both worlds. We average all results except the constant classifier over 5 runs and plot the mean and standard deviations. We see that for small δ (high fairness) both linear and deep MWF classifiers significantly outperform the constant classifier and begin to approach the accuracy of the unfair classifier as ε increases. As we increase δ (lowered fairness), the deep classifier is better able to learn a decision boundary that trades off accuracy for fairness. But if ε and δ are increased enough (e.g., ε ≥ 0.13, δ = 0.5), the linear MWF classifier matches the performance of the deep classifier.
5 Conclusion
This paper has presented a natural extension to counterfactual fairness that allows us to guarantee fair properties of algorithms, even when we are unsure of the causal model that describes the world.
As the use of machine learning becomes widespread across many domains, it becomes more important to take algorithmic fairness out of the hands of experts and make it available to everybody. The conceptual simplicity of our method, our robust use of counterfactuals, and the ease of implementing our method mean that it can be directly applied to many interesting problems. A further benefit of our approach over previous work on counterfactual fairness is that our approach only requires the estimation of counterfactuals at training time, and no knowledge of latent variables during testing. As such, our classifiers offer a fair drop-in replacement for other existing classifiers.
6 Acknowledgments
This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. CR acknowledges additional support under the EPSRC Platform Grant EP/P022529/1. | 1. What is the focus of the paper regarding fairness constraints in supervised learning?
2. What are the strengths and weaknesses of the proposed method in terms of counterfactual fairness and causal models?
3. Do you have any concerns regarding the definition of counterfactual fairness and its connection to existing fairness measures?
4. How does the reviewer assess the contribution and novelty of the paper's approach compared to prior works?
5. Are there any issues with the paper's assumptions or requirements for fully specified causal models? | Review | Review
The authors consider a novel supervised learning problem with fairness constraints, where the goal is to find an optimal predictor that is counterfactually fair from a list of candidate causal models. The parameterizations of each candidate causal model are known. The authors incorporate the fairness constraint as a regularization term in the loss function. Evaluations are performed on two real-world datasets, and results show that the proposed method balances fairness in multiple worlds with prediction accuracy.
While the idea of exploring a novel fairness measure in counterfactual semantics and enforcing it over multiple candidate models is interesting, there are a few issues I find confusing, which are listed next:
1. The Counterfactual Fairness definition (Def 1) is not clear. It is not immediate to see which counterfactual quantity the authors are trying to measure. Eq. 2, the probabilistic counterfactual fairness definition, measures the total causal effect (P(Y_x)) of the sensitive feature X on the predicted outcome Y. It is a relatively simple counterfactual quantity, which can be directly computed by physically setting X to a fixed value, without using Pearl's algorithm of three steps.
2. If the authors are referring to the total causal effect, it is thus unnecessary to use a rather complicated algorithm to compute counterfactuals (lines 87-94).
If the authors are indeed referring to the counterfactual fairness defined in [1], the motivation of using this novel counterfactual fairness has not been properly justified. A newly proposed fairness definition should often fall into one of the following categories: i. it provides a stronger condition for existing fairness definitions; ii. it captures discriminations which are not covered by existing definitions; or iii. it provides a reasonable relaxation to improve prediction accuracy. I went back to the original counterfactual fairness paper [1] and found discussions regarding this problem. Since this is a rather recent result, it would be better if the authors could explain it a bit further in the background section.
3. The contribution of this paper seems to be incremental unless I am missing something. The authors claim that the proposed technique “learns a fair predictor without knowing the true causal model”, but it still requires a finite list of known candidate causal models and then ranges over them. The natural question at this point is how to obtain a list of candidate models? In causal literature, “not knowing the true causal model” often means that only observational data is available, let alone a list of fully-parametrized possible models exists. The relaxation considered in this paper may be a good starting point, but it does not address the fundamental challenge of the unavailability of the underlying model.
Minor comments:
- All references to Figure 2 in Sec. 2.2 should be Figure 1.
[1] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. arXiv preprint arXiv:1703.06856, 2017.
Post-rebuttal:
Some of the main issues were addressed by the authors. One of the issues was around Definition 1, and I believe the authors can fix that in the camera-ready version. Connections with existing fairness measures can be added in the Supplement.
Still, unless I am missing something, it seems the paper requires that each of the causal models is *fully* specified, which means that they know precisely the underlying structural functions and distributions over the exogenous variables. This is, generally, an overly strong requirement.
I felt that the argument in the rebuttal saying that “These may come from expert knowledge or causal discovery algorithms like the popular PC or FCI algorithms [P. Spirtes, C. Glymour, and R. Scheines. ‘Causation, Prediction, and Search’, 2000]” is somewhat misleading. Even when learning algorithms like FCI can pin down a unique causal structure (almost never the case), it's still not accurate to say that they provide the fully specified model with the structural functions and distributions over the exogenous variables. If one doesn't have this type of knowledge and setting, one cannot run Pearl's 3-step algorithm. I am, therefore, unable to find any reasonable justification or setting that would support the feasibility of the proposed approach.
NIPS | Title
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
Abstract
Machine learning is now being used to make crucial decisions about people’s lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal “world” is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
1 Introduction
Machine learning algorithms can do extraordinary things with data, from generating realistic images from noise [7] to predicting what you will look like when you become older [18]. Today, governments and other organizations make use of it in criminal sentencing [4], in predicting where to allocate police officers [3, 16], and in estimating an individual's risk of failing to pay back a loan [8]. However, in many of these settings, the data used to train machine learning algorithms contains biases against certain races, sexes, or other subgroups in the population [3, 6]. Unwittingly, this discrimination is then reflected in the predictions of such algorithms. Simply being born male or female can change an individual's opportunities that follow from automated decision making trained to reflect historical biases. The implication is that, without taking this into account, classifiers that maximize accuracy risk perpetuating biases present in society.
∗Equal contribution. †This work was done while JL was a Research Fellow at the Alan Turing Institute.
For instance, consider the rise of ‘predictive policing’, described as “taking data from disparate sources, analyzing them, and then using the results to anticipate, prevent and respond more effectively to future crime” [16]. Today, 38% of U.S. police departments surveyed by the Police Executive Research Forum are using predictive policing and 70% plan to in the next 2 to 5 years. However, there have been significant doubts raised by researchers, journalists, and activists that if the data used by these algorithms is collected by departments that have been biased against minority groups, the predictions of these algorithms could reflect that bias [9, 12].
At the same time, fundamental mathematical results make it difficult to design fair classifiers. In criminal sentencing the COMPAS score [4] predicts if a prisoner will commit a crime upon release, and is widely used by judges to set bail and parole. While it has been shown that black and white defendants with the same COMPAS score commit a crime at similar rates after being released [1], it was also shown that black individuals were more often incorrectly predicted to commit crimes after release by COMPAS than white individuals were [2]. In fact, except for very specific cases, it is impossible to balance these measures of fairness [3, 10, 20].
The question becomes how to address the fact that the data itself may bias the learning algorithm, and even addressing this is theoretically difficult. One promising avenue is a recent approach, introduced by us in [11], called counterfactual fairness. In this work, we model how unfairness enters a dataset using techniques from causal modeling. Given such a model, we deem an algorithm fair if it would give the same predictions had an individual's race, sex, or other sensitive attributes been different. We show how to formalize this notion using counterfactuals, following a rich tradition of causal modeling in the artificial intelligence literature [15], and how it can be placed into a machine learning pipeline. The big challenge in applying this work is that evaluating a counterfactual, e.g., “What if I had been born a different sex?”, requires a causal model which describes how your sex changes your predictions, other things being equal.
Using “world” to describe any causal model evaluated at a particular counterfactual configuration, we have dependent “worlds” within the same causal model that can never be jointly observed, and possibly incompatible “worlds” across different models. Questions requiring the joint distribution of counterfactuals are hard to answer, as they demand partially untestable “cross-world” assumptions [5, 17], and even many of the empirically testable assumptions cannot be falsified from observational data alone [14], requiring possibly infeasible randomized trials. Because of this, different experts as well as different algorithms may disagree about the right causal model. Further disputes may arise due to the conflict between accurately modeling unfair data and producing a fair result, or because some degrees of unfairness may be considered allowable while others are not.
To address these problems, we propose a method for ensuring fairness within multiple causal models. We do so by introducing continuous relaxations of counterfactual fairness. With these relaxations in hand, we frame learning a fair classifier as an optimization problem with fairness constraints. We give efficient algorithms for solving these optimization problems for different classes of causal models. We demonstrate on two real-world fair classification datasets how our model is able to simultaneously achieve fairness in multiple models while flexibly trading off classification accuracy.
2 Background
We begin by describing aspects of causal modeling and counterfactual inference relevant for modeling fairness in data. We then briefly review counterfactual fairness [11], but we recommend that the interested reader read the original paper in full. We describe how uncertainty may arise over the correct causal model, and some difficulties with the original counterfactual fairness definition. We will use A to denote the set of protected attributes, a scalar in all of our examples but which without loss of generality can take the form of a set. Likewise, we denote as Y the outcome of interest that needs to be predicted using a predictor Ŷ. Finally, we will use X to denote the set of observed variables other than A and Y, and U to denote a set of hidden variables, which without loss of generality can be assumed to have no observable causes in a corresponding causal model.
2.1 Causal Modeling and Counterfactual Inference
We will use the causal framework of Pearl [15], which we describe using a simple example. Imagine we have a dataset of university students and we would like to model the causal relationships that
lead up to whether a student graduates on time. In our dataset, we have information about whether a student holds a job J, the number of hours they study per week S, and whether they graduate Y. Because we are interested in modeling any unfairness in our data, we also have information about a student's race A. Pearl's framework allows us to model causal relationships between these variables and any postulated unobserved latent variables, such as some U quantifying how motivated a student is to graduate. This uses a directed acyclic graph (DAG) with causal semantics, called a causal diagram. We show a possible causal diagram for this example in Figure 1 (Left). Each node corresponds to a variable and each set of edges into a node corresponds to a generative model specifying how the “parents” of that node causally generated it. In its most specific description, this generative model is a functional relationship deterministically generating its output given a set of observed and latent variables. For instance, one possible set of functions described by this model could be as follows:
S = g(J, U) + ε    Y = I[φ(h(S, U)) ≥ 0.5]    (1)

where g, h are arbitrary functions and I is the indicator function that evaluates to 1 if the condition holds and 0 otherwise. Additionally, φ is the logistic function φ(a) = 1/(1 + exp(−a)), and ε is drawn independently of all variables from the standard normal distribution N(0, 1). It is also possible to specify non-deterministic relationships:
U ∼ N(0, 1)    S ∼ N(g(J, U), σ_S)    Y ∼ Bernoulli(φ(h(S, U)))    (2)

where σ_S is a model parameter. The power of this causal modeling framework is that, given a fully-specified set of equations, we can compute what (the distribution of) any of the variables would have been had certain other variables been different, other things being equal. For instance, given the causal model we can ask “Would individual i have graduated (Y = 1) if they hadn't had a job?”, even if they did not actually graduate in the dataset. Questions of this type are called counterfactuals.
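A model like eq. (2) can be simulated by ancestral sampling; the sketch below is our own, with arbitrary illustrative choices for g and h (the text leaves them unspecified):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_student_scm(J, sigma_S=1.0,
                       g=lambda j, u: 2.0 * j + u,
                       h=lambda s, u: 0.5 * s + u):
    """Ancestral sampling from the non-deterministic SCM of eq. (2)."""
    U = rng.normal(0.0, 1.0, size=len(J))        # latent motivation
    S = rng.normal(g(J, U), sigma_S)             # weekly study hours
    Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-h(S, U))))  # graduation
    return U, S, Y
```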
For any observed variables V, W we denote the value of the counterfactual “What would V have been if W had been equal to w?” as V_{W←w}. Pearl et al. [15] describe how to compute these counterfactuals (or, for non-deterministic models, how to compute their distribution) using three steps:
1. Abduction: Given the set of observed variables X = {X1, . . . , Xd}, compute the values of the set of unobserved variables U = {U1, . . . , Up} given the model (for non-deterministic models, we compute the posterior distribution P(U|X));
2. Action: Replace all occurrences of the variable W with value w in the model equations;
3. Prediction: Using the new model equations and U (or P(U|X)), compute the value of V (or P(V|X)).
This final step provides the value or distribution of V_{W←w} given the observed, factual, variables.
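For the deterministic model of eq. (1), the three steps reduce to a few lines of code. The sketch below (ours, reusing the illustrative g and h from the previous snippet and assuming the latent U has been recovered in the abduction step) computes the counterfactual study level S_{J←j'} and graduation outcome:

```python
import numpy as np

def counterfactual_outcome(S, J, U, j_cf,
                           g=lambda j, u: 2.0 * j + u,
                           h=lambda s, u: 0.5 * s + u):
    """Abduction-action-prediction for the deterministic model of eq. (1)."""
    eps = S - g(J, U)                         # 1. abduction: recover noise
    J_cf = np.full_like(J, j_cf)              # 2. action: set J <- j_cf
    S_cf = g(J_cf, U) + eps                   # 3. prediction: recompute S
    Y_cf = (1.0 / (1.0 + np.exp(-h(S_cf, U))) >= 0.5).astype(int)
    return S_cf, Y_cf
```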
2.2 Counterfactual Fairness
In the above example, the university may wish to predict Y , whether a student will graduate, in order to determine if they should admit them into an honors program. While the university prefers to admit students who will graduate on time, it is willing to give a chance to some students without a confident graduation prediction in order to remedy unfairness associated with race in the honors
program. The university believes that whether a student needs a job J may be influenced by their race. As evidence they cite the National Center for Education Statistics, which reported3 that fewer (25%) Asian-American students were employed while attending university as full-time students relative to students of other races (at least 35%). We show the corresponding causal diagram for this in Figure 1 (Center). As having a job J affects study, which affects graduation likelihood Y, this may mean different races take longer to graduate and thus unfairly have a harder time getting into the honors program.
Counterfactual fairness aims to correct predictions of a label variable Y that are unfairly altered by an individual’s sensitive attribute A (race in this case). Fairness is defined in terms of counterfactuals:
Definition 1 (Counterfactual Fairness [11]). A predictor Ŷ of Y is counterfactually fair given the sensitive attribute A=a and any observed variables X if
P(Ŷ_{A←a} = y | X = x, A = a) = P(Ŷ_{A←a'} = y | X = x, A = a)    (3)

for all y and a' ≠ a.
In what follows, we will also refer to Ŷ as a function f(x, a) of hidden variables U , of (usually a subset of) an instantiation x of X , and of protected attribute A. We leave U implicit in this notation since, as we will see, this set might differ across different competing models. The notation implies
Ŷ_{A←a} = f(x_{A←a}, a).    (4)
Notice that if counterfactual fairness holds exactly for Ŷ , then this predictor can only be a non-trivial function of X for those elements X ∈ X such that XA←a = XA←a′ . Moreover, by construction UA←a = UA←a′ , as each element of U is defined to have no causes in A ∪ X . The probabilities in eq. (3) are given by the posterior distribution over the unobserved variables P(U | X = x, A = a). Hence, a counterfactual ŶA←a may be deterministic if this distribution is degenerate, that is, if U is a deterministic function of X and A. One nice property of this definition is that it is easy to interpret: a decision is fair if it would have been the same had a person had a different A (e.g., a different race4), other things being equal. In [11], we give an efficient algorithm for designing a predictor that is counterfactually fair. In the university graduation example, a predictor constructed from the unobserved motivation variable U is counterfactually fair.
2. How does the proposed method relax the assumption of exact fairness according to a known causal model?
3. Can you explain how the approach trades off prediction accuracy and fairness?
4. Are there any limitations or concerns regarding the proposed approach?
5. How does the paper address the issue of weighing different causal models in the family? | Review | Review
Summary. This paper addresses the problem of learning predictors that trade-off prediction accuracy and fairness. A fair predictor with respect to attribute A is defined using the notion of counterfactual fairness, which basically means that predictions should be independent of which value a sensitive attribute A attains (for example, predictions are the same in distribution for both A=male and A=female). The contribution of the paper is to relax the problem of attaining exact fairness according to a known causal model of the world, to the problem of attaining approximate fairness without assuming the correct causal model specification is known. Instead, a family of M model specifications is allowed. Concretely, this is manifested by incorporating new terms that are added to the loss function penalising deviations from perfect fairness for each of the causal models in the family. By varying the importance of these terms, we can then trade-off prediction accuracy and fairness. The model is applied to two real-world datasets, and trade-off curves are presented to showcase the functionality of the approach.
Comments. The paper is clear, well-written, technically sound and addresses an important problem domain. The idea of trading-off predictability with fairness by introducing penalty terms for deviations from fairness within each causal model is natural and intuitive. The paper sets up the optimisation problem and proposes an algorithm to solve it. It does not address the issue of how to weigh the different causal models in the family, and it does not provide a baseline trading-off strategy for comparison with the proposed approach. This is perhaps ok but I found it weakens the contribution of the paper. Could the authors address these concerns in their response, please? Certainly, the simplicity of the approach is appealing, but it is not easy to infer from the text how practical the approach may be in its current form. |
NIPS | Title
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Abstract
The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code will be released at https://github.com/XingangPan/ShadeGAN.
1 Introduction
Advanced deep generative models, e.g., StyleGAN [1, 2] and BigGAN [3], have achieved great successes in natural image synthesis. While producing impressive results, these 2D representation-based models cannot synthesize novel views of an instance in a 3D-consistent manner. They also fall short of representing an explicit 3D object shape. To overcome such limitations, researchers have proposed new deep generative models that represent 3D scenes as neural radiance fields [4, 5]. Such 3D-aware generative models allow explicit control of viewpoint while preserving 3D consistency during image synthesis. Perhaps a more fascinating merit is that they have shown the great potential of learning 3D shapes in an unsupervised manner from just a collection of unconstrained 2D images. If we could train a 3D-aware generative model that learns accurate 3D object shapes, it would broaden various downstream applications such as 3D shape reconstruction and image relighting.
Existing attempts for 3D-aware image synthesis [4, 5] tend to learn coarse 3D shapes that are inaccurate and noisy, as shown in Fig.1 (a). We found that such inaccuracy arises from an inevitable ambiguity inherent in the training strategy adopted by these methods. In particular, a form of
regularization, which we refer to as "multi-view constraint", is used to enforce the 3D representation to look realistic from different viewpoints. The constraint is commonly implemented by first projecting the generator’s outputs (e.g., radiance fields [6]) to randomly sampled viewpoints, and then feeding them to a discriminator as fake images for training. While such a constraint enables these models to synthesize images in a 3D-aware manner, it suffers from the shape-color ambiguity, i.e., small variations of shape could lead to similar RGB images that look equally plausible to the discriminator, as the color of many objects is locally smooth. Consequently, inaccurate shapes are concealed under this constraint.
In this work, we propose a novel shading-guided generative implicit model (ShadeGAN) to address the aforementioned ambiguity. In particular, ShadeGAN learns more accurate 3D shapes by explicitly modeling shading, i.e., the interaction of illumination and shape. We believe that an accurate 3D shape should look realistic not only from different viewpoints, but also under different lighting conditions, i.e., satisfying the "multi-lighting constraint". This idea shares similar intuition with photometric stereo [7], which shows that accurate surface normal could be recovered from images taken under different lighting conditions. Note that the multi-lighting constraint is feasible as real-world images used for training are often taken under various lighting conditions. To fulfill this constraint, ShadeGAN takes a relightable color field as the intermediate representation, which approximates the albedo but does not necessarily satisfy viewpoint independence. The color field is shaded under a randomly sampled lighting condition during rendering. Since image appearance via such a shading process is strongly dependent on surface normals, inaccurate 3D shape representations will be much more clearly revealed than in earlier shading-agnostic generative models. Hence, by satisfying the multi-lighting constraint, ShadeGAN is encouraged to infer more accurate 3D shapes as shown in Fig.1 (b).
The above shading process requires the calculation of the normal direction via back-propagation through the generator, and such calculation needs to be repeated dozens of times for a pixel in volume rendering [4, 5], introducing additional computational overhead. Existing efficient volume rendering techniques [8, 9, 10, 11, 12] mainly target static scenes, and could not be directly applied to generative models due to their dynamic nature. Therefore, to improve the rendering speed of ShadeGAN, we formulate an efficient surface tracking network to estimate the rendered object surface conditioned on the latent code. This enables us to save rendering computations by just querying points near the predicted surface, leading to 24% and 48% reduction of training and inference time without affecting the quality of rendered images.
Comprehensive experiments are conducted across multiple datasets to verify the effectiveness of ShadeGAN. The results show that our approach is capable of synthesizing photorealistic images while capturing more accurate underlying 3D shapes than previous generative methods. The learned
distribution of 3D shapes enables various downstream tasks like 3D shape reconstruction, where our approach significantly outperforms other baselines on the BFM dataset [13]. Besides, modeling the shading process enables explicit control over lighting conditions, achieving image relighting effects. Our contributions can be summarized as follows: 1) We address the shape-color ambiguity in existing 3D-aware image synthesis methods with a shading-guided generative model that satisfies the proposed multi-lighting constraint. In this way, ShadeGAN is able to learn more accurate 3D shapes for better image synthesis. 2) We devise an efficient rendering technique via surface tracking, which significantly saves training and inference time for volume rendering-based generative models. 3) We show that ShadeGAN learns to disentangle shading from a color component that well approximates the albedo, achieving natural relighting effects in image synthesis.
2 Related Work
Neural volume rendering. Starting from the seminal work of neural radiance fields (NeRF) [6], neural volume rendering has gained much popularity in representing 3D scenes and synthesizing novel views. By integrating coordinate-based neural networks with volume rendering, NeRF performs high-fidelity view synthesis in a 3D consistent manner. Several attempts have been proposed to extend or improve NeRF. For instance, [14, 15, 16] further model illumination, and learn to disentangle reflectance from shading given well-aligned multi-view and multi-lighting images. Besides, many studies accelerate the rendering of static scenes from the perspective of spatial sparsity [8, 9], architectural design [10, 11], or efficient rendering [17, 12]. However, it is not trivial to apply these illumination and acceleration techniques to volume rendering-based generative models [5, 4], as they typically learn from unposed and unpaired images, and represent dynamic scenes that change with respect to the input latent codes.
In this work, we take the first attempt to model illumination in volume rendering-based generative models, which serves as a regularization for accurate 3D shape learning. We further devise an efficient rendering technique for our approach, which shares similar insight with [12], but does not rely on ground-truth depth for training and is not limited to a small viewpoint range.
Generative 3D-aware image synthesis. Generative adversarial networks (GANs) [18] are capable of generating photorealistic images of high resolution, but lack explicit control over camera viewpoint. In order to enable them to synthesize images in a 3D-aware manner, many recent approaches investigate how 3D representations could be incorporated into GANs [19, 20, 21, 22, 23, 24, 25, 26, 27, 5, 4, 28, 29, 30]. While some works directly learn from 3D data [19, 20, 21, 22, 30], in this work we focus on approaches that only have access to unconstrained 2D images, which is a more practical setting. Several attempts [23, 24, 25] adopt 3D voxel features with learned neural rendering. These methods produce realistic 3D-aware synthesis, but the 3D voxels are not interpretable, i.e., they cannot be transferred to 3D shapes. By leveraging differentiable renderers, [26] and [27] learn interpretable 3D voxels and meshes respectively, but [26] suffers from limited visual quality due to low voxel resolution while the learned 3D shapes of [27] exhibit noticeable distortions. The success of NeRF has motivated researchers to use radiance fields as the intermediate 3D representation in GANs [5, 4, 28]. While achieving impressive 3D-aware image synthesis with multi-view consistency, the extracted 3D shapes of these approaches are often imprecise and noisy. Our main goal in this work is to address the inaccurate shape by explicitly modeling illumination in the rendering process. This innovation helps achieve better 3D-aware image synthesis with broader applications.
Unsupervised 3D shape learning from 2D images. Our work is also related to unsupervised approaches that learn 3D object shapes from unconstrained, monocular-view 2D images. While several approaches use external 3D shape templates or 2D key-points as weak supervision to facilitate learning [31, 32, 33, 34, 35, 36, 37], in this work we consider the harder setting where only 2D images are available. To tackle this problem, most approaches adopt an “analysis-by-synthesis” paradigm [38, 39, 40]. Specifically, they design photo-geometric autoencoders to infer the 3D shape and viewpoint of each image with a reconstruction loss. While succeeding in learning the 3D shapes of some object categories, these approaches typically rely on certain regularization to prevent trivial solutions, like the commonly used symmetry assumption on object shapes [39, 40, 31, 32]. Such an assumption tends to produce symmetric results that may overlook the asymmetric aspects of objects. Recently, GAN2Shape [41] shows that it is possible to recover 3D shapes for images generated by 2D GANs. This method, however, requires inefficient instance-specific training, and recovers depth maps instead of full 3D representations.
The proposed 3D-aware generative model also serves as a powerful approach for unsupervised 3D shape learning. Compared with aforementioned autoencoder-based methods, our GAN-based approach avoids the need to infer the viewpoint of each image, and does not rely on strong regularizations. In experiments, we demonstrate superior performance over recent state-of-the-art approaches Unsup3d [39] and GAN2Shape [41].
3 Methodology
We consider the problem of 3D-aware image synthesis by learning from a collection of unconstrained and unlabeled 2D images. We argue that modeling shading, i.e., the interaction of illumination and shape, in a generative implicit model enables unsupervised learning of more accurate 3D object shapes. In the following, we first provide some preliminaries on neural radiance fields (NeRF) [6], and then introduce our shading-guided generative implicit model.
3.1 Preliminaries on Neural Radiance Fields
As a deep implicit model, NeRF [6] uses an MLP network to represent a 3D scene as a radiance field. The MLP $f_\theta: (\mathbf{x}, \mathbf{d}) \to (\sigma, \mathbf{c})$ takes a 3D coordinate $\mathbf{x} \in \mathbb{R}^3$ and a viewing direction $\mathbf{d} \in S^2$ as inputs, and outputs a volume density $\sigma \in \mathbb{R}_+$ and a color $\mathbf{c} \in \mathbb{R}^3$. To render an image under a given camera pose, each pixel color $C$ of the image is obtained via volume rendering along its corresponding camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ with near and far bounds $t_n$ and $t_f$ as below:
$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \quad \text{where } T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big). \tag{1}$$
In practice, this volume rendering is implemented with a discretized form using stratified and hierarchical sampling. As this rendering process is differentiable, NeRF could be directly optimized via posed images of a static scene. After training, NeRF allows the rendering of images under new camera poses, achieving high-quality novel view synthesis.
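To make the discretized integral concrete, the following is a minimal PyTorch sketch of Eq. (1) in its standard quadrature form; the tensor shapes and the helper name `volume_render` are our own illustration, not the authors' released code.

```python
import torch

# A minimal sketch of the discretized form of Eq. (1), assuming `sigma`
# (N_rays, N_samples) and `color` (N_rays, N_samples, 3) were queried from
# the MLP at per-ray sample depths `t_vals` (N_rays, N_samples).
def volume_render(sigma, color, t_vals):
    deltas = t_vals[..., 1:] - t_vals[..., :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[..., :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)              # per-segment opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]                                 # transmittance T_i
    weights = trans * alpha                               # T_i * alpha_i
    rgb = (weights[..., None] * color).sum(dim=-2)        # composited color C
    return rgb, weights
```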
3.2 Shading-Guided Generative Implicit Model
In this work, we are interested in developing a generative implicit model that explicitly models the shading process for 3D-aware image synthesis. To achieve this, we make two extensions to the MLP network in NeRF. First, similar to most deep generative models, it is further conditioned on a latent code $\mathbf{z}$ sampled from a prior distribution $\mathcal{N}(0, I)^d$. Second, instead of directly outputting the color $\mathbf{c}$, it outputs a relightable pre-cosine color term $\mathbf{a} \in \mathbb{R}^3$, which is conceptually similar to albedo in the way that it could be shaded under a given lighting condition. While albedo is viewpoint-independent, in this work we do not strictly enforce such independence for $\mathbf{a}$ in order to account for dataset bias. Thus, our generator $g_\theta: (\mathbf{x}, \mathbf{d}, \mathbf{z}) \to (\sigma, \mathbf{a})$ takes a coordinate $\mathbf{x}$, a viewing direction $\mathbf{d}$, and a latent code $\mathbf{z}$ as inputs, and outputs a volume density $\sigma$ and a pre-cosine color $\mathbf{a}$. Note that here $\sigma$ is independent of $\mathbf{d}$, while the dependence of $\mathbf{a}$ on $\mathbf{d}$ is optional. To obtain the color $C$ of a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ with near and far bounds $t_n$ and $t_f$, we calculate the final pre-cosine color $A$ via:
$$A(\mathbf{r}, \mathbf{z}) = \int_{t_n}^{t_f} T(t, \mathbf{z})\,\sigma(\mathbf{r}(t), \mathbf{z})\,\mathbf{a}(\mathbf{r}(t), \mathbf{d}, \mathbf{z})\,dt, \quad \text{where } T(t, \mathbf{z}) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s), \mathbf{z})\,ds\Big). \tag{2}$$
We also calculate the normal direction $\mathbf{n}$ with:
$$\mathbf{n}(\mathbf{r}, \mathbf{z}) = \hat{\mathbf{n}}(\mathbf{r}, \mathbf{z}) \big/ \|\hat{\mathbf{n}}(\mathbf{r}, \mathbf{z})\|_2, \quad \text{where } \hat{\mathbf{n}}(\mathbf{r}, \mathbf{z}) = -\int_{t_n}^{t_f} T(t, \mathbf{z})\,\sigma(\mathbf{r}(t), \mathbf{z})\,\nabla_{\mathbf{r}(t)}\sigma(\mathbf{r}(t), \mathbf{z})\,dt, \tag{3}$$
where $\nabla_{\mathbf{r}(t)}\sigma(\mathbf{r}(t), \mathbf{z})$ is the derivative of the volume density $\sigma$ with respect to its input coordinate, which naturally captures the local normal direction and can be calculated via back-propagation. The final color $C$ is then obtained via Lambertian shading as:
$$C(\mathbf{r}, \mathbf{z}) = A(\mathbf{r}, \mathbf{z})\,\big(k_a + k_d \max(0,\, \mathbf{l} \cdot \mathbf{n}(\mathbf{r}, \mathbf{z}))\big), \tag{4}$$
where $\mathbf{l} \in S^2$ is the lighting direction, and $k_a$ and $k_d$ are the ambient and diffuse coefficients. We provide more discussions on this shading formulation at the end of this subsection.
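As a concrete illustration of Eq. (2)-(4), here is a hedged PyTorch sketch of shading one batch of rays. The `generator` interface and the reuse of the compositing `weights` from the rendering sketch above are our assumptions, not the released implementation.

```python
import torch

# A sketch of Eq. (2)-(4): composite the pre-cosine color A and the density
# gradient into a normal n, then apply Lambertian shading. `generator(x, d, z)`
# returning (sigma, albedo) and the compositing `weights` are assumed.
def shade_rays(generator, x, d, z, light_dir, k_a, k_d, weights):
    x = x.requires_grad_(True)                             # enable d(sigma)/dx
    sigma, albedo = generator(x, d, z)
    grad_sigma, = torch.autograd.grad(sigma.sum(), x, create_graph=True)
    A = (weights[..., None] * albedo).sum(dim=-2)          # Eq. (2)
    n_hat = -(weights[..., None] * grad_sigma).sum(dim=-2)
    n = n_hat / (n_hat.norm(dim=-1, keepdim=True) + 1e-8)  # Eq. (3)
    cos = (light_dir * n).sum(dim=-1, keepdim=True).clamp(min=0.0)
    return A * (k_a + k_d * cos)                           # Eq. (4)
```

Note that `create_graph=True` is what lets the discriminator's gradients flow back through the normal computation during training.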
Camera and Lighting Sampling. Eq. (2)-(4) describe the process of rendering a pixel color given a camera ray $\mathbf{r}(t)$ and a lighting condition $\mu = (\mathbf{l}, k_a, k_d)$. Generating a full image $I_g \in \mathbb{R}^{3\times H\times W}$ requires one to sample a camera pose $\xi$ and a lighting condition $\mu$ in addition to the latent code $\mathbf{z}$, i.e., $I_g = G_\theta(\mathbf{z}, \xi, \mu)$. In our setting, the camera pose $\xi$ can be described by pitch and yaw angles, and is sampled from a prior Gaussian or uniform distribution $p_\xi$, as also done in previous works [4, 5]. Sampling the camera pose randomly during training motivates the learned 3D scene to look realistic from different viewpoints. While this multi-view constraint is beneficial for learning a valid 3D representation, it is often insufficient to infer the accurate 3D object shape. Thus, in our approach, we further introduce a multi-lighting constraint by also randomly sampling a lighting condition $\mu$ from a prior distribution $p_\mu$. In practice, $p_\mu$ can be estimated from the dataset using existing approaches like [39]. We also show in our experiments that a simple, manually tuned prior distribution produces reasonable results. As the shading process is sensitive to the normal direction due to the diffuse term $k_d \max(0, \mathbf{l} \cdot \mathbf{n}(\mathbf{r}, \mathbf{z}))$ in Eq. (4), this multi-lighting constraint regularizes the model to learn more accurate 3D shapes that produce natural shading, as shown in Fig.1 (b).
Training. Our generative model follows the paradigm of GANs [18], where the generator is trained together with a discriminator $D$ with parameters $\phi$ in an adversarial manner. During training, the generator generates fake images $I_g = G_\theta(\mathbf{z}, \xi, \mu)$ by sampling the latent code $\mathbf{z}$, camera pose $\xi$, and lighting condition $\mu$ from their corresponding prior distributions $p_z$, $p_\xi$, and $p_\mu$. Let $I$ denote real images sampled from the data distribution $p_I$. We train our model with a non-saturating GAN loss with R1 regularization [42]:
$$\mathcal{L}(\theta, \phi) = \mathbb{E}_{\mathbf{z}\sim p_z,\, \xi\sim p_\xi,\, \mu\sim p_\mu}\Big[f\big(D_\phi(G_\theta(\mathbf{z}, \xi, \mu))\big)\Big] + \mathbb{E}_{I\sim p_I}\Big[f(-D_\phi(I)) + \lambda\,\|\nabla D_\phi(I)\|^2\Big], \tag{5}$$
where $f(u) = -\log(1 + \exp(-u))$, and $\lambda$ controls the strength of regularization. More implementation details are provided in the supplementary material.
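Below is a sketch of how the objective in Eq. (5) is typically split into alternating discriminator and generator steps in practice; the function names and the step structure are our illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

# Non-saturating losses with R1 regularization, the practical form of Eq. (5).
# f(u) = -log(1 + exp(-u)) corresponds to the softplus terms below.
def discriminator_loss(D, G, real, z, xi, mu, lam):
    fake = G(z, xi, mu).detach()
    real = real.detach().requires_grad_(True)
    d_real, d_fake = D(real), D(fake)
    loss = F.softplus(d_fake).mean() + F.softplus(-d_real).mean()
    grad, = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    r1 = grad.flatten(1).pow(2).sum(dim=1).mean()   # ||grad D(I)||^2 on real images
    return loss + lam * r1

def generator_loss(D, G, z, xi, mu):
    return F.softplus(-D(G(z, xi, mu))).mean()      # non-saturating generator loss
```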
Discussion. Note that in Eq. (2)-(4), we perform shading after $A$ and $\mathbf{n}$ are obtained via volume rendering. An alternative is to perform shading at each local spatial point as $\mathbf{c}(\mathbf{r}(t), \mathbf{d}, \mathbf{z}) = \mathbf{a}(\mathbf{r}(t), \mathbf{d}, \mathbf{z})\,(k_a + k_d \max(0, \mathbf{l} \cdot \mathbf{n}(\mathbf{r}(t), \mathbf{z})))$, where $\mathbf{n}(\mathbf{r}(t), \mathbf{z}) = -\nabla_{\mathbf{r}(t)}\sigma(\mathbf{r}(t), \mathbf{z}) \big/ \|\nabla_{\mathbf{r}(t)}\sigma(\mathbf{r}(t), \mathbf{z})\|_2$ is the local normal. We could then perform volume rendering using $\mathbf{c}(\mathbf{r}(t), \mathbf{d}, \mathbf{z})$ to get the final pixel color. In practice, we observe that this formulation obtains suboptimal results. An intuitive reason is that in this formulation, the normal direction is normalized at each local point, neglecting the magnitude of $\nabla_{\mathbf{r}(t)}\sigma(\mathbf{r}(t), \mathbf{z})$, which tends to be larger near the object surfaces. We provide more analysis in experiments and the supplementary material.
The Lambertian shading we used is an approximation to the real illumination scenario. While serving as a good regularization for improving the learned 3D shape, it could possibly introduce an additional gap between the distribution of generated images and that of real images. To compensate
for this risk, we could optionally let the predicted $\mathbf{a}$ be conditioned on the lighting condition, i.e., $\mathbf{a} = \mathbf{a}(\mathbf{r}(t), \mathbf{d}, \mu, \mathbf{z})$. Thus, in cases where the lighting condition deviates from the real data distribution, the generator can learn to adjust the value of $\mathbf{a}$ and reduce the aforementioned gap. We show the benefit of this design in the experiments.
3.3 Efficient Volume Rendering via Surface Tracking
Similar to NeRF, we implement volume rendering with a discretized integral, which typically requires sampling dozens of points along a camera ray, as shown in Fig. 3 (a). In our approach, we also need to perform back-propagation across the generator in Eq. (3) to get the normal direction for each point, which introduces additional computational cost. To achieve more efficient volume rendering, a natural idea is to exploit spatial sparsity. During training, the weight $T(t, \mathbf{z})\,\sigma(\mathbf{r}(t), \mathbf{z})$ in volume rendering typically concentrates around the object surface. Thus, if we know the rough surface position before rendering, we could sample points near the surface to save computation. While for a static scene it is possible to store such spatial sparsity in a sparse voxel grid [8, 9], this technique cannot be directly applied to our generative model, as the 3D scene keeps changing with respect to the input latent code.
To achieve more efficient volume rendering in our generative implicit model, we further propose a surface tracking network S that learns to mimic the surface position conditioned on the latent code. In particular, the volume rendering naturally allows the depth estimation of the object surface via:
$$t_s(\mathbf{r}, \mathbf{z}) = \int_{t_n}^{t_f} T(t, \mathbf{z})\,\sigma(\mathbf{r}(t), \mathbf{z})\,t\,dt, \tag{6}$$
where $T(t, \mathbf{z})$ is defined the same way as in Eq. (2). Thus, given a camera pose $\xi$ and a latent code $\mathbf{z}$, we can render the full depth map $t_s(\mathbf{z}, \xi)$. As shown in Fig. 3 (b), we mimic $t_s(\mathbf{z}, \xi)$ with the surface tracking network $S_\psi$, a lightweight convolutional neural network that takes $\mathbf{z}$ and $\xi$ as inputs and outputs a depth map. The depth mimic loss is:
$$\mathcal{L}(\psi) = \mathbb{E}_{\mathbf{z}\sim p_z,\, \xi\sim p_\xi}\Big[\|S_\psi(\mathbf{z}, \xi) - t_s(\mathbf{z}, \xi)\|_1 + \mathrm{Prec}\big(S_\psi(\mathbf{z}, \xi),\, t_s(\mathbf{z}, \xi)\big)\Big], \tag{7}$$
where $\mathrm{Prec}$ is a perceptual loss that motivates $S_\psi$ to better capture edges of the surface.
During training, $S_\psi$ is optimized jointly with the generator and the discriminator. Thus, each time after we sample a latent code $\mathbf{z}$ and a camera pose $\xi$, we can get an initial guess of the depth map as $S_\psi(\mathbf{z}, \xi)$. Then for a pixel with predicted depth $s$, we perform volume rendering in Eq. (2), (3), (6) with near bound $t_n = s - \Delta_i/2$ and far bound $t_f = s + \Delta_i/2$, where $\Delta_i$ is the interval for volume rendering that decreases as the training iteration $i$ grows. Specifically, we start with a large interval $\Delta_{\max}$ and decrease to $\Delta_{\min}$ with an exponential schedule. As $\Delta_i$ decreases, the number of points used for rendering $m$ also decreases accordingly. Note that the computational cost of our surface tracking network is marginal compared to the generator, as the former only needs a single forward pass to render an image while the latter is queried $H \times W \times m$ times. Thus, the reduction of $m$ significantly accelerates the training and inference speed of ShadeGAN.
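The surface-guided sampling schedule can be sketched as follows; the decay form, the sample-count rule, and all constants (`d_max`, `d_min`, `m_max`, `m_min`) are illustrative assumptions, as the text only states that the interval decays exponentially and the sample count shrinks with it.

```python
import torch

# A sketch of surface-guided sampling: query the surface tracking network
# for a depth guess, then sample m points in a shrinking interval around it.
def sample_near_surface(S, z, xi, step, total_steps,
                        d_max=2.0, d_min=0.2, m_max=24, m_min=6):
    depth = S(z, xi)                                         # (H, W) depth guess
    delta = d_max * (d_min / d_max) ** (step / total_steps)  # exponential decay
    m = max(m_min, int(round(m_max * delta / d_max)))        # fewer points as delta shrinks
    offsets = torch.linspace(0.0, 1.0, m, device=depth.device)
    t_near = depth - delta / 2.0                             # t_n = s - delta/2
    return t_near[..., None] + delta * offsets               # (H, W, m) sample depths
```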
4 Experiments
In this section, we evaluate the proposed ShadeGAN on 3D-aware image synthesis. We also show that ShadeGAN learns much more accurate 3D shapes than previous methods, and in the meantime allows explicit control over lighting conditions. The datasets used include CelebA [43], BFM [13], and Cats [44], all of which contain only unconstrained 2D RGB images.
Implementation. In terms of model architectures, we adopt a SIREN-based MLP [45] as the generator and a convolutional neural network as the discriminator following [4]. For the prior distribution of lighting conditions, we use Unsup3d [39] to estimate the lighting conditions of real data and subsequently fit a multivariate Gaussian distribution of $\mu = (\mathbf{l}, k_a, k_d)$ as the prior. A hand-crafted prior distribution is also included in the ablation study. In the quantitative study, we let the pre-cosine color $\mathbf{a}$ be conditioned on the lighting condition $\mu$ as well as the viewing direction $\mathbf{d}$ unless otherwise stated. In the qualitative study, we observe that removing view conditioning achieves slightly better 3D shapes for the CelebA and BFM datasets. Thus, we show results without view conditioning for these two datasets in the main paper, and put those with view conditioning in Fig. 4 of the supplementary material. Other implementation details are also provided in the supplementary.
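A minimal sketch of fitting and sampling the Gaussian lighting prior could look as follows, assuming `mu_estimates` stacks per-image lighting parameters (direction angles plus $k_a$ and $k_d$) obtained from Unsup3d; the helper name and the stability term are our own.

```python
import torch

# A sketch of fitting a Gaussian prior p_mu from per-image lighting estimates
# `mu_estimates` of shape (N, D), where each row holds one image's mu.
def fit_lighting_prior(mu_estimates):
    mean = mu_estimates.mean(dim=0)
    centered = mu_estimates - mean
    cov = centered.T @ centered / (mu_estimates.shape[0] - 1)
    cov = cov + 1e-6 * torch.eye(cov.shape[0])   # keep covariance positive definite
    return torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)

# Usage: prior = fit_lighting_prior(mu_estimates); mu = prior.sample((batch_size,))
```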
[Figure 5: Generated face images and their 3D meshes. Columns: image, mesh, normal, albedo.]

[Figure 6: Qualitative ablation with panels (a) ShadeGAN, (b) local normal, (c) manual prior. See the main text for discussions.]
Comparison with baselines. We compare ShadeGAN with two state-of-the-art generative implicit models, namely GRAF [5] and pi-GAN [4]. Specifically, Fig. 4 includes both synthesized images as well as their corresponding 3D meshes, which are obtained by performing marching cubes on the volume density σ. While GRAF and pi-GAN could synthesize images with controllable poses, their learned 3D shapes are inaccurate and noisy. In contrast, our approach not only synthesizes photorealistic 3D-consistent images, but also learns much more accurate 3D shapes and surface normals, indicating the effectiveness of the proposed multi-lighting constraint as a regularization. More synthesized images and their corresponding shapes are included in Fig.5. Besides more accurate 3D shapes, ShadeGAN can also learn the albedo and diffuse shading components inherently. As shown in Fig. 4, although not perfect, ShadeGAN has managed to disentangle shading and albedo with satisfying quality, as such disentanglement is a natural solution to the multi-lighting constraint.
The quality of learned 3D shapes is quantitatively evaluated on the BFM dataset. Specifically, we use each of the generative implicit models to generate 50k images and their corresponding depth maps. Image-depth pairs from each model are used as training data to train an additional convolutional neural network (CNN) that learns to predict the depth map of an input image. We then test each trained CNN on the BFM test set and compare its predictions to the ground-truth depth maps as a measurement of the quality of learned 3D shapes. Following [39], we report the scale-invariant depth error (SIDE) and mean angle deviation (MAD) metrics. The results are included in Tab. 1, where ShadeGAN significantly outperforms GRAF and pi-GAN. Besides, ShadeGAN also outperforms other advanced unsupervised 3D shape learning approaches including Unsup3d [39] and GAN2Shape [41], demonstrating its large potential in unsupervised 3D shape learning. In terms of image quality, Tab. 1 includes the FID [46] scores of images synthesized by different models, where the FID score of ShadeGAN is slightly inferior to pi-GAN on BFM and CelebA. Intuitively, this is caused by the gap between our approximated shading (i.e., Lambertian shading) and the real illumination, which can potentially be avoided by adopting more realistic shading models and improving the lighting prior.
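For reference, here is a sketch of the scale-invariant depth error under one common definition following [39]; this is our reading of the metric, not code from the paper.

```python
import torch

# Scale-invariant depth error (SIDE) between predicted and ground-truth
# depth maps, computed in log-depth space so a global scale factor cancels.
def side(pred_depth, gt_depth, eps=1e-8):
    delta = torch.log(pred_depth + eps) - torch.log(gt_depth + eps)
    return torch.sqrt((delta ** 2).mean() - delta.mean() ** 2)
```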
In Tab. 2, we also show the quantitative results of different models on CelebA and Cats. To evaluate the learned shape, we use each generative implicit model to generate 2k front-view images and their corresponding depth maps. While these datasets do not have ground-truth depth, we report MAD obtained by testing pretrained Unsup3d models [39] on these generated image-depth pairs as a reference.
Table 3: Ablation study on the BFM dataset.

No.  Method          FID ↓   SIDE ↓   MAD ↓
(1)  ShadeGAN        17.7    0.607    14.52
(2)  local shading   30.1    0.754    18.18
(3)  w/o light       19.2    0.618    14.53
(4)  w/o view        18.6    0.622    14.88
(5)  manual prior    20.2    0.643    15.38
(6)  +efficient      18.2    0.673    14.72
Table 4: Training and inference time cost on CelebA. The efficient volume rendering significantly improves training and inference speed.

Method       Train (h)   Inference (s)   FID
ShadeGAN     92.3        0.343           16.4
+efficient   70.2        0.179           16.2
pi-GAN       56.8        0.204           15.7
+efficient   46.9        0.114           15.9
[Figure 8: Illumination-aware image synthesis. Rows: albedo, shading, image. ShadeGAN allows explicit control over the lighting. The pre-cosine color (albedo) is independent of lighting in (a) and is conditioned on lighting in (b). We show results of adding a specular term in (c).]
As we can observe, the results on CelebA and Cats are consistent with those on the BFM dataset.
Ablation studies. We further study the effects of several design choices in ShadeGAN. First, we perform local point-specific shading as mentioned in the discussion of Sec. 3.2. As Tab. 3 No. (2) and Fig. 6 (b) show, the results of such a local shading strategy are notably worse than the original one, which indicates that taking the magnitude of $\nabla_{\mathbf{x}}\sigma$ into account is beneficial. Besides, the results of Tab. 3 No. (3) and No. (4) imply that removing $\mathbf{a}$'s dependence on the lighting $\mu$ or the viewpoint $\mathbf{d}$ leads to a slight performance drop. The results of using a simple manually tuned lighting prior are provided in Tab. 3 No. (5) and Fig. 6 (c); they are only moderately worse than the results of using a data-driven prior, and the generated shapes are still significantly better than the ones produced by existing approaches.
To verify the effectiveness of the proposed efficient volume rendering technique, we include its effects on image quality and training/inference time in Tab. 3 No.(6) and Tab. 4. It is observed that the efficient volume rendering has marginal effects on the performance, but significantly reduces the training and inference time by 24% and 48% for ShadeGAN. Moreover, in Fig. 7 we visualize the depth maps predicted by our surface tracking network and those obtained via volume rendering. It is shown that under varying identities and camera poses, the surface tracking network could consistently predict depth values that are quite close to the real surface positions, so that we can sample points near the predicted surface for rendering without sacrificing image quality.
Illumination-aware image synthesis. As ShadeGAN models the shading process, it by design allows explicit control over the lighting condition. We provide such illumination-aware image synthesis results in Fig. 8, where ShadeGAN generates promising images under different lighting directions. We also show that in cases where the predicted $\mathbf{a}$ is conditioned on the lighting condition $\mu$, $\mathbf{a}$ slightly changes w.r.t. the lighting condition, e.g., it becomes brighter in areas with overly dim shading in order to make the final image more natural. Besides, we could optionally add a specular term $k_s \max(0, \mathbf{h} \cdot \mathbf{n})^p$ in Eq. (4) (i.e., Blinn-Phong shading [47], where $\mathbf{h}$ is the bisector of the angle between the viewpoint and the lighting direction) to create specular highlight effects, as shown in Fig. 8 (c).
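A sketch of the specular extension is given below; placing the specular term inside the shading parentheses is our reading of "add a specular term in Eq. (4)", and the values of `k_s` and `p` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Blinn-Phong-style shading: Eq. (4) extended with k_s * max(0, h . n)^p,
# where h is the half vector between the view and light directions.
def blinn_phong_shade(A, n, light_dir, view_dir, k_a, k_d, k_s=0.3, p=16.0):
    h = F.normalize(light_dir + view_dir, dim=-1)
    diffuse = (light_dir * n).sum(dim=-1, keepdim=True).clamp(min=0.0)
    specular = (h * n).sum(dim=-1, keepdim=True).clamp(min=0.0) ** p
    return A * (k_a + k_d * diffuse + k_s * specular)
```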
GAN inversion. ShadeGAN could also be used to reconstruct a given target image by performing GAN inversion. As shown in Fig. 9 such inversion allows us to obtain several factors of the image, including the 3D shape, surface normal, approximated albedo, and shading. Besides, we can further perform view synthesis and relighting by changing the viewpoint and lighting condition. The implementation of GAN inversion is provided in the supplementary material.
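Since the inversion details live in the supplementary, here is only a generic latent-optimization sketch (our illustration; `z_dim`, the pixel loss, and the hyperparameters are assumptions), which could be extended to jointly optimize pose and lighting.

```python
import torch
import torch.nn.functional as F

# A generic sketch of GAN inversion: optimize the latent code so that the
# rendered image matches a given target under fixed pose xi and lighting mu.
def invert(G, target, xi, mu, z_dim=256, steps=500, lr=1e-2):
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(G(z, xi, mu), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```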
Discussions. As the Lambertian shading we use is an approximation to real illumination, the albedo learned by ShadeGAN is not perfectly disentangled. Our approach also does not consider the spatially varying material properties of objects. In the future, we intend to incorporate more sophisticated shading models to learn better disentangled generative reflectance fields.
5 Conclusion
In this work, we present ShadeGAN, a new generative implicit model for shape-accurate 3D-aware image synthesis. We have shown that the multi-lighting constraint, achieved in ShadeGAN by explicit illumination modeling, significantly helps learning accurate 3D shapes from 2D images. ShadeGAN also allows us to control the lighting condition during image synthesis, achieving natural image relighting effects. To reduce the computational cost, we have further devised a light-weighted surface tracking network, which enables an efficient volume rendering technique for generative implicit models, achieving significant acceleration on both training and inference speed. A generative model with shape-accurate 3D representation could broaden its applications in vision and graphics, and our work has taken a solid step towards this goal.
Acknowledgment. We would like to thank Eric R. Chan for sharing the codebase of pi-GAN. This study is supported under the ERC Consolidator Grant 4DRepLy (770784). This study is also supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). |
1. What is the main contribution of the paper regarding generative adversarial networks (GANs) and radiance fields?
2. What are the strengths of the proposed approach, particularly in comparison to pi-GAN?
3. Do you have any concerns or suggestions regarding the terminology used in the paper, such as the term "albedo field"?
4. Have you considered alternative approaches that could potentially improve the result quality, such as directly conditioning the albedo field generation on the light direction?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a GAN capable of generating a relightable radiance field when trained on an unlabeled dataset of face images. The relightable radiance field generator is conditioned on the 3D location and a latent code and outputs a volume density field (à la NeRF) and an "albedo" field (I strongly advise against the use of this word; see below) that allows explicit control of the light location. The training is conducted in the GAN style, where the training signal comes from only a discriminator loss. The intuition is that the generator is better off generating a meaningful 3D volume that, when rendered from a random camera view, generates a photorealistic face image that falls into the training data distribution, than generating some intricately-designed volume that gives faces of different identities when viewed from different angles.
The authors demonstrate that the model is able to learn 3D shapes in an unsupervised fashion from just 2D images. The work is mainly compared against pi-GAN, which is shown to suffer from what the authors call "color-shape ambiguities." For efficient rendering, the authors also propose an auxiliary network that predicts the surface location given the latent code and a viewing direction, such that the network can sample around the predicted surface to avoid expensive sampling of unoccupied space.
Review
The paper explores a cute idea: by additionally modeling lighting, one learns better 3D shapes in an unsupervised way from unstructured 2D images. This gain seems free by exploiting the lighting variation that already exists in the datasets.
The model design is sensible and simple (in a positive sense). The observation that bad 3D shapes do not affect view synthesis in approaches that do not model lighting, like pi-GAN, is interesting. The shape results look convincing. The insight that the "local shading" alternative underperforms the adopted shading scheme is appreciated. The paper is well-written and easy to follow.
Now the drawbacks:
I strongly advise against calling the generated radiance field the "albedo field." Although the authors clarify in the text that this "albedo" depends on the viewing direction and lighting and hence is not the conventional albedo, this makes the paper really hard to read since any vision and graphics person is familiar with the concept that albedo does not depend on viewing directions or lighting. I had to keep reminding myself of the fact that this "albedo" is special. If the properties contradict the two fundamental properties of real albedo, why use it? Maybe something like "pre-cosine radiance" or "unmodulated/demodulated radiance"?
Related to this albedo complaint, the authors applied Lambertian shading to the "albedo" to allow explicit control of lighting. A simpler alternative is directly conditioning the albedo field generation on the light direction also, eliminating the need for a post-hoc Lambertian shading process. While I can see this alternative may not provide direct control over lighting, it warrants an ablation study. If this simpler alternative ends up working better, the authors can consider naming the field something like "light transport field"? See https://arxiv.org/pdf/2008.03806.pdf and https://arxiv.org/pdf/1911.11530.pdf. The current post-hoc Lambertian shading is a bit awkward, without a solid reason for its existence.
In terms of result quality, I'm concerned about the diversity of what the GAN is able to generate. The results shown seem to be suffering from mode collapse. This concern can be easily addressed by showing a video where we vary the latent code and meanwhile change the viewpoint and light direction. Then one would be able to judge whether the generated results are diverse or not. Not asking for new experiments, but would love to see this in the revision or a future version of this paper.
NIPS | Title
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Abstract
The advancement of generative radiance fields has pushed the boundary of 3Daware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shapecolor ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code will be released at https://github.com/XingangPan/ShadeGAN.
1 Introduction
Advanced deep generative models, e.g., StyleGAN [1, 2] and BigGAN [3], have achieved great successes in natural image synthesis. While producing impressive results, these 2D representationbased models cannot synthesize novel views of an instance in a 3D-consistent manner. They also fall short of representing an explicit 3D object shape. To overcome such limitations, researchers have proposed new deep generative models that represent 3D scenes as neural radiance fields [4, 5]. Such 3D-aware generative models allow explicit control of viewpoint while preserving 3D consistency during image synthesis. Perhaps a more fascinating merit is that they have shown the great potential of learning 3D shapes in an unsupervised manner from just a collection of unconstrained 2D images. If we could train a 3D-aware generative model that learns accurate 3D object shapes, it would broaden various downstream applications such as 3D shape reconstruction and image relighting.
Existing attempts for 3D-aware image synthesis [4, 5] tend to learn coarse 3D shapes that are inaccurate and noisy, as shown in Fig.1 (a). We found that such inaccuracy arises from an inevitable ambiguity inherent in the training strategy adopted by these methods. In particular, a form of
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
regularization, which we refer to as "multi-view constraint", is used to enforce the 3D representation to look realistic from different viewpoints. The constraint is commonly implemented by first projecting the generator’s outputs (e.g., radiance fields [6]) to randomly sampled viewpoints, and then feeding them to a discriminator as fake images for training. While such a constraint enables these models to synthesize images in a 3D-aware manner, it suffers from the shape-color ambiguity, i.e., small variations of shape could lead to similar RGB images that look equally plausible to the discriminator, as the color of many objects is locally smooth. Consequently, inaccurate shapes are concealed under this constraint.
In this work, we propose a novel shading-guided generative implicit model (ShadeGAN) to address the aforementioned ambiguity. In particular, ShadeGAN learns more accurate 3D shapes by explicitly modeling shading, i.e., the interaction of illumination and shape. We believe that an accurate 3D shape should look realistic not only from different viewpoints, but also under different lighting conditions, i.e., satisfying the "multi-lighting constraint". This idea shares similar intuition with photometric stereo [7], which shows that accurate surface normal could be recovered from images taken under different lighting conditions. Note that the multi-lighting constraint is feasible as real-world images used for training are often taken under various lighting conditions. To fulfill this constraint, ShadeGAN takes a relightable color field as the intermediate representation, which approximates the albedo but does not necessarily satisfy viewpoint independence. The color field is shaded under a randomly sampled lighting condition during rendering. Since image appearance via such a shading process is strongly dependent on surface normals, inaccurate 3D shape representations will be much more clearly revealed than in earlier shading-agnostic generative models. Hence, by satisfying the multi-lighting constraint, ShadeGAN is encouraged to infer more accurate 3D shapes as shown in Fig.1 (b).
The above shading process requires the calculation of the normal direction via back-propagation through the generator, and such calculation needs to be repeated dozens of times for a pixel in volume rendering [4, 5], introducing additional computational overhead. Existing efficient volume rendering techniques [8, 9, 10, 11, 12] mainly target static scenes, and could not be directly applied to generative models due to their dynamic nature. Therefore, to improve the rendering speed of ShadeGAN, we formulate an efficient surface tracking network to estimate the rendered object surface conditioned on the latent code. This enables us to save rendering computations by just querying points near the predicted surface, leading to 24% and 48% reduction of training and inference time without affecting the quality of rendered images.
Comprehensive experiments are conducted across multiple datasets to verify the effectiveness of ShadeGAN. The results show that our approach is capable of synthesizing photorealistic images while capturing more accurate underlying 3D shapes than previous generative methods. The learned
distribution of 3D shapes enables various downstream tasks like 3D shape reconstruction, where our approach significantly outperforms other baselines on the BFM dataset [13]. Besides, modeling the shading process enables explicit control over lighting conditions, achieving image relighting effect. Our contributions can be summarized as follows: 1) We address the shape-color ambiguity in existing 3D-aware image synthesis methods with a shading-guided generative model that satisfies the proposed multi-lighting constraint. In this way, ShadeGAN is able to learn more accurate 3D shapes for better image synthesis. 2) We devise an efficient rendering technique via surface tracking, which significantly saves training and inference time for volume rendering-based generative models. 3) We show that ShadeGAN learns to disentangle shading and color that well approximates the albedo, achieving natural relighting effects in image synthesis.
2 Related Work
Neural volume rendering. Starting from the seminal work of neural radiance fields (NeRF) [6], neural volume rendering has gained much popularity in representing 3D scenes and synthesizing novel views. By integrating coordinate-based neural networks with volume rendering, NeRF performs high-fidelity view synthesis in a 3D consistent manner. Several attempts have been proposed to extend or improve NeRF. For instance, [14, 15, 16] further model illumination, and learn to disentangle reflectance with shading given well-aligned multi-view and multi-lighting images. Besides, many studies accelerate the rendering of static scenes from the perspective of spatial sparsity [8, 9], architectural design [10, 11], or efficient rendering [17, 12]. However, it is not trivial to apply these illumination and acceleration techniques to volume rendering-based generative models [5, 4], as they typically learn from unposed and unpaired images, and represent dynamic scenes that change with respect to the input latent codes.
In this work, we take the first attempt to model illumination in volume rendering-based generative models, which serves as a regularization for accurate 3D shape learning. We further devise an efficient rendering technique for our approach, which shares similar insight with [12], but does not rely on ground truth depth for training and it is not limited to a small viewpoint range.
Generative 3D-aware image synthesis. Generative adversarial networks (GANs) [18] are capable of generating photorealistic images of high-resolution, but lack explicit control over camera viewpoint. In order to enable them to synthesis images in a 3D-aware manner, many recent approaches investigate how 3D representations could be incorporated into GANs [19, 20, 21, 22, 23, 24, 25, 26, 27, 5, 4, 28, 29, 30]. While some works directly learn from 3D data [19, 20, 21, 22, 30], in this work we focus on approaches that only have access to unconstrained 2D images, which is a more practical setting. Several attempts [23, 24, 25] adopt 3D voxel features with learned neural rendering. These methods produce realistic 3D-aware synthesis, but the 3D voxels are not interpretable, i.e., they cannot be transferred to 3D shapes. By leveraging differentiable renderer, [26] and [27] learn interpretable 3D voxels and meshes respectively, but [26] suffers from limited visual quality due to low voxel resolution while the learned 3D shapes of [27] exhibit noticeable distortions. The success of NeRF has motivated researchers to use radiance fields as the intermediate 3D representation in GANs [5, 4, 28]. While achieving impressive 3D-aware image synthesis with multi-view consistency, the extracted 3D shapes of these approaches are often imprecise and noisy. Our main goal in this work is to address the inaccurate shape by explicitly modeling illumination in the rendering process. This innovation helps achieve better 3D-aware image synthesis with broader applications.
Unsupervised 3D shape learning from 2D images. Our work is also related to unsupervised approaches that learn 3D object shapes from unconstrained, monocular view 2D images. While several approaches use external 3D shape templates or 2D key-points as weak supervisions to facilitate learning [31, 32, 33, 34, 35, 36, 37], in this work we consider the harder setting where only 2D images are available. To tackle this problem, most approaches adopt an “analysis-by-synthesis” paradigm [38, 39, 40]. Specifically, they design photo-geometric autoencoders to infer the 3D shape and viewpoint of each image with a reconstruction loss. While succeed in learning the 3D shapes for some object categories, these approaches typically rely on certain regularization to prevent trivial solutions, like the commonly used symmetry assumption on object shapes [39, 40, 31, 32]. Such assumption tends to produce symmetric results that may overlook the asymmetric aspects of objects. Recently, GAN2Shape [41] shows that it is possible to recover 3D shapes for images generated by 2D GANs. This method, however, requires inefficient instance-specific training, and recovers depth maps instead of full 3D representations.
The proposed 3D-aware generative model also serves as a powerful approach for unsupervised 3D shape learning. Compared with aforementioned autoencoder-based methods, our GAN-based approach avoids the need to infer the viewpoint of each image, and does not rely on strong regularizations. In experiments, we demonstrate superior performance over recent state-of-the-art approaches Unsup3d [39] and GAN2Shape [41].
3 Methodology
We consider the problem of 3D-aware image synthesis by learning from a collection of unconstrained and unlabeled 2D images. We argue that modeling shading, i.e., the interaction of illumination and shape, in a generative implicit model enables unsupervised learning of more accurate 3D object shapes. In the following, we first provide some preliminaries on neural radiance fields (NeRF) [6], and then introduce our shading-guided generative implicit model.
3.1 Preliminaries on Neural Radiance Fields
As a deep implicit model, NeRF [6] uses an MLP network to represent a 3D scene as a radiance field. The MLP fθ : (x,d)→ (σ, c) takes a 3D coordinate x ∈ R3 and a viewing direction d ∈ S2 as inputs, and outputs a volume density σ ∈ R+ and a color c ∈ R3. To render an image under a given camera pose, each pixel color C of the image is obtained via volume rendering along its corresponding camera ray r(t) = o+ td with near and far bounds tn and tf as below:
C(r) = ∫ tf tn T (t)σ(r(t))c(r(t),d)dt, where T (t) = exp(− ∫ t tn σ(r(s))ds). (1)
In practice, this volume rendering is implemented with a discretized form using stratified and hierarchical sampling. As this rendering process is differentiable, NeRF could be directly optimized via posed images of a static scene. After training, NeRF allows the rendering of images under new camera poses, achieving high-quality novel view synthesis.
3.2 Shading-Guided Generative Implicit Model
In this work, we are interested in developing a generative implicit model that explicitly models the shading process for 3D-aware image synthesis. To achieve this, we make two extensions to the MLP network in NeRF. First, similar to most deep generative models, it is further conditioned on a latent code z sampled from a prior distribution N (0, I)d. Second, instead of directly outputting the color c, it outputs a relightable pre-cosine color term a ∈ R3, which is conceptually similar to albedo in the way that it could be shaded under a given lighting condition. While albedo is viewpoint-independent, in this work we do not strictly enforce such independence for a in order to account for dataset bias. Thus, our generator gθ : (x,d, z)→ (σ,a) takes a coordinate x, a viewing direction d, and a latent
code z as inputs, and outputs a volume density σ and a pre-cosine color a. Note that here σ is independent of d, while the dependence of a on d is optional. To obtain the color C of a camera ray r(t) = o+ td with near and far bounds tn and tf , we calculate the final pre-cosine colorA via:
A(r, z) = ∫ tf tn T (t, z)σ(r(t), z)a(r(t),d, z)dt, where T (t, z) = exp(− ∫ t tn σ(r(s), z)ds).
(2)
We also calculate the normal direction n with: n(r, z) = n̂(r, z)/‖n̂(r, z)‖2, where n̂(r, z) = − ∫ tf tn T (t, z)σ(r(t), z)∇r(t)σ(r(t), z)dt,
(3)
where ∇r(t)σ(r(t), z) is the derivative of volume density σ with respect to its input coordinate, which naturally captures the local normal direction, and could be calculated via back-propagation. Then the final color C is obtained via Lambertian shading as:
C(r, z) = A(r, z)(ka + kdmax(0, l · n(r, z))), (4)
where l ∈ S2 is the lighting direction, ka and kd are the ambient and diffuse coefficients. We provide more discussions on this shading formulation at the end of this subsection.
Camera and Lighting Sampling. Eq.(2 - 4) describe the process of rendering a pixel color given a camera ray r(t) and a lighting condition µ = (l, ka, kd). Generating a full image Ig ∈ R3×H×W requires one to sample a camera pose ξ and a lighting condition µ in addition to the latent code z, i.e., Ig = Gθ(z, ξ,µ). In our setting, the camera pose ξ could be described by pitch and yaw angles, and is sampled from a prior Gaussian or uniform distribution pξ, as also done in previous works [4, 5]. Sampling the camera pose randomly during training would motivate the learned 3D scene to look realistic from different viewpoints. While this multi-view constraint is beneficial for learning a valid 3D representation, it is often insufficient to infer the accurate 3D object shape. Thus, in our approach, we further introduce a multi-lighting constraint by also randomly sampling a lighting condition µ from a prior distribution pµ. In practice, pµ could be estimated from the dataset using existing approaches like [39]. We also show in our experiments that a simple and manually tuned prior distribution could also produce reasonable results. As the shading process is sensitive to the normal direction due to the diffuse term kdmax(0, l · n(r, z)) in Eq.(4), this multi-lighting constraint would regularize the model to learn more accurate 3D shapes that produce natural shading, as shown in Fig.1 (b).
Training. Our generative model follows the paradigm of GANs [18], where the generator is trained together with a discriminator D with parameters φ in an adversarial manner. During training, the generator generates fake images Ig = Gθ(z, ξ,µ) by sampling the latent code z, camera pose ξ and lighting condition µ from their corresponding prior distributions pz , pξ, and pµ. Let I denotes real images sampled from the data distribution pI . We train our model with a non-saturating GAN loss with R1 regularization [42]:
L(θ, φ) = Ez∼pz,ξ∼pξ,µ∼pµ [ f ( Dφ(Gθ(z, ξ,µ)) )] + EI∼pD [ f(−Dφ(I)) + λ‖∇Dφ(I)‖2 ] ,
(5) where f(u) = − log(1 + exp(−u)), and λ controls the strength of regularization. More implementation details are provided in the supplementary material.
Discussion. Note that in Eq.(2 - 4), we perform shading after A and n are obtained via volume rendering. An alternative way is to perform shading at each local spatial point as c(r(t),d, z) = a(r(t),d, z)(ka + kdmax(0, l · n(r(t), z))), where n(r(t), z) = −∇r(t)σ(r(t), z)/‖∇r(t)σ(r(t), z)‖2 is the local normal. Then we could perform volume rendering using c(r(t), z) to get the final pixel color. In practice, we observe that this formulation obtains suboptimal results. An intuitive reason is that in this formulation, the normal direction is normalized at each local point, neglecting the magnitude of∇r(t)σ(r(t), z), which tends to be larger near the object surfaces. We provide more analysis in experiments and the supplementary material.
The Lambertian shading we used is an approximation to the real illumination scenario. While serving as a good regularization for improving the learned 3D shape, it could possibly introduce an additional gap between the distribution of generated images and that of real images. To compensate
for such risk, we could optionally let the predicted a be conditioned on the lighting condition, i.e., a = a(r(t),d,µ, z). Thus, in cases where the lighting condition deviates from the real data distribution, the generator could learn to adjust the value of a and reduce the aforementioned gap. We show the benefit of this design in the experiments.
3.3 Efficient Volume Rendering via Surface Tracking
Similar to NeRF, we implement volume rendering with a discretized integral, which typically requires to sample dozens of points along a camera ray, as shown in Fig. 3 (a). In our approach, we also need to perform back-propagation across the generator in Eq.(3) to get the normal direction for each point, which introduces additional computational cost. To achieve more efficient volume rendering, a natural idea is to exploit spatial sparsity. Usually, the weight T (t, z)σ(r(t), z) in volume rendering would concentrate on the object surface position during training. Thus, if we know the rough surface position before rendering, we could sample points near the surface to save computation. While for a static scene it is possible to store such spatial sparsity in a sparse voxel grid [8, 9], this technique cannot be directly applied to our generative model, as the 3D scene keeps changing with respect to the input latent code.
To achieve more efficient volume rendering in our generative implicit model, we further propose a surface tracking network S that learns to mimic the surface position conditioned on the latent code. In particular, the volume rendering naturally allows the depth estimation of the object surface via:
ts(r, z) = ∫ tf tn T (t, z)σ(r(t), z)t dt, (6)
where T (t, z) is defined the same way as in Eq.(2). Thus, given a camera pose ξ and a latent code z, we could render the full depth map ts(z, ξ). As shown in Fig. 3 (b), we mimic ts(z, ξ) with the surface tracking network Sψ , which is a light-weighted convolutional neural network that takes z, ξ as inputs and outputs a depth map. The depth mimic loss is:
L(ψ) = Ez∼pz,ξ∼pξ [‖Sψ(z, ξ)− ts(z, ξ)‖1 + Prec(Sψ(z, ξ), ts(d(z, ξ))] , (7) where Prec is the perceptual loss that motivates Sψ to better capture edges of the surface.
During training, Sψ is optimized jointly with the generator and the discriminator. Thus, each time after we sample a latent code z and a camera pose ξ, we can get an initial guess of the depth map as Sψ(z, ξ). Then for a pixel with predicted depth s, we could perform volume rendering in Eq.(2,3,6) with near bound tn = s−∆i/2 and far bound tf = s+ ∆i/2, where ∆i is the interval for volume rendering that decreases as the training iteration i grows. Specifically, we start with a large interval ∆max and decrease to ∆min with an exponential schedule. As ∆i decreases, the number of points used for rendering m also decreases accordingly. Note that the computational cost of our efficient surface tracking network is marginal compared to the generator, as the former only needs a single forward pass to render an image while the latter will be queried for H ×W ×m times. Thus, the reduction of m would significantly accelerate the training and inference speed for ShadeGAN.
4 Experiments
In this section, we evaluate the proposed ShadeGAN on 3D-aware image synthesis. We also show that ShadeGAN learns much more accurate 3D shapes than previous methods, and in the meantime allows explicit control over lighting conditions. The datasets used include CelebA [43], BFM [13], and Cats [44], all of which contain only unconstrained 2D RGB images.
Implementation. In terms of model architectures, we adopt a SIREN-based MLP [45] as the generator and a convolutional neural network as the discriminator following [4]. For the prior distribution of lighting conditions, we use Unsup3d [39] to estimate the lighting conditions of real data and subsequently fit a multivariate Gaussian distribution of µ = (l, ka, kd) as the prior. A hand-crafted prior distribution is also included in the ablation study. In quantitative study, we let the pre-cosine color a be conditioned on the lighting condition µ as well as the viewing direction d unless otherwise stated. In qualitative study, we observe that removing view conditioning achieves slightly better 3D shapes for CelebA and BFM datasets. Thus, we show results without view conditioning for these two datasets in the main paper, and put those with view conditioning in Fig. 4 of the supplementary material. Other implementation details are also provided in the supplementary.
Figure 5: Generated face images and their 3D meshes.
[Figure 5 panels: Image, Mesh, Normal, Albedo]
Figure 6: Qualitative ablation over (a) ShadeGAN, (b) local normal, and (c) manual prior. See the main text for discussions.
Comparison with baselines. We compare ShadeGAN with two state-of-the-art generative implicit models, namely GRAF [5] and pi-GAN [4]. Specifically, Fig. 4 includes both synthesized images and their corresponding 3D meshes, which are obtained by performing marching cubes on the volume density σ. While GRAF and pi-GAN can synthesize images with controllable poses, their learned 3D shapes are inaccurate and noisy. In contrast, our approach not only synthesizes photorealistic 3D-consistent images, but also learns much more accurate 3D shapes and surface normals, indicating the effectiveness of the proposed multi-lighting constraint as a regularization. More synthesized images and their corresponding shapes are included in Fig. 5. Besides more accurate 3D shapes, ShadeGAN also learns the albedo and diffuse shading components inherently. As shown in Fig. 4, although not perfect, ShadeGAN manages to disentangle shading and albedo with satisfying quality, as such disentanglement is a natural solution to the multi-lighting constraint.
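The mesh extraction step can be reproduced with an off-the-shelf marching-cubes routine; the sketch below shows one plausible way to do so, where the generator's call signature, the grid bound, and the density iso-level are assumptions rather than the paper's exact settings.

```python
import torch
from skimage import measure

@torch.no_grad()
def extract_mesh(density_fn, z, resolution=128, bound=0.3, level=10.0):
    """Run marching cubes on the generator's density field sigma.

    density_fn: assumed to map (points of shape (N, 3), z) -> densities of shape (N,).
    """
    axis = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    sigma = density_fn(grid.reshape(-1, 3), z).reshape(resolution, resolution, resolution)
    verts, faces, normals, _ = measure.marching_cubes(sigma.cpu().numpy(), level=level)
    return verts, faces, normals
```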
The quality of learned 3D shapes is quantitatively evaluated on the BFM dataset. Specifically, we use each of the generative implicit models to generate 50k images and their corresponding depth maps. Image-depth pairs from each model are used as training data for an additional convolutional neural network (CNN) that learns to predict the depth map of an input image. We then test each trained CNN on the BFM test set and compare its predictions to the ground-truth depth maps as a measurement of the quality of learned 3D shapes. Following [39], we report the scale-invariant depth error (SIDE) and mean angle deviation (MAD) metrics. The results are included in Tab. 1, where ShadeGAN significantly outperforms GRAF and pi-GAN. Besides, ShadeGAN also outperforms other advanced unsupervised 3D shape learning approaches, including Unsup3d [39] and GAN2Shape [41], demonstrating its large potential in unsupervised 3D shape learning. In terms of image quality, Tab. 1 includes the FID [46] scores of images synthesized by different models, where the FID score of ShadeGAN is slightly inferior to that of pi-GAN on BFM and CelebA. Intuitively, this is caused by the gap between our approximated shading (i.e., Lambertian shading) and the real illumination, which could potentially be closed by adopting more realistic shading models and improving the lighting prior.
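For reference, the two metrics can be computed as below, following their definitions in [39]; the sketch assumes strictly positive depth maps and unit-length normal maps.

```python
import numpy as np

def side(pred_depth, gt_depth, eps=1e-8):
    """Scale-invariant depth error: the std of the per-pixel log-depth difference."""
    delta = np.log(pred_depth + eps) - np.log(gt_depth + eps)
    return np.sqrt(np.mean(delta ** 2) - np.mean(delta) ** 2)

def mad(pred_normals, gt_normals, eps=1e-7):
    """Mean angle deviation (degrees) between (H, W, 3) unit-normal maps."""
    dot = np.clip((pred_normals * gt_normals).sum(axis=-1), -1.0 + eps, 1.0 - eps)
    return np.degrees(np.arccos(dot)).mean()
```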
In Tab. 2, we also show the quantitative results of different models on CelebA and Cats. To evaluate the learned shape, we use each generative implicit model to generate 2k front-view images and their corresponding depth maps. While these datasets do not have ground-truth depth, we report the MAD obtained by testing pretrained Unsup3d models [39] on these generated image-depth pairs as a reference.
Table 3: Ablation study on the BFM dataset.
No.  Method         FID ↓   SIDE ↓   MAD ↓
(1)  ShadeGAN       17.7    0.607    14.52
(2)  local shading  30.1    0.754    18.18
(3)  w/o light      19.2    0.618    14.53
(4)  w/o view       18.6    0.622    14.88
(5)  manual prior   20.2    0.643    15.38
(6)  +efficient     18.2    0.673    14.72
Table 4: Training and inference time cost on CelebA. The efficient volume rendering significantly improves training and inference speed.
Method       Train (h)   Inference (s)   FID
ShadeGAN     92.3        0.343           16.4
+efficient   70.2        0.179           16.2
pi-GAN       56.8        0.204           15.7
+efficient   46.9        0.114           15.9
[Figure 8 rows: Albedo, Shading, Image; panels: (a) w/o light condition, (b) with light condition, (c) with specular]
Figure 8: Illumination-aware image synthesis. ShadeGAN allows explicit control over the lighting. The pre-cosine color (albedo) is independent of lighting in (a) and is conditioned on lighting in (b). We show results of adding a specular term in (c).
As we can observe, the results on CelebA and Cats are consistent with those on the BFM dataset.
Ablation studies. We further study the effects of several design choices in ShadeGAN. First, we perform local point-specific shading as mentioned in the discussion of Sec. 3.2. As Tab. 3 No.(2) and Fig. 6 (b) show, the results of such a local shading strategy are notably worse than those of the original one, which indicates that taking the magnitude of ∇xσ into account is beneficial. Besides, the results of Tab. 3 No.(3) and No.(4) imply that removing a's dependence on the lighting µ or the viewpoint d leads to a slight performance drop. The results of using a simple, manually tuned lighting prior are provided in Tab. 3 No.(5) and Fig. 6 (c); they are only moderately worse than the results of using a data-driven prior, and the generated shapes are still significantly better than the ones produced by existing approaches.
To verify the effectiveness of the proposed efficient volume rendering technique, we include its effects on image quality and training/inference time in Tab. 3 No.(6) and Tab. 4. It is observed that the efficient volume rendering has a marginal effect on performance, while reducing ShadeGAN's training and inference time by 24% and 48%, respectively. Moreover, in Fig. 7 we visualize the depth maps predicted by our surface tracking network and those obtained via volume rendering. It is shown that under varying identities and camera poses, the surface tracking network consistently predicts depth values that are quite close to the real surface positions, so that we can sample points near the predicted surface for rendering without sacrificing image quality.
Illumination-aware image synthesis. As ShadeGAN models the shading process, it by design allows explicit control over the lighting condition. We provide such illumination-aware image synthesis results in Fig. 8, where ShadeGAN generates promising images under different lighting directions. We also show that in cases where the predicted a is conditioned on the lighting condition µ, a slightly changes w.r.t. the lighting condition, e.g., it becomes brighter in areas with overly dim shading in order to make the final image more natural. Besides, we could optionally add a specular term k_s max(0, h · n)^p in Eq.(4) (i.e., Blinn-Phong shading [47], where h is the bisector of the angle between the viewpoint and the lighting direction) to create specular highlight effects, as shown in Fig. 8 (c).
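A minimal sketch of this shading step, combining the Lambertian term of Eq.(4) with the optional Blinn-Phong specular term; the argument layout and the default exponent p are illustrative choices on our part.

```python
import torch
import torch.nn.functional as F

def shade(pre_cosine, normal, light_dir, ka, kd, ks=0.0, p=8.0, view_dir=None):
    """Shade per-pixel pre-cosine colors; all direction tensors are (..., 3) unit vectors."""
    diffuse = torch.clamp((normal * light_dir).sum(dim=-1, keepdim=True), min=0.0)
    color = pre_cosine * (ka + kd * diffuse)              # Lambertian term, Eq.(4)
    if ks > 0.0 and view_dir is not None:                 # optional Blinn-Phong specular
        h = F.normalize(view_dir + light_dir, dim=-1)     # half vector
        spec = torch.clamp((normal * h).sum(dim=-1, keepdim=True), min=0.0) ** p
        color = color + ks * spec
    return color
```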
GAN inversion. ShadeGAN can also be used to reconstruct a given target image by performing GAN inversion. As shown in Fig. 9, such inversion allows us to recover several factors of the image, including the 3D shape, surface normal, approximated albedo, and shading. Besides, we can further perform view synthesis and relighting by changing the viewpoint and lighting condition. The implementation of GAN inversion is provided in the supplementary material.
Discussions. As the Lambertian shading we use is an approximation to real illumination, the albedo learned by ShadeGAN is not perfectly disentangled. Our approach also does not consider the spatially-varying material properties of objects. In the future, we intend to incorporate more sophisticated shading models to learn better-disentangled generative reflectance fields.
5 Conclusion
In this work, we present ShadeGAN, a new generative implicit model for shape-accurate 3D-aware image synthesis. We have shown that the multi-lighting constraint, achieved in ShadeGAN by explicit illumination modeling, significantly helps in learning accurate 3D shapes from 2D images. ShadeGAN also allows us to control the lighting condition during image synthesis, achieving natural image relighting effects. To reduce the computational cost, we have further devised a lightweight surface tracking network, which enables an efficient volume rendering technique for generative implicit models and significantly accelerates both training and inference. A generative model with a shape-accurate 3D representation could broaden its applications in vision and graphics, and our work has taken a solid step towards this goal.
Acknowledgment. We would like to thank Eric R. Chan for sharing the codebase of pi-GAN. This study is supported under the ERC Consolidator Grant 4DRepLy (770784). This study is also supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). | 1. What is the main contribution of the paper regarding generative radiance field models?
2. What are the strengths of the proposed approach, particularly in terms of improving density distribution?
3. What are the weaknesses of the paper, especially concerning the "albedo" fields and their dependence on direction and lighting?
4. How does the reviewer suggest addressing the issue of view-dependence in the proposed formulation?
5. Can the authors provide more information or experiments to alleviate the concern about the potential for unconstrained optimization in the model? | Summary Of The Paper
Review | Summary Of The Paper
This paper aims to improve the density distribution of generative radiance field models by generating relightable reflectance fields and using different lightings to shade them during training. The core idea is that artifacts will arise when the generated density distribution ("shape") is unnatural and is shaded by different lightings, which should be detected and resolved by a discriminator. Experiments verify that this shading regularization can yield natural-looking shapes induced by the generated radiance fields.
Review
---------------UPDATED AFTER REBUTTAL
I feel the responses have addressed my concerns. I recommend acceptance. My suggestion is to add the explanation for view-dependence (to address dataset biases, in addition to non-Lambertian shading) to the main paper.
---------------ORIGINAL REVIEW
Strengths:
Simple and interesting idea to improve generative radiance fields.
Novel in the context of generative radiance fields.
Clearly written.
Straightforward implementation with good qualitative results that achieve the goal.
Weakness:
My major concern is the unconstrained nature of the "albedo" field. It depends on direction and lighting, which makes it an unconstrained, and possibly unstable, regularizer of the density distribution. The "color-shape" ambiguity is still not addressed. In the proposed formulation, the ambiguity can still be baked into the dependency on direction and lighting, i.e., instead of optimizing a good shape, the model might choose to optimize a sophisticated but working A(r,z,d,mu) function. For example, if a generated face shape has no nose, to have a plausible appearance that is multi-view consistent, the model can either (1) generate a nose in the shape and a constant albedo in A(z), or (2) keep the "no nose face" and hack the A(r,z,d,mu) function by giving high albedo to the left of the nose when the light source is on the left, and low albedo when the light source is on the right. This is not seen in the shown images, but it might happen to some extent, or might happen for some random seeds. Showing results averaged over multiple runs could alleviate this concern and I might increase my rating.
I understand that the flexible A() function is also used to compensate for the simplified lighting and reflectance model. But as for the dependence on view angle, why not use a simple specular term in the BRDF, just like the test in Figure 8c? |
NIPS | [paper content identical to the entry above] | 1. What is the focus and contribution of the paper regarding generative radiance fields?
2. What are the strengths of the proposed approach, particularly in terms of shape reconstruction and image synthesis quality?
3. Do you have any concerns or suggestions regarding the paper's content, experiments, or minor design improvements?
4. How does the reviewer assess the clarity, organization, and citations of the paper?
5. Are there any potential ways to enhance the rendering quality or address limitations mentioned by the reviewer? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new method to train generative radiance fields that achieves more accurate 3D shape reconstruction. The key idea is that instead of directly synthesizing images from different viewpoints through volume ray tracing, which may suffer from the color-shape ambiguity, it synthesizes albedo and normals and then renders the image under different lighting conditions. This new method requires the synthesized albedo and normals to be able to render realistic images under various lighting conditions, providing an extra regularization for shape reconstruction. To further improve the rendering speed, it uses an extra 2D CNN to predict a depth map from the latent code and camera pose so that it only needs to sample near the surface in volume ray tracing. Experiments show that the proposed method greatly improves shape reconstruction with similar image synthesis quality compared to the state of the art.
Review
I will list the strengths of the paper and some minor suggestions that I believe may help improve it. My current evaluation of the paper is very positive. The authors may prioritize answering other reviewers' questions in the rebuttal.
Strengths:
The major contribution of the paper is clear. It proposes the multi-lighting constraint to help regularize 3D shape reconstruction when training generative radiance fields, which is intuitively reasonable and practically effective. In my opinion, this is an interesting idea and may inspire future research in the related fields.
The paper is clearly written, well-organized, and easy to follow. The citations are complete and appropriate.
The experiments are comprehensive and convincing. They clearly support the major contribution of the paper, namely that depth prediction quality can be improved by the multi-lighting constraint. Moreover, ablation studies also demonstrate that all minor design improvements, such as depth prediction to reduce the number of samples and taking lighting parameters as inputs, can further improve the image synthesis quality.
The authors clearly summarize the limitations of the current method, which may help researchers build new frameworks based on the current results.
Minor suggestions:
When first reading the paper, I was confused about how GAN inversion is done in practice. I later found the details in the supplementary, but the authors could consider adding a pointer to the supplementary in the main paper.
When discussing the intuition of adding the multi-lighting constraint, the authors could consider connecting it to photometric stereo, which can achieve accurate normal reconstruction from images taken under different lighting conditions. This may make the argument more convincing and easier to understand.
One way to improve the rendering quality is to use captured real HDR environment maps instead of sampled lighting conditions. This may minimize the domain gap and render more realistic results. It may be even more useful if the authors hope to reconstruct specular highlights in the future.
One minor thing to check is whether the lighting and albedo are both in sRGB space or both in linear RGB space. If they are not in the same color space, the authors may consider using gamma correction to align them. |
NIPS | Title
A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Abstract
The advancement of generative radiance fields has pushed the boundary of 3Daware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shapecolor ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code will be released at https://github.com/XingangPan/ShadeGAN.
1 Introduction
Advanced deep generative models, e.g., StyleGAN [1, 2] and BigGAN [3], have achieved great successes in natural image synthesis. While producing impressive results, these 2D representationbased models cannot synthesize novel views of an instance in a 3D-consistent manner. They also fall short of representing an explicit 3D object shape. To overcome such limitations, researchers have proposed new deep generative models that represent 3D scenes as neural radiance fields [4, 5]. Such 3D-aware generative models allow explicit control of viewpoint while preserving 3D consistency during image synthesis. Perhaps a more fascinating merit is that they have shown the great potential of learning 3D shapes in an unsupervised manner from just a collection of unconstrained 2D images. If we could train a 3D-aware generative model that learns accurate 3D object shapes, it would broaden various downstream applications such as 3D shape reconstruction and image relighting.
Existing attempts for 3D-aware image synthesis [4, 5] tend to learn coarse 3D shapes that are inaccurate and noisy, as shown in Fig.1 (a). We found that such inaccuracy arises from an inevitable ambiguity inherent in the training strategy adopted by these methods. In particular, a form of
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
regularization, which we refer to as "multi-view constraint", is used to enforce the 3D representation to look realistic from different viewpoints. The constraint is commonly implemented by first projecting the generator’s outputs (e.g., radiance fields [6]) to randomly sampled viewpoints, and then feeding them to a discriminator as fake images for training. While such a constraint enables these models to synthesize images in a 3D-aware manner, it suffers from the shape-color ambiguity, i.e., small variations of shape could lead to similar RGB images that look equally plausible to the discriminator, as the color of many objects is locally smooth. Consequently, inaccurate shapes are concealed under this constraint.
In this work, we propose a novel shading-guided generative implicit model (ShadeGAN) to address the aforementioned ambiguity. In particular, ShadeGAN learns more accurate 3D shapes by explicitly modeling shading, i.e., the interaction of illumination and shape. We believe that an accurate 3D shape should look realistic not only from different viewpoints, but also under different lighting conditions, i.e., satisfying the "multi-lighting constraint". This idea shares similar intuition with photometric stereo [7], which shows that accurate surface normal could be recovered from images taken under different lighting conditions. Note that the multi-lighting constraint is feasible as real-world images used for training are often taken under various lighting conditions. To fulfill this constraint, ShadeGAN takes a relightable color field as the intermediate representation, which approximates the albedo but does not necessarily satisfy viewpoint independence. The color field is shaded under a randomly sampled lighting condition during rendering. Since image appearance via such a shading process is strongly dependent on surface normals, inaccurate 3D shape representations will be much more clearly revealed than in earlier shading-agnostic generative models. Hence, by satisfying the multi-lighting constraint, ShadeGAN is encouraged to infer more accurate 3D shapes as shown in Fig.1 (b).
The above shading process requires the calculation of the normal direction via back-propagation through the generator, and such calculation needs to be repeated dozens of times for a pixel in volume rendering [4, 5], introducing additional computational overhead. Existing efficient volume rendering techniques [8, 9, 10, 11, 12] mainly target static scenes, and could not be directly applied to generative models due to their dynamic nature. Therefore, to improve the rendering speed of ShadeGAN, we formulate an efficient surface tracking network to estimate the rendered object surface conditioned on the latent code. This enables us to save rendering computations by just querying points near the predicted surface, leading to 24% and 48% reduction of training and inference time without affecting the quality of rendered images.
Comprehensive experiments are conducted across multiple datasets to verify the effectiveness of ShadeGAN. The results show that our approach is capable of synthesizing photorealistic images while capturing more accurate underlying 3D shapes than previous generative methods. The learned
distribution of 3D shapes enables various downstream tasks like 3D shape reconstruction, where our approach significantly outperforms other baselines on the BFM dataset [13]. Besides, modeling the shading process enables explicit control over lighting conditions, achieving image relighting effect. Our contributions can be summarized as follows: 1) We address the shape-color ambiguity in existing 3D-aware image synthesis methods with a shading-guided generative model that satisfies the proposed multi-lighting constraint. In this way, ShadeGAN is able to learn more accurate 3D shapes for better image synthesis. 2) We devise an efficient rendering technique via surface tracking, which significantly saves training and inference time for volume rendering-based generative models. 3) We show that ShadeGAN learns to disentangle shading and color that well approximates the albedo, achieving natural relighting effects in image synthesis.
2 Related Work
Neural volume rendering. Starting from the seminal work of neural radiance fields (NeRF) [6], neural volume rendering has gained much popularity in representing 3D scenes and synthesizing novel views. By integrating coordinate-based neural networks with volume rendering, NeRF performs high-fidelity view synthesis in a 3D consistent manner. Several attempts have been proposed to extend or improve NeRF. For instance, [14, 15, 16] further model illumination, and learn to disentangle reflectance with shading given well-aligned multi-view and multi-lighting images. Besides, many studies accelerate the rendering of static scenes from the perspective of spatial sparsity [8, 9], architectural design [10, 11], or efficient rendering [17, 12]. However, it is not trivial to apply these illumination and acceleration techniques to volume rendering-based generative models [5, 4], as they typically learn from unposed and unpaired images, and represent dynamic scenes that change with respect to the input latent codes.
In this work, we take the first attempt to model illumination in volume rendering-based generative models, which serves as a regularization for accurate 3D shape learning. We further devise an efficient rendering technique for our approach, which shares similar insight with [12], but does not rely on ground truth depth for training and it is not limited to a small viewpoint range.
Generative 3D-aware image synthesis. Generative adversarial networks (GANs) [18] are capable of generating photorealistic images of high-resolution, but lack explicit control over camera viewpoint. In order to enable them to synthesis images in a 3D-aware manner, many recent approaches investigate how 3D representations could be incorporated into GANs [19, 20, 21, 22, 23, 24, 25, 26, 27, 5, 4, 28, 29, 30]. While some works directly learn from 3D data [19, 20, 21, 22, 30], in this work we focus on approaches that only have access to unconstrained 2D images, which is a more practical setting. Several attempts [23, 24, 25] adopt 3D voxel features with learned neural rendering. These methods produce realistic 3D-aware synthesis, but the 3D voxels are not interpretable, i.e., they cannot be transferred to 3D shapes. By leveraging differentiable renderer, [26] and [27] learn interpretable 3D voxels and meshes respectively, but [26] suffers from limited visual quality due to low voxel resolution while the learned 3D shapes of [27] exhibit noticeable distortions. The success of NeRF has motivated researchers to use radiance fields as the intermediate 3D representation in GANs [5, 4, 28]. While achieving impressive 3D-aware image synthesis with multi-view consistency, the extracted 3D shapes of these approaches are often imprecise and noisy. Our main goal in this work is to address the inaccurate shape by explicitly modeling illumination in the rendering process. This innovation helps achieve better 3D-aware image synthesis with broader applications.
Unsupervised 3D shape learning from 2D images. Our work is also related to unsupervised approaches that learn 3D object shapes from unconstrained, monocular view 2D images. While several approaches use external 3D shape templates or 2D key-points as weak supervisions to facilitate learning [31, 32, 33, 34, 35, 36, 37], in this work we consider the harder setting where only 2D images are available. To tackle this problem, most approaches adopt an “analysis-by-synthesis” paradigm [38, 39, 40]. Specifically, they design photo-geometric autoencoders to infer the 3D shape and viewpoint of each image with a reconstruction loss. While succeed in learning the 3D shapes for some object categories, these approaches typically rely on certain regularization to prevent trivial solutions, like the commonly used symmetry assumption on object shapes [39, 40, 31, 32]. Such assumption tends to produce symmetric results that may overlook the asymmetric aspects of objects. Recently, GAN2Shape [41] shows that it is possible to recover 3D shapes for images generated by 2D GANs. This method, however, requires inefficient instance-specific training, and recovers depth maps instead of full 3D representations.
The proposed 3D-aware generative model also serves as a powerful approach for unsupervised 3D shape learning. Compared with the aforementioned autoencoder-based methods, our GAN-based approach avoids the need to infer the viewpoint of each image, and does not rely on strong regularizations. In experiments, we demonstrate superior performance over the recent state-of-the-art approaches Unsup3d [39] and GAN2Shape [41].
3 Methodology
We consider the problem of 3D-aware image synthesis by learning from a collection of unconstrained and unlabeled 2D images. We argue that modeling shading, i.e., the interaction of illumination and shape, in a generative implicit model enables unsupervised learning of more accurate 3D object shapes. In the following, we first provide some preliminaries on neural radiance fields (NeRF) [6], and then introduce our shading-guided generative implicit model.
3.1 Preliminaries on Neural Radiance Fields
As a deep implicit model, NeRF [6] uses an MLP network to represent a 3D scene as a radiance field. The MLP f_θ : (x, d) → (σ, c) takes a 3D coordinate x ∈ R^3 and a viewing direction d ∈ S^2 as inputs, and outputs a volume density σ ∈ R_+ and a color c ∈ R^3. To render an image under a given camera pose, each pixel color C of the image is obtained via volume rendering along its corresponding camera ray r(t) = o + td, with near and far bounds t_n and t_f, as below:
C(r) = ∫_{t_n}^{t_f} T(t) σ(r(t)) c(r(t), d) dt,   where   T(t) = exp(−∫_{t_n}^{t} σ(r(s)) ds).   (1)
In practice, this volume rendering is implemented in a discretized form using stratified and hierarchical sampling. As this rendering process is differentiable, NeRF can be directly optimized from posed images of a static scene. After training, NeRF allows rendering images under new camera poses, achieving high-quality novel view synthesis.
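To make the discretized form concrete, the sketch below implements the standard NeRF quadrature for a single ray in plain NumPy. The per-ray inputs (sampled densities, colors, and sample positions) are assumed to come from an MLP query; stratified and hierarchical sampling are omitted for brevity.

```python
import numpy as np

def volume_render_ray(sigmas, colors, t_vals):
    """Discretized form of Eq. (1) for one ray via alpha compositing.

    sigmas: (m,) densities sigma(r(t_i)); colors: (m, 3) colors c(r(t_i), d);
    t_vals: (m,) sample positions in [t_n, t_f] (assumed sorted).
    """
    # Spacing between consecutive samples; the last segment is effectively open.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)           # per-segment opacity
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving to sample i.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))
    weights = trans * alphas                          # contribution of sample i
    return (weights[:, None] * colors).sum(axis=0)    # pixel color C(r)
```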
3.2 Shading-Guided Generative Implicit Model
In this work, we are interested in developing a generative implicit model that explicitly models the shading process for 3D-aware image synthesis. To achieve this, we make two extensions to the MLP network in NeRF. First, similar to most deep generative models, it is further conditioned on a latent code z sampled from a prior distribution N(0, I)^d. Second, instead of directly outputting the color c, it outputs a relightable pre-cosine color term a ∈ R^3, which is conceptually similar to albedo in that it can be shaded under a given lighting condition. While albedo is viewpoint-independent, in this work we do not strictly enforce such independence for a, in order to account for dataset bias. Thus, our generator g_θ : (x, d, z) → (σ, a) takes a coordinate x, a viewing direction d, and a latent code z as inputs, and outputs a volume density σ and a pre-cosine color a. Note that σ is independent of d, while the dependence of a on d is optional. To obtain the color C of a camera ray r(t) = o + td with near and far bounds t_n and t_f, we calculate the final pre-cosine color A via:
A(r, z) = ∫_{t_n}^{t_f} T(t, z) σ(r(t), z) a(r(t), d, z) dt,   where   T(t, z) = exp(−∫_{t_n}^{t} σ(r(s), z) ds).   (2)
We also calculate the normal direction n via n(r, z) = n̂(r, z)/‖n̂(r, z)‖_2, where

n̂(r, z) = −∫_{t_n}^{t_f} T(t, z) σ(r(t), z) ∇_{r(t)}σ(r(t), z) dt,   (3)
where ∇_{r(t)}σ(r(t), z) is the derivative of the volume density σ with respect to its input coordinate, which naturally captures the local normal direction and can be calculated via back-propagation. The final color C is then obtained via Lambertian shading as:
C(r, z) = A(r, z) (k_a + k_d max(0, l · n(r, z))),   (4)
where l ∈ S^2 is the lighting direction, and k_a and k_d are the ambient and diffuse coefficients. We provide more discussion of this shading formulation at the end of this subsection.
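A minimal PyTorch sketch of Eqs. (2)-(4) for a single ray is given below. The `generator` callable and its `(sigma, a)` return signature are placeholders standing in for g_θ, not the paper's actual interface; the rendering weights approximate T(t)σ(r(t))dt with the usual alpha-compositing discretization.

```python
import torch
import torch.nn.functional as F

def shade_ray(generator, z, pts, d, t_vals, l, ka, kd):
    """Sketch of Eqs. (2)-(4) for one camera ray.

    generator(x, d, z) -> (sigma, a) stands in for g_theta (an assumption,
    not the paper's interface). pts: (m, 3) points r(t_i); t_vals: (m,);
    l: (3,) lighting direction; ka, kd: ambient/diffuse coefficients.
    """
    pts = pts.detach().requires_grad_(True)
    sigma, a = generator(pts, d, z)                     # (m,), (m, 3)
    # Gradient of density w.r.t. coordinates: unnormalized normals of Eq. (3).
    grad_sigma, = torch.autograd.grad(sigma.sum(), pts, create_graph=True)

    deltas = torch.diff(t_vals, append=t_vals[-1:] + 1e10)
    alphas = 1.0 - torch.exp(-sigma * deltas)
    ones = torch.ones(1, device=t_vals.device)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alphas[:-1] + 1e-10]), dim=0)
    w = trans * alphas                                  # approx. T(t) sigma(t) dt

    A = (w[:, None] * a).sum(dim=0)                     # pre-cosine color, Eq. (2)
    n = F.normalize(-(w[:, None] * grad_sigma).sum(dim=0), dim=0)  # Eq. (3)
    diffuse = torch.clamp(torch.dot(l, n), min=0.0)     # max(0, l . n)
    return A * (ka + kd * diffuse)                      # Lambertian shading, Eq. (4)
```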
Camera and Lighting Sampling. Eqs. (2)-(4) describe the process of rendering a pixel color given a camera ray r(t) and a lighting condition µ = (l, k_a, k_d). Generating a full image I_g ∈ R^{3×H×W} requires sampling a camera pose ξ and a lighting condition µ in addition to the latent code z, i.e., I_g = G_θ(z, ξ, µ). In our setting, the camera pose ξ can be described by pitch and yaw angles, and is sampled from a prior Gaussian or uniform distribution p_ξ, as in previous works [4, 5]. Randomly sampling the camera pose during training encourages the learned 3D scene to look realistic from different viewpoints. While this multi-view constraint is beneficial for learning a valid 3D representation, it is often insufficient to infer the accurate 3D object shape. Thus, we further introduce a multi-lighting constraint by also randomly sampling a lighting condition µ from a prior distribution p_µ. In practice, p_µ can be estimated from the dataset using existing approaches such as [39]. We also show in our experiments that a simple, manually tuned prior distribution produces reasonable results. As the shading process is sensitive to the normal direction due to the diffuse term k_d max(0, l · n(r, z)) in Eq. (4), this multi-lighting constraint regularizes the model to learn more accurate 3D shapes that produce natural shading, as shown in Fig. 1 (b).
Training. Our generative model follows the paradigm of GANs [18], where the generator is trained together with a discriminator D with parameters φ in an adversarial manner. During training, the generator produces fake images I_g = G_θ(z, ξ, µ) by sampling the latent code z, camera pose ξ, and lighting condition µ from their corresponding prior distributions p_z, p_ξ, and p_µ. Let I denote real images sampled from the data distribution p_I. We train our model with a non-saturating GAN loss with R1 regularization [42]:
L(θ, φ) = E_{z∼p_z, ξ∼p_ξ, µ∼p_µ}[ f(D_φ(G_θ(z, ξ, µ))) ] + E_{I∼p_I}[ f(−D_φ(I)) + λ‖∇D_φ(I)‖^2 ],   (5)

where f(u) = −log(1 + exp(−u)) and λ controls the strength of the regularization. More implementation details are provided in the supplementary material.
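The following sketch spells out one common way to implement Eq. (5) in PyTorch, using the identity f(u) = −log(1 + exp(−u)) = −softplus(−u); the sign conventions follow the standard non-saturating GAN setup, and G, D, and the sampled z, ξ, µ are placeholders.

```python
import torch
import torch.nn.functional as F

def gan_losses(G, D, z, xi, mu, real, lam=10.0):
    """Sketch of the non-saturating loss with R1 regularization, Eq. (5).

    With f(u) = -log(1 + exp(-u)) = -softplus(-u), the objectives reduce to
    softplus terms; G, D and the sampled z, xi, mu are placeholders.
    """
    fake = G(z, xi, mu)
    # Generator: maximize f(D(fake))  <=>  minimize softplus(-D(fake)).
    g_loss = F.softplus(-D(fake)).mean()

    # Discriminator: push D(real) up and D(fake) down, plus the R1 gradient
    # penalty lam * ||grad_I D(I)||^2 computed on real images only.
    real = real.detach().requires_grad_(True)
    d_real = D(real)
    grad_real, = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    r1 = grad_real.flatten(start_dim=1).pow(2).sum(dim=1).mean()
    d_loss = F.softplus(D(fake.detach())).mean() + F.softplus(-d_real).mean() + lam * r1
    return g_loss, d_loss
```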
Discussion. Note that in Eqs. (2)-(4), we perform shading after A and n are obtained via volume rendering. An alternative is to perform shading at each local spatial point as c(r(t), d, z) = a(r(t), d, z)(k_a + k_d max(0, l · n(r(t), z))), where n(r(t), z) = −∇_{r(t)}σ(r(t), z)/‖∇_{r(t)}σ(r(t), z)‖_2 is the local normal. We could then perform volume rendering on c(r(t), d, z) to get the final pixel color. In practice, we observe that this formulation obtains suboptimal results. An intuitive reason is that the normal direction is normalized at each local point, neglecting the magnitude of ∇_{r(t)}σ(r(t), z), which tends to be larger near object surfaces. We provide more analysis in the experiments and the supplementary material.
The Lambertian shading we use is an approximation to the real illumination scenario. While serving as a good regularization for improving the learned 3D shape, it could introduce an additional gap between the distribution of generated images and that of real images. To compensate for this risk, we can optionally condition the predicted a on the lighting condition, i.e., a = a(r(t), d, µ, z). Thus, in cases where the lighting condition deviates from the real data distribution, the generator can learn to adjust the value of a and reduce the aforementioned gap. We show the benefit of this design in the experiments.
3.3 Efficient Volume Rendering via Surface Tracking
Similar to NeRF, we implement volume rendering with a discretized integral, which typically requires sampling dozens of points along a camera ray, as shown in Fig. 3 (a). In our approach, we also need to perform back-propagation across the generator in Eq. (3) to get the normal direction for each point, which introduces additional computational cost. To achieve more efficient volume rendering, a natural idea is to exploit spatial sparsity. Usually, the weight T(t, z)σ(r(t), z) in volume rendering concentrates around the object surface during training. Thus, if we know the rough surface position before rendering, we can sample points near the surface to save computation. While for a static scene it is possible to store such spatial sparsity in a sparse voxel grid [8, 9], this technique cannot be directly applied to our generative model, as the 3D scene keeps changing with respect to the input latent code.
To achieve more efficient volume rendering in our generative implicit model, we propose a surface tracking network S that learns to mimic the surface position conditioned on the latent code. In particular, volume rendering naturally allows depth estimation of the object surface via:
t_s(r, z) = ∫_{t_n}^{t_f} T(t, z) σ(r(t), z) t dt,   (6)
where T(t, z) is defined the same way as in Eq. (2). Thus, given a camera pose ξ and a latent code z, we can render the full depth map t_s(z, ξ). As shown in Fig. 3 (b), we mimic t_s(z, ξ) with the surface tracking network S_ψ, a lightweight convolutional neural network that takes z and ξ as inputs and outputs a depth map. The depth mimic loss is:
L(ψ) = E_{z∼p_z, ξ∼p_ξ}[ ‖S_ψ(z, ξ) − t_s(z, ξ)‖_1 + Prec(S_ψ(z, ξ), t_s(z, ξ)) ],   (7)

where Prec is the perceptual loss that encourages S_ψ to better capture edges of the surface.
During training, S_ψ is optimized jointly with the generator and the discriminator. Thus, each time we sample a latent code z and a camera pose ξ, we obtain an initial guess of the depth map as S_ψ(z, ξ). Then, for a pixel with predicted depth s, we perform the volume rendering of Eqs. (2), (3) and (6) with near bound t_n = s − ∆_i/2 and far bound t_f = s + ∆_i/2, where ∆_i is the rendering interval, which decreases as the training iteration i grows. Specifically, we start with a large interval ∆_max and decrease to ∆_min with an exponential schedule. As ∆_i decreases, the number of points used for rendering, m, decreases accordingly. Note that the computational cost of our surface tracking network is marginal compared to the generator, as the former needs only a single forward pass to render an image while the latter is queried H × W × m times. Thus, reducing m significantly accelerates both training and inference for ShadeGAN.
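The depth estimate of Eq. (6) and the shrinking sampling interval can be sketched as follows; the schedule constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def expected_depth(weights, t_vals):
    """Depth of Eq. (6): expectation of t under the rendering weights
    w_i ~ T(t_i) sigma(r(t_i)) dt (e.g. from the rendering sketch above)."""
    return (weights * t_vals).sum()

def sampling_bounds(pred_depth, it, d_max=0.5, d_min=0.05, decay=1e-4):
    """Near/far bounds centered on the tracker's predicted depth; the interval
    shrinks exponentially with training iteration `it`. The constants here are
    illustrative assumptions, not values taken from the paper."""
    delta = max(d_min, d_max * np.exp(-decay * it))
    return pred_depth - delta / 2.0, pred_depth + delta / 2.0  # (t_n, t_f)
```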
4 Experiments
In this section, we evaluate the proposed ShadeGAN on 3D-aware image synthesis. We also show that ShadeGAN learns much more accurate 3D shapes than previous methods while allowing explicit control over lighting conditions. The datasets used include CelebA [43], BFM [13], and Cats [44], all of which contain only unconstrained 2D RGB images.
Implementation. In terms of model architectures, we adopt a SIREN-based MLP [45] as the generator and a convolutional neural network as the discriminator, following [4]. For the prior distribution of lighting conditions, we use Unsup3d [39] to estimate the lighting conditions of real data and subsequently fit a multivariate Gaussian distribution of µ = (l, k_a, k_d) as the prior. A hand-crafted prior distribution is also included in the ablation study. In the quantitative study, we let the pre-cosine color a be conditioned on the lighting condition µ as well as the viewing direction d unless otherwise stated. In the qualitative study, we observe that removing view conditioning achieves slightly better 3D shapes on the CelebA and BFM datasets. Thus, we show results without view conditioning for these two datasets in the main paper, and put those with view conditioning in Fig. 4 of the supplementary material. Other implementation details are also provided in the supplementary.
Figure 5: Generated face images and their 3D meshes.

Figure 6: Qualitative ablation: (a) ShadeGAN; (b) Local normal; (c) Manual prior. See the main text for discussions.
Comparison with baselines. We compare ShadeGAN with two state-of-the-art generative implicit models, namely GRAF [5] and pi-GAN [4]. Specifically, Fig. 4 includes both synthesized images and their corresponding 3D meshes, which are obtained by performing marching cubes on the volume density σ. While GRAF and pi-GAN can synthesize images with controllable poses, their learned 3D shapes are inaccurate and noisy. In contrast, our approach not only synthesizes photorealistic, 3D-consistent images, but also learns much more accurate 3D shapes and surface normals, indicating the effectiveness of the proposed multi-lighting constraint as a regularization. More synthesized images and their corresponding shapes are included in Fig. 5. Besides more accurate 3D shapes, ShadeGAN also learns the albedo and diffuse shading components inherently. As shown in Fig. 4, although not perfect, ShadeGAN manages to disentangle shading and albedo with satisfying quality, as such disentanglement is a natural solution under the multi-lighting constraint.
The quality of the learned 3D shapes is quantitatively evaluated on the BFM dataset. Specifically, we use each generative implicit model to generate 50k images and their corresponding depth maps. Image-depth pairs from each model are used as training data for an additional convolutional neural network (CNN) that learns to predict the depth map of an input image. We then test each trained CNN on the BFM test set and compare its predictions to the ground-truth depth maps as a measurement of the quality of the learned 3D shapes. Following [39], we report the scale-invariant depth error (SIDE) and mean angle deviation (MAD) metrics. The results are included in Tab. 1, where ShadeGAN significantly outperforms GRAF and pi-GAN. Besides, ShadeGAN also outperforms other advanced unsupervised 3D shape learning approaches, including Unsup3d [39] and GAN2Shape [41], demonstrating its large potential for unsupervised 3D shape learning. In terms of image quality, Tab. 1 includes the FID [46] scores of images synthesized by different models, where the FID score of ShadeGAN is slightly inferior to that of pi-GAN on BFM and CelebA. Intuitively, this is caused by the gap between our approximated shading (i.e., Lambertian shading) and real illumination, which can potentially be avoided by adopting more realistic shading models and improving the lighting prior.
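For reference, the two shape metrics can be computed as below; this follows our reading of the Unsup3d [39] definitions (SIDE as the standard deviation of the log-depth difference, MAD as the mean angular error of normals), so treat it as a sketch rather than the exact evaluation code.

```python
import numpy as np

def side(depth_pred, depth_gt):
    """Scale-invariant depth error: std of the log-depth difference, so a
    global scale mismatch between prediction and ground truth is not penalized."""
    d = np.log(depth_pred) - np.log(depth_gt)
    return np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2)

def mad(normals_pred, normals_gt):
    """Mean angle deviation in degrees between unit normal maps of shape (..., 3)."""
    cos = np.clip(np.sum(normals_pred * normals_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```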
In Tab. 2, we also show the quantitative results of different models on CelebA and Cats. To evaluate the learned shapes, we use each generative implicit model to generate 2k front-view images and their corresponding depth maps. While these datasets do not have ground-truth depth, we report the MAD obtained by testing pretrained Unsup3d models [39] on these generated image-depth pairs as
Table 3: Ablation study on the BFM dataset.

No.  Method          FID ↓   SIDE ↓   MAD ↓
(1)  ShadeGAN        17.7    0.607    14.52
(2)  local shading   30.1    0.754    18.18
(3)  w/o light       19.2    0.618    14.53
(4)  w/o view        18.6    0.622    14.88
(5)  manual prior    20.2    0.643    15.38
(6)  +efficient      18.2    0.673    14.72
Table 4: Training and inference time cost on CelebA. The efficient volume rendering significantly improves training and inference speed.

Method       Train (h)   Inference (s)   FID
ShadeGAN     92.3        0.343           16.4
+efficient   70.2        0.179           16.2
pi-GAN       56.8        0.204           15.7
+efficient   46.9        0.114           15.9
Figure 8: Illumination-aware image synthesis (albedo, shading, and image rows). ShadeGAN allows explicit control over the lighting. The pre-cosine color (albedo) is independent of lighting in (a) and is conditioned on lighting in (b). We show results of adding a specular term in (c).
a reference. As we can observe, the results on CelebA and Cats are consistent with those on the BFM dataset.
Ablation studies. We further study the effects of several design choices in ShadeGAN. First, we perform local point-specific shading as mentioned in the discussion of Sec. 3.2. As Tab. 3 No. (2) and Fig. 6 (b) show, the results of this local shading strategy are notably worse than the original one, indicating that taking the magnitude of ∇_x σ into account is beneficial. Besides, the results of Tab. 3 No. (3) and No. (4) imply that removing a's dependence on the lighting µ or the viewpoint d leads to a slight performance drop. The results of using a simple, manually tuned lighting prior are provided in Tab. 3 No. (5) and Fig. 6 (c); they are only moderately worse than those of the data-driven prior, and the generated shapes remain significantly better than those produced by existing approaches.
To verify the effectiveness of the proposed efficient volume rendering technique, we report its effects on image quality and training/inference time in Tab. 3 No. (6) and Tab. 4. The efficient volume rendering has a marginal effect on performance, but significantly reduces training and inference time, by 24% and 48% respectively, for ShadeGAN. Moreover, in Fig. 7 we visualize depth maps predicted by our surface tracking network alongside those obtained via volume rendering. Under varying identities and camera poses, the surface tracking network consistently predicts depth values close to the real surface positions, so we can sample points near the predicted surface for rendering without sacrificing image quality.
Illumination-aware image synthesis. As ShadeGAN models the shading process, it by design allows explicit control over the lighting condition. We provide such illumination-aware image synthesis results in Fig. 8, where ShadeGAN generates promising images under different lighting directions. We also show that when the predicted a is conditioned on the lighting condition µ, a slightly changes w.r.t. the lighting condition, e.g., it becomes brighter in areas with an overly dim shading in order to make the final image more natural. Besides, we can optionally add a specular term k_s max(0, h · n)^p in Eq. (4) (i.e., Blinn-Phong shading [47], where h is the bisector of the angle between the viewpoint and the lighting direction) to create specular highlight effects, as shown in Fig. 8 (c).
GAN inversion. ShadeGAN can also be used to reconstruct a given target image by performing GAN inversion. As shown in Fig. 9, such inversion recovers several factors of the image, including the 3D shape, surface normal, approximated albedo, and shading. Besides, we can further perform view synthesis and relighting by changing the viewpoint and lighting condition. The implementation of GAN inversion is provided in the supplementary material.
Discussions. As the Lambertian shading we use is an approximation to real illumination, the albedo learned by ShadeGAN is not perfectly disentangled. Our approach also does not consider the spatially-varying material properties of objects. In the future, we intend to incorporate more sophisticated shading models to learn better-disentangled generative reflectance fields.
5 Conclusion
In this work, we present ShadeGAN, a new generative implicit model for shape-accurate 3D-aware image synthesis. We have shown that the multi-lighting constraint, achieved in ShadeGAN by explicit illumination modeling, significantly helps in learning accurate 3D shapes from 2D images. ShadeGAN also allows us to control the lighting condition during image synthesis, achieving natural image relighting effects. To reduce the computational cost, we have further devised a lightweight surface tracking network, which enables an efficient volume rendering technique for generative implicit models and significantly accelerates both training and inference. A generative model with a shape-accurate 3D representation can broaden its applications in vision and graphics, and our work takes a solid step towards this goal.
Acknowledgment. We would like to thank Eric R. Chan for sharing the codebase of pi-GAN. This study is supported under the ERC Consolidator Grant 4DRepLy (770784). This study is also supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s).

1. What is the novel generative model proposed by the paper for neural radiance fields?
2. What is the key idea behind the proposed method, and how does it improve the accuracy of shape representation?
3. Are there any similar works in the literature that have explored the same idea, and how does the paper differentiate itself from them?
4. How convincing are the empirical evaluations provided in the paper, both qualitatively and quantitatively?
5. Does the paper provide sufficient technical contributions and impact to the machine learning community, or is its focus mainly on computer vision applications?
6. Why does the reviewer lean towards a negative overall recommendation, and what are their concerns regarding the paper's limitations?
Summary Of The Paper
The paper proposes a novel generative model for neural radiance fields, where the key idea is to synthetically add shading effects based on random lighting configurations. Shading makes images unrealistic when the geometry is corrupted; therefore, the system is trained to generate accurate shapes in its implicit radiance field representation. The system produces convincing results both qualitatively and quantitatively.
Review
The idea of the paper is very simple and intuitive (adding lighting effects and using the discriminator to discover accurate geometry). I am not familiar with the complete literature, but if no papers have shown this idea before, it could be an interesting one for the computer vision community, though less so for the machine learning community. I am curious whether other reviewers spot any existing works with similar ideas. Given the extensive GAN literature, similar ideas may exist somewhere.
Empirical evaluations (both qualitative and quantitative) look convincing, and the experimental results support the claims qualitatively and quantitatively. However, they are shown only for human faces.
I am torn on the overall recommendation but am leaning towards negative. The first reason is the merit/impact for the machine learning community. The strength of this paper is the idea, which is aimed at the computer vision community. The technical contribution is weak, as there is not much to learn from the paper. The second reason is that results are only presented for faces. This is very critical: we already have very good approaches that turn a single image into a 3D face model, so this paper will not bring much additional impact. The idea of this paper would really shine if it worked on a wider variety of object categories. For those reasons, I will vote for rejection.
NIPS | Title
Adversarial Robustness through Local Linearization
Abstract
Adversarial training is an effective methodology to train deep neural networks which are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with ℓ∞ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.
1 Introduction
In a seminal paper, Szegedy et al. [22] demonstrated that neural networks are vulnerable to visually imperceptible but carefully chosen adversarial perturbations which cause them to output incorrect predictions. After this revealing study, a flurry of research has been conducted with the focus of making networks robust against such adversarial perturbations [14, 16, 17, 25]. Concurrently, researchers devised stronger attacks that expose previously unknown vulnerabilities of neural networks [24, 4, 1, 3].
Of the many approaches proposed [19, 2, 6, 21, 15, 17], adversarial training [14, 16] is empirically the best performing algorithm to train networks robust to adversarial perturbations. However, the cost of adversarial training becomes prohibitive with growing model complexity and input dimensionality. This is primarily due to the cost of computing adversarial perturbations, which is incurred at each step of adversarial training. In particular, for each new mini-batch one must perform multiple iterations
of a gradient-based optimizer on the network’s inputs to find the perturbations.1 As each step of this optimizer requires a new backwards pass, the total cost of adversarial training scales as roughly the number of such steps. Unfortunately, effective adversarial training of ImageNet often requires a large number of steps to avoid problems of gradient obfuscation [1, 24], making it significantly more expensive than conventional training.
One approach which can alleviate the cost of adversarial training is training against weaker adversaries that are cheaper to compute, for example by taking fewer gradient steps to compute adversarial examples during training. However, this can produce models which are robust against weak attacks but break down under strong attacks – often due to gradient obfuscation. In particular, one form of gradient obfuscation occurs when the network learns to fool a gradient-based attack by making the loss surface highly convoluted and non-linear (see Fig 1), an effect which has also been observed by Papernot et al. [18]. This non-linearity prevents gradient-based optimization methods from finding an adversarial perturbation within a small number of iterations [4, 24]. In contrast, if the loss surface were linear in the vicinity of the training examples, which is to say well-predicted by local gradient information, gradient obfuscation cannot occur. In this paper, we take up this idea and introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data. We call this regularizer the local linearity regularizer (LLR). Empirically, we find that networks trained with LLR exhibit far less gradient obfuscation, and are almost equally robust against strong attacks as they are against weak attacks. The main contributions of our paper are summarized below:
• We show that training with LLR is significantly faster than adversarial training, allowing us to train a robust ImageNet model with a 5× speed up when training on 128 TPUv3 cores [9].
• We show that LLR trained models exhibit higher robustness relative to adversarially trained models when evaluated under strong attacks. Adversarially trained models can exhibit a decrease in accuracy of 6% when increasing the attack strength at test time for CIFAR-10, whereas LLR shows only a decrease of 2%. • We achieve new state of the art results for adversarial accuracy against untargeted white-box
attack for ImageNet (with ε = 4/255)2: 47%. Furthermore, we match state-of-the-art results for CIFAR-10 (with ε = 8/255): 52.81%3.
• We perform a large scale evaluation of existing methods for adversarially robust training under consistent, strong, white-box attacks. For this we recreate several baseline models from the literature, training them both for CIFAR-10 and ImageNet (where possible).4
2 Background and Related Work
We denote our classification function by f(x; θ) : x ↦ R^C, mapping input features x to the output logits for classes in set C, i.e. p_i(y|x; θ) = exp(f_i(x; θ)) / Σ_j exp(f_j(x; θ)), with θ being the model parameters and y being the label. Adversarial robustness for f is defined as follows: a network is robust to adversarial perturbations of magnitude ε at input x if and only if

argmax_{i∈C} f_i(x; θ) = argmax_{i∈C} f_i(x + δ; θ)   ∀δ ∈ B_p(ε) = {δ : ‖δ‖_p ≤ ε}.   (1)
1While computing the globally optimal adversarial example is NP-hard [12], gradient descent with several random restarts was empirically shown to be quite effective at computing adversarial perturbations of sufficient quality.
2This means that every pixel is perturbed independently by up to 4 units up or down on a scale where pixels take values ranging between 0 and 255.
3We note that TRADES [27] gets 55% against a much weaker attack; under our strongest attack, it gets 52.5%.
4Baselines created are adversarial training, TRADES and CURE [17]. Contrary to CIFAR-10, we are currently unable to achieve consistent and competitive results on ImageNet at ε = 4/255 using TRADES.
In this paper, we focus on p = ∞ and we use B(ε) to denote B_∞(ε) for brevity. Given the dataset is drawn from distribution D, the standard method to train a classifier f is empirical risk minimization (ERM), defined by: min_θ E_{(x,y)∼D}[ℓ(x; y, θ)]. Here, ℓ(x; y, θ) is the standard cross-entropy loss function defined by

ℓ(x; y, θ) = −y^T log(p(x; θ)),   (2)

where p_i(x; θ) is defined as above, and y is a 1-hot vector representing the class label. While ERM is effective at training neural networks that perform well on held-out test data, the accuracy on the test set goes to zero under adversarial evaluation. This is a result of a distribution shift in the data induced by the attack. To rectify this, adversarial training [17, 14] seeks to perturb the data distribution by performing adversarial attacks during training. More concretely, adversarial training minimizes the loss function

E_{(x,y)∼D}[ max_{δ∈B(ε)} ℓ(x + δ; y, θ) ],   (3)
where the inner maximization, max_{δ∈B(ε)} ℓ(x + δ; y, θ), is typically performed via a fixed number of steps of a gradient-based optimization method. One such method is Projected Gradient Descent (PGD), which performs the following gradient step:

δ ← Proj(δ − η ∇_δ ℓ(x + δ; y, θ)),   (4)

where Proj(x) = argmin_{ξ∈B(ε)} ‖x − ξ‖. Another popular gradient-based method is to use the sign of the gradient [8]. The cost of solving Eq (3) is dominated by the cost of solving the inner maximization problem. Thus, the inner maximization should be performed efficiently to reduce the overall cost of training. A naive approach is to reduce the number of gradient steps performed by the optimization procedure. Generally, the attack is weaker when we do fewer steps. If the attack is too weak, the trained networks often display gradient obfuscation, as shown in Fig 1.
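A minimal PyTorch sketch of this inner maximization is shown below; since the inner problem is a maximization, the sketch ascends the loss using the signed-gradient variant and projects onto B(ε) (the valid-pixel clamp is an extra assumption about image-valued inputs).

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps, eta, steps):
    """Sketch of PGD on the l-infinity ball B(eps): ascend the loss with
    signed-gradient steps and project back onto the constraint set."""
    delta = torch.zeros_like(x).uniform_(-eps, eps)   # random start
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + eta * grad.sign()          # ascend the inner objective
            delta = delta.clamp(-eps, eps)             # project onto B(eps)
            delta = (x + delta).clamp(0.0, 1.0) - x    # keep x + delta a valid image
    return delta.detach()
```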
Since the introduction of adversarial training, a corpus of work has researched alternative ways of making networks robust. One such approach is the TRADES method [27], a form of regularization that optimizes the trade-off between robustness and accuracy – as many studies have observed these two quantities to be at odds with each other [23]. Others, such as the work by Ding et al. [7], adaptively increase the perturbation radius by finding the minimal-length perturbation which changes the output label. Some have proposed architectural changes which promote adversarial robustness, such as the "denoise" model [25] for ImageNet.
The work presented here is a regularization technique which encourages the loss function to be well approximated by its linear Taylor expansion in a sufficiently small neighbourhood. There has been prior work which uses gradient information as a form of regularization [20, 17]. The work presented in this paper is closely related to the paper by Moosavi et al. [17], which highlights that adversarial training reduces the curvature of ℓ(x; y, θ) with respect to x. Leveraging an empirical observation (the highest curvature is along the direction ∇_x ℓ(x; y, θ)), they further propose an algorithm to mimic the effects of adversarial training on the loss surface. The algorithm results in comparable performance to adversarial training at a significantly lower cost.
3 Motivating the Local Linearity Regularizer
As described above, the cost of adversarial training is dominated by solving the inner maximization problem max_{δ∈B(ε)} ℓ(x + δ). Throughout, we abbreviate ℓ(x; y, θ) with ℓ(x). We can reduce this cost simply by reducing the number of PGD steps (as defined in Eq (4)) taken to solve max_{δ∈B(ε)} ℓ(x + δ). To motivate the local linearity regularizer (LLR), we start with an empirical analysis of how the behavior of adversarial training changes as we increase the number of PGD steps used during training. We find that the loss surface becomes increasingly linear (as captured by the local linearity measure defined below) as we increase the number of PGD steps.
3.1 Local Linearity Measure
Suppose that we are given an adversarial perturbation δ ∈ B(ε). The corresponding adversarial loss is given by ℓ(x + δ). If our loss surface is smooth and approximately linear, then ℓ(x + δ) is well approximated by its first-order Taylor expansion ℓ(x) + δ^T ∇_x ℓ(x). In other words, the absolute difference between these two values,

g(δ; x) = |ℓ(x + δ) − ℓ(x) − δ^T ∇_x ℓ(x)|,   (5)

is an indicator of how linear the surface is. Consequently, we consider the quantity

γ(ε, x) = max_{δ∈B(ε)} g(δ; x),   (6)

to be a measure of how linear the surface is within a neighbourhood B(ε). We call this quantity the local linearity measure.
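In code, g(δ; x) and an estimate of γ(ε, x) can be written as below; the paper's measurement uses 50 steps of Adam with step size 0.1, whereas this sketch uses a shorter signed-gradient ascent for simplicity.

```python
import torch

def linearity_gap(model, loss_fn, x, y, delta):
    """g(delta; x) of Eq. (5): deviation of the loss from its first-order
    Taylor expansion around x (the gradient at x is treated as a constant)."""
    x_ = x.detach().requires_grad_(True)
    loss_x = loss_fn(model(x_), y)
    grad_x, = torch.autograd.grad(loss_x, x_)
    taylor = loss_x.detach() + (delta * grad_x.detach()).sum()
    return (loss_fn(model(x.detach() + delta), y) - taylor).abs()

def local_linearity(model, loss_fn, x, y, eps, steps=10, eta=0.1):
    """Estimate gamma(eps, x) of Eq. (6) by ascending g over B(eps)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        g = linearity_gap(model, loss_fn, x, y, delta)
        grad, = torch.autograd.grad(g, delta)
        with torch.no_grad():
            delta = (delta + eta * grad.sign()).clamp(-eps, eps)
    return linearity_gap(model, loss_fn, x, y, delta.detach())
```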
3.2 Empirical Observations on Adversarial Training
We measure γ(ε, x) for networks trained with adversarial training on CIFAR-10, where the inner maximization max_{δ∈B(ε)} ℓ(x + δ) is performed with 1, 2, 4, 8 and 16 steps of PGD. γ(ε, x) is measured throughout training on the training set5. The architecture used is a wide residual network [26], 28 in depth and 10 in width (Wide-ResNet-28-10). The results are shown in Fig 2a and 2b. Fig 2a shows that when we train with one and two steps of PGD for the inner maximization, the local loss surface is extremely non-linear at the end of training. An example visualization of such a loss surface is given in Fig A1a. However, when we train with four or more steps of PGD for the inner maximization, the surface is relatively well approximated by ℓ(x) + δ^T ∇_x ℓ(x), as shown in Fig 2b. An example of the loss surface is shown in Fig A1b. For the adversarial accuracy of the networks, see Table A1.
4 Local Linearity Regularizer (LLR)
From the section above, we make the empirical observation that the local linearity measure γ(ε, x) decreases as we train with stronger attacks6. In this section, we give some theoretical justification for why local linearity γ(ε, x) correlates with adversarial robustness, and derive a regularizer from the local linearity measure that can be used for training robust models.
4.1 Local Linearity Upper Bounds Adversarial Loss
The following proposition establishes that the adversarial loss ℓ(x + δ) is upper bounded by the local linearity measure, plus the change in the loss as predicted by the gradient (which is given by |δ^T ∇_x ℓ(x)|).

Proposition 4.1. Consider a loss function ℓ(x) that is once-differentiable, and a local neighbourhood defined by B(ε). Then for all δ ∈ B(ε),

|ℓ(x + δ) − ℓ(x)| ≤ |δ^T ∇_x ℓ(x)| + γ(ε, x).   (7)

5To measure γ(ε, x) we find max_{δ∈B(ε)} g(δ; x) with 50 steps of PGD, using Adam as the optimizer and 0.1 as the step size.
6Here, we imply an increase in the number of PGD steps for the inner maximization max_{δ∈B(ε)} ℓ(x + δ).
See Appendix B for the proof.
From Eq (7) it is clear that the adversarial loss tends to ℓ(x), i.e., ℓ(x + δ) → ℓ(x), as both |δ^T ∇_x ℓ(x)| → 0 and γ(ε, x) → 0 for all δ ∈ B(ε). And assuming ℓ(x + δ) ≥ ℓ(x), one also has the upper bound ℓ(x + δ) ≤ ℓ(x) + |δ^T ∇_x ℓ(x)| + γ(ε, x).
4.2 Local Linearity Regularization (LLR)
Following the analysis above, we propose the following objective for adversarially robust training:

L(D) = E_D[ ℓ(x) + λ γ(ε, x) + µ |δ_LLR^T ∇_x ℓ(x)| ],   (8)

where the last two terms constitute the LLR penalty, λ and µ are hyper-parameters to be optimized, and δ_LLR = argmax_{δ∈B(ε)} g(δ; x) (recall the definition of g(δ; x) from Eq (5)). Concretely, we are trying to find the point δ_LLR in B(ε) where the linear approximation ℓ(x) + δ^T ∇_x ℓ(x) is maximally violated. To train, we penalize both its linear violation γ(ε, x) = |ℓ(x + δ_LLR) − ℓ(x) − δ_LLR^T ∇_x ℓ(x)| and the gradient magnitude term |δ_LLR^T ∇_x ℓ(x)|, as required by the above proposition. We note that, analogous to adversarial training, LLR requires an inner optimization to find δ_LLR – performed via gradient descent. However, as we will show in the experiments, much fewer optimization steps are required for the overall scheme to be effective. Pseudo-code for training with this regularizer is given in Appendix E.
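A single-example sketch of the resulting training loss is given below, reusing `linearity_gap` from the previous sketch for the inner maximization; note that the gradient-penalty terms require second-order gradients (double backprop), hence `create_graph=True`. The inner step count and step size are placeholders.

```python
import torch

def llr_objective(model, loss_fn, x, y, eps, lam, mu, steps=2, eta=0.1):
    """Single-example sketch of Eq. (8), with delta_LLR found via a few
    projected ascent steps on g (reusing linearity_gap defined above)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps)
    for _ in range(steps):                              # inner maximization of g
        delta.requires_grad_(True)
        g = linearity_gap(model, loss_fn, x, y, delta)
        grad, = torch.autograd.grad(g, delta)
        with torch.no_grad():
            delta = (delta + eta * grad.sign()).clamp(-eps, eps)
    delta = delta.detach()

    # Outer loss: l(x) + lam * gamma(eps, x) + mu * |delta^T grad_x l(x)|.
    # create_graph=True keeps the graph so both penalty terms can be
    # differentiated w.r.t. the model parameters (double backprop).
    x_ = x.detach().requires_grad_(True)
    nominal = loss_fn(model(x_), y)
    grad_x, = torch.autograd.grad(nominal, x_, create_graph=True)
    linear_term = (delta * grad_x).sum()
    gamma = (loss_fn(model(x_ + delta), y) - nominal - linear_term).abs()
    return nominal + lam * gamma + mu * linear_term.abs()
```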
4.3 Local Linearity Measure γ(ε, x) Bounds the Adversarial Loss by Itself
Interestingly, under certain reasonable approximations and standard choices of loss functions, we can bound |δ^T ∇_x ℓ(x)| in terms of γ(ε, x). See Appendix C for details. Consequently, the bound in Eq (7) implies that minimizing γ(ε, x) (along with the nominal loss ℓ(x)) is sufficient to minimize the adversarial loss ℓ(x + δ). This prediction is confirmed by our experiments. However, our experiments also show that including |δ^T ∇_x ℓ(x)| in the objective along with ℓ(x) and γ(ε, x) works better in practice on certain datasets, especially ImageNet. See Appendix F.3 for details.
5 Experiments and Results
We perform experiments using LLR on both the CIFAR-10 [13] and ImageNet [5] datasets. We show that LLR gets state-of-the-art adversarial accuracy on CIFAR-10 (at ε = 8/255) and ImageNet (at ε = 4/255) evaluated under a strong adversarial attack. Moreover, we show that as the attack strength increases, the degradation in adversarial accuracy is more graceful for networks trained using LLR than for those trained with standard adversarial training. Further, we demonstrate that training using LLR is 5× faster for ImageNet. Finally, we show that, by linearizing the loss surface, models are less prone to gradient obfuscation.
CIFAR-10: The perturbation radius we examine is ε = 8/255 and the model architectures we use are Wide-ResNet-28-8 and Wide-ResNet-40-8 [26]. Since the validity of our regularizer requires ℓ(x) to be smooth, the activation function we use is the softplus function log(1 + exp(x)), a smooth version of ReLU. The baselines we compare our results against are adversarial training (ADV) [16], TRADES [27] and CURE [17]. We recreate these baselines from the literature using the same network architecture and activation function. The evaluation is done on the full test set of 10K images.
ImageNet: The perturbation radii considered are ε = 4/255 and ε = 16/255. The architecture used is ResNet-152, from [11]. We use softplus as the activation function. For ε = 4/255, the baselines we compare our results against are our recreated versions of ADV [16] and the denoising model (DENOISE) [25].7 For ε = 16/255, we compare LLR to the ADV [16] and DENOISE [25] networks published in the literature. Due to computational constraints, we limit ourselves to evaluating all models on the first 1K images of the test set.
To make sure that we have a close estimate of the true robustness, we evaluate all models on a wide range of attacks; these are described below.
7We attempted to use TRADES on ImageNet but did not manage to get competitive results; it is therefore omitted from the baselines.
5.1 Evaluation Setup
To accurately gauge the true robustness of our network, we tailor our attack to give the lowest possible adversarial accuracy. The two parts which we tune to get the optimal attack are the loss function for the attack and its corresponding optimization procedure. The loss functions used are described below; for the optimization procedure, please refer to Appendix F.1.
Loss Functions: The three loss functions we consider are summarized in Table 1. We use the difference between logits for the loss function rather than the cross-entropy loss, as we have empirically found the former to yield lower adversarial accuracy.
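As an illustration, one common form of such a logit-difference objective is sketched below (the exact variants used are those of Table 1, which is not reproduced here):

```python
import torch

def logit_margin(logits, y):
    """Logit-difference objective (one common variant): best wrong-class logit
    minus the true-class logit, which the attack maximizes."""
    true = logits.gather(1, y.view(-1, 1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, y.view(-1, 1), float('-inf'))   # exclude the true class
    return (masked.max(dim=1).values - true).mean()
```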
5.2 Results for Robustness
For CIFAR-10, the main adversarial accuracy results are given in Table 2. We compare LLR training to ADV [16], CURE [17] and TRADES [27], both with our re-implementations and the published models8. Note that our re-implementations using softplus activations perform at or above the published results for ADV, CURE and TRADES. This is largely due to the learning rate schedule used, which is similar to the one used by TRADES [27].
8Note the network published for TRADES [27] uses a Wide-ResNet-34-10, so it is not shown in the table; under the same rigorous evaluation we find that TRADES gets 84.91% nominal accuracy, 53.41% under Untargeted and 52.58% under Multi-Targeted. We have also run ℓ∞ DeepFool (not in the table as the attack is weaker), giving ADV(S): 64.29%, CURE(S): 58.73%, TRADES(S): 63.4%, LLR(S): 65.87%.
Interestingly, for adversarial training (ADV), using the Multi-Targeted attack for evaluation gives significantly lower adversarial accuracy than Untargeted: the accuracies obtained are 49.79% and 55.26%, respectively. Evaluation using the Multi-Targeted attack consistently gave the lowest adversarial accuracy throughout. Under this attack, the methods which stand out are LLR and TRADES. Using LLR we get state-of-the-art results with 52.81% adversarial accuracy.
For ImageNet, we compare against adversarial training (ADV) [16] and the denoising model (DENOISE) [25]. The results are shown in Table 3. For a perturbation radius of 4/255, LLR gets 47% adversarial accuracy under the Untargeted attack, notably higher than the 39.70% obtained via adversarial training. Moreover, LLR is trained with just two steps of PGD rather than the 30 steps used for adversarial training. The amount of computation needed for each method is further discussed in Sec 5.2.1.
Further shown in Table 3 are the results for ε = 16/255. We note a significant drop in nominal accuracy when we train with LLR to perturbation radius 16/255. When testing at a perturbation radius of 16/255, we also show that the adversarial accuracy under Untargeted is very poor (below 8%) for all methods. We speculate that this perturbation radius is too large for the robustness problem. While adversarial perturbations should be, by definition, imperceptible to the human eye, upon inspection of the images generated using an adversarial attack (see Fig F4) this assumption no longer holds true. The generated images appear to consist of object parts of other classes super-imposed onto the target image. This leads us to believe that a more fine-grained analysis of what should constitute "robustness for ImageNet" is an important topic for debate.
5.2.1 Runtime Speed
For ImageNet, we trained on 128 TPUv3 cores [9]; the total training wall time for the LLR network (ε = 4/255) is 7 hours for 110 epochs. Similarly, for the adversarially trained (ADV) networks the total wall time is 36 hours for 110 epochs. This is a 5× speed-up.
5.2.2 Accuracy Degradation: Strong vs Weak Evaluation
The resulting model trained using LLR degrades gracefully in terms of adversarial accuracy as we increase the strength of the attack, as shown in Fig 3. In particular, Fig 3a shows that, for CIFAR-10, when the attack changes from Untargeted to Multi-Targeted, LLR's accuracy remains similar, with only a 2.18% drop. This contrasts with adversarial training (ADV), where we see a 5.64% drop in accuracy. We also see similar trends in accuracy in Table 2. This could indicate that some level of obfuscation may be happening under standard adversarial training.
As we empirically observe that LLR evaluates similarly under weak and strong attacks, we hypothesize that this is because LLR explicitly linearizes the loss surface. An extreme case would be when the surface is completely linear – in this instance the optimal adversarial perturbation would be found with just one PGD step. Thus evaluation using a weak attack is often good enough to get an accurate gauge of how the model will perform under a stronger attack.
For ImageNet (see Fig 3b), the adversarial accuracy of the LLR-trained network remains significantly higher (by 7.5%) than that of the adversarially trained network when going from a weak to a stronger attack.
5.3 Resistance to Gradient Obfuscation
We use either the standard adversarial training objective (ADV-1, ADV-2) or the LLR objective (LLR-1, LLR-2), taking one or two steps of PGD to maximize each objective. To train LLR-1/2, we only optimize the local linearity γ(ε, x), i.e. µ in Eq. (8) is set to zero. We see that for adversarial training, as shown in Figs 4a and 4c, the loss surface becomes highly non-linear and jagged – in other words, obfuscated. Additionally, in this setting the adversarial accuracy under our strongest attack is 0% for both, see Table F3. In contrast, the loss surface is smooth when we train using LLR, as shown in Figs 4b and 4d. Further, Table F3 shows that we obtain an adversarial accuracy of 44.50% with the LLR-2 network under our strongest evaluation. We also evaluate the values of γ(ε, x) on the CIFAR-10 test set after these networks are trained; this is shown in Fig F3. The values of γ(ε, x) when we train with LLR using two steps of PGD are comparable to those of adversarial training with 20 steps of PGD. By comparison, adversarial training with two steps of PGD results in much larger values of γ(ε, x).
6 Conclusions
We show that, by promoting linearity, deep classification networks are less susceptible to gradient obfuscation, thus allowing us to take fewer gradient descent steps for the inner optimization. Our novel local linearity regularizer promotes locally linear behavior, as justified from a theoretical perspective. The resulting models achieve state-of-the-art adversarial robustness on the CIFAR-10 and ImageNet datasets, and can be trained 5× faster than with regular adversarial training.
Acknowledgements
We would like to acknowledge Jost Tobias Springenberg and Brendan O’Donoghue for careful reading of this manual script. We would also like to acknowledge Jonathan Uesato and Po-Sen Huang for the insightful discussions. | 1. What is the focus of the paper regarding local linearity and loss surfaces?
2. What are the strengths of the proposed LLC regularization technique?
3. Do you have any concerns about the presentation of the paper, particularly regarding the choice of the regularization parameter and the comparison with other works?
4. How do you assess the significance and impact of the proposed method on improving the performance and speed of neural network models?
5. Are there any questions or doubts you have after reading the review, such as the choice of evaluation metrics or the fairness of the comparisons in Table 2? | Review | Review
Originality: Starting from the gamma (local linearity) measure, the paper shows the importance of local linearity of the loss surface. Inspired by this, the authors propose the LLR regularization. The story in this paper is complete and convincing.

Quality: The submission is technically sound. The claims are well supported by the experiments, although for me the theoretical analysis in Proposition 4.1 is trivial by a simple use of Taylor expansion.

Clarity: The paper is well-written and the presentation is clear to me.

Significance: Finding better methods to train neural network models with improved robustness is an important research question. The paper goes further by proposing a new regularization technique, which improves both the performance (or achieves comparable performance on CIFAR-10) and the speed over prior work.

I also have some questions on the presentation of the paper:
1. It is not very convincing to me why using the difference between logits for the loss function yields lower adversarial accuracy than the cross-entropy loss, which has been widely used in various papers.
2. The paper does not show how to choose the regularization parameter.
3. In Table 2, it seems that TRADES achieves the highest natural accuracy (thus putting more weight on the accuracy for its regularization parameter). I am wondering how the authors tune the regularization parameter for TRADES. By putting more weight on the robustness, can TRADES outperform the proposed method?

==================
I have read the authors' rebuttal. The authors promise to clarify and include full sweep results for various baseline methods in a later version. I am looking forward to it, as I find the reported results in Table 2 a little strange. In particular, the natural accuracy of well-trained TRADES in many papers is ~84-85%, while the reported result in this paper is ~87-88%. So I guess the authors did not tune the regularization parameter of TRADES for its highest robustness (the authors could compare their method with the provided checkpoint in the official TRADES GitHub, as in footnote 8, but footnote 8 does not show the result of LLR for the Wide-ResNet-34-10 architecture). Thus, I still feel skeptical about the fairness of the comparisons in Table 2. Besides this, this is a good paper, so I am willing to vote for acceptance.
NIPS | Title
Adversarial Robustness through Local Linearization
Abstract
Adversarial training is an effective methodology to train deep neural networks which are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with `∞ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.
N/A
Adversarial training is an effective methodology to train deep neural networks which are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with `∞ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.
1 Introduction
In a seminal paper, Szegedy et al. [22] demonstrated that neural networks are vulnerable to visually imperceptible but carefully chosen adversarial perturbations which cause them to output incorrect predictions. After this revealing study, a flurry of research has been conducted with the focus of making networks robust against such adversarial perturbations [14, 16, 17, 25]. Concurrently, researchers devised stronger attacks that expose previously unknown vulnerabilities of neural networks [24, 4, 1, 3].
Of the many approaches proposed [19, 2, 6, 21, 15, 17], adversarial training [14, 16] is empirically the best performing algorithm to train networks robust to adversarial perturbations. However, the cost of adversarial training becomes prohibitive with growing model complexity and input dimensionality. This is primarily due to the cost of computing adversarial perturbations, which is incurred at each step of adversarial training. In particular, for each new mini-batch one must perform multiple iterations
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
of a gradient-based optimizer on the network’s inputs to find the perturbations.1 As each step of this optimizer requires a new backwards pass, the total cost of adversarial training scales as roughly the number of such steps. Unfortunately, effective adversarial training of ImageNet often requires large number of steps to avoid problems of gradient obfuscation [1, 24], making it significantly more expensive than conventional training.
One approach which can alleviate the cost of adversarial training is training against weaker adversaries that are cheaper to compute. For example, by taking fewer gradient steps to compute adversarial examples during training. However, this can produce models which are robust against weak attacks, but break down under strong attacks – often due to gradient obfuscation. In particular, one form of gradient obfuscation occurs when the network learns to fool a gradient based attack by making the loss surface highly convoluted and non-linear (see Fig 1), an effect which has also been observed by Papernot et al [18]. This non-linearity prevents gradient based optimization methods from finding an adversarial perturbation within a small number of iterations [4, 24]. In contrast, if the loss surface was linear in the vicinity of the training examples, which is to say well-predicted by local gradient information, gra-
dient obfuscation cannot occur. In this paper, we take up this idea and introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data. We call this regularizer the local linearity regularizer (LLR). Empirically, we find that networks trained with LLR exhibit far less gradient obfuscation, and are almost equally robust against strong attacks as they are against weak attacks. The main contributions of our paper are summarized below:
• We show that training with LLR is significantly faster than adversarial training, allowing us to train a robust ImageNet model with a 5× speed-up when training on 128 TPUv3 cores [9].
• We show that LLR-trained models exhibit higher robustness relative to adversarially trained models when evaluated under strong attacks. Adversarially trained models can exhibit a decrease in accuracy of 6% when increasing the attack strength at test time for CIFAR-10, whereas LLR shows only a decrease of 2%.
• We achieve new state-of-the-art results for adversarial accuracy against an untargeted white-box attack for ImageNet (with ε = 4/255; see footnote 2): 47%. Furthermore, we match state-of-the-art results for CIFAR-10 (with ε = 8/255): 52.81% (footnote 3).
• We perform a large-scale evaluation of existing methods for adversarially robust training under consistent, strong, white-box attacks. For this we recreate several baseline models from the literature, training them both for CIFAR-10 and ImageNet (where possible; see footnote 4).
2 Background and Related Work
We denote our classification function by f(x; θ) : x ↦ R^C, mapping input features x to the output logits for classes in set C, i.e. p_i(y|x; θ) = exp(f_i(x; θ)) / Σ_j exp(f_j(x; θ)), with θ being the model parameters and y being the label. Adversarial robustness for f is defined as follows: a network is robust to adversarial perturbations of magnitude ε at input x if and only if

argmax_{i∈C} f_i(x; θ) = argmax_{i∈C} f_i(x + δ; θ)   ∀δ ∈ B_p(ε) = {δ : ‖δ‖_p ≤ ε}. (1)
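To make this definition concrete, here is a minimal sketch of how one might probe the condition in Eq (1) empirically for p = ∞ in PyTorch. The helper name `is_robust_at` and the random-sampling probe are our own illustrative choices, not the paper's evaluation protocol (which relies on the much stronger gradient-based attacks of Sec 5.1):

```python
import torch

def is_robust_at(f, x, eps, n_samples=256):
    # Probe the condition of Eq (1) for p = infinity: the predicted class on
    # x + delta must match the predicted class on x for every ||delta||_inf <= eps.
    # Random sampling can only falsify robustness, never certify it; a
    # gradient-based attack (Sec 5.1) is a far stronger test.
    with torch.no_grad():
        clean_pred = f(x).argmax(dim=-1)
        for _ in range(n_samples):
            delta = torch.empty_like(x).uniform_(-eps, eps)  # random point in B(eps)
            if (f(x + delta).argmax(dim=-1) != clean_pred).any():
                return False  # found a label-flipping perturbation
    return True
```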
Footnote 1: While computing the globally optimal adversarial example is NP-hard [12], gradient descent with several random restarts was empirically shown to be quite effective at computing adversarial perturbations of sufficient quality.
Footnote 2: This means that every pixel is perturbed independently by up to 4 units up or down on a scale where pixels take values ranging between 0 and 255.
Footnote 3: We note that TRADES [27] gets 55% against a much weaker attack; under our strongest attack, it gets 52.5%.
Footnote 4: The baselines created are adversarial training, TRADES, and CURE [17]. Contrary to CIFAR-10, we are currently unable to achieve consistent and competitive results on ImageNet at ε = 4/255 using TRADES.
In this paper, we focus on p = ∞, and we use B(ε) to denote B_∞(ε) for brevity. Given that the dataset is drawn from distribution D, the standard method to train a classifier f is empirical risk minimization (ERM), defined by min_θ E_{(x,y)∼D}[ℓ(x; y, θ)]. Here, ℓ(x; y, θ) is the standard cross-entropy loss function,

ℓ(x; y, θ) = −yᵀ log(p(x; θ)), (2)

where p_i(x; θ) is defined as above and y is a one-hot vector representing the class label. While ERM is effective at training neural networks that perform well on held-out test data, the accuracy on the test set goes to zero under adversarial evaluation. This is a result of a distribution shift in the data induced by the attack. To rectify this, adversarial training [17, 14] seeks to perturb the data distribution by performing adversarial attacks during training. More concretely, adversarial training minimizes the loss function
E_{(x,y)∼D}[ max_{δ∈B(ε)} ℓ(x + δ; y, θ) ], (3)
where the inner maximization, max_{δ∈B(ε)} ℓ(x + δ; y, θ), is typically performed via a fixed number of steps of a gradient-based optimization method. One such method is Projected Gradient Descent (PGD), which performs the following gradient step (ascending the loss, since the inner problem is a maximization):

δ ← Proj(δ + η∇_δ ℓ(x + δ; y, θ)), (4)

where Proj(x) = argmin_{ξ∈B(ε)} ‖x − ξ‖. Another popular gradient-based method is to use the sign of the gradient [8]. The cost of solving Eq (3) is dominated by the cost of solving the inner maximization problem. Thus, the inner maximization should be performed efficiently to reduce the overall cost of training. A naive approach is to reduce the number of gradient steps performed by the optimization procedure. Generally, the attack is weaker when we take fewer steps, and if the attack is too weak, the trained networks often display gradient obfuscation, as shown in Fig 1.
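For concreteness, the following is a minimal PyTorch sketch of this inner maximization, combining the projected step of Eq (4) with the sign-of-gradient variant [8]; the function name and hyper-parameters are illustrative assumptions, and clipping to the valid pixel range is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def pgd_perturbation(model, x, y, eps, step_size, n_steps):
    # Approximate argmax_{delta in B(eps)} l(x + delta; y, theta) by projected
    # gradient ascent. For the l_inf ball, Proj onto B(eps) is an elementwise clamp.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # ascend the loss (sign-of-gradient step)
            delta.clamp_(-eps, eps)           # Proj onto B_inf(eps)
    return delta.detach()
```

A call such as `pgd_perturbation(model, x, y, eps=8/255, step_size=2/255, n_steps=20)` then instantiates the inner maximization of Eq (3); the specific values are illustrative, not the paper's settings.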
Since the introduction of adversarial training, a corpus of work has researched alternative ways of making networks robust. One such approach is the TRADES method [27], a form of regularization that optimizes the trade-off between robustness and accuracy, as many studies have observed these two quantities to be at odds with each other [23]. Others, such as the work by Ding et al. [7], adaptively increase the perturbation radius by finding the minimal-length perturbation that changes the output label. Some have proposed architectural changes that promote adversarial robustness, such as the "denoise" model [25] for ImageNet.
The work presented here is a regularization technique that encourages the loss function to be well approximated by its linear Taylor expansion in a sufficiently small neighbourhood. There has been prior work that uses gradient information as a form of regularization [20, 17]. The work presented in this paper is closely related to the paper by Moosavi et al. [17], which highlights that adversarial training reduces the curvature of ℓ(x; y, θ) with respect to x. Leveraging an empirical observation (the highest curvature is along the direction ∇_x ℓ(x; y, θ)), they further propose an algorithm to mimic the effects of adversarial training on the loss surface. The algorithm results in performance comparable to adversarial training at a significantly lower cost.
3 Motivating the Local Linearity Regularizer
As described above, the cost of adversarial training is dominated by solving the inner maximization problem max_{δ∈B(ε)} ℓ(x + δ). Throughout, we abbreviate ℓ(x; y, θ) as ℓ(x). We can reduce this cost simply by reducing the number of PGD steps (as defined in Eq (4)) taken to solve max_{δ∈B(ε)} ℓ(x + δ). To motivate the local linearity regularizer (LLR), we start with an empirical analysis of how the behavior of adversarial training changes as we increase the number of PGD steps used during training. We find that the loss surface becomes increasingly linear (as captured by the local linearity measure defined below) as we increase the number of PGD steps.
3.1 Local Linearity Measure
Suppose that we are given an adversarial perturbation δ ∈ B(ε). The corresponding adversarial loss is given by ℓ(x + δ). If our loss surface is smooth and approximately linear, then ℓ(x + δ) is well approximated by its first-order Taylor expansion ℓ(x) + δᵀ∇_x ℓ(x). In other words, the absolute difference between these two values,

g(δ; x) = |ℓ(x + δ) − ℓ(x) − δᵀ∇_x ℓ(x)|, (5)

is an indicator of how linear the surface is. Consequently, we consider the quantity

γ(ε, x) = max_{δ∈B(ε)} g(δ; x), (6)

to be a measure of how linear the surface is within a neighbourhood B(ε). We call this quantity the local linearity measure.
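As a point of reference, a sketch of how γ(ε, x) can be estimated in practice is given below; it mirrors the measurement protocol reported later in the paper (footnote 5: 50 PGD steps with Adam at step size 0.1), but the exact implementation, the random initialization of δ, and the single-example assumption are our own:

```python
import torch
import torch.nn.functional as F

def local_linearity(model, x, y, eps, n_steps=50, lr=0.1):
    # Estimate gamma(eps, x) = max_{delta in B(eps)} g(delta; x) of Eq (6),
    # with g(delta; x) = |l(x + delta) - l(x) - delta^T grad_x l(x)| from Eq (5),
    # by projected gradient ascent on g over delta with Adam (single example).
    x = x.clone().requires_grad_(True)
    loss_x = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(loss_x, x)
    x, loss_x, grad_x = x.detach(), loss_x.detach(), grad_x.detach()
    delta = ((torch.rand_like(x) * 2 - 1) * eps).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(n_steps):
        g = (F.cross_entropy(model(x + delta), y)
             - loss_x - (delta * grad_x).sum()).abs()
        delta.grad, = torch.autograd.grad(-g, delta)  # Adam minimizes, so descend -g
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back onto B(eps)
    with torch.no_grad():
        return (F.cross_entropy(model(x + delta), y)
                - loss_x - (delta * grad_x).sum()).abs()
```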
3.2 Empirical Observations on Adversarial Training
We measure γ(ε, x) for networks trained with adversarial training on CIFAR-10, where the inner maximization max_{δ∈B(ε)} ℓ(x + δ) is performed with 1, 2, 4, 8, and 16 steps of PGD. γ(ε, x) is measured throughout training on the training set (footnote 5). The architecture used is a wide residual network [26] with depth 28 and width 10 (Wide-ResNet-28-10). The results are shown in Figs 2a and 2b. Fig 2a shows that when we train with one or two steps of PGD for the inner maximization, the local loss surface is extremely non-linear at the end of training. An example visualization of such a loss surface is given in Fig A1a. However, when we train with four or more steps of PGD for the inner maximization, the surface is relatively well approximated by ℓ(x) + δᵀ∇_x ℓ(x), as shown in Fig 2b. An example of the loss surface is shown in Fig A1b. For the adversarial accuracy of the networks, see Table A1.
4 Local Linearity Regularizer (LLR)
From the section above, we make the empirical observation that the local linearity measure γ(ε, x) decreases as we train with stronger attacks (footnote 6). In this section, we give some theoretical justification for why local linearity γ(ε, x) correlates with adversarial robustness, and derive a regularizer from the local linearity measure that can be used for training robust models.
4.1 Local Linearity Upper Bounds Adversarial Loss
The following proposition establishes that the adversarial loss ℓ(x + δ) is upper bounded by the local linearity measure, plus the change in the loss as predicted by the gradient (which is given by |δᵀ∇_x ℓ(x)|).

Proposition 4.1. Consider a loss function ℓ(x) that is once-differentiable, and a local neighbourhood defined by B(ε). Then for all δ ∈ B(ε),

|ℓ(x + δ) − ℓ(x)| ≤ |δᵀ∇_x ℓ(x)| + γ(ε, x). (7)

Footnote 5: To measure γ(ε, x), we find max_{δ∈B(ε)} g(δ; x) with 50 steps of PGD, using Adam as the optimizer and 0.1 as the step size.
Footnote 6: Here, we mean an increase in the number of PGD steps for the inner maximization max_{δ∈B(ε)} ℓ(x + δ).
See Appendix B for the proof.
From Eq (7) it is clear that the adversarial loss tends to ℓ(x), i.e., ℓ(x + δ) → ℓ(x), as both |δᵀ∇_x ℓ(x)| → 0 and γ(ε, x) → 0 for all δ ∈ B(ε). And assuming ℓ(x + δ) ≥ ℓ(x), one also has the upper bound ℓ(x + δ) ≤ ℓ(x) + |δᵀ∇_x ℓ(x)| + γ(ε, x).
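As a quick sanity check of the proposition (our own illustration, not from the paper), the bound can be verified numerically on a toy smooth loss:

```python
import torch
import torch.nn.functional as F

# Toy numerical check of Proposition 4.1: for the smooth scalar loss
# l(x) = softplus(w . x), verify |l(x+d) - l(x)| <= |d^T grad_x l(x)| + g(d; x)
# for random d in B(eps); maximizing g over the ball then yields Eq (7).
torch.manual_seed(0)
w = torch.randn(10)
loss = lambda x: F.softplus(w @ x)

x = torch.randn(10, requires_grad=True)
grad, = torch.autograd.grad(loss(x), x)
eps = 0.1
for _ in range(5):
    d = torch.empty(10).uniform_(-eps, eps)
    lhs = (loss(x + d) - loss(x)).abs()
    g = (loss(x + d) - loss(x) - d @ grad).abs()
    rhs = (d @ grad).abs() + g
    assert lhs <= rhs + 1e-6  # holds by the triangle inequality
    print(f"|l(x+d) - l(x)| = {lhs.item():.4f}  <=  {rhs.item():.4f}")
```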
4.2 Local Linearity Regularization (LLR)
Following the analysis above, we propose the following objective for adversarially robust training:

L(D) = E_D[ ℓ(x) + λγ(ε, x) + µ|δᵀ_LLR ∇_x ℓ(x)| ], (8)

where the last two terms together form the LLR penalty, λ and µ are hyper-parameters to be optimized, and δ_LLR = argmax_{δ∈B(ε)} g(δ; x) (recall the definition of g(δ; x) from Eq (5)). Concretely, we are trying to find the point δ_LLR in B(ε) where the linear approximation ℓ(x) + δᵀ∇_x ℓ(x) is maximally violated. To train, we penalize both its linear violation γ(ε, x) = |ℓ(x + δ_LLR) − ℓ(x) − δᵀ_LLR ∇_x ℓ(x)| and the gradient magnitude term |δᵀ_LLR ∇_x ℓ(x)|, as required by the above proposition. We note that, analogous to adversarial training, LLR requires an inner optimization to find δ_LLR, performed via gradient-based optimization. However, as we will show in the experiments, far fewer optimization steps are required for the overall scheme to be effective. Pseudo-code for training with this regularizer is given in Appendix E.
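Since Appendix E is not reproduced in this extraction, the following is a hedged sketch of one way to implement Eq (8) in PyTorch; the helper names, the random initialization of δ, and the inner step count and size are our assumptions. Note that the |δᵀ_LLR ∇_x ℓ(x)| term requires differentiating through ∇_x ℓ(x), i.e., double back-propagation:

```python
import torch
import torch.nn.functional as F

def llr_loss(model, x, y, eps, lam, mu, n_inner=2, step=0.1):
    # --- inner maximization: a few projected ascent steps on g(delta; x) ---
    x_d = x.detach().clone().requires_grad_(True)
    l0 = F.cross_entropy(model(x_d), y)
    g0, = torch.autograd.grad(l0, x_d)            # grad_x l(x), held fixed here
    l0, g0 = l0.detach(), g0.detach()
    delta = ((torch.rand_like(x) * 2 - 1) * eps).requires_grad_(True)
    for _ in range(n_inner):
        g = (F.cross_entropy(model(x + delta), y) - l0 - (delta * g0).sum()).abs()
        dg, = torch.autograd.grad(g, delta)
        with torch.no_grad():
            delta += step * dg.sign()
            delta.clamp_(-eps, eps)               # keep delta_LLR in B(eps)
    delta = delta.detach()
    # --- assemble Eq (8); create_graph so the LLR terms train theta ---
    x_t = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_t), y)
    grad_x, = torch.autograd.grad(loss, x_t, create_graph=True)
    gamma = (F.cross_entropy(model(x_t + delta), y)
             - loss - (delta * grad_x).sum()).abs()
    return loss + lam * gamma + mu * (delta * grad_x).sum().abs()
```

In a training loop, `llr_loss(model, x, y, eps=8/255, lam=4.0, mu=3.0).backward()` would then drive a standard optimizer step; the λ and µ values here are placeholders, not the paper's tuned settings.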
4.3 Local Linearity Measure γ(ε, x) Bounds the Adversarial Loss by Itself
Interestingly, under certain reasonable approximations and standard choices of loss functions, we can bound |δᵀ∇_x ℓ(x)| in terms of γ(ε, x); see Appendix C for details. Consequently, the bound in Eq (7) implies that minimizing γ(ε, x) (along with the nominal loss ℓ(x)) is sufficient to minimize the adversarial loss ℓ(x + δ). This prediction is confirmed by our experiments. However, our experiments also show that including |δᵀ∇_x ℓ(x)| in the objective along with ℓ(x) and γ(ε, x) works better in practice on certain datasets, especially ImageNet. See Appendix F.3 for details.
5 Experiments and Results
We perform experiments using LLR on both the CIFAR-10 [13] and ImageNet [5] datasets. We show that LLR attains state-of-the-art adversarial accuracy on CIFAR-10 (at ε = 8/255) and ImageNet (at ε = 4/255) evaluated under a strong adversarial attack. Moreover, we show that as the attack strength increases, the degradation in adversarial accuracy is more graceful for networks trained using LLR than for those trained with standard adversarial training. Further, we demonstrate that training using LLR is 5× faster for ImageNet. Finally, we show that, by linearizing the loss surface, models are less prone to gradient obfuscation.
CIFAR-10: The perturbation radius we examine is ε = 8/255, and the model architectures we use are Wide-ResNet-28-8 and Wide-ResNet-40-8 [26]. Since the validity of our regularizer requires ℓ(x) to be smooth, the activation function we use is the softplus function, log(1 + exp(x)), which is a smooth version of ReLU. The baselines we compare our results against are adversarial training (ADV) [16], TRADES [27], and CURE [17]. We recreate these baselines from the literature using the same network architecture and activation function. The evaluation is done on the full test set of 10K images.
ImageNet: The perturbation radii considered are ε = 4/255 and ε = 16/255. The architecture used is ResNet-152 from [11]. We use softplus as the activation function. For ε = 4/255, the baselines we compare our results against are our recreated versions of ADV [16] and the denoising model (DENOISE) [25] (footnote 7). For ε = 16/255, we compare LLR to the ADV [16] and DENOISE [25] networks published in the literature. Due to computational constraints, we limit ourselves to evaluating all models on the first 1K images of the test set.
To make sure that we have a close estimate of the true robustness, we evaluate all the models on a wide range of attacks; these are described below.
Footnote 7: We attempted to use TRADES on ImageNet but did not manage to get competitive results; it is therefore omitted from the baselines.
5.1 Evaluation Setup
To accurately gauge the true robustness of our network, we tailor our attack to give the lowest possible adversarial accuracy. The two parts we tune to obtain the optimal attack are the loss function for the attack and its corresponding optimization procedure. The loss functions used are described below; for the optimization procedure, please refer to Appendix F.1.
Loss Functions: The three loss functions we consider are summarized in Table 1. We use the difference between logits for the attack loss, rather than the cross-entropy loss, as we have empirically found the former to yield lower adversarial accuracy.
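Table 1 itself is not reproduced in this extraction. As a point of reference, the canonical "difference between logits" objective is the margin loss sketched below; this is a representative instance of such a loss, not necessarily the paper's exact definition:

```python
import torch
import torch.nn.functional as F

def logit_margin_loss(logits, y):
    # A representative "difference between logits" attack objective:
    # max_{j != y} f_j(x) - f_y(x). Maximizing it over delta pushes the
    # strongest incorrect class above the true class.
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)           # f_y(x)
    mask = F.one_hot(y, num_classes=logits.size(1)).bool()
    runner_up = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return (runner_up - true).mean()
```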
5.2 Results for Robustness
For CIFAR-10, the main adversarial accuracy results are given in Table 2. We compare LLR training to ADV [16], CURE [17] and TRADES [27], both with our re-implementations and with the published models (footnote 8). Note that our re-implementations using softplus activations perform at or above the published results for ADV, CURE, and TRADES. This is largely due to the learning rate schedule used, which is similar to the one used by TRADES [27].
Footnote 8: Note that the network published for TRADES [27] uses a Wide-ResNet-34-10, so it is not shown in the table; under the same rigorous evaluation, TRADES gets 84.91% nominal accuracy, 53.41% under Untargeted, and 52.58% under Multi-Targeted. We have also run ℓ∞ DeepFool (not in the table, as the attack is weaker), giving ADV(S): 64.29%, CURE(S): 58.73%, TRADES(S): 63.4%, LLR(S): 65.87%.
Interestingly, for adversarial training (ADV), evaluating with the Multi-Targeted attack gives significantly lower adversarial accuracy than with Untargeted: the accuracies obtained are 49.79% and 55.26%, respectively. Evaluation using the Multi-Targeted attack consistently gave the lowest adversarial accuracy throughout. Under this attack, the methods which stand out amongst the rest are LLR and TRADES. Using LLR, we get state-of-the-art results with 52.81% adversarial accuracy.
For ImageNet, we compare against adversarial training (ADV) [16] and the denoising model (DENOISE) [25]. The results are shown in Table 3. For a perturbation radius of 4/255, LLR gets 47% adversarial accuracy under the Untargeted attack, which is notably higher than the 39.70% obtained via adversarial training. Moreover, LLR is trained with just two steps of PGD rather than the 30 steps used for adversarial training. The amount of computation needed for each method is further discussed in Sec 5.2.1.
Further shown in Table 3 are the results for ε = 16/255. We note a significant drop in nominal accuracy when we train with LLR at perturbation radius 16/255. When testing at perturbation radius 16/255, we also show that the adversarial accuracy under Untargeted is very poor (below 8%) for all methods. We speculate that this perturbation radius is too large for the robustness problem: adversarial perturbations should, by definition, be imperceptible to the human eye, but upon inspection of the images generated using an adversarial attack (see Fig F4), this assumption no longer holds. The images generated appear to consist of object parts of other classes superimposed onto the target image. This leads us to believe that a more fine-grained analysis of what should constitute "robustness for ImageNet" is an important topic for debate.
5.2.1 Runtime Speed
For ImageNet, we trained on 128 TPUv3 cores [9]; the total training wall time for the LLR network (ε = 4/255) is 7 hours for 110 epochs. For the adversarially trained (ADV) networks, the total wall time is 36 hours for 110 epochs. This is a 5× speed-up.
5.2.2 Accuracy Degradation: Strong vs Weak Evaluation
The model trained using LLR degrades gracefully in terms of adversarial accuracy as we increase the strength of the attack, as shown in Fig 3. In particular, Fig 3a shows that, for CIFAR-10, when the attack changes from Untargeted to Multi-Targeted, LLR's accuracy remains similar, with only a 2.18% drop; in contrast, adversarial training (ADV) sees a 5.64% drop. We also see similar trends in accuracy in Table 2. This could indicate that some level of obfuscation may be happening under standard adversarial training.
As we empirically observe that LLR evaluates similarly under weak and strong attacks, we hypothesize that this is because LLR explicitly linearizes the loss surface. An extreme case would be when the surface is completely linear: in this instance, the optimal adversarial perturbation would be found with just one PGD step. Thus, evaluation using a weak attack is often good enough to give an accurate gauge of how the model will perform under a stronger attack.
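The extreme case admits a quick numerical illustration (ours, not the paper's): if ℓ(x + δ) = ℓ(x) + δᵀg exactly, then over the ℓ∞ ball the maximizer has the closed form δ* = ε · sign(g), i.e., a single signed-gradient step is optimal:

```python
import torch

# For an exactly linear loss, l(x + d) = l(x) + d @ g, the maximizer over the
# l_inf ball is d* = eps * sign(g): one signed-gradient (FGSM-style) step.
torch.manual_seed(0)
g, eps = torch.randn(8), 0.03
d_star = eps * g.sign()
random_ds = torch.empty(10000, 8).uniform_(-eps, eps)  # random points in the ball
assert (random_ds @ g).max() <= d_star @ g + 1e-6
print(f"closed-form gain {d_star @ g:.4f} beats best of 10k random points "
      f"{(random_ds @ g).max():.4f}")
```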
For ImageNet (see Fig 3b), the adversarial accuracy of the network trained using LLR remains significantly higher (by 7.5%) than that of the adversarially trained network when going from a weak to a stronger attack.
5.3 Resistance to Gradient Obfuscation
We use either the standard adversarial training objective (ADV-1, ADV-2) or the LLR objective (LLR-1, LLR-2), taking one or two steps of PGD to maximize each objective. To train LLR-1/2, we only optimize the local linearity γ(ε, x), i.e., µ in Eq. (8) is set to zero. We see that for adversarial training, as shown in Figs 4a and 4c, the loss surface becomes highly non-linear and jagged, in other words obfuscated. Additionally, in this setting the adversarial accuracy under our strongest attack is 0% for both; see Table F3. In contrast, the loss surface is smooth when we train using LLR, as shown in Figs 4b and 4d. Further, Table F3 shows that we obtain an adversarial accuracy of 44.50% with the LLR-2 network under our strongest evaluation. We also evaluate the values of γ(ε, x) on the CIFAR-10 test set after these networks are trained; this is shown in Fig F3. The values of γ(ε, x) for LLR trained with two steps of PGD are comparable to those for adversarial training with 20 steps of PGD. By comparison, adversarial training with two steps of PGD results in much larger values of γ(ε, x).
6 Conclusions
We show that, by promoting linearity, deep classification networks are less susceptible to gradient obfuscation, thus allowing us to take fewer gradient steps for the inner optimization. Our novel linearity regularizer promotes locally linear behavior, as justified from a theoretical perspective. The resulting models achieve state-of-the-art adversarial robustness on the CIFAR-10 and ImageNet datasets, and can be trained 5× faster than with regular adversarial training.
Acknowledgements
We would like to acknowledge Jost Tobias Springenberg and Brendan O’Donoghue for careful reading of this manuscript. We would also like to acknowledge Jonathan Uesato and Po-Sen Huang for the insightful discussions. | 1. How does the paper contribute to robust training, and what is the novelty of its approach?
2. What are the strengths of the proposed regularizer, particularly in terms of its ability to improve robustness?
3. What are the weaknesses or limitations of the paper's approach, especially regarding its comparison with previous works and other attack methods?
4. Can the authors provide further clarification or justification for their choices and design decisions in the paper? | Review | Review
This paper provides a new regularizer for robust training. The empirical results show the efficiency of the proposed method. But there are some places the authors should further clarify: 1. Previous work shows that gradient obfuscation is the mechanism behind many failed defenses, but no work verifies that preventing gradient obfuscation leads to better robustness. 2. In Eq (7), the authors give an upper bound on the loss gap and minimize that upper bound in the training objective. I wonder why minimizing the upper bound would be better than directly minimizing the loss gap, as basic PGD training does. 3. The authors should report results on more diverse attacks, like DeepFool, which is more adaptive to linear loss functions.
NIPS | Title
Adversarial Robustness through Local Linearization
Abstract
Adversarial training is an effective methodology to train deep neural networks which are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and the number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than with adversarial training. Using this regularizer, we exceed the current state of the art and achieve 47% adversarial accuracy for ImageNet with ℓ∞ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state-of-the-art results for CIFAR-10 at 8/255.
| 1. What is the main contribution of the paper regarding adversarial training?
2. How does the proposed method differ from traditional adversarial training approaches?
3. What are the theoretical guarantees provided by the authors for their approach?
4. How do the authors incorporate regularizers into their adversarial training method?
5. Can you explain the reasoning behind using the highest deviation from the tangent hyperplane as the perturbation in one of the regularizers?
6. How effective is the proposed method compared to other state-of-the-art methods in terms of robustness and efficiency? | Review | Review
The authors proposed to minimize a local linearity measure of the loss function (defined in Eq. (6) as the maximal difference between the tangent hyperplane and the loss function) along with the empirical loss in adversarial training. By doing so, one could avoid the so-called "gradient obfuscation" problem associated with using few iterations of gradient-based optimization for the inner maximization in adversarial training. This leads to a significant speedup of adversarial training while achieving comparable or better robustness compared to PGA-based adversarial training. The main theoretical result is presented in Prop. 4.1, where the adversarial change in the loss function is shown to be upper bounded by the sum of the defined local linearity measure and the absolute inner product of the perturbation and the loss gradient w.r.t. the input. The authors then suggest using these two terms as regularizers in adversarial training of the model (Eq. (8)). For the first term, as we seek to minimize <\delta, \grad_x l(x)> for *all* perturbations \delta in the local neighborhood B_\epsilon, we should naturally aim at minimizing ||\grad_x l(x)||_2. However, the authors proposed to minimize <\delta_LLR, \grad_x l(x)> instead, where \delta_LLR is the perturbation that yields the highest deviation from the tangent hyperplane. So the logic of this term is not clear to me. The second regularizer is the measure of deviation from linearity, which is computed in the same way as the PGA iterative approximation to the inner maximization of adversarial training, but with far fewer iterations. The empirical results on the CIFAR-10 and ImageNet datasets support the claims under a rich variety of attacks.
NIPS | Title
TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning
Abstract
On-device learning enables edge devices to continually adapt AI models to new data, which requires a small memory footprint to fit the tight memory constraints of edge devices. Existing work solves this problem by reducing the number of trainable parameters. However, this does not directly translate to memory saving, since the major bottleneck is the activations, not the parameters. In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning. TinyTL freezes the weights and learns only the bias modules, so there is no need to store the intermediate activations. To maintain the adaptation capacity, we introduce a new memory-efficient bias module, the lite residual module, which refines the feature extractor by learning small residual feature maps while adding only 3.8% memory overhead. Extensive experiments show that TinyTL significantly saves memory (up to 6.5×) with little accuracy loss compared to fine-tuning the full network. Compared to fine-tuning the last layer, TinyTL provides significant accuracy improvements (up to 34.1%) with little memory overhead. Furthermore, combined with feature extractor adaptation, TinyTL provides 7.3-12.9× memory saving without sacrificing accuracy compared to fine-tuning the full Inception-V3.
1 Introduction
Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices; footnote 1) have become ubiquitous in our daily lives. These devices keep collecting new and sensitive data through their sensors every day, while being expected to provide high-quality and customized services without sacrificing privacy (footnote 2). This poses new challenges for efficient AI systems that must not only run inference but also continually fine-tune pre-trained models on newly collected data (i.e., on-device learning).
Though on-device learning can enable many appealing applications, it is an extremely challenging problem. First, edge devices are memory-constrained. For example, a Raspberry Pi 1 Model A only has 256MB of memory, which is sufficient for inference but by far insufficient for training (Figure 1, left), even with a lightweight neural network architecture (MobileNetV2 [1]). Furthermore, the memory is shared by various on-device applications (e.g., other deep learning models) and the operating system. A single application may only be allocated a small fraction of the total memory, which makes this challenge more critical. Second, edge devices are energy-constrained. DRAM access consumes two orders of magnitude more energy than on-chip SRAM access. The large memory footprint of activations cannot fit into the limited on-chip SRAM and thus has to access DRAM. For instance, the training memory of MobileNetV2, under batch size 16, is close to 1GB, which is far larger than the SRAM size of an AMD EPYC CPU (footnote 3) (Figure 1, left), not to mention lower-end
Footnote 1: https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/
Footnote 2: https://ec.europa.eu/info/law/law-topic/data-protection_en
Footnote 3: https://www.amd.com/en/products/cpu/amd-epyc-7302
[Figure 1. Left: the training memory footprint of MobileNetV2 grows with batch size and quickly exceeds the on-chip SRAM of a TPU (28MB) and the DRAM of a Raspberry Pi 1 Model A (256MB), while the inference footprint at batch size 1 stays around 20MB; a 32-bit DRAM access (640 pJ) is 128× more energy-expensive than a 32-bit SRAM access (5 pJ), versus 3.7 pJ for a 32-bit float multiply. Right: activations, not parameters, are the main bottleneck: MobileNetV2-1.4 has 4.3× fewer parameters than ResNet-50 (24MB vs. 102MB) but nearly the same activation size (only 1.1× smaller), so the main bottleneck does not improve much.]
edge platforms. If the training memory can fit on-chip SRAM, it will drastically improve the speed and energy efficiency.
There are plenty of efficient inference techniques that reduce the number of trainable parameters and the computation FLOPs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; however, parameter-efficient or FLOPs-efficient techniques do not directly save training memory. It is the activations that bottleneck the training memory, not the parameters. For example, Figure 1 (right) compares ResNet-50 and MobileNetV2-1.4. In terms of parameter size, MobileNetV2-1.4 is 4.3× smaller than ResNet-50. However, in terms of training activation size, MobileNetV2-1.4 is almost the same as ResNet-50 (only 1.1× smaller), leading to little memory reduction. It is essential to reduce the size of the intermediate activations required by back-propagation, which is the key memory bottleneck for efficient on-device training.
In this paper, we propose Tiny-Transfer-Learning (TinyTL) to address these challenges. By analyzing the memory footprint during the backward pass, we notice that the intermediate activations (the main bottleneck) are only needed when updating the weights, not the biases (Eq. 2). Inspired by this finding, we propose to freeze the weights of the pre-trained feature extractor and only update the biases to reduce the memory footprint (Figure 2b). To compensate for the capacity loss, we introduce a memory-efficient bias module, called lite residual module, which improves the model capacity by refining the intermediate feature maps of the feature extractor (Figure 2c). Meanwhile, we aggressively shrink the resolution and width of the lite residual module to have a small memory overhead (only 3.8%). Extensive experiments on 9 image classification datasets with the same pre-trained model (ProxylessNAS-Mobile [11]) demonstrate the effectiveness of TinyTL compared to previous transfer learning methods. Further, combined with a pre-trained once-for-all network [10], TinyTL can select a specialized sub-network as the feature extractor for each transfer dataset (i.e., feature extractor adaptation): given a more difficult dataset, a larger sub-network is selected, and vice versa. TinyTL achieves the same level of (or even higher) accuracy compared to fine-tuning the full Inception-V3 while reducing the training memory footprint by up to 12.9×. Our contributions can be summarized as follows:
• We propose TinyTL, a novel transfer learning method to reduce the training memory footprint by an order of magnitude for efficient on-device learning. We systematically analyze the memory of training and find the bottleneck comes from updating the weights, not biases (assume ReLU activation).
• We also introduce the lite residual module, a memory-efficient bias module to improve the model capacity with little memory overhead.
• Extensive experiments on transfer learning tasks show that our method is highly memory-efficient and effective. It reduces the training memory footprint by up to 12.9× without sacrificing accuracy.
2 Related Work
Efficient Inference Techniques. Improving the inference efficiency of deep neural networks on resource-constrained edge devices has recently drawn extensive attention. Starting from [4, 5, 12, 13,
14], one line of research focuses on compressing pre-trained neural networks, including i) network pruning that removes less-important units [4, 15] or channels [16, 17]; ii) network quantization that reduces the bitwidth of parameters [5, 18] or activations [19, 20]. However, these techniques cannot handle the training phase, as they rely on a well-trained model on the target task as the starting point.
Another line of research focuses on lightweight neural architectures by either manual design [1, 2, 3, 21, 22] or neural architecture search [6, 8, 11, 23]. These lightweight neural networks provide highly competitive accuracy [10, 24] while significantly improving inference efficiency. However, concerning the training memory efficiency, key bottlenecks are not solved: the training memory is dominated by activations, not parameters (Figure 1).
There are also some non-deep learning methods [25, 26, 27] that are designed for efficient inference on edge devices. These methods are suitable for handling simple tasks like MNIST. However, for more complicated tasks, we still need the representation capacity of deep neural networks.
Memory Footprint Reduction. Researchers have been seeking ways to reduce the training memory footprint. One typical approach is to re-compute discarded activations during the backward pass [28, 29]. This approach reduces memory usage at the cost of a large computation overhead, and is thus not preferred for edge devices. Layer-wise training [30] can also reduce the memory footprint compared to end-to-end training, but it cannot achieve the same level of accuracy as end-to-end training. Another representative approach is activation pruning [31], which builds a dynamic sparse computation graph to prune activations during training. Similarly, [32] proposes to reduce the bitwidth of training activations by introducing new reduced-precision floating-point formats. Besides reducing the training memory cost, some techniques focus on reducing the peak inference memory cost, such as RNNPool [33] and MemNet [34]. Our method is orthogonal to these techniques and can be combined with them to further reduce the memory footprint.
Transfer Learning. Neural networks pre-trained on large-scale datasets (e.g., ImageNet [35]) are widely used as fixed feature extractors for transfer learning, in which case only the last layer needs to be fine-tuned [36, 37, 38, 39]. This approach does not require storing the intermediate activations of the feature extractor and is thus memory-efficient. However, its capacity is limited, resulting in poor accuracy, especially on datasets [40, 41] whose distribution is far from ImageNet (e.g., only 45.9% Aircraft top1 accuracy achieved by Inception-V3 [42]). Alternatively, fine-tuning the full network can achieve better accuracy [43, 44], but it requires a vast memory footprint and is hence unsuitable for training on edge devices. Recently, [45, 46] propose to only update the parameters of the batch normalization (BN) [47] layers, which greatly reduces the number of trainable parameters. Unfortunately, parameter-efficiency does not translate to memory-efficiency: a large amount of memory (e.g., 326MB under batch size 8) is still required to store the input activations of the BN layers (Table 3). Additionally, the accuracy of this approach is still much worse than fine-tuning the full network (70.7% vs. 85.5%; Table 3). One can also partially fine-tune some layers, but how many layers to select remains ad hoc. This paper provides a systematic approach to save memory without losing accuracy.
3 Tiny Transfer Learning
3.1 Understanding the Memory Footprint of Back-propagation
Without loss of generality, we consider a neural network $\mathcal{M}$ that consists of a sequence of layers:
$$\mathcal{M}(\cdot) = \mathcal{F}_{w_n}(\mathcal{F}_{w_{n-1}}(\cdots \mathcal{F}_{w_2}(\mathcal{F}_{w_1}(\cdot)) \cdots)), \tag{1}$$
where $w_i$ denotes the parameters of the $i$th layer. Let $a_i$ and $a_{i+1}$ be the input and output activations of the $i$th layer, respectively, and let $L$ be the loss. In the backward pass, given $\partial L / \partial a_{i+1}$, there are two goals for the $i$th layer: computing $\partial L / \partial a_i$ and $\partial L / \partial w_i$.
Assuming the $i$th layer is a linear layer whose forward process is $a_{i+1} = a_i W + b$, its backward process under batch size 1 is
$$\frac{\partial L}{\partial a_i} = \frac{\partial L}{\partial a_{i+1}} \frac{\partial a_{i+1}}{\partial a_i} = \frac{\partial L}{\partial a_{i+1}} W^T, \qquad \frac{\partial L}{\partial W} = a_i^T \frac{\partial L}{\partial a_{i+1}}, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial a_{i+1}}. \tag{2}$$
Figure 2: TinyTL overview (legend: feature maps in / not in memory; learnable vs. fixed parameters). (a) Fine-tuning the full network (conventional) keeps all intermediate feature maps in memory. (b) Fine-tuning the bias only: with frozen weights, intermediate feature maps need not be stored. (c) Lite residual learning: a small branch (downsample, group convolution, 1×1 convolution, upsample) refines each mobile inverted bottleneck block while keeping activations small and using group convolution to increase arithmetic intensity. (d) Feature network adaptation: a specialized sub-network is derived from a once-for-all network for each transfer dataset (e.g., Aircraft, Cars, Flowers).
According to Eq. (2), the intermediate activations (i.e., $\{a_i\}$) that dominate the memory footprint are only required to compute the gradient of the weights (i.e., $\partial L / \partial W$), not the bias. If we only update the bias, training memory can be greatly saved. This property also applies to convolution layers and normalization layers (e.g., batch normalization [47], group normalization [48], etc.), since they can be considered special types of linear layers.
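To make this concrete, the following is a minimal PyTorch sketch (an illustration of Eq. (2), not the authors' implementation) of a linear layer that trains only its bias: because the input activation is never saved for the backward pass, the dominant memory cost disappears.

```python
import torch
from torch.autograd import Function

class BiasOnlyLinear(Function):
    """Linear layer with a frozen weight where only the bias is trained.
    Per Eq. (2), dL/db equals dL/da_{i+1}, so the input activation a_i
    never has to be saved; only the weight is kept to propagate dL/da_i."""

    @staticmethod
    def forward(ctx, x, weight, bias):
        ctx.save_for_backward(weight)  # note: x is deliberately NOT saved
        return x @ weight + bias

    @staticmethod
    def backward(ctx, grad_out):
        (weight,) = ctx.saved_tensors
        grad_x = grad_out @ weight.t()   # dL/da_i = dL/da_{i+1} W^T
        grad_bias = grad_out.sum(dim=0)  # dL/db, summed over the batch
        return grad_x, None, grad_bias   # frozen weight receives no gradient

# usage: out = BiasOnlyLinear.apply(x, weight, bias)
```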
Regarding non-linear activation layers (e.g., ReLU, sigmoid, h-swish): sigmoid and h-swish require storing $a_i$ to compute $\partial L / \partial a_i$ (Table 1), hence they are not memory-efficient, and activation layers built upon them (e.g., tanh, swish [49]) are consequently not memory-efficient either. In contrast, ReLU and other ReLU-style activation layers (e.g., LeakyReLU [50]) only require storing a binary mask representing whether each value is smaller than 0, which is 32× smaller than storing $a_i$.
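A minimal sketch of this property (note that a PyTorch bool tensor actually stores one byte per element; the 32× saving assumes the mask is bit-packed):

```python
import torch

def relu_forward(x):
    mask = x > 0           # binary mask of the sign pattern; a_i itself is not kept
    return x * mask, mask

def relu_backward(grad_out, mask):
    # dL/da_i = dL/da_{i+1} * 1[a_i > 0]; only the mask is needed in backward.
    return grad_out * mask
```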
3.2 Lite Residual Learning
Based on the memory footprint analysis, one possible way to reduce the memory cost is to freeze the weights of the pre-trained feature extractor while updating only the biases (Figure 2b). However, updating only the biases gives limited adaptation capacity. Therefore, we introduce lite residual learning, which exploits a new class of generalized memory-efficient bias modules to refine the intermediate feature maps (Figure 2c).
Formally, a layer with frozen weights and learnable biases can be represented as:
$$a_{i+1} = \mathcal{F}_W(a_i) + b. \tag{3}$$
To improve the model capacity while keeping a small memory footprint, we propose to add a lite residual module that generates a residual feature map to refine the output:
$$a_{i+1} = \mathcal{F}_W(a_i) + b + \mathcal{F}_{w_r}(a_i'), \qquad a_i' = \mathrm{reduce}(a_i), \tag{4}$$
where $a_i'$ is the reduced activation. According to Eq. (2), learning these lite residual modules only requires storing the reduced activations $\{a_i'\}$ rather than the full activations $\{a_i\}$.
Implementation (Figure 2c). We apply Eq. (4) to mobile inverted bottleneck blocks (MB-block) [1]. The key principle is to keep the activation small. Following this principle, we explore two design dimensions to reduce the activation size:
• Width. The widely-used inverted bottleneck requires a huge number of channels (6×) to compensate for the small capacity of a depthwise convolution, which is parameter-efficient but highly activation-inefficient. Even worse, converting 1× channels to 6× channels and back requires two 1×1 projection layers, which doubles the total activation to 12×. Depthwise convolution also has a very low arithmetic intensity (with 256 channels, its OPs/Byte is less than 4% of a 1×1 convolution's OPs/Byte), and is thus highly memory-inefficient with little data reuse. To address these limitations, our lite residual module employs group convolution, which has much higher arithmetic intensity than depthwise convolution, providing a good trade-off between FLOPs and memory. This also removes the 1×1 projection layers, reducing the total channel number by $(6 \times 2 + 1)/(1 + 1) = 6.5\times$.
• Resolution. The activation size grows quadratically with the resolution. Therefore, we shrink the resolution in the lite residual module by employing a 2×2 average pooling to downsample the input feature map. The output of the lite residual module is then upsampled to match the size of the main branch's output feature map via bilinear upsampling. Combining the resolution and width optimizations, the activation of our lite residual module is roughly $2^2 \times 6.5 = 26\times$ smaller than that of the inverted bottleneck (see the sketch below).
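Putting the pieces together, below is a minimal PyTorch sketch of one lite residual branch. Kernel size 5, group number 2, and 8 channels per GN group follow Section 4.1; the exact layer ordering and normalization placement are assumptions, and weight standardization is omitted for brevity.

```python
import torch.nn as nn
import torch.nn.functional as F

class LiteResidual(nn.Module):
    """Sketch of the lite residual branch of Eq. (4) / Figure 2(c)."""

    def __init__(self, channels, kernel_size=5, groups=2):
        super().__init__()
        self.group_conv = nn.Conv2d(channels, channels, kernel_size,
                                    padding=kernel_size // 2, groups=groups,
                                    bias=False)
        self.pointwise = nn.Conv2d(channels, channels, 1, bias=False)
        self.norm = nn.GroupNorm(max(channels // 8, 1), channels)  # 8 channels/group

    def forward(self, x, main_out):
        r = F.avg_pool2d(x, kernel_size=2)       # reduce(): 2x2 average pooling
        r = self.norm(self.pointwise(self.group_conv(r)))  # KxK group conv + 1x1 conv
        r = F.interpolate(r, size=main_out.shape[-2:],
                          mode='bilinear', align_corners=False)  # bilinear upsample
        return main_out + r                      # refine the frozen branch output
```

Here `x` is the MB-block input $a_i$ and `main_out` is the frozen main branch output $\mathcal{F}_W(a_i) + b$; only the pooled activation needs to be stored for the backward pass.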
3.3 Discussions
Normalization Layers. As discussed in Section 3.1, TinyTL flexibly supports different normalization layers, including batch normalization (BN), group normalization (GN), layer normalization (LN), and so on. In particular, BN is the most widely used in vision tasks. However, BN requires a large batch size to accurately estimate running statistics during training, which is not suitable for on-device learning, where we want a small training batch size to reduce the memory footprint. Moreover, the data may arrive in a streaming fashion in on-device learning, which requires a training batch size of 1. In contrast to BN, GN can handle a small training batch size, as its statistics are computed independently for each input. In our experiments, GN with a small training batch size (e.g., 8) performs slightly worse than BN with a large training batch size (e.g., 256). However, as we target on-device learning, we choose GN in our models.
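As an illustration of this design choice, a small sketch that swaps BN layers for GN (8 channels per group, following Section 4.1); transferring the BN affine parameters is omitted here:

```python
import torch.nn as nn

def replace_bn_with_gn(module, channels_per_group=8):
    """Recursively replace BatchNorm2d with GroupNorm so training
    remains stable down to batch size 1."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            groups = max(child.num_features // channels_per_group, 1)
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            replace_bn_with_gn(child, channels_per_group)
    return module
```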
Feature Extractor Adaptation. TinyTL can be applied to different backbone neural networks, such as MobileNetV2 [1], ProxylessNASNets [11], EfficientNets [24], etc. However, since the weights of the feature extractor are frozen in TinyTL, we find that using the same backbone neural network for all transfer tasks is sub-optimal. Therefore, we choose the backbone of TinyTL using a pre-trained once-for-all network [10], adaptively selecting the specialized feature extractor that best fits the target transfer dataset. Specifically, a once-for-all network is a special kind of neural network from which many different sub-networks can be derived without retraining, by selectively activating parts of the model according to the architecture configuration (i.e., depth, width, kernel size, resolution) while the weights are shared. This allows us to efficiently evaluate the effectiveness of a backbone neural network on the target transfer dataset without the expensive pre-training process. Further details of the feature extractor adaptation process are provided in Appendix A.
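The following is a hypothetical sketch of this selection loop; all three callables are placeholder names introduced for illustration, and the actual procedure (Appendix A) relies on an accuracy predictor rather than exhaustively evaluating every candidate.

```python
def adapt_feature_extractor(get_subnet, train_head, evaluate,
                            candidate_configs, train_set, val_set):
    """Score candidate sub-networks of a once-for-all network on the
    target transfer dataset and keep the best one (sketch only)."""
    best_acc, best_subnet = float('-inf'), None
    for cfg in candidate_configs:            # cfg: depth / width / kernel size / resolution
        subnet = get_subnet(cfg)             # derived from the super-net without retraining
        head = train_head(subnet, train_set) # cheap proxy: fit only a linear head
        acc = evaluate(subnet, head, val_set)
        if acc > best_acc:
            best_acc, best_subnet = acc, subnet
    return best_subnet
```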
4 Experiments
4.1 Setups
Datasets. Following the common practice [43, 44, 45], we use ImageNet [35] as the pre-training dataset, and then transfer the models to 8 downstream object classification tasks, including Cars [41], Flowers [51], Aircraft [40], CUB [52], Pets [53], Food [54], CIFAR10 [55], and CIFAR100 [55]. Besides object classification, we also evaluate our TinyTL on human facial attribute classification tasks, where CelebA [56] is the transfer dataset and VGGFace2 [57] is the pre-training dataset.
Model Architecture. To justify the effectiveness of TinyTL, we first apply TinyTL and previous transfer learning methods to the same backbone neural network, ProxylessNAS-Mobile [11]. For each MB-block in ProxylessNAS-Mobile, we insert a lite residual module as described in Section 3.2 and Figure 2 (c). The group number is 2, and the kernel size is 5. We use the ReLU activation since it is more memory-efficient according to Section 3.1. We replace all BN layers with GN layers to better support small training batch sizes. We set the number of channels per group to 8 for all GN layers. Following [58], we apply weight standardization [59] to convolution layers that are followed by GN.
For feature extractor adaptation, we build the once-for-all network using the MobileNetV2 design space [10, 11], which contains five stages with gradually decreased resolution; each stage consists of a sequence of MB-blocks. At the stage level, it supports elastic depth (i.e., 2, 3, 4). At the block level, it supports elastic kernel size (i.e., 3, 5, 7) and elastic width expansion ratio (i.e., 3, 4, 6). Similarly, for each MB-block in the once-for-all network, we insert a lite residual module that supports elastic group number (i.e., 2, 4) and elastic kernel size (i.e., 3, 5).
Training Details. We freeze the memory-heavy modules (the weights of the feature extractor) and only update the memory-efficient modules (biases, lite residual modules, classifier head) during transfer learning. The models are fine-tuned for 50 epochs using the Adam optimizer [60] with batch size 8 on a single GPU. The initial learning rate is tuned for each dataset, and a cosine schedule [61] is adopted for learning rate decay. We apply 8-bit weight quantization [5] to the frozen weights to reduce the parameter size, which causes a negligible accuracy drop in our experiments. For all compared methods, we also assume 8-bit weight quantization is applied, where eligible, when calculating their training memory footprint. Additionally, as PyTorch does not support explicit fine-grained memory management, we use the theoretically calculated training memory footprint for comparison in our experiments. For simplicity, we assume the batch size is 8 for all compared methods throughout the experiment section.
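As an illustration of this setup, a minimal PyTorch sketch of the freezing logic; the parameter-name substrings are assumptions about the model definition, not the authors' actual naming.

```python
import torch

def configure_tinytl(model, lr=1e-3):
    """Freeze memory-heavy weights; train only biases, lite residual
    modules, and the classifier head (Section 4.1 setup)."""
    for name, param in model.named_parameters():
        trainable = (name.endswith('.bias')
                     or 'lite_residual' in name
                     or 'classifier' in name)
        param.requires_grad_(trainable)
    trainable_params = [p for p in model.parameters() if p.requires_grad]
    # Paper setting: Adam, batch size 8, 50 epochs, cosine learning rate decay.
    return torch.optim.Adam(trainable_params, lr=lr)
```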
Figure 3: Top1 accuracy vs. training memory footprint under different input resolutions on Stanford-Cars, Flowers102, Aircraft, CUB-200, Food-101, and Pets, comparing FT-Full, FT-Last, FT-Norm+Last, TinyTL-B, TinyTL-L, and TinyTL-L+B. (Raw plot data omitted.)
4.2 Main Results
Effectiveness of TinyTL. Table 2 reports the comparison between TinyTL and previous transfer learning methods, including: i) fine-tuning the last linear layer [36, 37, 39] (referred to as FT-Last); ii) fine-tuning the normalization layers (e.g., BN, GN) and the last linear layer [42] (referred to as FT-Norm+Last); iii) fine-tuning the full network [43, 44] (referred to as FT-Full). We also study several variants of TinyTL: i) TinyTL-B, which fine-tunes biases and the last linear layer; ii) TinyTL-L, which fine-tunes lite residual modules and the last linear layer; iii) TinyTL-L+B, which fine-tunes lite residual modules, biases, and the last linear layer. All compared methods use the same pre-trained model but fine-tune different parts of it, as discussed above. We report the average accuracy across five runs.
Compared to FT-Last, TinyTL maintains a similar training memory footprint while improving the top1 accuracy by a significant margin. In particular, TinyTL-L+B improves the top1 accuracy by 34.1% on Cars, by 30.5% on Aircraft, by 12.6% on CIFAR100, by 11.0% on Food, etc. It shows the improved adaptation capacity of our method over FT-Last. Compared to FT-Norm+Last, TinyTL-L+B improves the training memory efficiency by 5.2× while providing up to 7.3% higher top1 accuracy, which shows that our method is not only more memory-efficient but also more effective than FT-Norm+Last. Compared to FT-Full, TinyTL-L+B@320 can achieve the same level of accuracy while providing 6.0× training memory saving. Regarding the comparison between different variants of TinyTL, both TinyTL-L and TinyTL-L+B have clearly better accuracy than TinyTL-B while incurring little memory overhead. It shows that the lite residual modules are essential in TinyTL. Besides, we find that TinyTL-L+B is slightly better than TinyTL-L on most of the datasets while maintaining the same memory footprint. Therefore, we choose TinyTL-L+B as the default.
Figure 3 demonstrates the results under different input resolutions. We can observe that simply reducing the input resolution will result in significant accuracy drops for FT-Full. In contrast, TinyTL can reduce the memory footprint by 3.9-6.5× while having the same or even higher accuracy compared to fine-tuning the full network.
Combining TinyTL and Feature Extractor Adaptation. Table 3 compares TinyTL with previously reported transfer learning results, where different backbone neural networks are used as the feature extractor. Combined with feature extractor adaptation, TinyTL achieves 7.5-12.9× memory saving compared to fine-tuning the full Inception-V3, reducing the footprint from 850MB to 66-114MB while providing the same level of accuracy. Additionally, we try updating the last two layers besides biases and lite residual modules (indicated by †), which results in 2MB of extra
training memory footprint. This slightly improves the accuracy, from 90.7% to 91.5% on Cars, from 85.0% to 86.0% on Food, and from 84.8% to 85.4% on Aircraft.

Figure 4: Comparison with dynamic activation pruning [31] on Flowers102, Aircraft, and Stanford-Cars: top1 accuracy vs. training memory footprint for TinyTL and for activation pruning applied to ResNet-50 and MobileNetV2. (Raw plot data omitted.)
4.3 Ablation Studies and Discussions
Comparison with Dynamic Activation Pruning. The comparison between TinyTL and dynamic activation pruning [31] is summarized in Figure 4. TinyTL is more effective because it re-designs the transfer learning framework (lite residual module, feature extractor adaptation) rather than pruning an existing architecture. For activation pruning, the transfer accuracy drops quickly when the pruning ratio increases beyond 50% (only 2× memory saving). In contrast, TinyTL achieves much higher memory reduction without loss of accuracy.
Initialization for Lite Residual Modules. By default, we use weights pre-trained on the pre-training dataset to initialize the lite residual modules. This requires having the lite residual modules in place during both the pre-training phase and the transfer learning phase. When applying TinyTL to existing pre-trained neural networks that do not have lite residual modules during the pre-training phase, we need another initialization strategy for the lite residual modules during transfer learning. To verify the effectiveness of TinyTL under this setting, we also evaluate the performance of TinyTL when using random weights [62] to initialize the lite residual modules, except for the scaling parameter of the final normalization layer in each lite residual module, which is initialized to zero.
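A small sketch of this initialization; attribute names follow the LiteResidual sketch above, and the choice of Kaiming initialization for the random weights is an assumption for illustration.

```python
import torch.nn as nn

def random_init_lite_residual(branch):
    """Random init for a lite residual branch (Section 4.3): random weights
    everywhere except the scaling parameter (gamma) of the final normalization
    layer, which is zeroed so the branch initially outputs zero."""
    nn.init.kaiming_normal_(branch.group_conv.weight)
    nn.init.kaiming_normal_(branch.pointwise.weight)
    nn.init.zeros_(branch.norm.weight)  # gamma = 0: the residual refinement starts off
```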
Table 4 reports the summarized results. We find that using the pre-trained weights to initialize the lite residual modules consistently outperforms using random weights. Besides, we also find that TinyTL-RandomL+B still provides highly competitive results on Cars, Food, Aircraft, CIFAR10, CIFAR100, and CelebA. Therefore, if the budget allows, it is better to use pre-trained weights to initialize the lite residual modules; if not, TinyTL can still be applied and provides competitive results on datasets whose distribution is far from the pre-training dataset.

Figure 5: TinyTL under training batch size 1 vs. batch size 8 on Flowers102, Aircraft, and Stanford-Cars (top1 accuracy vs. training memory footprint in MB). (Raw plot data omitted.)
Results of TinyTL under Batch Size 1. Figure 5 demonstrates the results of TinyTL when using a training batch size of 1. We tune the initial learning rate for each dataset while keeping the other training settings unchanged. As our model employs group normalization rather than batch normalization (Section 3.3), we observe little to no loss of accuracy compared to training with batch size 8. Meanwhile, the training memory footprint is further reduced to around 16MB, a typical L3 cache size. This makes it much easier to train within the cache (SRAM), which can greatly reduce energy consumption compared to DRAM-based training.
5 Conclusion
We proposed Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning that aims to adapt pre-trained models to newly collected data on edge devices. Unlike previous methods that focus on reducing the number of parameters or FLOPs, TinyTL directly optimizes the training memory footprint by fixing the memory-heavy modules (i.e., weights) while learning memory-efficient bias modules. We further introduce lite residual modules that significantly improve the adaptation capacity of the model with little memory overhead. Extensive experiments on benchmark datasets consistently show the effectiveness and memory-efficiency of TinyTL, paving the way for efficient on-device machine learning.
Broader Impact
The proposed efficient on-device learning technique greatly reduces the training memory footprint of deep neural networks, enabling pre-trained models to be adapted to new data locally on edge devices without leaking that data to the cloud. It can democratize AI for people in rural areas where the Internet is unavailable or the network condition is poor: they can not only run inference but also fine-tune AI models on their local devices without connections to cloud servers. This can also benefit privacy-sensitive AI applications, such as health care, smart home, and so on.
Acknowledgements
We thank MIT-IBM Watson AI Lab, NSF CAREER Award #1943349 and NSF Award #2028888 for supporting this research. We thank MIT Satori cluster for providing the computation resource.

1. What is the focus and contribution of the paper regarding memory-efficient transfer learning?
2. What are the strengths of the proposed approach, particularly in its demonstration?
3. What are the weaknesses of the paper, especially regarding novelty and scalability?
4. Do you have any concerns about the claimed computation efficiency?
5. What are the limitations of the paper regarding the number of datasets used for benchmarking?

Summary and Contributions
This paper proposed a memory-efficient method of transfer learning by freezing the weights of pre-trained models and only updating biases. To maintain the ability to adapt and to choose a better feature extractor, the paper also proposed LiteResidual, a residual block that generates a residual feature map, and a feature extractor adapter.
Strengths
1. The idea of freezing weights and performing partial updates (only the bias) in transfer learning is new and brings new insight to the transfer learning field. 2. The presentation of the method is well-organized.
Weaknesses
1. Lack of novelty: the novelty of the overall framework is limited; it reads more like a marginal contribution over the previous work Once-for-All (citation [1]) or common one-shot NAS methods. The authors do not clearly distinguish the proposed framework from directly applying Once-for-All to transfer learning with only bias updates.
2. Scalability of the claimed computation efficiency: the authors claim computation efficiency in Section B of the supplement. To collect the training data for the accuracy predictor, 450 subnets are trained on 20% of the training dataset for only 1 epoch. That may work well on easy datasets like Flowers, but for more complex datasets that require more training epochs, the computation cost of this step will grow considerably, and the authors do not explain how to handle this scalability issue.
3. Not enough datasets are included for benchmarking: previous papers in transfer learning [7, 27] benchmark not only on Cars, Flowers, and Aircraft, but also on more complex datasets with more images, such as CIFAR-10 and Food-101. Compared with previous work, the datasets included in this paper are not sufficient.
There is plenty of efficient inference techniques that reduce the number of trainable parameters and the computation FLOPs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], however, parameter-efficient or FLOPs-efficient techniques do not directly save the training memory. It is the activation that bottlenecks the training memory, not the parameters. For example, Figure 1 (right) compares ResNet-50 and MobileNetV21.4. In terms of parameter size, MobileNetV2-1.4 is 4.3× smaller than ResNet-50. However, for training activation size, MobileNetV2-1.4 is almost the same as ResNet-50 (only 1.1× smaller), leading to little memory reduction. It is essential to reduce the size of intermediate activations required by back-propagation, which is the key memory bottleneck for efficient on-device training.
In this paper, we propose Tiny-Transfer-Learning (TinyTL) to address these challenges. By analyzing the memory footprint during the backward pass, we notice that the intermediate activations (the main bottleneck) are only needed when updating the weights, not the biases (Eq. 2). Inspired by this finding, we propose to freeze the weights of the pre-trained feature extractor and only update the biases to reduce the memory footprint (Figure 2b). To compensate for the capacity loss, we introduce a memory-efficient bias module, called lite residual module, which improves the model capacity by refining the intermediate feature maps of the feature extractor (Figure 2c). Meanwhile, we aggressively shrink the resolution and width of the lite residual module to have a small memory overhead (only 3.8%). Extensive experiments on 9 image classification datasets with the same pre-trained model (ProxylessNAS-Mobile [11]) demonstrate the effectiveness of TinyTL compared to previous transfer learning methods. Further, combined with a pre-trained once-for-all network [10], TinyTL can select a specialized sub-network as the feature extractor for each transfer dataset (i.e., feature extractor adaptation): given a more difficult dataset, a larger sub-network is selected, and vice versa. TinyTL achieves the same level of (or even higher) accuracy compared to fine-tuning the full Inception-V3 while reducing the training memory footprint by up to 12.9×. Our contributions can be summarized as follows:
• We propose TinyTL, a novel transfer learning method to reduce the training memory footprint by an order of magnitude for efficient on-device learning. We systematically analyze the memory of training and find the bottleneck comes from updating the weights, not biases (assume ReLU activation).
• We also introduce the lite residual module, a memory-efficient bias module to improve the model capacity with little memory overhead.
• Extensive experiments on transfer learning tasks show that our method is highly memory-efficient and effective. It reduces the training memory footprint by up to 12.9× without sacrificing accuracy.
2 Related Work
Efficient Inference Techniques. Improving the inference efficiency of deep neural networks on resource-constrained edge devices has recently drawn extensive attention. Starting from [4, 5, 12, 13,
14], one line of research focuses on compressing pre-trained neural networks, including i) network pruning that removes less-important units [4, 15] or channels [16, 17]; ii) network quantization that reduces the bitwidth of parameters [5, 18] or activations [19, 20]. However, these techniques cannot handle the training phase, as they rely on a well-trained model on the target task as the starting point.
Another line of research focuses on lightweight neural architectures by either manual design [1, 2, 3, 21, 22] or neural architecture search [6, 8, 11, 23]. These lightweight neural networks provide highly competitive accuracy [10, 24] while significantly improving inference efficiency. However, concerning the training memory efficiency, key bottlenecks are not solved: the training memory is dominated by activations, not parameters (Figure 1).
There are also some non-deep learning methods [25, 26, 27] that are designed for efficient inference on edge devices. These methods are suitable for handling simple tasks like MNIST. However, for more complicated tasks, we still need the representation capacity of deep neural networks.
Memory Footprint Reduction. Researchers have been seeking ways to reduce the training memory footprint. One typical approach is to re-compute discarded activations during backward [28, 29]. This approach reduces memory usage at the cost of a large computation overhead. Thus it is not preferred for edge devices. Layer-wise training [30] can also reduce the memory footprint compared to end-to-end training. However, it cannot achieve the same level of accuracy as end-to-end training. Another representative approach is through activation pruning [31], which builds a dynamic sparse computation graph to prune activations during training. Similarly, [32] proposes to reduce the bitwidth of training activations by introducing new reduced-precision floating-point formats. Besides reducing the training memory cost, there are some techniques that focus on reducing the peak inference memory cost, such as RNNPool [33] and MemNet [34]. Our method is orthogonal to these techniques and can be combined to further reduce the memory footprint.
Transfer Learning. Neural networks pre-trained on large-scale datasets (e.g., ImageNet [35]) are widely used as a fixed feature extractor for transfer learning, then only the last layer needs to be fine-tuned [36, 37, 38, 39]. This approach does not require to store the intermediate activations of the feature extractor, and thus is memory-efficient. However, the capacity of this approach is limited, resulting in poor accuracy, especially on datasets [40, 41] whose distribution is far from ImageNet (e.g., only 45.9% Aircraft top1 accuracy achieved by Inception-V3 [42]). Alternatively, fine-tuning the full network can achieve better accuracy [43, 44]. But it requires a vast memory footprint and hence is not friendly for training on edge devices. Recently, [45,46] propose to only update parameters of the batch normalization (BN) [47] layers, which greatly reduces the number of trainable parameters. Unfortunately, parameter-efficiency doesn’t translate to memory-efficiency. It still requires a large amount of memory (e.g., 326MB under batch size 8) to store the input activations of the BN layers (Table 3). Additionally, the accuracy of this approach is still much worse than fine-tuning the full network (70.7% v.s. 85.5%; Table 3). People can also partially fine-tune some layers, but how many layers to select is still ad hoc. This paper provides a systematic approach to save memory without losing accuracy.
3 Tiny Transfer Learning
3.1 Understanding the Memory Footprint of Back-propagation
Without loss of generality, we consider a neural networkM that consists of a sequence of layers:
M(·) = Fwn(Fwn−1(· · · Fw2(Fw1(·)) · · · )), (1)
where wi denotes the parameters of the ith layer. Let ai and ai+1 be the input and output activations of the ith layer, respectively, and L be the loss. In the backward pass, given ∂L∂ai+1 , there are two goals for the ith layer: computing ∂L∂ai and ∂L ∂wi .
Assuming the ith layer is a linear layer whose forward process is given as: ai+1 = aiW + b, then its backward process under batch size 1 is
∂L ∂ai = ∂L ∂ai+1 ∂ai+1 ∂ai = ∂L ∂ai+1 WT , ∂L ∂W = aTi ∂L ∂ai+1 , ∂L ∂b = ∂L ∂ai+1 . (2)
fmap in memory fmap not in memory
learned weights on target task pre-trained weights (a) Fine-tune the full network Downsample Upsample (b) Lightweight residual learning (ours) (d) Our lightweight residual branch KxK Group Conv 1x1 Conv keep activations small while using group conv to increase the arithmetic intensity (c) Mobile inverted bottleneck block little computation but large activation (a) Fine-tune the full network (Conventional)
train a once-for-all network (c) Lite residual learning fmap in memory fmap not in memory learnable params fixed params weight bias mobile inverted bottleneck blockith UpsampleDownsample Group Conv 1x1 Conv (b) Fine-tune bias only
(a) Fine-tune the full network (Conventional)
(c) Lite residual learning(d) Feature network adaptation
fmap in memory fmap not in memory learnable params fixed params weight bias mobile inverted bottleneck blockith
Aircraft Cars Flowers
Downsample Group Conv
1x1 Conv
Avoid inverted bottleneck
1x1 Conv
(b) Fine-tune bias only C, R 6C, R 6C, R C, R C, 0.5R C, 0.5R 1x1 Conv1x1 Conv Depth-wise Conv 1x1 Conv1x1 Conv Depth-wise Conv
1x1 Conv1x1 Conv Depth-wise Conv
According to Eq. (2), the intermediate activations (i.e., {ai}) that dominate the memory footprint are only required to compute the gradient of the weights (i.e., ∂L∂W ), not the bias. If we only update the bias, training memory can be greatly saved. This property is also applicable to convolution layers and normalization layers (e.g., batch normalization [47], group normalization [48], etc) since they can be considered as special types of linear layers.
Regarding non-linear activation layers (e.g., ReLU, sigmoid, h-swish), sigmoid and h-swish require to store ai to compute ∂L∂ai (Table 1), hence they are not memory-efficient. Activation layers that build upon them are also not memory-efficient consequently, such as tanh, swish [49], etc. In contrast, ReLU and other ReLU-styled activation layers (e.g., LeakyReLU [50]) only requires to store a binary mask representing whether the value is smaller than 0, which is 32× smaller than storing ai.
3.2 Lite Residual Learning
Based on the memory footprint analysis, one possible solution of reducing the memory cost is to freeze the weights of the pre-trained feature extractor while only update the biases (Figure 2b). However, only updating biases has limited adaptation capacity. Therefore, we introduce lite residual learning that exploits a new class of generalized memory-efficient bias modules to refine the intermediate feature maps (Figure 2c).
4
Formally, a layer with frozen weights and learnable biases can be represented as:
ai+1 = FW(ai) + b. (3)
To improve the model capacity while keeping a small memory footprint, we propose to add a lite residual module that generates a residual feature map to refine the output:
ai+1 = FW(ai) + b+ Fwr (a′i = reduce(ai)), (4)
where a′i = reduce(ai) is the reduced activation. According to Eq. (2), learning these lite residual modules only requires to store the reduced activations {a′i} rather than the full activations {ai}.
Implementation (Figure 2c). We apply Eq. (4) to mobile inverted bottleneck blocks (MB-block) [1]. The key principle is to keep the activation small. Following this principle, we explore two design dimensions to reduce the activation size:
• Width. The widely-used inverted bottleneck requires a huge number of channels (6×) to compensate for the small capacity of a depthwise convolution, which is parameter-efficient but highly activation-inefficient. Even worse, converting 1× channels to 6× channels back and forth requires two 1× 1 projection layers, which doubles the total activation to 12×. Depthwise convolution also has a very low arithmetic intensity (its OPs/Byte is less than 4% of 1× 1 convolution’s OPs/Byte if with 256 channels), thus highly memory in-efficient with little reuse. To solve these limitations, our lite residual module employs the group convolution that has much higher arithmetic intensity than depthwise convolution, providing a good trade-off between FLOPs and memory. That also removes the 1×1 projection layer, reducing the total channel number by 6×2+11+1 = 6.5×.
• Resolution. The activation size grows quadratically with the resolution. Therefore, we shrink the resolution in the lite residual module by employing a 2× 2 average pooling to downsample the input feature map. The output of the lite residual module is then upsampled to match the size of the main branch’s output feature map via bilinear upsampling. Combining resolution and width optimizations, the activation of our lite residual module is roughly 22 × 6.5 = 26× smaller than the inverted bottleneck.
3.3 Discussions
Normalization Layers. As discussed in Section 3.1, TinyTL flexibly supports different normalization layers, including batch normalization (BN), group normalization (GN), layer normalization (LN), and so on. In particular, BN is the most widely used one in vision tasks. However, BN requires a large batch size to have accurate running statistics estimation during training, which is not suitable for on-device learning where we want a small training batch size to reduce the memory footprint. Moreover, the data may come in a streaming fashion in on-device learning, which requires a training batch size of 1. In contrast to BN, GN can handle a small training batch size as the running statistics in GN are computed independently for different inputs. In our experiments, GN with a small training batch size (e.g., 8) performs slightly worse than BN with a large training batch size (e.g., 256). However, as we target at on-device learning, we choose GN in our models.
Feature Extractor Adaptation. TinyTL can be applied to different backbone neural networks, such as MobileNetV2 [1], ProxylessNASNets [11], EfficientNets [24], etc. However, since the weights of the feature extractor are frozen in TinyTL, we find using the same backbone neural network for all transfer tasks is sub-optimal. Therefore, we choose the backbone of TinyTL using a pre-trained once-for-all network [10] to adaptively select the specialized feature extractor that best fits the target transfer dataset. Specifically, a once-for-all network is a special kind of neural network that is sparsely activated, from which many different sub-networks can be derived without retraining by sparsely activating parts of the model according to the architecture configuration (i.e., depth, width, kernel size, resolution), while the weights are shared. This allows us to efficiently evaluate the effectiveness of a backbone neural network on the target transfer dataset without the expensive pre-training process. Further details of the feature extractor adaptation process are provided in Appendix A.
4 Experiments
4.1 Setups
Datasets. Following the common practice [43, 44, 45], we use ImageNet [35] as the pre-training dataset, and then transfer the models to 8 downstream object classification tasks, including Cars [41], Flowers [51], Aircraft [40], CUB [52], Pets [53], Food [54], CIFAR10 [55], and CIFAR100 [55]. Besides object classification, we also evaluate our TinyTL on human facial attribute classification tasks, where CelebA [56] is the transfer dataset and VGGFace2 [57] is the pre-training dataset.
Model Architecture. To justify the effectiveness of TinyTL, we first apply TinyTL and previous transfer learning methods to the same backbone neural network, ProxylessNAS-Mobile [11]. For each MB-block in ProxylessNAS-Mobile, we insert a lite residual module as described in Section 3.2 and Figure 2 (c). The group number is 2, and the kernel size is 5. We use the ReLU activation since it is more memory-efficient according to Section 3.1. We replace all BN layers with GN layers to better support small training batch sizes. We set the number of channels per group to 8 for all GN layers. Following [58], we apply weight standardization [59] to convolution layers that are followed by GN.
For feature extractor adaptation, we build the once-for-all network using the MobileNetV2 design space [10, 11] that contains five stages with a gradually decreased resolution, and each stage consists of a sequence of MB-blocks. In the stage-level, it supports elastic depth (i.e., 2, 3, 4). In the block-level, it supports elastic kernel size (i.e., 3, 5, 7) and elastic width expansion ratio (i.e., 3, 4, 6). Similarly, for each MB-block in the once-for-all network, we insert a lite residual module that supports elastic group number (i.e., 2, 4) and elastic kernel size (i.e., 3, 5).
Training Details. We freeze the memory-heavy modules (weights of the feature extractor) and only update memory-efficient modules (bias, lite residual, classifier head) during transfer learning. The models are fine-tuned for 50 epochs using the Adam optimizer [60] with batch size 8 on a single GPU. The initial learning rate is tuned for each dataset while cosine schedule [61] is adopted for learning rate decay. We apply 8bits weight quantization [5] on the frozen weights to reduce the parameter size, which causes a negligible accuracy drop in our experiments. For all compared methods, we also assume the 8bits weight quantization is applied if eligible when calculating their training memory footprint. Additionally, as PyTorch does not support explicit fine-grained memory management, we use the theoretically calculated training memory footprint for comparison in our experiments. For simplicity, we assume the batch size is 8 for all compared methods throughout the experiment section.
Stanford-Cars
Full Last BN Bias LiteResidual LiteResidual+Bias
256, 448 224, 416 192, 384 89.1 292.4 160, 352 87.3 208.7 128, 320 84.2 140.5 60.0 57.6 80.1 59.3 88.3 64.7 88.8 64.7 96, 288 76.1 87.2 58.4 47.6 78.1 49.0 87.7 54.4 88.0 54.4 , 256 54.7 38.7 80.2 249.9 75.9 39.8 86.3 45.2 87.4 45.2 , 224 50.9 30.8 77.9 192.4 73.4 31.7 84.2 37.1 85.0 37.1 , 192 73.7 142.9 68.6 24.7 82.1 30.1 83.6 30.1 , 160 67.9 100.7 61.2 18.7 77.3 24.1 78.2 24.2
Flowers102-1
Full Last BN Bias LiteResidual LiteResidual+bias Batch Size
Model Size 18.98636 5.138576 5.264432 5.201504 10.587824 10.63352 8
Act@256, Act@448 60.758528 12.845056 93.6488 13.246464 13.246464 13.246464 Act@224, Act@416 46.482132 11.075584 80.713856 11.421696 11.421696 11.421696 Act@192, Act@384 34.176672 9.437184 68.8032 9.732096 9.732096 9.732096 Act@160, Act@352 23.70904 7.929856 57.785036 8.177664 8.177664 8.177664 Act@128, Act@320 15.189632 6.5536 47.78 6.7584 6.7584 6.7584 Act@96, Act@288 8.530757 5.308416 38.678632 5.474304 5.474304 5.474304 , Act@256 4.194304 30.5792 4.325376 4.325376 4.325376 , Act@224 3.211264 23.39462 3.311616 3.311616 3.311616 , Act@192 2.359296 17.2008 2.433024 2.433024 2.433024 , Act@160 1.6384 11.933009 1.6896 1.6896 1.6896
Aircraft
Full Last BN Bias LiteResidual LiteResidual+Bias
256, 448 224, 416 192, 384 83.5 292.4 160, 352 81.0 208.7 128, 320 77.7 140.5 51.9 57.6 68.6 59.3 81.5 64.7 82.3 64.7 96, 288 70.5 87.2 50.6 47.6 67.3 49.0 80.0 54.4 80.8 54.4 , 256 48.6 38.7 70.7 249.9 65.6 39.8 79.0 45.2 78.9 45.2 , 224 44.9 30.8 68.1 192.4 63.2 31.7 76.4 37.1 75.4 37.1 , 192 64.7 142.9 59.4 24.7 73.3 30.1 74.9 30.1 , 160 60.5 100.7 55.2 18.7 69.5 24.1 70.4 24.2
Flowers
Full Last BN Bias LiteResidual LiteResidual+Bias
256, 448 224, 416 96.8 390.8 192, 384 96.1 292.4 160, 352 95.4 208.7 128, 320 93.6 140.5 93.3 57.6 96.0 387.5 95.6 59.3 96.7 64.7 96.8 64.7 96, 288 89.6 87.2 92.6 47.6 95.6 314.7 95.1 49.0 96.4 54.4 96.4 54.4 , 256 91.6 38.7 95.0 249.9 94.5 39.8 95.9 45.2 96.0 45.2 , 224 90.1 30.8 94.3 192.4 93.5 31.7 95.3 37.1 95.5 37.1 , 192 92.8 142.9 91.5 24.7 94.6 30.1 94.6 30.1 , 160 90.5 100.7 89.5 18.7 92.8 24.1 93.1 24.2
Cub200
Full Last BN Bias LiteResidual LiteResidual+Bias
256, 448 224, 416 81.0 390.8 192, 384 79.0 292.4 160, 352 76.7 208.7 128, 320 71.8 140.5 77.9 57.6 80.6 387.5 79.8 59.3 80.5 64.7 81.0 64.7 96, 288 77.0 47.6 79.6 314.7 78.6 49.0 79.6 54.4 80.0 54.4 , 256 75.4 38.7 79.1 249.9 77.5 39.8 78.5 45.2 78.8 45.2 , 224 73.3 30.8 76.3 192.4 75.3 31.7 76.8 37.1 77.1 37.1 , 192 73.7 142.9 72.7 24.7 74.7 30.1 74.7 30.1 , 160
Food101
Full Last BN Bias LiteResidual LiteResidual+Bias
256, 448 224, 416 84.6 390.8 192, 384 83.2 292.4 160, 352 81.2 208.7 128, 320 78.1 140.5 73.0 57.6 80.2 387.5 78.7 59.3 82.8 64.7 82.9 64.7 96, 288 73.5 87.2 72.0 47.6 79.5 314.7 77.9 49.0 82.0 54.4 82.1 54.4 , 256 70.7 38.7 78.4 249.9 76.8 39.8 81.1 45.2 81.5 45.2 , 224 68.7 30.8 77.0 192.4 75.5 31.7 79.2 37.1 79.7 37.1 , 192 74.9 142.9 73.0 24.7 78.2 30.1 78.4 30.1 , 160 72.4 100.7 70.1 18.7 74.6 24.1 75.1 24.2
Pets
Full Last BN Bias LiteResidual LiteResidual+Bias256, 448
1
4.2 Main Results
Effectiveness of TinyTL. Table 2 reports the comparison between TinyTL and previous transfer learning methods including: i) fine-tuning the last linear layer [36, 37, 39] (referred to as FT-Last); ii) fine-tuning the normalization layers (e.g., BN, GN) and the last linear layer [42] (referred to as FT-Norm+Last) ; iii) fine-tuning the full network [43, 44] (referred to as FT-Full). We also study several variants of TinyTL including: i) TinyTL-B that fine-tunes biases and the last linear layer; ii) TinyTL-L that fine-tunes lite residual modules and the last linear layer; iii) TinyTL-L+B that fine-tunes lite residual modules, biases, and the last linear layer. All compared methods use the same pre-trained model but fine-tune different parts of the model as discussed above. We report the average accuracy across five runs.
Compared to FT-Last, TinyTL maintains a similar training memory footprint while improving the top1 accuracy by a significant margin. In particular, TinyTL-L+B improves the top1 accuracy by 34.1% on Cars, by 30.5% on Aircraft, by 12.6% on CIFAR100, by 11.0% on Food, etc. It shows the improved adaptation capacity of our method over FT-Last. Compared to FT-Norm+Last, TinyTL-L+B improves the training memory efficiency by 5.2× while providing up to 7.3% higher top1 accuracy, which shows that our method is not only more memory-efficient but also more effective than FT-Norm+Last. Compared to FT-Full, TinyTL-L+B@320 can achieve the same level of accuracy while providing 6.0× training memory saving. Regarding the comparison between different variants of TinyTL, both TinyTL-L and TinyTL-L+B have clearly better accuracy than TinyTL-B while incurring little memory overhead. It shows that the lite residual modules are essential in TinyTL. Besides, we find that TinyTL-L+B is slightly better than TinyTL-L on most of the datasets while maintaining the same memory footprint. Therefore, we choose TinyTL-L+B as the default.
Figure 3 demonstrates the results under different input resolutions. We can observe that simply reducing the input resolution will result in significant accuracy drops for FT-Full. In contrast, TinyTL can reduce the memory footprint by 3.9-6.5× while having the same or even higher accuracy compared to fine-tuning the full network.
Combining TinyTL and Feature Extractor Adaptation. Table 3 summarizes the results of TinyTL and previously reported transfer learning results, where different backbone neural networks are used as the feature extractor. Combined with feature extractor adapt tion, TinyTL achieves 7.5-12.9× memory saving compared to fine-tuning the full Inception-V3, reducing from 850MB to 66-114MB while providing the same level of accuracy. Additionally, we try updating the last two layers besides biases and lite residual modules (indicated by †), which results in 2MB of extra
Flowers102
ResNet-50 Activation Pruning
Ours MobileNetV2 Activation Pruning
97.5 802.2 96.6 447.8
96.9 682.7 97.4 114.0 95.8 373.8 96.3 612.0 96.8 66.0 94.1 330.0 95.2 541.3 90.4 286.2 93.4 470.6 79.7 242.3 88.6 399.9
Aircraft
ResNet-50 Activation Pruning
Ours MobileNetV2 Activation Pruning
86.6 802.1 82.8 447.8 83.53 682.7 84.8 116.0 79.8 373.8 80.83 612.0 82.4 69.0 77.0 330.0 77.47 541.3 70.4 286.2 75.64 470.6 61.8 242.3 72.24 399.8
Stanford-Cars
ResNet-50 Activation Pruning
Ours MobileNetV2 Activation Pruning
91.7 802.8 91.0 448.3 91.28 683.5 90.7 119.0 88.7 374.3 90.95 612.8 89.6 71.0 86.2 330.5 89.71 542.1 82.5 286.6 88.20 471.3 75.0 242.8 85.20 400.6
training memory footprint. This slightly improves the accuracy performances, from 90.7% to 91.5% on Cars, from 85.0% to 86.0% on Food, and from 84.8% to 85.4% on Aircraft.
4.3 Ablation Studies and Discussions
Comparison with Dynamic Activation Pruning. The comparison between TinyTL and dynamic activation pruning [31] is summarized in Figure 4. TinyTL is more effective because it re-designed the transfer learning framework (lite residual module, feature extractor adaptation) rather than prune an existing architecture. The transfer accuracy drops quickly when the pruning ratio increases beyond 50% (only 2× memory saving). In contrast, TinyTL can achieve much higher memory reduction without loss of accuracy.
Initialization for Lite Residual Modules. By default, we use the pre-trained weights on the pretraining dataset to initialize the lite residual modules. It requires to have lite residual modules during both the pre-training phase and transfer learning phase. When applying TinyTL to existing pre-trained neural networks that do not have lite residual modules during the pre-training phase, we need to use another initialization strategy for the lite residual modules during transfer learning. To verify the effectiveness of TinyTL under this setting, we also evaluate the performances of TinyTL when using random weights [62] to initialize the lite residual modules except for the scaling parameter of the final normalization layer in each lite residual module. These scaling parameters are initialized with zeros.
Table 4 reports the summarized results. We find that using the pre-trained weights to initialize the lite residual modules consistently outperforms using random weights. Besides, we also find that TinyTL-RandomL+B still provides highly competitive results on Cars, Food, Aircraft, CIFAR10, CIFAR100, and CelebA. Therefore, if the budget allows, it is better to use pre-trained weights to initialize the lite residual modules. If not, TinyTL can still be applied and provides competitive results on datasets whose distribution is far from the pre-training dataset.

[Figure 5: top1 accuracy vs. training memory footprint (MB) of TinyTL under batch size 8 and batch size 1 on Flowers102, Aircraft, and Stanford-Cars.]
Results of TinyTL under Batch Size 1. Figure 5 demonstrates the results of TinyTL when using a training batch size of 1. We tune the initial learning rate for each dataset while keeping the other training settings unchanged. As our model employs group normalization rather than batch normalization (Section 3.3), we observe little or no loss of accuracy compared to training with batch size 8. Meanwhile, the training memory footprint is further reduced to around 16MB, a typical L3 cache size. This makes it much easier to train within the cache (SRAM), which greatly reduces energy consumption compared to DRAM-based training.
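The batch-size independence of group normalization that enables this is easy to verify: GN computes statistics per sample and per channel group, so a batch of one is handled identically to each sample within a larger batch. A quick sanity check (using 8 channels per group, as in our models):

```python
import torch
import torch.nn as nn

# Group normalization computes statistics per sample (per group of channels),
# so a batch of 1 behaves exactly like each sample inside a batch of 8.
gn = nn.GroupNorm(num_groups=4, num_channels=32)   # 8 channels per group
x = torch.randn(8, 32, 14, 14)

batched = gn(x)
one_by_one = torch.cat([gn(x[i:i + 1]) for i in range(8)])
print(torch.allclose(batched, one_by_one, atol=1e-6))  # True
```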
5 Conclusion
We proposed Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning that aims to adapt pre-trained models to newly collected data on edge devices. Unlike previous methods that focus on reducing the number of parameters or FLOPs, TinyTL directly optimizes the training memory footprint by fixing the memory-heavy modules (i.e., weights) while learning memory-efficient bias modules. We further introduce lite residual modules that significantly improve the adaptation capacity of the model with little memory overhead. Extensive experiments on benchmark datasets consistently show the effectiveness and memory-efficiency of TinyTL, paving the way for efficient on-device machine learning.
Broader Impact
The proposed efficient on-device learning technique greatly reduces the training memory footprint of deep neural networks, enabling pre-trained models to be adapted to new data locally on edge devices without leaking that data to the cloud. It can help democratize AI for people in rural areas where the Internet is unavailable or the network condition is poor. They can not only run inference but also fine-tune AI models on their local devices without connecting to cloud servers. This can also benefit privacy-sensitive AI applications, such as health care, smart home, and so on.
Acknowledgements
We thank MIT-IBM Watson AI Lab, NSF CAREER Award #1943349 and NSF Award #2028888 for supporting this research. We thank MIT Satori cluster for providing the computation resource.

1. What is the focus and contribution of the paper regarding edge device deployment?
2. What are the strengths and weaknesses of the proposed tiny transfer learning method?
3. Do you have any concerns or questions regarding the featurizer adaptation process?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions
The paper proposes tiny transfer learning, a method for adapting pre-trained models to new data on edge devices. The proposed method achieves this by not changing the model weights completely, but instead retraining only the biases and augmenting the model with lite residual 'corrections' to the feature maps. The paper also proposes a featurizer selection method, where sub-networks from the pretrained super-net are identified and deployed for different datasets. The paper reports significant improvements in performance using these methods.
Strengths
Authors identify an important problem in edge deployment: adapting pre-trained models on device. Their main ideas are novel and lead to much improved memory utilization on the edge without sacrificing accuracy.
Weaknesses
Overall the paper introduces novel ideas to solve an important and interesting problem. The results are also impressive. Unfortunately, there are some weaknesses in terms of writing/details in the paper. The major weakness of this paper I found was a lack of clarity on the featurizer adaptation process. A lot of choices are unexplained, unmotivated, or missing. Unfortunately, this takes away a lot from the rest of the paper as I am unable to get a sense of the complexity of this step. How expensive is this? How does one decide on various choices --- modelling, subnet selection, optimization steps, 'accuracy predictor', etc.? For instance:
+ L53-54: What is the discrete optimization space here? What are the variables? What is the objective?
+ L171-175: The notation is hard to follow. What are the elements of the set? How are they related? Though the intuition of sub-nets and super-nets is clear, their definitions are not provided, leading to a lot of questions. How does one decide what to choose as the super network? Is it just the featurizer of our original network? How does one decide which subnets to even consider as candidates? Again, 'discrete optimization space' is used here without defining what we are optimizing over.
+ L186-189: I'm not sure exactly what is happening in fine-tuning the super-net set. What is the reason for randomly sampling a subnet in each training step? Why is that superior over, say, sampling subnets based on their accuracy (i.e., updating better models more frequently)?
+ L190: Not sure what "450, [sub-net, accuracy]" are. What is the 'accuracy-predictor'? This information is not provided in the main text and is unmotivated. Even the description in the appendix is only about the model structure. The 'why one requires it', 'how one decides on an architecture', etc. are not discussed.
1. What is the focus and contribution of the paper regarding transfer learning?
2. What are the strengths of the proposed approach, particularly in terms of memory efficiency and expressivity?
3. Are there any concerns or questions regarding the paper's methodology or assumptions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any recent works related to the paper's topic that the reviewer thinks the authors should consider?
Summary and Contributions
The paper presents a new transfer learning pipeline/method which includes quite a few nice ideas, either new or existing, to enable on-device learning on memory-constrained edge devices where RAM costs a lot of energy. The proposed method achieves this by not changing the model weights completely, but instead retraining only biases and augmenting the model with lite residual 'corrections' to the feature map. The paper makes the observation that during backprop, bias updates require little memory (activations don't need to be stored) compared to weight updates. Combining this with new lite-residual blocks, which are again cheap to update, improves expressivity, resulting in a better final model for downstream transfer tasks like Aircraft, Flowers, and Cars. The authors also propose the use of one super-net to cater to all the downstream tasks; the appropriate subnet can be chosen through feature extractor adaptation routines that borrow ideas from the NAS literature. The experiments are in standard vision transfer learning settings and show that TinyTL has a much smaller memory footprint for transferring a pretrained network to a smaller downstream dataset, with good boosts in accuracy.
Strengths
The paper tackles a very important problem of memory-efficient on-device training, and the building blocks are well motivated, technically sound, and solid. Strengths include:
1) The observation that updating only biases avoids storing activations.
2) Adding new lite-residual blocks to regain the expressivity lost by not updating the weights.
3) One super-net for pre-training, with backbones then adapted to the downstream datasets via feature extractor adaptation.
4) Extensive experimentation and ablation studies, along with memory and compute costs for all the experiments in the paper.
5) Details for reproducibility, with a promise of open-sourcing the code.
The only other recent paper I saw which tries to reduce memory footprint (by removing the expensive intermediate feature maps) is RNNPool (Saha et al., 2020), and I am not aware of other methods; I will defer to other reviewers in case there are any. The 13.3X memory gains over Inception-V3 (full fine-tuning) are very impressive. This paper also shows the trade-off between various modes of transfer learning, like last-layer tuning, BN+last, and full, and can be used for future benchmarks. They also show 2.3x and 9.8x memory reductions compared to the standard last-layer and BN+last tuning methods while having better accuracy. The comparison to dynamic activation pruning is nice and the ablation about the design choices is encouraging. The paper also shows that the activation size reduces by 10x (again, the only place with similar numbers against MBV2 is RNNPool) along with a reduction in parameter size. All the figures and tables are well made and the authors should be appreciated for that.
Weaknesses
These are not weaknesses but rather things I noticed and don't have clarity about.
1) I didn't understand the (ours) network in Figure 3. I looked around but didn't find what that was referring to. I assumed it referred to the TinyTL FeatureAdapt (FA) model from Table 1.
2) I again assume Figure 4 is a theoretical computation and not an actual deployment on the RPi-1. It would be great to see an actual deployment if it is not already one.
3) The TinyTL method still assumes batch training, but on edge devices smaller than an RPi, things happen in a streaming fashion, and a batch size of 1 is probably what we might want to focus on. Any thoughts on this would be great. I think showcasing effectiveness in streaming would be a great thing, assuming the downstream dataset comes that way (which might be true for a lot of these devices).
4) There is some recent work on subnets inside a big net (like Wortsman et al. 2020, which is very recent and I don't expect the authors to know it) and on training only BN layers and how effective that could be (https://arxiv.org/abs/2003.00152). It would be good to include them in related work broadly, along with RNNPool-style works.
5) I had to search around for the overhead due to the more involved pipeline with super-net and FA, which is in the Appendix; it would be great to point to it in the main paper and briefly mention it.
6) I don't completely get the parameter count in Fig 6 (right); can you flesh it out somewhere? (I don't know what model to use to compute and get that number.)
7) Lastly, there are non-deep-learning methods for on-device ML like Bonsai (Kumar et al., ICML 2017), and it would be good to talk about them too.
Authors should talk about negative impacts as well in the broader impact section.
NIPS | Title
TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning
Abstract
On-device learning enables edge devices to continually adapt the AI models to new data, which requires a small memory footprint to fit the tight memory constraint of edge devices. Existing work solves this problem by reducing the number of trainable parameters. However, this doesn’t directly translate to memory saving since the major bottleneck is the activations, not parameters. In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning. TinyTL freezes the weights while only learns the bias modules, thus no need to store the intermediate activations. To maintain the adaptation capacity, we introduce a new memory-efficient bias module, the lite residual module, to refine the feature extractor by learning small residual feature maps adding only 3.8% memory overhead. Extensive experiments show that TinyTL significantly saves the memory (up to 6.5×) with little accuracy loss compared to fine-tuning the full network. Compared to fine-tuning the last layer, TinyTL provides significant accuracy improvements (up to 34.1%) with little memory overhead. Furthermore, combined with feature extractor adaptation, TinyTL provides 7.3-12.9× memory saving without sacrificing accuracy compared to fine-tuning the full Inception-V3.
1 Introduction
Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices)1 have been ubiquitous in our daily lives. These devices keep collecting new and sensitive data through the sensor every day while being expected to provide high-quality and customized services without sacrificing privacy2. These pose new challenges to efficient AI systems that could not only run inference but also continually fine-tune the pre-trained models on newly collected data (i.e., on-device learning).
Though on-device learning can enable many appealing applications, it is an extremely challenging problem. First, edge devices are memory-constrained. For example, a Raspberry Pi 1 Model A only has 256MB of memory, which is sufficient for inference, but by far insufficient for training (Figure 1 left), even using a lightweight neural network architecture (MobileNetV2 [1]). Furthermore, the memory is shared by various on-device applications (e.g., other deep learning models) and the operating system. A single application may only be allocated a small fraction of the total memory, which makes this challenge more critical. Second, edge devices are energy-constrained. DRAM access consumes two orders of magnitude more energy than on-chip SRAM access. The large memory footprint of activations cannot fit into the limited on-chip SRAM, thus has to access DRAM. For instance, the training memory of MobileNetV2, under batch size 16, is close to 1GB, which is by far larger than the SRAM size of an AMD EPYC CPU3 (Figure 1 left), not to mention lower-end
1https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/ 2https://ec.europa.eu/info/law/law-topic/data-protection_en 3https://www.amd.com/en/products/cpu/amd-epyc-7302
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Training 128x expensive! Inference Memory Footprint, Batch Size = 1 (20MB)
Memory Cost #batch size ResNet50 Act ResNet50 Params ResNet50 Running Act ResNet50 Training Memory Cost #batch size ResNet50 Inference Memory Cost #batch size MobileNetV2 Act MobileNetV2 Params MobileNetV2 Running Act MobileNetV2 Training Memory Cost #batch size MobileNetV2 Inference Memory Cost Untitled 1 0 88.4 102.23 6.42 190.63 0 108.65 0 54.80 14.02 5.60 68.82 0 19.62 Untitled 2 1 176.8 102.23 279.03 1 108.65 1 109.60 14.02 123.62 1 19.62 Untitled 3 2 353.6 102.23 456.83 2 108.65 2 219.20 14.02 233.22 2 19.62 Untitled 4 3 707.2 102.23 809.43 3 108.65 3 438.40 14.02 452.42 3 19.62 4 1414.4 102.23 1516.63 4 108.65 4 876.80 14.02 890.82 4 19.62 101 102 103 TPU SRAM (28MB) 21 4 8 Raspberry Pi 1 DRAM (256MB) float mult SRAM access DRAM access Energy 3.7 5.0 640.0 Table 1 ResNet MBV2-1.4 Params (M) 102 24 Activations (M) 707.2 626.4 0 200 400 600 800 Param (MB) Activation (MB) ResNet-50 MbV2-1.4 4.3x 1.1x The main bottleneck does not improve much. DRAM: 640 pJ/byte SRAM: 5 pJ/byte 6.9x larger Table 1-1 MobileNetV3-1.4 4 40 59 16 Batch Size M bV 2 M em or y Fo ot pr in t ( M B) Activation is the main bottleneck, not parameters. float mult SRAM access DRAM access Energy 3.7 5.0 640.0 Training Inference Batch Size 101 102 103 M ob ile Ne tV 2 M em or y Fo ot pr in t ( M B) TPU SRAM (28MB) 21 4 8 16 Raspberry Pi 1 Model A DRAM (256MB) 32 bit Float Mult 32 bit SRAM Access 32 bit DRAM Access 102 103 101 100 En er gy (p J) 3.7 pJ 5 pJ 640 pJ 128x Expensive float mult SRAM access DRAM access Energy 3.7 5.0 640.0
Inference, bs=1 Energy 20.0 0 125 250 375 500 Inference Batch Size = 1 M ob ile Ne tV 2 M em or y Fo ot pr in t ( M B) SRAM: 5 pJ/byte DRAM: 640 pJ/byte 128x expensive!
Table 2
SRAM Access Training, bs=8
Energy 20 890.82
Table 3
ResNet-50 MbV2-1.4
Param (MB) 102 24 Activation (MB) 1414.4 1252.8
1
edge platforms. If the training memory can fit on-chip SRAM, it will drastically improve the speed and energy efficiency.
There is plenty of efficient inference techniques that reduce the number of trainable parameters and the computation FLOPs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], however, parameter-efficient or FLOPs-efficient techniques do not directly save the training memory. It is the activation that bottlenecks the training memory, not the parameters. For example, Figure 1 (right) compares ResNet-50 and MobileNetV21.4. In terms of parameter size, MobileNetV2-1.4 is 4.3× smaller than ResNet-50. However, for training activation size, MobileNetV2-1.4 is almost the same as ResNet-50 (only 1.1× smaller), leading to little memory reduction. It is essential to reduce the size of intermediate activations required by back-propagation, which is the key memory bottleneck for efficient on-device training.
In this paper, we propose Tiny-Transfer-Learning (TinyTL) to address these challenges. By analyzing the memory footprint during the backward pass, we notice that the intermediate activations (the main bottleneck) are only needed when updating the weights, not the biases (Eq. 2). Inspired by this finding, we propose to freeze the weights of the pre-trained feature extractor and only update the biases to reduce the memory footprint (Figure 2b). To compensate for the capacity loss, we introduce a memory-efficient bias module, called lite residual module, which improves the model capacity by refining the intermediate feature maps of the feature extractor (Figure 2c). Meanwhile, we aggressively shrink the resolution and width of the lite residual module to have a small memory overhead (only 3.8%). Extensive experiments on 9 image classification datasets with the same pre-trained model (ProxylessNAS-Mobile [11]) demonstrate the effectiveness of TinyTL compared to previous transfer learning methods. Further, combined with a pre-trained once-for-all network [10], TinyTL can select a specialized sub-network as the feature extractor for each transfer dataset (i.e., feature extractor adaptation): given a more difficult dataset, a larger sub-network is selected, and vice versa. TinyTL achieves the same level of (or even higher) accuracy compared to fine-tuning the full Inception-V3 while reducing the training memory footprint by up to 12.9×. Our contributions can be summarized as follows:
• We propose TinyTL, a novel transfer learning method to reduce the training memory footprint by an order of magnitude for efficient on-device learning. We systematically analyze the memory of training and find the bottleneck comes from updating the weights, not biases (assume ReLU activation).
• We also introduce the lite residual module, a memory-efficient bias module to improve the model capacity with little memory overhead.
• Extensive experiments on transfer learning tasks show that our method is highly memory-efficient and effective. It reduces the training memory footprint by up to 12.9× without sacrificing accuracy.
2 Related Work
Efficient Inference Techniques. Improving the inference efficiency of deep neural networks on resource-constrained edge devices has recently drawn extensive attention. Starting from [4, 5, 12, 13,
14], one line of research focuses on compressing pre-trained neural networks, including i) network pruning that removes less-important units [4, 15] or channels [16, 17]; ii) network quantization that reduces the bitwidth of parameters [5, 18] or activations [19, 20]. However, these techniques cannot handle the training phase, as they rely on a well-trained model on the target task as the starting point.
Another line of research focuses on lightweight neural architectures by either manual design [1, 2, 3, 21, 22] or neural architecture search [6, 8, 11, 23]. These lightweight neural networks provide highly competitive accuracy [10, 24] while significantly improving inference efficiency. However, concerning the training memory efficiency, key bottlenecks are not solved: the training memory is dominated by activations, not parameters (Figure 1).
There are also some non-deep learning methods [25, 26, 27] that are designed for efficient inference on edge devices. These methods are suitable for handling simple tasks like MNIST. However, for more complicated tasks, we still need the representation capacity of deep neural networks.
Memory Footprint Reduction. Researchers have been seeking ways to reduce the training memory footprint. One typical approach is to re-compute discarded activations during backward [28, 29]. This approach reduces memory usage at the cost of a large computation overhead. Thus it is not preferred for edge devices. Layer-wise training [30] can also reduce the memory footprint compared to end-to-end training. However, it cannot achieve the same level of accuracy as end-to-end training. Another representative approach is through activation pruning [31], which builds a dynamic sparse computation graph to prune activations during training. Similarly, [32] proposes to reduce the bitwidth of training activations by introducing new reduced-precision floating-point formats. Besides reducing the training memory cost, there are some techniques that focus on reducing the peak inference memory cost, such as RNNPool [33] and MemNet [34]. Our method is orthogonal to these techniques and can be combined to further reduce the memory footprint.
Transfer Learning. Neural networks pre-trained on large-scale datasets (e.g., ImageNet [35]) are widely used as a fixed feature extractor for transfer learning, then only the last layer needs to be fine-tuned [36, 37, 38, 39]. This approach does not require to store the intermediate activations of the feature extractor, and thus is memory-efficient. However, the capacity of this approach is limited, resulting in poor accuracy, especially on datasets [40, 41] whose distribution is far from ImageNet (e.g., only 45.9% Aircraft top1 accuracy achieved by Inception-V3 [42]). Alternatively, fine-tuning the full network can achieve better accuracy [43, 44]. But it requires a vast memory footprint and hence is not friendly for training on edge devices. Recently, [45,46] propose to only update parameters of the batch normalization (BN) [47] layers, which greatly reduces the number of trainable parameters. Unfortunately, parameter-efficiency doesn’t translate to memory-efficiency. It still requires a large amount of memory (e.g., 326MB under batch size 8) to store the input activations of the BN layers (Table 3). Additionally, the accuracy of this approach is still much worse than fine-tuning the full network (70.7% v.s. 85.5%; Table 3). People can also partially fine-tune some layers, but how many layers to select is still ad hoc. This paper provides a systematic approach to save memory without losing accuracy.
3 Tiny Transfer Learning
3.1 Understanding the Memory Footprint of Back-propagation
Without loss of generality, we consider a neural networkM that consists of a sequence of layers:
M(·) = Fwn(Fwn−1(· · · Fw2(Fw1(·)) · · · )), (1)
where wi denotes the parameters of the ith layer. Let ai and ai+1 be the input and output activations of the ith layer, respectively, and L be the loss. In the backward pass, given ∂L∂ai+1 , there are two goals for the ith layer: computing ∂L∂ai and ∂L ∂wi .
Assuming the $i$th layer is a linear layer whose forward process is given as $a_{i+1} = a_i W + b$, its backward process under batch size 1 is
$$\frac{\partial L}{\partial a_i} = \frac{\partial L}{\partial a_{i+1}} \frac{\partial a_{i+1}}{\partial a_i} = \frac{\partial L}{\partial a_{i+1}} W^T, \qquad \frac{\partial L}{\partial W} = a_i^T \frac{\partial L}{\partial a_{i+1}}, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial a_{i+1}}. \quad (2)$$
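To make Eq. (2) concrete, the following minimal NumPy sketch (our own illustration, not code from the paper) computes the three gradients of a linear layer and shows that only $\partial L / \partial W$ touches the stored input activation $a_i$:

```python
import numpy as np

def linear_backward(a_i, W, grad_out):
    """grad_out is dL/da_{i+1}; shapes: a_i (1, d_in), W (d_in, d_out)."""
    grad_a = grad_out @ W.T    # dL/da_i: needs only W, not a_i
    grad_W = a_i.T @ grad_out  # dL/dW: the only term that needs a_i
    grad_b = grad_out.copy()   # dL/db: needs neither W nor a_i
    return grad_a, grad_W, grad_b
```

If $a_i$ is discarded after the forward pass, grad_a and grad_b can still be computed exactly, which is the property TinyTL exploits.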
fmap in memory fmap not in memory
learned weights on target task pre-trained weights (a) Fine-tune the full network Downsample Upsample (b) Lightweight residual learning (ours) (d) Our lightweight residual branch KxK Group Conv 1x1 Conv keep activations small while using group conv to increase the arithmetic intensity (c) Mobile inverted bottleneck block little computation but large activation (a) Fine-tune the full network (Conventional)
train a once-for-all network (c) Lite residual learning fmap in memory fmap not in memory learnable params fixed params weight bias mobile inverted bottleneck blockith UpsampleDownsample Group Conv 1x1 Conv (b) Fine-tune bias only
(a) Fine-tune the full network (Conventional)
(c) Lite residual learning(d) Feature network adaptation
fmap in memory fmap not in memory learnable params fixed params weight bias mobile inverted bottleneck blockith
Aircraft Cars Flowers
Downsample Group Conv
1x1 Conv
Avoid inverted bottleneck
1x1 Conv
(b) Fine-tune bias only C, R 6C, R 6C, R C, R C, 0.5R C, 0.5R 1x1 Conv1x1 Conv Depth-wise Conv 1x1 Conv1x1 Conv Depth-wise Conv
1x1 Conv1x1 Conv Depth-wise Conv
According to Eq. (2), the intermediate activations (i.e., $\{a_i\}$) that dominate the memory footprint are only required to compute the gradient of the weights (i.e., $\partial L / \partial W$), not the bias. If we only update the bias, training memory can be greatly saved. This property is also applicable to convolution layers and normalization layers (e.g., batch normalization [47], group normalization [48], etc.) since they can be considered as special types of linear layers.
Regarding non-linear activation layers (e.g., ReLU, sigmoid, h-swish), sigmoid and h-swish require storing $a_i$ to compute $\partial L / \partial a_i$ (Table 1), hence they are not memory-efficient; activation layers built upon them, such as tanh and swish [49], are consequently not memory-efficient either. In contrast, ReLU and other ReLU-style activation layers (e.g., LeakyReLU [50]) only require storing a binary mask representing whether each value is smaller than 0, which is 32× smaller than storing $a_i$.
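As an illustration of this saving, the sketch below (our own, not from the paper) implements a ReLU whose backward pass uses only a boolean mask. Note that PyTorch stores boolean tensors at one byte per element, so reaching the full 32× saving in practice would additionally require bit-packing the mask:

```python
import torch

class MaskReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        mask = x > 0                 # boolean mask instead of the float activation
        ctx.save_for_backward(mask)  # only the mask is kept for the backward pass
        return x * mask

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        return grad_out * mask
```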
3.2 Lite Residual Learning
Based on the memory footprint analysis, one possible solution for reducing the memory cost is to freeze the weights of the pre-trained feature extractor while updating only the biases (Figure 2b). However, updating only the biases has limited adaptation capacity. Therefore, we introduce lite residual learning, which exploits a new class of generalized memory-efficient bias modules to refine the intermediate feature maps (Figure 2c).
Formally, a layer with frozen weights and learnable biases can be represented as:
$$a_{i+1} = \mathcal{F}_W(a_i) + b. \quad (3)$$
To improve the model capacity while keeping a small memory footprint, we propose to add a lite residual module that generates a residual feature map to refine the output:
$$a_{i+1} = \mathcal{F}_W(a_i) + b + \mathcal{F}_{w_r}(a_i'), \qquad a_i' = \mathrm{reduce}(a_i), \quad (4)$$
where $a_i' = \mathrm{reduce}(a_i)$ is the reduced activation. According to Eq. (2), learning these lite residual modules only requires storing the reduced activations $\{a_i'\}$ rather than the full activations $\{a_i\}$.
Implementation (Figure 2c). We apply Eq. (4) to mobile inverted bottleneck blocks (MB-blocks) [1]. The key principle is to keep the activation small. Following this principle, we explore two design dimensions to reduce the activation size (a code sketch follows the list below):
• Width. The widely-used inverted bottleneck requires a huge number of channels (6×) to compensate for the small capacity of a depthwise convolution, which is parameter-efficient but highly activation-inefficient. Even worse, converting 1× channels to 6× channels back and forth requires two 1×1 projection layers, which doubles the total activation to 12×. Depthwise convolution also has very low arithmetic intensity (its OPs/Byte is less than 4% of a 1×1 convolution's OPs/Byte with 256 channels), and is thus highly memory-inefficient with little data reuse. To solve these limitations, our lite residual module employs group convolution, which has much higher arithmetic intensity than depthwise convolution, providing a good trade-off between FLOPs and memory. This also removes the 1×1 projection layers, reducing the total channel number by $\frac{6 \times 2 + 1}{1 + 1} = 6.5\times$.
• Resolution. The activation size grows quadratically with the resolution. Therefore, we shrink the resolution in the lite residual module by employing a 2×2 average pooling to downsample the input feature map. The output of the lite residual module is then upsampled to match the size of the main branch's output feature map via bilinear upsampling. Combining the resolution and width optimizations, the activation of our lite residual module is roughly $2^2 \times 6.5 = 26\times$ smaller than that of the inverted bottleneck.
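The sketch below illustrates one plausible realization of the lite residual branch in PyTorch. It is our own reconstruction from Figure 2c and Section 4.1 (group number 2, kernel size 5, group normalization with 8 channels per group), so the exact layer ordering and normalization placement may differ from the released implementation:

```python
import torch.nn as nn
import torch.nn.functional as F

class LiteResidual(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=5, groups=2, pool=2):
        super().__init__()
        # Channel counts are assumed divisible by 8 for GroupNorm(channels // 8).
        self.pool = nn.AvgPool2d(pool)  # shrink resolution (2x2 average pooling)
        self.conv = nn.Conv2d(in_ch, in_ch, kernel_size,
                              padding=kernel_size // 2, groups=groups, bias=False)
        self.norm = nn.GroupNorm(in_ch // 8, in_ch)
        self.act = nn.ReLU(inplace=True)                      # memory-efficient activation
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False)   # single 1x1 layer,
        self.out_norm = nn.GroupNorm(out_ch // 8, out_ch)     # no inverted bottleneck

    def forward(self, x, main_out):
        r = self.act(self.norm(self.conv(self.pool(x))))
        r = self.out_norm(self.proj(r))
        # Bilinear upsampling back to the main branch's spatial size.
        r = F.interpolate(r, size=main_out.shape[-2:], mode='bilinear',
                          align_corners=False)
        return main_out + r  # refine the frozen MB-block's output, as in Eq. (4)
```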
3.3 Discussions
Normalization Layers. As discussed in Section 3.1, TinyTL flexibly supports different normalization layers, including batch normalization (BN), group normalization (GN), layer normalization (LN), and so on. In particular, BN is the most widely used one in vision tasks. However, BN requires a large batch size to obtain accurate running statistics during training, which is not suitable for on-device learning, where we want a small training batch size to reduce the memory footprint. Moreover, the data may arrive in a streaming fashion in on-device learning, which requires a training batch size of 1. In contrast to BN, GN can handle a small training batch size, as the statistics in GN are computed independently for different inputs. In our experiments, GN with a small training batch size (e.g., 8) performs slightly worse than BN with a large training batch size (e.g., 256). However, as we target on-device learning, we choose GN in our models.
Feature Extractor Adaptation. TinyTL can be applied to different backbone neural networks, such as MobileNetV2 [1], ProxylessNASNets [11], EfficientNets [24], etc. However, since the weights of the feature extractor are frozen in TinyTL, we find that using the same backbone neural network for all transfer tasks is sub-optimal. Therefore, we choose the backbone of TinyTL using a pre-trained once-for-all network [10] to adaptively select the specialized feature extractor that best fits the target transfer dataset. Specifically, a once-for-all network is a special kind of neural network from which many different sub-networks can be derived without retraining, by selectively activating parts of the model according to the architecture configuration (i.e., depth, width, kernel size, resolution) while the weights are shared. This allows us to efficiently evaluate the effectiveness of a backbone neural network on the target transfer dataset without the expensive pre-training process. Further details of the feature extractor adaptation process are provided in Appendix A.
4 Experiments
4.1 Setups
Datasets. Following the common practice [43, 44, 45], we use ImageNet [35] as the pre-training dataset, and then transfer the models to 8 downstream object classification tasks, including Cars [41], Flowers [51], Aircraft [40], CUB [52], Pets [53], Food [54], CIFAR10 [55], and CIFAR100 [55]. Besides object classification, we also evaluate our TinyTL on human facial attribute classification tasks, where CelebA [56] is the transfer dataset and VGGFace2 [57] is the pre-training dataset.
Model Architecture. To justify the effectiveness of TinyTL, we first apply TinyTL and previous transfer learning methods to the same backbone neural network, ProxylessNAS-Mobile [11]. For each MB-block in ProxylessNAS-Mobile, we insert a lite residual module as described in Section 3.2 and Figure 2 (c). The group number is 2, and the kernel size is 5. We use the ReLU activation since it is more memory-efficient according to Section 3.1. We replace all BN layers with GN layers to better support small training batch sizes. We set the number of channels per group to 8 for all GN layers. Following [58], we apply weight standardization [59] to convolution layers that are followed by GN.
For feature extractor adaptation, we build the once-for-all network using the MobileNetV2 design space [10, 11] that contains five stages with a gradually decreased resolution, and each stage consists of a sequence of MB-blocks. In the stage-level, it supports elastic depth (i.e., 2, 3, 4). In the block-level, it supports elastic kernel size (i.e., 3, 5, 7) and elastic width expansion ratio (i.e., 3, 4, 6). Similarly, for each MB-block in the once-for-all network, we insert a lite residual module that supports elastic group number (i.e., 2, 4) and elastic kernel size (i.e., 3, 5).
Training Details. We freeze the memory-heavy modules (weights of the feature extractor) and only update the memory-efficient modules (biases, lite residual modules, classifier head) during transfer learning. The models are fine-tuned for 50 epochs using the Adam optimizer [60] with batch size 8 on a single GPU. The initial learning rate is tuned for each dataset, while a cosine schedule [61] is adopted for learning rate decay. We apply 8-bit weight quantization [5] on the frozen weights to reduce the parameter size, which causes a negligible accuracy drop in our experiments. For all compared methods, we also assume 8-bit weight quantization is applied, if eligible, when calculating their training memory footprint. Additionally, as PyTorch does not support explicit fine-grained memory management, we use the theoretically calculated training memory footprint for comparison in our experiments. For simplicity, we assume the batch size is 8 for all compared methods throughout the experiment section.
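A minimal sketch of this training setup follows. The module name patterns ('lite_residual', 'classifier') are hypothetical and depend on how the model is defined; only the selection logic (freeze everything, then re-enable the memory-efficient modules) reflects the paper:

```python
import torch

def trainable_parameters(model):
    # Freeze all weights first.
    for p in model.parameters():
        p.requires_grad = False
    # Re-enable biases, lite residual branches, and the classifier head.
    for name, p in model.named_parameters():
        if name.endswith('bias') or 'lite_residual' in name or 'classifier' in name:
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# model and init_lr are assumed to be defined elsewhere.
optimizer = torch.optim.Adam(trainable_parameters(model), lr=init_lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
```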
[Figure 3 data: top1 accuracy vs. training memory footprint of Full, Last, BN, Bias, LiteResidual, and LiteResidual+Bias under different input resolutions on Stanford-Cars, Flowers102, Aircraft, CUB200, Food101, and Pets, together with the corresponding model sizes and activation sizes.]
4.2 Main Results
Effectiveness of TinyTL. Table 2 reports the comparison between TinyTL and previous transfer learning methods, including: i) fine-tuning the last linear layer [36, 37, 39] (referred to as FT-Last); ii) fine-tuning the normalization layers (e.g., BN, GN) and the last linear layer [42] (referred to as FT-Norm+Last); iii) fine-tuning the full network [43, 44] (referred to as FT-Full). We also study several variants of TinyTL: i) TinyTL-B, which fine-tunes biases and the last linear layer; ii) TinyTL-L, which fine-tunes lite residual modules and the last linear layer; iii) TinyTL-L+B, which fine-tunes lite residual modules, biases, and the last linear layer. All compared methods use the same pre-trained model but fine-tune different parts of it, as discussed above. We report the average accuracy across five runs.
Compared to FT-Last, TinyTL maintains a similar training memory footprint while improving the top1 accuracy by a significant margin. In particular, TinyTL-L+B improves the top1 accuracy by 34.1% on Cars, by 30.5% on Aircraft, by 12.6% on CIFAR100, by 11.0% on Food, etc. It shows the improved adaptation capacity of our method over FT-Last. Compared to FT-Norm+Last, TinyTL-L+B improves the training memory efficiency by 5.2× while providing up to 7.3% higher top1 accuracy, which shows that our method is not only more memory-efficient but also more effective than FT-Norm+Last. Compared to FT-Full, TinyTL-L+B@320 can achieve the same level of accuracy while providing 6.0× training memory saving. Regarding the comparison between different variants of TinyTL, both TinyTL-L and TinyTL-L+B have clearly better accuracy than TinyTL-B while incurring little memory overhead. It shows that the lite residual modules are essential in TinyTL. Besides, we find that TinyTL-L+B is slightly better than TinyTL-L on most of the datasets while maintaining the same memory footprint. Therefore, we choose TinyTL-L+B as the default.
Figure 3 demonstrates the results under different input resolutions. We can observe that simply reducing the input resolution will result in significant accuracy drops for FT-Full. In contrast, TinyTL can reduce the memory footprint by 3.9-6.5× while having the same or even higher accuracy compared to fine-tuning the full network.
Combining TinyTL and Feature Extractor Adaptation. Table 3 summarizes the results of TinyTL and previously reported transfer learning results, where different backbone neural networks are used as the feature extractor. Combined with feature extractor adaptation, TinyTL achieves 7.5-12.9× memory saving compared to fine-tuning the full Inception-V3, reducing the footprint from 850MB to 66-114MB while providing the same level of accuracy. Additionally, we try updating the last two layers besides biases and lite residual modules (indicated by †), which results in 2MB of extra training memory footprint. This slightly improves the accuracy, from 90.7% to 91.5% on Cars, from 85.0% to 86.0% on Food, and from 84.8% to 85.4% on Aircraft.

[Figure 4 data: top1 accuracy vs. training memory footprint of TinyTL and dynamic activation pruning (with ResNet-50 and MobileNetV2 backbones) on Flowers102, Aircraft, and Stanford-Cars.]
4.3 Ablation Studies and Discussions
Comparison with Dynamic Activation Pruning. The comparison between TinyTL and dynamic activation pruning [31] is summarized in Figure 4. TinyTL is more effective because it re-designs the transfer learning framework (lite residual module, feature extractor adaptation) rather than pruning an existing architecture. For activation pruning, the transfer accuracy drops quickly when the pruning ratio increases beyond 50% (only 2× memory saving). In contrast, TinyTL can achieve much higher memory reduction without loss of accuracy.
Initialization for Lite Residual Modules. By default, we use the weights pre-trained on the pre-training dataset to initialize the lite residual modules. This requires the lite residual modules to be present during both the pre-training phase and the transfer learning phase. When applying TinyTL to existing pre-trained neural networks that do not have lite residual modules in the pre-training phase, we need another initialization strategy for the lite residual modules during transfer learning. To verify the effectiveness of TinyTL under this setting, we also evaluate the performance of TinyTL when using random weights [62] to initialize the lite residual modules, except for the scaling parameters of the final normalization layer in each lite residual module, which are initialized with zeros.
Table 4 reports the summarized results. We find that using the pre-trained weights to initialize the lite residual modules consistently outperforms using random weights. Besides, we also find that TinyTL-RandomL+B still provides highly competitive results on Cars, Food, Aircraft, CIFAR10, CIFAR100, and CelebA. Therefore, given the budget, it is better to use pre-trained weights to initialize the lite residual modules. If not, TinyTL can still be applied and provides competitive results on datasets whose distribution is far from the pre-training dataset.

[Figure 5 data: top1 accuracy vs. training memory footprint of TinyTL under training batch size 8 and batch size 1 on Flowers102, Aircraft, and Stanford-Cars.]
Results of TinyTL under Batch Size 1. Figure 5 demonstrates the results of TinyTL when using a training batch size of 1. We tune the initial learning rate for each dataset while keeping the other training settings unchanged. As our model employs group normalization rather than batch normalization (Section 3.3), we observe little or no loss of accuracy compared to training with batch size 8. Meanwhile, the training memory footprint is further reduced to around 16MB, a typical L3 cache size. This makes it much easier to train within the cache (SRAM), which can greatly reduce energy consumption compared to DRAM-based training.
5 Conclusion
We proposed Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning that aims to adapt pre-trained models to newly collected data on edge devices. Unlike previous methods that focus on reducing the number of parameters or FLOPs, TinyTL directly optimizes the training memory footprint by fixing the memory-heavy modules (i.e., weights) while learning memory-efficient bias modules. We further introduce lite residual modules that significantly improve the adaptation capacity of the model with little memory overhead. Extensive experiments on benchmark datasets consistently show the effectiveness and memory-efficiency of TinyTL, paving the way for efficient on-device machine learning.
Broader Impact
The proposed efficient on-device learning technique greatly reduces the training memory footprint of deep neural networks, enabling pre-trained models to be adapted to newly collected data locally on edge devices without leaking it to the cloud. It can democratize AI for people in rural areas where the Internet is unavailable or the network condition is poor. They can not only run inference but also fine-tune AI models on their local devices without connections to cloud servers. This can also benefit privacy-sensitive AI applications, such as health care, smart home, and so on.
Acknowledgements
We thank MIT-IBM Watson AI Lab, NSF CAREER Award #1943349 and NSF Award #2028888 for supporting this research. We thank MIT Satori cluster for providing the computation resource. | 1. What is the main contribution of the paper regarding Tiny-Transfer-Learning?
2. What are the strengths of the proposed approach, particularly in addressing memory constraints?
3. What are the weaknesses of the paper, especially regarding the feature extractor adaptation process?
4. Do you have any concerns or questions about the methodology used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This manuscript introduces Tiny-Transfer-Learning to address memory-constrained issues on edge devices. The proposed method adapts pre-trained models to newly collected data by freezing the weights, but not the biases. Moreover, it suggests augmenting a lite residual module and selecting an architecture of the feature extractor from a large pre-trained super-net. The experimental results outperform fine-tuning methods significantly.
Strengths
This manuscript tackles an important problem of training on edge devices: back-propagation causes a huge training memory footprint. The proposed method is novel and leads to a lot of improvement in reducing memory, instead of parameters. I expect the proposed feature extractor adaptation to be applicable to conventional transfer learning.
Weaknesses
Methodology (feature extractor adaptation) - The proposed method seems to rely heavily on feature extractor adaptation to avoid sacrificing accuracy. But the expense of this process is unclear. Is fine-tuning the super-net feasible on edge devices? If not, it conflicts with the stated problem of memory-constrained on-device learning. Ablations - The proposed method is configured by three components: updating biases, lite residual learning, and feature extractor adaptation. To understand the proposed method in detail, a component-wise ablation study is needed to resolve the following questions: 1) Is updating the bias important? What happens when the bias is frozen? 2) Could the proposed method avoid sacrificing accuracy without the feature extractor adaptation?
NIPS | Title
Graph Stochastic Neural Networks for Semi-supervised Learning
Abstract
Graph Neural Networks (GNNs) have achieved remarkable performance in the task of semi-supervised node classification. However, most existing models learn a deterministic classification function, which lacks sufficient flexibility to explore better choices in the presence of various kinds of imperfect observed data, such as scarce labeled nodes and noisy graph structure. To overcome the rigidness and inflexibility of deterministic classification functions, this paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which aims to model the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function. Specifically, we introduce a learnable graph neural network coupled with a high-dimensional latent variable to model the distribution of the classification function, and further adopt amortised variational inference to approximate the intractable joint posterior for the missing labels and the latent variable. By maximizing the lower bound of the likelihood for the observed node labels, the instantiated models can be trained effectively in an end-to-end manner. Extensive experiments on three real-world datasets show that GSNN achieves substantial performance gains in different scenarios compared with state-of-the-art baselines.
1 Introduction
Graphs are essential tools to represent complex relationships among entities in various domains, such as social networks, citation networks, biological networks and physical networks. Analyzing graph data has become one of the most important topics in the machine learning community. As an abstraction of many graph mining tasks, semi-supervised node classification, which aims to predict the labels of unlabeled nodes given the graph structure, node features and labels of partial nodes, has received significant attention in recent years. Graph Neural Networks (GNNs), in particular, have achieved impressive performance in the graph-based semi-supervised learning task [1, 2, 3, 4, 5].
Most existing GNN models are designed to learn a deterministic classification function. This kind of design makes them look simple and elegant, but the other side of the coin is that the deterministic classification function makes these GNN models lack sufficient flexibility to cater for various kinds of imperfect observed data. For example, in many real situations, the ground-truth labels of nodes
are often expensive or difficult to obtain, which leads to the sparseness of the labeled nodes. The insufficient supervision information can easily lead to overfitting of the deterministic classification functions, especially when there are no additional labeled nodes serving as a validation set for early-stopping. Another example is noise in the graph structure. Whether arising naturally or injected deliberately by attackers, noise is prone to affect the neighbor information aggregation and mislead the learning of deterministic classification functions. The rigidness and inflexibility of deterministic classification functions make it difficult for them to bypass such issues and explore better choices.
In line with the aforementioned observations, this paper proposes a novel Graph Stochastic Neural Network (GSNN for short) to model the uncertainty of GNN classification functions. GSNN aims to learn a family of classification functions simultaneously rather than fitting a deterministic function. This gives GNNs the flexibility to handle imperfection or noise in graph data, and further bypass the traps caused by sparse labeled data and unreliable graph structure in real applications.
Specifically, we treat the classification function to be learned as a stochastic function and integrate it into the process of label inference. To model the distribution of the stochastic function, we introduce a learnable neural network, which is coupled with a high-dimensional latent variable and takes the message-passing form. To infer the missing labels, we need to obtain the joint posterior distribution of labels for unlabeled nodes and the classification function, of which the exact form is intractable in general. To solve the problem, we adopt the amortised variational inference [6, 7] to approximate the intractable posterior distribution with the other two types of neural networks. By maximizing the lower-bound of the likelihood for observed node labels, we could optimize all parameters effectively in an end-to-end manner. We conduct extensive experiments on three real-world datasets. The results show that compared with state-of-the-art baselines, GSNN not only achieves comparable or better performance in the standard experimental scenario with early-stopping, but also shows substantial performance gain when labeled nodes are scarce (no early-stopping) and there are deliberate edge perturbations in the graph structure.
2 Related Work
Graph Neural Networks for Graph-based Semi-supervised Learning: Recently, GNNs have been attracting considerable attention [8, 9, 10]. The early ideas were to derive different forms of the graph convolution in the spectral domain based on graph spectral theory [1, 11, 2, 3, 12]. Bruna et al. [1] propose the first-generation spectral-based GNN. To reduce the computational complexity, Defferrard et al. [2] propose to use a K-order Chebyshev polynomial to approximate the convolutional filter, which avoids intense calculations of the eigendecomposition of the normalized graph Laplacian. Kipf and Welling [3] further simplify the graph convolution with a first-order approximation, which reduces the number of parameters and improves the performance. Another line of research is to directly perform graph convolution in the spatial domain [13, 4, 14, 15, 16]. Gilmer et al. [13] generalize spatial-based methods as a message-passing mechanism. Hamilton et al. [4] propose a general inductive framework, which can learn an embedding function that generalizes to unseen nodes. Veličković et al. [5] further introduce the attention mechanism, which assigns different weights to neighbor nodes and aggregates features with discrimination. Besides, other works also demonstrate that considering edge attributes [17], adding jumping connections [18] and modeling the outcome dependency [19] can be beneficial. However, these models generally learn a deterministic classification function, which lacks sufficient flexibility to handle imperfect observed data such as scarce labeled nodes and noisy graph structure.
Uncertainty Modeling for Graph-based Semi-supervised Learning: There are also some works using uncertainty modeling for graph-based semi-supervised learning, which are related to this paper [20, 21, 22]. Ng et al. [22] introduce Gaussian processes to model the semi-supervised learning problem on graphs, which mitigates over-fitting to some extent. Zhang et al. [21] treat the observed graph as a realization from a parametric family of random graphs and propose Bayesian graph convolutional neural networks to incorporate the uncertain graph information. Ma et al. [20] further propose a flexible generative framework to model the joint distribution of the graph structure and the node labels. Most of these works model the uncertainty of the observed data (e.g., the graph structure). Different from them, in this paper, we view the classification function as a stochastic function and directly model its distribution, which brings better performance in many scenarios.
3 Our Solution
In this paper, we define an undirected graph as $G = (V, E)$, where $V = \{v_1, \dots, v_N\}$ represents a set of $N$ nodes and $E \subseteq V \times V$ is the set of edges. Let $A \in \{0,1\}^{N \times N}$ denote the binary adjacency matrix, i.e., $A_{u,v} = 1$ if and only if $(u,v) \in E$. Let $X \in \mathbb{R}^{N \times F}$ be the node attribute matrix, where $F$ is the feature dimension and the feature vector of node $v$ is expressed as $x_v$. Each node is labeled with one class in $C = \{c_1, \dots, c_{|C|}\}$. In practice, only partial nodes come with labels. The set of these labeled nodes is denoted as $V_L$ and the set of unlabeled nodes is denoted as $V_U := V \setminus V_L$. For the task of semi-supervised node classification, given $A$, $X$ and the label information of $V_L$, the goal is to infer the labels of nodes in $V_U$ by learning a classification function $f$. The classification results can be denoted as $Y := \{y_{v_1}, \dots, y_{v_N}\}$, where each $y_\cdot$ is a $|C|$-dimensional probability distribution over $C$. Most existing GNN models typically aim to learn a deterministic classification function, which lacks sufficient flexibility to cater for various kinds of imperfect observed data. For example, they are easy to overfit or be misled when labeled nodes are scarce or there exists noise in the graph structure. Therefore, instead of fitting a deterministic function, we here aim to learn a family of classification functions, which can be organized as a stochastic function $\mathcal{F}$ with the distribution denoted as $p(f)$. Under this setting, the distribution of $Y$ can be formalized as follows:
$$p(Y \mid A,X) \triangleq \int p(f)\, p\big(Y \mid f(A,X)\big)\, df = \int p(f) \prod_{v \in V} p\big(y_v \mid f(A,X)\big)\, df \quad (1)$$
where we use $p\big(Y \mid f(A,X)\big)$ to denote the distribution of $Y$ corresponding to the classification function $f$. Eq. (1) assumes that the label inference for each node is conditionally independent, given a selected classification function $f$, the adjacency matrix $A$ and the attribute matrix $X$.
3.1 Framework for GSNN
In order to model the uncertainty of the classification function in Eq. (1), we here approximate the stochastic function $\mathcal{F}$ using a learnable function $g_\phi$ (e.g., a neural network with parameters $\phi$) with a random latent vector $Z$ involved, as below:
$$\mathcal{F}(A,X) \triangleq g_\phi(A,X; Z) \quad (2)$$
where the prior distribution of $Z$ is $p(z)$, defined as a multivariate standard normal, i.e., $p(z) = \mathcal{N}(z; \mathbf{0}, I)$. Note that the randomness of $\mathcal{F}$ is induced by $Z$ and the expression capacity of $\mathcal{F}$ is captured by the structure of $g_\phi(\cdot\,;\cdot)$. Combining Eq. (2) with Eq. (1), the distribution $p(Y \mid A,X)$ can be rewritten as follows:
$$p(Y \mid A,X) = \int p(z) \prod_{v \in V} p\big(y_v \mid g_\phi(A,X;z)\big)\, dz \quad (3)$$
where $p\big(y_v \mid g_\phi(A,X;z)\big)$ is a distribution over $C$ for node $v$. In the semi-supervised transductive setting, the label information for labeled nodes in $V_L$ is also known. Denote $Y_L := \{y_v\}_{v \in V_L}$ and $Y_U := Y \setminus Y_L$. Under the above setting, the conditional distribution of $Y_U$, given $A$, $X$ and $Y_L$, can be formalized as follows:
$$p(Y_U \mid A,X,Y_L) \triangleq \int p(z \mid A,X,Y_L) \prod_{v \in V_U} p\big(y_v \mid g_{\phi_{Y_L}}(A,X;z)\big)\, dz \quad (4)$$
where $p(z \mid A,X,Y_L)$ is the posterior distribution of the latent vector $Z$ and $\phi_{Y_L}$ are the parameters to be learned when $Y_L$ is taken into consideration.
We assume that the distributions $p(z \mid A,X,Y_L)$ and $p\big(y_v \mid g_{\phi_{Y_L}}(A,X;z)\big)$ in Eq. (4) can be modeled by parametric families of distributions $p_\theta(z \mid A,X,Y_L)$ and $p_\theta(y_v \mid A,X,z)$ respectively, whose probability density functions are differentiable almost everywhere w.r.t. $\theta$. To predict $Y_U$ via modeling the distribution of the classification function, we need to obtain an intractable joint posterior $p_\theta(Y_U, z \mid A,X,Y_L)$. To solve this problem, we adopt variational inference. We introduce a variational distribution $q_\varphi(Y_U, z \mid A,X,Y_L)$ parameterized by $\varphi$ to approximate the true posterior $p_\theta(Y_U, z \mid A,X,Y_L)$. To learn the model parameters $\varphi$ and $\theta$, we aim to optimize the evidence lower bound (ELBO) of the log-likelihood function for the observed node labels, i.e., $\log p_\theta(Y_L \mid A,X)$.
Following the standard derivation of variational inference, the ELBO objective function can be obtained as follows:
$$\log p_\theta(Y_L \mid A,X) \ge \mathbb{E}_{q_\varphi(Y_U,z \mid A,X,Y_L)}\left[\log p_\theta(Y \mid A,X,z) + \log \frac{p(z)}{q_\varphi(Y_U,z \mid A,X,Y_L)}\right] \triangleq \mathcal{L}_{ELBO}(\theta,\varphi) \quad (5)$$
The variational joint posterior can be further factorized as $q_\varphi(Y_U,z \mid A,X,Y_L) = q_\varphi(Y_U \mid A,X,Y_L)\, q_\varphi(z \mid A,X,Y)$, noting that $Y = Y_L \cup Y_U$. From a sampling perspective, this means that the distribution of the random latent vector $Z$ depends on the observed data and on $Y_U$ sampled from the approximate posterior distribution $q_\varphi(Y_U \mid A,X,Y_L)$. On this basis, the ELBO objective function can be rewritten as follows:
$$\begin{aligned}
\mathcal{L}_{ELBO}(\theta,\varphi) = \; & \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)} \mathbb{E}_{q_\varphi(z \mid A,X,Y)} \log p_\theta(Y_L \mid A,X,z) \\
& - \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)} \mathrm{KL}\big(q_\varphi(z \mid A,X,Y)\,\|\,p(z)\big) \\
& - \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)} \Big(\log q_\varphi(Y_U \mid A,X,Y_L) - \mathbb{E}_{q_\varphi(z \mid A,X,Y)} \log p_\theta(Y_U \mid A,X,z)\Big)
\end{aligned} \quad (6)$$
where $\mathrm{KL}(\cdot\|\cdot)$ represents the Kullback-Leibler divergence between two distributions. The first term of Eq. (6) is the opposite of the cross-entropy between ground-truth label vectors and the predicted class distributions for labeled nodes. The form of the third term is similar to the KL divergence, which characterizes the distribution difference between $q_\varphi(Y_U \mid A,X,Y_L)$ and $p_\theta(Y_U \mid A,X,z)$. Based on amortised variational inference [6, 7], $q_\varphi(Y_U \mid A,X,Y_L)$, $q_\varphi(z \mid A,X,Y)$ and $p_\theta(Y \mid A,X,z)$ can be fitted by different types of neural networks, which are described in detail in Section 3.2. The overall framework is referred to as graph stochastic neural networks (GSNN for short), whose overview is shown in Fig. 1.
3.2 Model Instantiation, Training and Inference
In this part, we instantiate $q_\varphi(Y_U \mid A,X,Y_L)$, $q_\varphi(z \mid A,X,Y)$ and $p_\theta(Y \mid A,X,z)$ with three neural networks (i.e., qnet1, qnet2 and pnet), respectively.
Instantiating $q_\varphi(Y_U \mid A,X,Y_L)$ with qnet1: The neural network qnet1 is designed in the form of message-passing. It consists of $K$ layers that aggregate the features of neighbor nodes with the following layer-wise propagation rule:
$$h_v^k = \rho^{k-1}\Big(\sum_{u \in Ne\{v\} \cup \{v\}} a_{v,u}^{k-1}\, h_u^{k-1}\, W_{qnet1}^{k-1}\Big), \quad k = 1, \dots, K$$
$$q_\varphi(y_v \mid A,X,Y_L) = \mathrm{Cat}(y_v \mid h_v^K), \quad v \in V_U \quad (7)$$
where $Ne\{v\}$ is the set of neighbor nodes of node $v$, $h_v^k$ is the hidden representation of node $v$ in the $k$th layer, and $h_v^0 = x_v$. The parameter $a_{v,u}^{k-1}$ represents the aggregation coefficient between node $v$ and node $u$, and the matrix $W_{qnet1}^{k-1}$ contains the trainable parameters of the $k$th layer. The activation functions of the first $K-1$ layers (i.e., $\rho^0, \dots, \rho^{K-2}$) are ReLU, and the activation function of the $K$th layer is softmax, which constructs the categorical distribution $\mathrm{Cat}(\cdot)$, i.e., $q_\varphi(Y_U \mid A,X,Y_L)$. Note that $Y_L$ is not used as an input to qnet1, but as supervision information for training qnet1 in Eq. (10) below.
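A minimal two-layer sketch of qnet1 with mean (GCN-style) aggregation is given below. It is our own illustration; A_hat denotes the normalized adjacency matrix with self-loops and is kept dense for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet1(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w1 = nn.Linear(hid_dim, num_classes, bias=False)

    def forward(self, A_hat, X):
        h = F.relu(A_hat @ self.w0(X))                 # hidden representation h^{K-1}
        probs = F.softmax(A_hat @ self.w1(h), dim=-1)  # Cat(y_v | h_v^K)
        return h, probs  # h is reused by qnet2 to represent (A, X)
```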
Instantiating $q_\varphi(z \mid A,X,Y)$ with qnet2: The posterior distribution $q_\varphi(z \mid A,X,Y)$ depends on four parts of information: $A$, $X$, $Y_U$ and $Y_L$. Since qnet1 has already encoded $A$ and $X$, we directly use the hidden representations of the $(K-1)$th layer in qnet1 to represent them. The unlabeled information $Y_U$ can be obtained by sampling from the output of qnet1, and $Y_L$ is directly taken as one of the inputs to qnet2. Inspired by variational auto-encoders [7], we let the variational posterior be a multivariate Gaussian with a diagonal covariance structure, which is flexible and allows the second term of the ELBO objective in Eq. (6) to be computed analytically. Accordingly, qnet2 is designed as follows:
$$r_v = \mathrm{MLP}\big([h_v^{K-1} \,\|\, y_v]\big), \quad v \in V$$
$$r = \mathrm{Readout}\big(\{r_v\}_{v \in V}\big)$$
$$q_\varphi(z \mid A,X,Y) = \mathcal{N}\big(z;\, \mu(r),\, \sigma^2(r) I\big) \quad (8)$$
where $\cdot\|\cdot$ is the concatenation operation, MLP represents a multi-layer perceptron, the $\mathrm{Readout}(\cdot)$ function summarizes all input vectors into a global vector, and the MLP functions $\mu(\cdot)$ and $\sigma^2(\cdot)$ convert $r$ into the mean and standard deviation that parameterise the distribution $q_\varphi(z \mid A,X,Y)$.

Instantiating $p_\theta(Y \mid A,X,z)$ with pnet: Given $z$ sampled from $q_\varphi(z \mid A,X,Y)$, pnet specifies an instance of the stochastic function $\mathcal{F}$ (i.e., the function $g$ defined in Eq. (2)). The network architecture of pnet is similar to that of qnet1; it takes the sampled global latent variable $z$ as well as $A$ and $X$ as input, and outputs the probability distributions over $C$ for all nodes. Let the hidden representation of node $v$ in the $K$th layer be denoted as $e_v^K$, with the initial latent representation of node $v$ defined as the concatenation of $x_v$ and $z$, i.e., $e_v^0 = x_v \| z$. The predicted categorical distribution can be expressed as follows:
$$p_\theta(y_v \mid A,X,z) = \mathrm{Cat}(y_v \mid e_v^K), \quad v \in V \quad (9)$$
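The following sketch illustrates qnet2 with a mean readout, together with the Gaussian reparameterization used for gradient flow; dimensions follow Section 4.1 (16-dimensional node representations and latent variable), and the mean readout is one possible choice of Readout:

```python
import torch
import torch.nn as nn

class QNet2(nn.Module):
    def __init__(self, hid_dim, num_classes, z_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hid_dim + num_classes, 16),
                                 nn.ReLU(), nn.Linear(16, 16))
        self.mu = nn.Linear(16, z_dim)       # mean head mu(r)
        self.log_var = nn.Linear(16, z_dim)  # log-variance head for sigma^2(r)

    def forward(self, h, Y):
        r = self.mlp(torch.cat([h, Y], dim=-1)).mean(dim=0)  # Readout over nodes
        return self.mu(r), self.log_var(r)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), differentiable w.r.t. (mu, sigma)
    return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
```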
Model Training: To optimize the objective function in Eq. (6), we adopt Monte Carlo estimation to approximate the expectations w.r.t. $q_\varphi(Y_U \mid A,X,Y_L)$ and $q_\varphi(z \mid A,X,Y)$. Specifically, we first sample $m$ instances of $Y_U$ from $q_\varphi(Y_U \mid A,X,Y_L)$. After that, for each instance of $Y_U$, we further sample $n$ instances of $z$ from $q_\varphi(z \mid A,X,Y)$. With these sampled instances, we can approximately estimate the objective function $\mathcal{L}_{ELBO}(\theta,\varphi)$. We leverage reparameterization to calculate the derivatives w.r.t. the parameters in qnet1, qnet2 and pnet. Since $z$ is continuous and $q_\varphi(z \mid A,X,Y)$ takes a Gaussian form, the reparameterization trick of variational auto-encoders [7] can be directly used here. Since $Y_U$ is discrete, we adopt the Gumbel-Softmax reparameterization [23] for gradient backpropagation. As mentioned above, $Y_L$ can be used as supervised information to guide the parameter updates of qnet1. Therefore, we additionally introduce a supervised objective function $\mathcal{L}_s(\varphi) = \log q_\varphi(Y_L \mid A,X)$. The overall objective function is given as follows:
$$\mathcal{L}(\theta,\varphi) = \mathcal{L}_{ELBO}(\theta,\varphi) + \mathcal{L}_s(\varphi) \quad (10)$$
The model can be optimized effectively in an end-to-end manner, and the optimal parameters are denoted by $\theta^*$ and $\varphi^*$, i.e., $\theta^*, \varphi^* = \arg\max_{\theta,\varphi} \mathcal{L}(\theta,\varphi)$.
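The sketch below shows one possible training step with a single Monte Carlo sample (m = n = 1, the setting used in the experiments). compose_labels is a hypothetical helper that scatters the ground-truth Y_L and the sampled Y_U into a full label matrix, reparameterize is from the earlier sketch, and the exact weighting of the terms is only indicative:

```python
import torch
import torch.nn.functional as F

def training_step(A_hat, X, Y_L, labeled_idx, unlabeled_idx, qnet1, qnet2, pnet):
    h, q_probs = qnet1(A_hat, X)
    # Gumbel-Softmax sample of Y_U so gradients flow through the discrete labels.
    Y_U = F.gumbel_softmax(torch.log(q_probs[unlabeled_idx] + 1e-10), tau=1.0)
    Y = compose_labels(Y_L, Y_U, labeled_idx, unlabeled_idx)  # hypothetical helper
    mu, log_var = qnet2(h, Y)
    z = reparameterize(mu, log_var)
    p_probs = pnet(A_hat, X, z)

    # The three terms of Eq. (6), negated so that the returned loss is minimized.
    nll_L = -(Y_L * torch.log(p_probs[labeled_idx] + 1e-10)).sum(-1).mean()
    kl_z = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum()
    kl_y = (Y_U * (torch.log(q_probs[unlabeled_idx] + 1e-10)
                   - torch.log(p_probs[unlabeled_idx] + 1e-10))).sum(-1).mean()
    # Supervised term L_s on qnet1, Eq. (10).
    nll_s = -(Y_L * torch.log(q_probs[labeled_idx] + 1e-10)).sum(-1).mean()
    return nll_L + kl_z + kl_y + nll_s  # single-sample estimate of -L(theta, phi)
```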
Model Inference: After training, $p(Y_U \mid A,X,Y_L)$ can be seen as the expectation of $p_{\theta^*}(Y_U \mid A,X,z)$ w.r.t. $q_{\varphi^*}(z \mid A,X,Y)$. We first sample $L$ instances of $Y_U$ from $q_{\varphi^*}(Y_U \mid A,X,Y_L)$, and then, for each sampled instance of $Y_U$, we sample an instance of $z$ from $q_{\varphi^*}(z \mid A,X,Y)$. We use Monte Carlo estimation for approximate inference, formulated as follows:
$$p(Y_U \mid A,X,Y_L) \approx \frac{1}{L} \sum_{i=1}^{L} p_{\theta^*}(Y_U \mid A,X,z_i) \quad (11)$$
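A corresponding inference sketch (again using the hypothetical compose_labels helper) averages the pnet predictions over L samples, with L = 40 in the experiments:

```python
import torch

@torch.no_grad()
def predict(A_hat, X, Y_L, labeled_idx, unlabeled_idx, qnet1, qnet2, pnet, L=40):
    h, q_probs = qnet1(A_hat, X)
    avg = 0.0
    for _ in range(L):
        # Sample hard one-hot labels for the unlabeled nodes.
        Y_U = torch.distributions.OneHotCategorical(q_probs[unlabeled_idx]).sample()
        Y = compose_labels(Y_L, Y_U, labeled_idx, unlabeled_idx)
        mu, log_var = qnet2(h, Y)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        avg = avg + pnet(A_hat, X, z)[unlabeled_idx]
    return avg / L  # Monte Carlo estimate of Eq. (11)
```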
This approximation can also be derived from Eq. (4) with the proof in the supplemental material.
3.3 Algorithm Complexity Analysis
Because qnet1 and pnet share a similar message-passing model structure, the computational complexity of each is $O(|E|)$, where $|E|$ represents the number of edges in the graph. The computational complexity of qnet2 is $O(N)$, where $N$ is the number of nodes. On this basis, during the training phase, the overall computational complexity is $O(|E| + mN + mn|E|)$, where $m$ and $n$ are respectively the numbers of sampled instances of $Y_U$ and $z$. In our experiments, we find that one sample (i.e., $m = n = 1$) achieves comparable results with multiple samples. For efficiency, we only sample once for both $Y_U$ and $z$. During the inference phase, the calculation only involves pnet. Therefore, the overall computational complexity is $O(L|E|)$, where $L$ is the number of sampled instances of $z$. We can see that the complexity is linear in the scale of the graph. The pseudo-code of the algorithm is provided in the supplemental material.
4 Experiments
In this section, we empirically evaluate the performance of GSNN on the task of semi-supervised node classification in different scenarios: (1) the standard experimental scenario with a validation set for early-stopping, (2) the scarce labeled nodes scenario (no validation set for early-stopping), and (3) the adversarial attack scenario. Note that we mainly consider noise injected by adversarial attack methods, since such noise can incur an obvious impact on the performance of many existing GNNs [24, 25, 26, 27]. Our reproducible code is available at https://github.com/GSNN/GSNN.
4.1 Experimental Settings
Datasets. We conduct experiments on three commonly used benchmark datasets: Cora, Citeseer and Pubmed [25, 28], where nodes represent documents and edges represent citation relationships. Each node is associated with a bag-of-words feature vector and a ground-truth label. Detailed statistics for the three datasets are provided in the supplemental material. In different experiment scenarios, we adopt different dataset setups (e.g., dataset partition methods) following standard practice, which are described when presenting the experimental results in the corresponding sections.
Baselines. When we evaluate the performance in the standard experimental setting and the scarce labeled nodes setting, we compare with six state-of-the-art models, three of which are GCN [3], GraphSAGE [4] and Graph Attention Networks (GAT) [5]. The other three adopt uncertainty modeling for graph-based semi-supervised learning: Bayesian Graph Convolutional Neural Networks (BGCN) [21], G3NN [20] and Graph Gaussian Processes (GGP) [22]. BGCN and G3NN model the uncertainty of the graph structure, and GGP introduces Gaussian processes to prevent over-fitting. When we evaluate the performance in the adversarial attack setting, in addition to the above six baselines, we also compare with Robust Graph Convolutional Networks (RGCN) [29], a state-of-the-art method against adversarial attacks. A more detailed description of the baselines is provided in the supplemental material.
Our Model. For the proposed GSNN framework, we could adopt different information aggregation mechanisms for qnet1 and pnet to instantiate the models. In this paper, we implement two variants, whose aggregation mechanisms are consistent with GCN (i.e., mean aggregation) [3] and GAT (i.e., attention-based aggregation) [5] respectively. Note that other advanced information aggregation mechanisms can also be involved here to improve the performance. The two variants are termed as GSNN-M and GSNN-A.
Parameter Settings. For all baselines, we adopt the default parameter settings reported in the corresponding papers. For our proposed two models (i.e., GSNN-M and GSNN-A), in qnet1 and pnet, we employ two information aggregation layers, and the other settings related to hidden layers are consistent with GCN [3] and GAT [5] respectively. For example, the number of hidden units for GSNN-M is set to 16 and that for GSNN-A is set to 64. Besides, GSNN-A also employs the multi-head attention mechanism in the first hidden layer with 8 attention heads. For both GSNN-M and GSNN-A, the dimension of the hidden variable $z$ is set to 16. In qnet2, we first employ a two-layer MLP to generate the representation $r_v$ for each node $v$, whose dimension is 16. After that, we summarize all representations into a vector and use two fully-connected networks to convert it into the mean and covariance matrix for the multivariate Gaussian distribution. As mentioned in Section 3.3, the numbers of sampled instances of both $Y_U$ and $z$ are set to 1 for efficiency. We use the Adam optimizer [30] during training, with the learning rate set to 0.01 and weight decay to $5 \times 10^{-4}$, and set the number of epochs to 200. During the inference phase, the sampling number $L$ in Eq. (11) is set to 40.
In the experiments, we train our models and the baselines 50 times and record the mean classification accuracy and standard deviation.
4.2 Standard Experimental Scenario
In this section, we evaluate the performance of GSNN and baselines under the standard experimental scenario used in the work [3]. Specifically, in each dataset, 20 nodes per class are used for training, 1000 nodes are used for evaluation and another 500 nodes are used for validation and early-stopping.
The experimental results (mean and standard deviation) are summarized in Table 1. We can see that under the standard experimental scenario, BGCN, G3NN and GGP do not show obvious advantages
and perform even worse than the deterministic GNN-based models (i.e., GCN, GAT and GraphSAGE) in many cases. The reason is that the validation set helps these GNN-based models find relatively good classification functions, which prevents the models from overfitting to a large extent. Both BGCN and G3NN attempt to model the uncertainty of the graph structure. However, the potential distributions of different graph data may vary greatly, which limits the performance of these two methods on some datasets (e.g., Pubmed). GGP adopts Gaussian processes to model the node classification task, whose fitting capacity is not as good as that of neural networks, which can effectively learn node representations. Therefore, its performance is not ideal.
Compared with baselines, our models achieve comparable or better performance in standard experimental scenario. Note that GSNN-M and GSNN-A adopt the consistent aggregation mechanism with GCN and GAT respectively, while the results show that the two proposed models outperform GCN and GAT on all datasets, which demonstrates the effectiveness of modeling the uncertainty of the classification function.
4.3 Label-Scarce Scenario
In general, the labeled nodes are difficult or expensive to obtain. A more practical scenario is that we only have a very small proportion of labeled nodes for training and no additional labeled nodes for early-stopping. In this section, we evaluate the performance of GSNN and baselines when labeled nodes are scarce. Specifically, in each dataset, we randomly select a certain percentage of labeled nodes for training, and the rest of nodes are used for evaluation. Note that the number of labeled nodes in each class could be different under this dataset partition setting.
For Cora and Citeseer, we set the percentage of labeled nodes for training from 1% to 5%, while for Pubmed, we set the percentage from 0.1% to 0.5% because the total number of nodes in Pubmed is about an order of magnitude higher than in the other two datasets. The experimental results are shown in Table 2. We observe that, compared with the baselines, GSNN-M and GSNN-A achieve substantial performance gains, which demonstrates that modeling the uncertainty of the classification function can effectively alleviate the overfitting problem on complex graph data. BGCN models the uncertainty of the graph structure, which improves the performance over the deterministic GNN-based models on Cora and Citeseer to some extent. However, its performance does not generalize to Pubmed because the potential graph structure differs across datasets. Although G3NN also models the distribution of the graph structure, its complex model structure makes it easy to overfit without early-stopping. Therefore, modeling the distribution of the classification function provides more flexibility and better copes with the label-scarce scenario.
4.4 Adversarial Attack Scenario
In this section, we employ three state-of-the-art global adversarial attack methods (i.e., Meta-Train [25], Meta-Self [25] and min-max attack [26]), which aim at reducing the overall classification accuracy, to inject noisy edges into the graph structure, and further evaluate the performance of GSNN and the baselines under these attacks. A detailed description of the three attack methods is provided in the supplemental material. The experimental settings for the adversarial attacks and dataset partition follow the work [25]. The attack budget, i.e., the ratio of perturbed edges to all clean edges, is set to 0.05. Without loss of generality, all three attack methods are performed based on the vanilla GCN [3], which means they mainly affect the mean aggregation mechanism. For each poisoned graph, 10% of nodes are used for training and the rest are used for evaluation.
We conduct experiments on Cora. The experimental results are shown in Table 3. Here we add a robust GNN model (i.e., RGCN [29]) as a baseline. We have the following observations: (1) Under the three attack methods, the performance of GCN degrades drastically because it serves as the surrogate model of the attacks. Meanwhile, the attacks also transfer to other deterministic GNN-based models (i.e., GraphSAGE and GAT). However, GSNN can effectively alleviate the impact of the attacks by modeling the uncertainty of the classification function. We can see that GSNN-M and GSNN-A significantly improve upon
the performance of GCN and GAT, and also outperform RGCN, which is a state-of-the-art method against adversarial attacks. Note that although the attack methods mainly affect the mean aggregation mechanism, GSNN-M still maintains good performance. (2) BGCN and G3NN can capture the underlying structure that exists in graph data; therefore, they have the capacity to improve robustness against adversarial attacks. Compared with them, GSNN does not need to modify the graph structure; it has more flexibility and achieves better or comparable performance.
5 Conclusion
In this paper, we propose a novel GSNN for semi-supervised learning on graph data, which aims to model the uncertainty of the classification function by simultaneously learning a family of functions. To model the distribution of the classification function, we introduce a learnable graph neural network coupled with a high-dimensional random latent vector, and further adopt the amortised variational inference to approximate the intractable joint posterior of the missing labels and the latent variable. Extensive experimental results show that GSNN outperforms the state-of-the-art baselines on different datasets. It shows great potential in label-scarce and adversarial attack scenarios. This paper focuses on the uncertainty of the GNN classification function. How to integrate more information, such as the label dependency and structure uncertainty, into the framework for inference is an interesting problem in the future.
Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the NSFC under Grant No. 11688101 and No. 61872360, the National Key Research and Development Program of China under Grant No. 2020YFE0200500, the ARC DECRA under Grant No. DE200100964, and the Youth Innovation Promotion Association CAS under Grant No. 2017210. Chuan Zhou, Jia Wu, Shirui Pan and Jilong Wang are corresponding authors.
Broader Impact
Our work could bring the following positive impacts. (1) The proposed framework, which models the uncertainty of the classification function, provides a new idea for semi-supervised learning on graph data. (2) In practice, labeled nodes are generally scarce and expensive to obtain. GSNN could effectively alleviate the overfitting problem and improve the performance. (3) Noise could render deterministic GNN-based models vulnerable, while GSNN could alleviate the negative impacts of noise to a large extent. Many real-world applications, especially the risk-sensitive applications (e.g., financial transaction), would benefit from it.
Similar to many other GNNs, one potential issue of our model is that it provides limited interpretation of its predictions. We advocate that peer researchers make a profound study of this topic to improve the interpretability of modern GNN architectures and make GNNs applicable in more risk-sensitive applications. | 1. What are the main contributions and strengths of the paper regarding its formulation and experimental performance?
2. What are the weaknesses and concerns regarding the inference procedure and comparison with baselines?
3. How does the reviewer assess the novelty and similarity of the proposed approach compared to prior works?
4. What are the specific questions and suggestions provided by the reviewer for improving the paper's content and comparisons? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
I do not think the rebuttal addresses my concerns on any of the 3 issues: (1) mathematical mistakes in inference, (2) extremely high cost, (3) wrong experiment setting. For (1), the correct formulation is p(Y_U|A,X,Y_L) = \mathbb{E}_{q(z|A,X,Y)} p(Y_U|A,X,z); you then need to sample from q(z|A,X,Y) and do Monte Carlo instead of sampling from the prior; this is a serious mistake. For (2), I keep my opinion that a 4% relative improvement does not compensate for a 4000% increase in computation time. For (3), the authors claim "the total number of labeled nodes in the case of 0.5% under the label-scarce scenario is still more than that under the standard experimental scenario"; then why do the authors claim that this is a low-label setting?? This setting has more labels than the standard scenario... ------------------------------------------------------ The work targets semi-supervised node classification tasks and aims to design a robust graph neural network using a latent variable model. The idea is to model the joint distribution with latent variables and use mean-field variational inference for approximations.
Strengths
The general formulation is concrete. The authors nicely organize the paper, first formulating the latent variable model and then describing in detail how they instantiate each module. There is also extensive analysis of the complexity of the algorithm. I find the figure helpful; it makes the paper clearer. The mathematical notation is consistent throughout the paper. The authors conduct extensive experiments on multiple setups, including normal node classification, the low-label regime, and adversarial defence against malicious attacks. The proposed model also shows strong empirical performance over previous baselines in all three setups, with comparable standard deviation. This is pretty impressive considering GSNN contains more stochasticity than the baselines.
Weaknesses
This paper combines latent variable models with GNNs; it is not novel enough, and there are many previous works with similar ideas in graph generation. The difference is that the formulation of this paper is more like a conditional generative model and targets node classification tasks. Based on the implementation of the method, I think the model is similar to RGCN in some aspects. Undoubtedly, there are differences: the model does not directly learn a Gaussian representation but instead samples from a Gaussian latent variable and concatenates it with the features of the node. However, both aim to inject some noise and, in essence, decrease the information between the representation and the original node feature, so that the model only captures the key attributes, thus making the model more robust than vanilla GNNs. One concern is about the inference procedure. Why does the model directly sample from the prior instead of the posterior? I find the statement "Since the posterior would be close to the prior after model training, so we directly sample from prior" (line 181-184) very sketchy. I believe the learned latent space (if trained well) should be structured, in the sense that if we train a VAE on MNIST, the posteriors of the same digits will be close while the posteriors of different digits will be scattered across the latent space. It is not the case that the posteriors will look exactly the same as the prior after training. If that were the case, it would unfortunately mean that the mutual information between z and the node is close to 0, and z should be useless. Another issue is that I feel the comparison with baselines in the experiments is not fair, since the proposed model needs to sample $z$ $L$ times, with $L=40$ in the experiments. This means the model is at least **40** times slower than vanilla GNNs, and can be worse with the additional overhead. It would be super helpful if the authors could show the results with different $L$, e.g. 1, 5, 20, 40. Another minor point is that GSNN also has more parameters, but I think it should be fine if the authors can show GSNN has a comparable number of parameters to GAT. Also, in the adversarial attack setting, where the graph structure is attacked by adding/deleting edges, I am not fully convinced that the model can alleviate this issue simply by noise injection in the features. I feel a nicer way that could possibly achieve a larger performance boost is to also model the structure $A$ using latent variable models. I list some additional points below.
- There is no explanation of why the authors did not use $Y_L$ as input for qnet1.
- The paper does not have a clear description of how they (randomly) selected nodes for training in the label-scarce scenario. For example, is it completely random, or did you select a fixed number of nodes for each class? I also did not find the details in the appendix.
- In the low-data regime, another important previous work is PPNP [1], which also shows strong performance over standard GNNs. I suggest that the authors compare their method with PPNP.
- Why does the model have higher performance in the scarce-label setting than in the clean setting? As listed in Tables 1 and 2, GSNN-M/A has higher accuracy on 0.5% Pubmed than on the original Pubmed. The same holds for the adversarial attack: it seems that the model can achieve a higher score on the perturbed graph than on the pristine graph. This is rather counterintuitive.
[1] Predict then Propagate: Graph Neural Networks meet Personalized PageRank
NIPS | Title
Graph Stochastic Neural Networks for Semi-supervised Learning
Abstract
Graph Neural Networks (GNNs) have achieved remarkable performance in the task of semi-supervised node classification. However, most existing models learn a deterministic classification function, which lacks sufficient flexibility to explore better choices in the presence of various kinds of imperfect observations, such as scarce labeled nodes and a noisy graph structure. To overcome the rigidity of deterministic classification functions, this paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which models the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function. Specifically, we introduce a learnable graph neural network coupled with a high-dimensional latent variable to model the distribution of the classification function, and further adopt amortised variational inference to approximate the intractable joint posterior over the missing labels and the latent variable. By maximizing a lower bound of the likelihood of observed node labels, the instantiated models can be trained effectively in an end-to-end manner. Extensive experiments on three real-world datasets show that GSNN achieves substantial performance gains in different scenarios compared with state-of-the-art baselines.
1 Introduction
Graphs are essential tools to represent complex relationships among entities in various domains, such as social networks, citation networks, biological networks and physical networks. Analyzing graph data has become one of the most important topics in the machine learning community. As an abstraction of many graph mining tasks, semi-supervised node classification, which aims to predict the labels of unlabeled nodes given the graph structure, node features and the labels of a subset of nodes, has received significant attention in recent years. Graph Neural Networks (GNNs), in particular, have achieved impressive performance in the graph-based semi-supervised learning task [1, 2, 3, 4, 5].
Most existing GNN models are designed to learn a deterministic classification function. This kind of design makes them look simple and elegant, but the other side of the coin is that the deterministic classification function leaves these GNN models without sufficient flexibility to cater to various kinds of imperfect observed data. For example, in many real situations, the ground-truth labels of nodes
are often expensive or difficult to obtain, which leads to sparseness of the labeled nodes. Insufficient supervision can easily lead to overfitting of deterministic classification functions, especially when there are no additional labeled nodes to serve as a validation set for early-stopping. Another example is noise in the graph structure. Whether arising naturally or injected deliberately by attackers, noise tends to corrupt neighbor information aggregation and mislead the learning of deterministic classification functions. The rigidity and inflexibility of deterministic classification functions make it difficult for them to sidestep such issues and explore better choices.
In line with the aforementioned observations, this paper proposes a novel Graph Stochastic Neural Network (GSNN for short) to model the uncertainty of GNN classification functions. GSNN aims to simultaneously learn a family of classification functions rather than fitting a deterministic function. This gives GNNs the flexibility to handle imperfection or noise in graph data, and to bypass the traps caused by sparse labeled data and unreliable graph structure in real applications.
Specifically, we treat the classification function to be learned as a stochastic function and integrate it into the process of label inference. To model the distribution of the stochastic function, we introduce a learnable neural network, which is coupled with a high-dimensional latent variable and takes the message-passing form. To infer the missing labels, we need the joint posterior distribution of the labels of unlabeled nodes and the classification function, whose exact form is intractable in general. To solve this problem, we adopt amortised variational inference [6, 7] to approximate the intractable posterior distribution with two other types of neural networks. By maximizing a lower bound of the likelihood of observed node labels, we can optimize all parameters effectively in an end-to-end manner. We conduct extensive experiments on three real-world datasets. The results show that, compared with state-of-the-art baselines, GSNN not only achieves comparable or better performance in the standard experimental scenario with early-stopping, but also shows substantial performance gains when labeled nodes are scarce (no early-stopping) and when there are deliberate edge perturbations in the graph structure.
2 Related Work
Graph Neural Networks for Graph-based Semi-supervised Learning: Recently, GNNs have been attracting considerable attention [8, 9, 10]. The early ideas derive different forms of the graph convolution in the spectral domain based on graph spectral theory [1, 11, 2, 3, 12]. Bruna et al. [1] propose the first-generation spectral-based GNN. To reduce the computational complexity, Defferrard et al. [2] propose to use a K-order Chebyshev polynomial to approximate the convolutional filter, which avoids intensive eigendecomposition of the normalized graph Laplacian. Kipf and Welling [3] further simplify the graph convolution with a first-order approximation, which reduces the number of parameters and improves performance. Another line of research directly performs graph convolution in the spatial domain [13, 4, 14, 15, 16]. Gilmer et al. [13] generalize spatial-based methods as a message-passing mechanism. Hamilton et al. [4] propose a general inductive framework, which learns an embedding function that generalizes to unseen nodes. Veličković et al. [5] further introduce the attention mechanism, which assigns different weights to neighbor nodes and aggregates features discriminatively. Besides, other works demonstrate that considering edge attributes [17], adding jumping connections [18] and modeling the outcome dependency [19] are beneficial. However, these models generally learn a deterministic classification function, which lacks sufficient flexibility to handle imperfect observed data such as scarce labeled nodes and a noisy graph structure.
Uncertainty Modeling for Graph-based Semi-supervised Learning: There are also some works using uncertainty modeling for graph-based semi-supervised learning, which are related to this paper [20, 21, 22]. Ng et al. [22] introduce Gaussian processes to model the semi-supervised learning problem on graphs, which mitigates over-fitting to some extent. Zhang et al. [21] treat the observed graph as a realization from a parametric family of random graphs and propose Bayesian graph convolutional neural networks to incorporate the uncertain graph information. Ma et al. [20] further propose a flexible generative framework to model the joint distribution of the graph structure and the node labels. Most of these works model the uncertainty of the observed data (e.g., the graph structure). Different from them, in this paper we view the classification function as a stochastic function and directly model its distribution, which brings better performance in many scenarios.
3 Our Solution
In this paper, we define an undirected graph as $G = (V, E)$, where $V = \{v_1, \dots, v_N\}$ represents a set of $N$ nodes and $E \subseteq V \times V$ is the set of edges. Let $A \in \{0,1\}^{N \times N}$ denote the binary adjacency matrix, i.e., $A_{u,v} = 1$ if and only if $(u,v) \in E$. Let $X \in \mathbb{R}^{N \times F}$ be the node attribute matrix, where $F$ is the feature dimension and the feature vector of node $v$ is denoted $x_v$. Each node is labeled with one class in $C = \{c_1, \dots, c_{|C|}\}$. In practice, only some nodes come with labels. The set of these labeled nodes is denoted $V_L$ and the set of unlabeled nodes is denoted $V_U := V \setminus V_L$. For the task of semi-supervised node classification, given $A$, $X$ and the label information of $V_L$, the goal is to infer the labels of nodes in $V_U$ by learning a classification function $f$. The classification results can be denoted $Y := \{y_{v_1}, \dots, y_{v_N}\}$, where each $y_\cdot$ is a $|C|$-dimensional probability distribution on $C$.

Most existing GNN models aim to learn a deterministic classification function, which lacks sufficient flexibility to cater to imperfect observed data. For example, such models easily overfit or are misled when labeled nodes are scarce or when there is noise in the graph structure. Therefore, instead of fitting a deterministic function, we aim to learn a family of classification functions, which can be organized as a stochastic function $\mathcal{F}$ with distribution denoted $p(f)$. Under this setting, the distribution of $Y$ can be formalized as follows:
$$p(Y \mid A,X) \,\triangleq\, \int p(f)\, p\big(Y \mid f(A,X)\big)\, df \;=\; \int p(f) \prod_{v \in V} p\big(y_v \mid f(A,X)\big)\, df \qquad (1)$$
where we use $p(Y \mid f(A,X))$ to denote the distribution of $Y$ corresponding to the classification function $f$. Eq. (1) assumes that the label inference for each node is conditionally independent, given a selected classification function $f$, the adjacency matrix $A$ and the attribute matrix $X$.
3.1 Framework for GSNN
In order to model the uncertainty of the classification function in Eq. (1), we approximate the stochastic function $\mathcal{F}$ using a learnable function $g_\phi$ (e.g., a neural network with parameters $\phi$) with a random latent vector $Z$ involved, as below:
$$\mathcal{F}(A,X) \,\triangleq\, g_\phi(A,X;Z) \qquad (2)$$
where the prior distribution of $Z$ is $p(z)$, defined as a multivariate standard normal, i.e., $p(z) = \mathcal{N}(z; \mathbf{0}, I)$. Note that the randomness of $\mathcal{F}$ is induced by $Z$ and the expressive capacity of $\mathcal{F}$ is captured by the structure of $g_\phi(\cdot\,;\cdot)$. Combining Eq. (2) with Eq. (1), the distribution $p(Y \mid A,X)$ can be rewritten as follows:
$$p(Y \mid A,X) = \int p(z) \prod_{v \in V} p\big(y_v \mid g_\phi(A,X;z)\big)\, dz \qquad (3)$$
where $p(y_v \mid g_\phi(A,X;z))$ is a distribution on $C$ for node $v$. In the semi-supervised transductive setting, the label information of the labeled nodes in $V_L$ is also known. Denote $Y_L := \{y_v\}_{v \in V_L}$ and $Y_U := Y \setminus Y_L$. Under this setting, the conditional distribution of $Y_U$ given $A$, $X$ and $Y_L$ can be formalized as follows:
p(YU |A,X, YL) , ∫ p(z|A,X, YL) ∏ v∈VU p ( yv|gϕYL (A,X; z) ) dz (4)
where $p(z \mid A,X,Y_L)$ is the posterior distribution of the latent vector $Z$ and $\phi_{Y_L}$ denotes the parameters to be learned when $Y_L$ is taken into consideration.
We assume that the distributions $p(z \mid A,X,Y_L)$ and $p(y_v \mid g_{\phi_{Y_L}}(A,X;z))$ in Eq. (4) can be modeled by parametric families of distributions $p_\theta(z \mid A,X,Y_L)$ and $p_\theta(y_v \mid A,X,z)$ respectively, whose probability density functions are differentiable almost everywhere w.r.t. $\theta$. To predict $Y_U$ by modeling the distribution of the classification function, we need the joint posterior $p_\theta(Y_U, z \mid A,X,Y_L)$, which is intractable. To solve this problem, we adopt variational inference: we introduce a variational distribution $q_\varphi(Y_U, z \mid A,X,Y_L)$, parameterized by $\varphi$, to approximate the true posterior $p_\theta(Y_U, z \mid A,X,Y_L)$. To learn the model parameters $\varphi$ and $\theta$, we optimize the evidence lower bound (ELBO) of the log-likelihood of the observed node labels, i.e., $\log p_\theta(Y_L \mid A,X)$.
Following the standard derivation of variational inference, the ELBO objective function can be obtained as follows:
$$\log p_\theta(Y_L \mid A,X) \;\ge\; \mathbb{E}_{q_\varphi(Y_U, z \mid A,X,Y_L)}\left[\log p_\theta(Y \mid A,X,z) + \log \frac{p(z)}{q_\varphi(Y_U, z \mid A,X,Y_L)}\right] \;\triangleq\; \mathcal{L}_{\mathrm{ELBO}}(\theta,\varphi) \qquad (5)$$
The variational joint posterior can be further factorized as $q_\varphi(Y_U, z \mid A,X,Y_L) = q_\varphi(Y_U \mid A,X,Y_L)\, q_\varphi(z \mid A,X,Y)$, noting that $Y = Y_L \cup Y_U$. From a sampling perspective, the distribution of the random latent vector $Z$ depends on the observed data and on $Y_U$ sampled from the approximate posterior $q_\varphi(Y_U \mid A,X,Y_L)$. On this basis, the ELBO objective function can be rewritten as follows:
$$\begin{aligned} \mathcal{L}_{\mathrm{ELBO}}(\theta,\varphi) ={}& \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)}\,\mathbb{E}_{q_\varphi(z \mid A,X,Y)} \log p_\theta(Y_L \mid A,X,z) \\ &- \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)}\,\mathrm{KL}\big(q_\varphi(z \mid A,X,Y) \,\|\, p(z)\big) \\ &- \mathbb{E}_{q_\varphi(Y_U \mid A,X,Y_L)}\Big(\log q_\varphi(Y_U \mid A,X,Y_L) - \mathbb{E}_{q_\varphi(z \mid A,X,Y)} \log p_\theta(Y_U \mid A,X,z)\Big) \end{aligned} \qquad (6)$$
where $\mathrm{KL}(\cdot\|\cdot)$ denotes the Kullback-Leibler divergence between two distributions. The first term of Eq. (6) is the negative cross-entropy between the ground-truth label vectors and the predicted class distributions of the labeled nodes. The third term takes a form similar to a KL divergence; it characterizes the difference between $q_\varphi(Y_U \mid A,X,Y_L)$ and $p_\theta(Y_U \mid A,X,z)$. Based on amortised variational inference [6, 7], $q_\varphi(Y_U \mid A,X,Y_L)$, $q_\varphi(z \mid A,X,Y)$ and $p_\theta(Y \mid A,X,z)$ can each be fitted by a different type of neural network, as described in detail in Section 3.2. The overall framework is referred to as Graph Stochastic Neural Networks (GSNN for short); an overview is shown in Fig. 1.
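For completeness, a short derivation (our own, following standard VAE-style manipulations, not spelled out in the paper) shows how Eq. (6) follows from Eq. (5) once the factorization $q_\varphi(Y_U,z \mid A,X,Y_L) = q_\varphi(Y_U \mid A,X,Y_L)\, q_\varphi(z \mid A,X,Y)$ is substituted; here $q(Y_U)$ abbreviates $q_\varphi(Y_U \mid A,X,Y_L)$ and $q(z \mid Y)$ abbreviates $q_\varphi(z \mid A,X,Y)$:

```latex
% Sketch of the step from Eq. (5) to Eq. (6); our own derivation, using the
% conditional independence log p(Y|.) = log p(Y_L|.) + log p(Y_U|.).
\begin{align*}
\mathcal{L}_{\mathrm{ELBO}}
&= \mathbb{E}_{q(Y_U)}\,\mathbb{E}_{q(z \mid Y)}
   \big[\log p_\theta(Y_L \mid A,X,z) + \log p_\theta(Y_U \mid A,X,z)
        + \log p(z) - \log q(z \mid Y) - \log q(Y_U)\big] \\
&= \mathbb{E}_{q(Y_U)}\,\mathbb{E}_{q(z \mid Y)} \log p_\theta(Y_L \mid A,X,z)
 \;-\; \mathbb{E}_{q(Y_U)}\,\mathrm{KL}\big(q(z \mid Y)\,\|\,p(z)\big) \\
&\quad -\; \mathbb{E}_{q(Y_U)}\big(\log q(Y_U)
 \;-\; \mathbb{E}_{q(z \mid Y)} \log p_\theta(Y_U \mid A,X,z)\big).
\end{align*}
```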
3.2 Model Instantiation, Training and Inference
In this part, we instantiate $q_\varphi(Y_U \mid A,X,Y_L)$, $q_\varphi(z \mid A,X,Y)$ and $p_\theta(Y \mid A,X,z)$ with three neural networks (qnet1, qnet2 and pnet, respectively).
Instantiating $q_\varphi(Y_U \mid A,X,Y_L)$ with qnet1: The neural network qnet1 takes the form of message passing. It consists of $K$ layers that aggregate the features of neighbor nodes with the following layer-wise propagation rule:
$$h_v^k = \rho^{k-1}\Big(\sum_{u \in Ne\{v\} \cup \{v\}} a_{v,u}^{k-1}\, h_u^{k-1}\, W_{\mathrm{qnet1}}^{k-1}\Big), \quad k = 1, \dots, K$$
$$q_\varphi(y_v \mid A,X,Y_L) = \mathrm{Cat}(y_v \mid h_v^K), \quad v \in V_U \qquad (7)$$
where $Ne\{v\}$ is the set of neighbor nodes of node $v$, $h_v^k$ is the hidden representation of node $v$ in the $k$-th layer, and $h_v^0 = x_v$. The coefficient $a_{v,u}^{k-1}$ is the aggregation weight between node $v$ and node $u$, and the matrix $W_{\mathrm{qnet1}}^{k-1}$ holds the trainable parameters of the $k$-th layer. The activation functions of the first $K-1$ layers (i.e., $\rho^0, \dots, \rho^{K-2}$) are ReLU, and the activation function of the $K$-th layer is softmax, which constructs the categorical distribution $\mathrm{Cat}(\cdot)$, i.e., $q_\varphi(Y_U \mid A,X,Y_L)$. Note that $Y_L$ is not used as an input to qnet1 but as supervision for training qnet1 in Eq. (10) below. A minimal code sketch of such a network is given below.
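To make Eq. (7) concrete, here is a minimal PyTorch sketch of a qnet1-style network. The two-layer depth ($K=2$) and GCN-style aggregation coefficients taken from a symmetrically normalized adjacency matrix are our illustrative assumptions, not requirements of the framework:

```python
# Minimal sketch of a qnet1-style classifier (Eq. (7)), assuming K = 2
# layers and GCN-style coefficients a_{v,u} from a normalized adjacency.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(a: torch.Tensor) -> torch.Tensor:
    """D^{-1/2}(A + I)D^{-1/2}: one common (assumed) choice for a_{v,u}."""
    a_hat = a + torch.eye(a.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class QNet1(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hid_dim, bias=False)        # W^0_{qnet1}
        self.w1 = nn.Linear(hid_dim, num_classes, bias=False)   # W^1_{qnet1}

    def forward(self, a_norm: torch.Tensor, x: torch.Tensor):
        # a_norm: (N, N) normalized adjacency with self-loops; x: (N, F)
        h1 = F.relu(a_norm @ self.w0(x))               # rho^0 = ReLU
        q_y = F.softmax(a_norm @ self.w1(h1), dim=-1)  # Cat(y_v | h_v^K)
        return q_y, h1  # also return h^{K-1}, which qnet2 reuses (Section 3.2)
```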
Instantiating $q_\varphi(z \mid A,X,Y)$ with qnet2: The posterior distribution $q_\varphi(z \mid A,X,Y)$ depends on four pieces of information: $A$, $X$, $Y_U$ and $Y_L$. Since qnet1 already involves $A$ and $X$, we directly use the hidden representations of the $(K-1)$-th layer of qnet1 to represent $A$ and $X$. The unlabeled information $Y_U$ is obtained by sampling from the output of qnet1, and $Y_L$ is taken directly as one of the inputs to qnet2. Inspired by variational auto-encoders [7], we let the variational posterior be a multivariate Gaussian with a diagonal covariance structure, which is flexible and makes the second term of the ELBO objective in Eq. (6) computable analytically. Accordingly, qnet2 is designed as follows:
$$r_v = \mathrm{MLP}\big([\,h_v^{K-1} \,\|\, y_v\,]\big), \; v \in V; \qquad r = \mathrm{Readout}\big(\{r_v\}_{v \in V}\big); \qquad q_\varphi(z \mid A,X,Y) = \mathcal{N}\big(z;\, \mu(r),\, \sigma^2(r)\, I\big) \qquad (8)$$
where $\cdot\|\cdot$ is the concatenation operation, $\mathrm{MLP}$ denotes a multi-layer perceptron, the $\mathrm{Readout}(\cdot)$ function summarizes all input vectors into a global vector, and the MLP functions $\mu(\cdot)$ and $\sigma^2(\cdot)$ convert $r$ into the mean and standard deviation that parameterise the distribution $q_\varphi(z \mid A,X,Y)$.

Instantiating $p_\theta(Y \mid A,X,z)$ with pnet: Given $z$ sampled from $q_\varphi(z \mid A,X,Y)$, pnet specifies an instance of the stochastic function $\mathcal{F}$ (i.e., the function $g$ defined in Eq. (2)). The network architecture of pnet is similar to that of qnet1; it takes the sampled global latent variable $z$ as well as $A$ and $X$ as input, and outputs the probability distributions on $C$ for all nodes. Let the hidden representation of node $v$ in the $K$-th layer be denoted $e_v^K$, with the initial representation of node $v$ defined as the concatenation of $x_v$ and $z$, i.e., $e_v^0 = x_v \| z$. The predicted categorical distribution can then be expressed as follows:
$$p_\theta(y_v \mid A,X,z) = \mathrm{Cat}(y_v \mid e_v^K), \quad v \in V \qquad (9)$$
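The sketch below instantiates qnet2 (Eq. (8)) and pnet (Eq. (9)) in the same style as the qnet1 sketch above. Mean pooling as the Readout function, parameterizing $\sigma^2$ through its logarithm, and the specific hidden sizes are our assumptions for illustration:

```python
# Sketches of qnet2 (Eq. (8)) and pnet (Eq. (9)). Mean pooling as Readout,
# a log-variance head, and the hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet2(nn.Module):
    def __init__(self, rep_dim: int, num_classes: int, z_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(rep_dim + num_classes, 16),
                                 nn.ReLU(), nn.Linear(16, 16))
        self.mu = nn.Linear(16, z_dim)        # mu(r)
        self.log_var = nn.Linear(16, z_dim)   # log sigma^2(r)

    def forward(self, h_prev: torch.Tensor, y: torch.Tensor):
        r_v = self.mlp(torch.cat([h_prev, y], dim=-1))  # r_v = MLP([h^{K-1}_v || y_v])
        r = r_v.mean(dim=0)                             # Readout: mean over all nodes
        return self.mu(r), self.log_var(r)              # N(z; mu(r), sigma^2(r) I)

class PNet(nn.Module):
    def __init__(self, in_dim: int, z_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.w0 = nn.Linear(in_dim + z_dim, hid_dim, bias=False)
        self.w1 = nn.Linear(hid_dim, num_classes, bias=False)

    def forward(self, a_norm: torch.Tensor, x: torch.Tensor, z: torch.Tensor):
        e0 = torch.cat([x, z.expand(x.size(0), -1)], dim=-1)  # e^0_v = x_v || z
        e1 = F.relu(a_norm @ self.w0(e0))
        return F.softmax(a_norm @ self.w1(e1), dim=-1)        # Cat(y_v | e^K_v)
```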
Model Training: To optimize the objective function in Eq. (6), we adopt Monte Carlo estimation to approximate the expectations w.r.t. $q_\varphi(Y_U \mid A,X,Y_L)$ and $q_\varphi(z \mid A,X,Y)$. Specifically, we first sample $m$ instances of $Y_U$ from $q_\varphi(Y_U \mid A,X,Y_L)$. After that, for each instance of $Y_U$, we further sample $n$ instances of $z$ from $q_\varphi(z \mid A,X,Y)$. With these sampled instances, we can approximately estimate the objective function $\mathcal{L}_{\mathrm{ELBO}}(\theta,\varphi)$. We leverage reparameterization to calculate the derivatives w.r.t. the parameters of qnet1, qnet2 and pnet. Since $z$ is continuous and $q_\varphi(z \mid A,X,Y)$ takes a Gaussian form, the reparameterization trick of variational auto-encoders [7] can be used directly. Since $Y_U$ is discrete, we adopt the Gumbel-Softmax reparameterization [23] for gradient backpropagation. As mentioned above, $Y_L$ can also be used as supervision to guide the parameter updates of qnet1; we therefore additionally introduce a supervised objective $\mathcal{L}_s(\varphi) = \log q_\varphi(Y_L \mid A,X)$. The overall objective function is given as follows:
$$\mathcal{L}(\theta,\varphi) = \mathcal{L}_{\mathrm{ELBO}}(\theta,\varphi) + \mathcal{L}_s(\varphi) \qquad (10)$$
The model can be optimized effectively in an end-to-end manner, and the optimal parameters are denoted $\theta^*$ and $\varphi^*$, i.e., $\theta^*, \varphi^* = \arg\max_{\theta,\varphi} \mathcal{L}(\theta,\varphi)$. A sketch of a single training step appears below.
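Putting the pieces together, one training step with $m = n = 1$ (the setting used in Section 3.3) might look like the following sketch. The Gumbel-Softmax temperature, the label-masking convention, and the single-sample relaxed estimate of the third ELBO term are our assumptions; the loss terms mirror Eqs. (6) and (10):

```python
# One training step with m = n = 1 Monte Carlo samples. The temperature
# tau, the masking convention, and the relaxed estimate of the third ELBO
# term are illustrative assumptions; the terms follow Eqs. (6) and (10).
import torch
import torch.nn.functional as F

def training_step(qnet1, qnet2, pnet, a_norm, x, y_true, labeled, tau=0.5):
    # labeled: (N,) boolean mask of V_L; y_true: (N, |C|) one-hot labels.
    q_y, h_prev = qnet1(a_norm, x)                            # q(Y_U | A, X, Y_L)
    y_u = F.gumbel_softmax(torch.log(q_y + 1e-10), tau=tau)   # relaxed sample of Y_U
    y = torch.where(labeled.unsqueeze(-1), y_true, y_u)       # Y = Y_L union Y_U
    mu, log_var = qnet2(h_prev, y)                            # q(z | A, X, Y)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterized z
    log_p = torch.log(pnet(a_norm, x, z) + 1e-10)             # log p_theta(Y | A, X, z)

    rec = -(y_true[labeled] * log_p[labeled]).sum()                # -1st term of Eq. (6)
    kl_z = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum()  # 2nd term
    kl_y = (y_u[~labeled] * (torch.log(q_y[~labeled] + 1e-10)
                             - log_p[~labeled])).sum()             # 3rd term (relaxed)
    sup = -(y_true[labeled] * torch.log(q_y[labeled] + 1e-10)).sum()  # -L_s(phi)
    return rec + kl_z + kl_y + sup  # minimizing this maximizes L(theta, phi)
```

An optimizer step then follows the usual PyTorch pattern: compute the loss, call `loss.backward()`, and step the Adam optimizer over the parameters of all three networks.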
Model Inference: After the above training, $p(Y_U \mid A,X,Y_L)$ can be seen as the expectation of $p_{\theta^*}(Y_U \mid A,X,z)$ w.r.t. $q_{\varphi^*}(z \mid A,X,Y)$. We first sample $L$ instances of $Y_U$ from $q_{\varphi^*}(Y_U \mid A,X,Y_L)$, and then for each sampled instance of $Y_U$, we sample one instance of $z$ from $q_{\varphi^*}(z \mid A,X,Y)$. We use Monte Carlo estimation for approximate inference, formulated as follows:
$$p(Y_U \mid A,X,Y_L) \;\approx\; \frac{1}{L} \sum_{i=1}^{L} p_{\theta^*}(Y_U \mid A,X,z_i) \qquad (11)$$
This approximation can also be derived from Eq. (4) with the proof in the supplemental material.
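A corresponding inference sketch, averaging pnet predictions over $L$ latent samples as in Eq. (11), could look as follows; $L = 40$ matches Section 4.1, while the helper names and shapes follow the sketches above and are our assumptions:

```python
# Monte Carlo inference per Eq. (11): average pnet's class probabilities
# over L latent samples. Shapes and helpers follow the sketches above.
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict(qnet1, qnet2, pnet, a_norm, x, y_true, labeled, L=40, tau=0.5):
    q_y, h_prev = qnet1(a_norm, x)
    probs = torch.zeros_like(q_y)
    for _ in range(L):
        y_u = F.gumbel_softmax(torch.log(q_y + 1e-10), tau=tau)   # sample Y_U
        y = torch.where(labeled.unsqueeze(-1), y_true, y_u)
        mu, log_var = qnet2(h_prev, y)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # one z per draw
        probs += pnet(a_norm, x, z) / L                           # running average
    return probs.argmax(dim=-1)  # predicted class per node
```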
3.3 Algorithm Complexity Analysis
Because qnet1 and pnet share a similar message-passing structure, their computational complexity is $O(|E|)$, where $|E|$ is the number of edges in the graph. The computational complexity of qnet2 is $O(N)$, where $N$ is the number of nodes. On this basis, the overall computational complexity of the training phase is $O(|E| + mN + mn|E|)$, where $m$ and $n$ are the numbers of sampled instances of $Y_U$ and $z$, respectively. In our experiments, we find that a single sample (i.e., $m = n = 1$) achieves results comparable to multiple samples; for efficiency, we therefore sample only once for both $Y_U$ and $z$. During the inference phase, the calculation involves only pnet, so the overall computational complexity is $O(L|E|)$, where $L$ is the number of samples drawn from $p(z)$. The complexity is thus linear in the size of the graph. Pseudo-code for the algorithm is provided in the supplemental material.
4 Experiments
In this section, we empirically evaluate the performance of GSNN on the task of semi-supervised node classification in three scenarios: (1) the standard experimental scenario with a validation set for early-stopping, (2) the label-scarce scenario (no validation set for early-stopping), and (3) the adversarial attack scenario. Note that we mainly consider noise injected by adversarial attack methods, since they can severely degrade the performance of many existing GNNs [24, 25, 26, 27]. Our reproducible code is available at https://github.com/GSNN/GSNN.
4.1 Experimental Settings
Datasets. We conduct experiments on three commonly used benchmark datasets: Cora, Citeseer and Pubmed [25, 28], where nodes represent documents and edges represent citation relationships. Each node is associated with a bag-of-words feature vector and a ground-truth label. Detailed statistics for the three datasets are provided in the supplemental material. In the different experimental scenarios, we adopt different dataset setups (e.g., partition methods) following standard practice, described alongside the corresponding experimental results.
Baselines. In the standard experimental setting and the label-scarce setting, we compare with six state-of-the-art models, three of which are GCN [3], GraphSAGE [4] and Graph Attention Networks (GAT) [5]. The other three adopt uncertainty modeling for graph-based semi-supervised learning: Bayesian Graph Convolutional Neural Networks (BGCN) [21], G3NN [20] and Graph Gaussian Processes (GGP) [22]. BGCN and G3NN model the uncertainty of the graph structure, and GGP introduces Gaussian processes to prevent over-fitting. In the adversarial attack setting, in addition to the above six baselines, we also compare with Robust Graph Convolutional Networks (RGCN) [29], a state-of-the-art method against adversarial attacks. More detailed descriptions of the baselines are provided in the supplemental material.
Our Model. For the proposed GSNN framework, different information aggregation mechanisms can be adopted for qnet1 and pnet to instantiate the models. In this paper, we implement two variants whose aggregation mechanisms are consistent with GCN (mean aggregation) [3] and GAT (attention-based aggregation) [5], respectively. Note that other advanced information aggregation mechanisms could also be plugged in to improve performance. The two variants are termed GSNN-M and GSNN-A.
Parameter Settings. For all baselines, we adopt the default parameter settings reported in the corresponding papers. For our two models (GSNN-M and GSNN-A), qnet1 and pnet each employ two information aggregation layers, and other settings related to the hidden layers are consistent with GCN [3] and GAT [5], respectively. For example, the number of hidden units for GSNN-M is set to 16 and that for GSNN-A is set to 64. GSNN-A also employs multi-head attention in the first hidden layer with 8 attention heads. For both GSNN-M and GSNN-A, the dimension of the latent variable $z$ is set to 16. In qnet2, we first employ a two-layer MLP to generate a 16-dimensional representation $r_v$ for each node $v$; we then summarize all representations into a single vector and use two fully-connected networks to convert it into the mean and covariance matrix of the multivariate Gaussian distribution. As mentioned in Section 3.3, the numbers of sampled instances of $Y_U$ and $z$ are both set to 1 for efficiency. We use the Adam optimizer [30] during training, with a learning rate of 0.01 and weight decay of $5 \times 10^{-4}$, and train for 200 epochs. During the inference phase, the sampling number $L$ in Eq. (11) is set to 40.
In the experiments, we train our models and the baselines 50 times each and report the mean classification accuracy and standard deviation.
4.2 Standard Experimental Scenario
In this section, we evaluate the performance of GSNN and baselines under the standard experimental scenario used in the work [3]. Specifically, in each dataset, 20 nodes per class are used for training, 1000 nodes are used for evaluation and another 500 nodes are used for validation and early-stopping.
The experimental results (mean and standard deviation) are summarized in Table 1. We can see that under the standard experimental scenario, BGCN, G3NN and GGP do not show obvious advantages
and in many cases perform even worse than the deterministic GNN-based models (i.e., GCN, GAT and GraphSAGE). The reason is that the validation set helps these GNN-based models find relatively good classification functions, preventing overfitting to a large extent. Both BGCN and G3NN attempt to model the uncertainty of the graph structure; however, the underlying distributions of different graph data may vary greatly, which limits the performance of these two methods on some datasets (e.g., Pubmed). GGP adopts Gaussian processes to model the node classification task, whose fitting capacity is not as good as that of neural networks, which can effectively learn node representations; its performance is therefore not ideal.
Compared with the baselines, our models achieve comparable or better performance in the standard experimental scenario. Note that GSNN-M and GSNN-A adopt the same aggregation mechanisms as GCN and GAT, respectively, yet the results show that the two proposed models outperform GCN and GAT on all datasets, which demonstrates the effectiveness of modeling the uncertainty of the classification function.
4.3 Label-Scarce Scenario
In general, labeled nodes are difficult or expensive to obtain. A more practical scenario is that we have only a very small proportion of labeled nodes for training and no additional labeled nodes for early-stopping. In this section, we evaluate the performance of GSNN and the baselines when labeled nodes are scarce. Specifically, in each dataset we randomly select a certain percentage of nodes as labeled training nodes, and the remaining nodes are used for evaluation. Note that under this partition the number of labeled nodes per class can differ.
For Cora and Citeseer, we vary the percentage of labeled training nodes from 1% to 5%; for Pubmed, we vary it from 0.1% to 0.5%, because the total number of nodes in Pubmed is about an order of magnitude larger than in the other two datasets. The experimental results are shown in Table 2. We observe that, compared with the baselines, GSNN-M and GSNN-A achieve substantial performance gains, which demonstrates that modeling the uncertainty of the classification function can effectively alleviate overfitting on complex graph data. BGCN models the uncertainty of the graph structure, which improves on the deterministic GNN-based models on Cora and Citeseer to some extent; however, this does not generalize to Pubmed, because the underlying graph structure differs across datasets. Although G3NN also models the distribution of the graph structure, its complex model structure makes it prone to overfitting without early-stopping. Therefore, modeling the distribution of the classification function provides more flexibility and copes better with the label-scarce scenario.
4.4 Adversarial Attack Scenario
In this section, we employ three state-of-the-art global adversarial attack methods (Meta-Train [25], Meta-Self [25] and the min-max attack [26]), which aim to reduce the overall classification accuracy, to inject noisy edges into the graph structure, and we evaluate the performance of GSNN and the baselines under these attacks. Detailed descriptions of the three attack methods are provided in the supplemental material. The experimental settings for the adversarial attacks and the dataset partition follow [25]. The attack budget, i.e., the ratio of perturbed edges to all clean edges, is set to 0.05. Without loss of generality, all three attack methods are performed against the vanilla GCN [3], which means they mainly affect the mean aggregation mechanism. For each poisoned graph, 10% of the nodes are used for training and the remaining nodes are used for evaluation.
We conduct experiments on Cora. The experimental results are shown in Table 3. Here we add a robust GNN model (RGCN [29]) as an additional baseline. We make the following observations: (1) Under the three attack methods, the performance of GCN drops drastically because it serves as the surrogate model of the attacks; the attacks also transfer to the other deterministic GNN-based models (GraphSAGE and GAT). GSNN, however, effectively alleviates the impact of the attacks by modeling the uncertainty of the classification function: GSNN-M and GSNN-A significantly improve on the performance of GCN and GAT, and also outperform RGCN, a state-of-the-art defense against adversarial attacks. Note that although the attack methods mainly affect the mean aggregation mechanism, GSNN-M still maintains good performance. (2) BGCN and G3NN can capture the underlying structure of graph data and therefore have the capacity to improve robustness against adversarial attacks. Compared with them, GSNN does not need to modify the graph structure, which gives it more flexibility, and it achieves better or comparable performance.
5 Conclusion
In this paper, we propose GSNN, a novel framework for semi-supervised learning on graph data that models the uncertainty of the classification function by simultaneously learning a family of functions. To model the distribution of the classification function, we introduce a learnable graph neural network coupled with a high-dimensional random latent vector, and adopt amortised variational inference to approximate the intractable joint posterior over the missing labels and the latent variable. Extensive experimental results show that GSNN outperforms state-of-the-art baselines on different datasets, and it shows great potential in label-scarce and adversarial attack scenarios. This paper focuses on the uncertainty of the GNN classification function; how to integrate more information, such as label dependency and structure uncertainty, into the framework is an interesting direction for future work.
Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the NSFC under Grant No. 11688101 and No. 61872360, the National Key Research and Development Program of China under Grant No. 2020YFE0200500, the ARC DECRA under Grant No. DE200100964, and the Youth Innovation Promotion Association CAS under Grant No. 2017210. Chuan Zhou, Jia Wu, Shirui Pan and Jilong Wang are corresponding authors.
Broader Impact
Our work could bring the following positive impacts. (1) The proposed framework, which models the uncertainty of the classification function, provides a new perspective for semi-supervised learning on graph data. (2) In practice, labeled nodes are generally scarce and expensive to obtain; GSNN can effectively alleviate overfitting and improve performance in this regime. (3) Noise can render deterministic GNN-based models vulnerable, while GSNN alleviates its negative impact to a large extent. Many real-world applications, especially risk-sensitive ones (e.g., financial transactions), would benefit from this.
Similar to many other GNNs, one potential issue of our model is that it provides limited interpretation of its predictions. We encourage peer researchers to study this in depth, to improve the interpretability of modern GNN architectures and make GNNs applicable in more risk-sensitive applications. | 1. What is the main contribution of the paper in the field of graph neural networks?
2. What are the strengths of the proposed approach, particularly in its ability to handle uncertainty?
3. What are the weaknesses of the paper, especially regarding the limited scope of the experimental evaluation?
4. How could the authors improve their work by exploring different scenarios and providing a more in-depth analysis of the results? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
Most existing models learn a deterministic classification function, which lacks sufficient flexibility to explore better choices in the presence of noisy observations, scarce labeled nodes, and noisy graph structure. This paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which models the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function.
Strengths
(1) The paper proposes a stochastic framework for GNNs, which models the uncertainty of the classification function by simultaneously learning a family of functions. (2) Comparison against several baselines. (3) The paper is well written.
Weaknesses
(1) The paper uses only three small datasets (Cora, Citeseer, Pubmed) with relatively high autocorrelation; this makes it difficult to see how the proposed method would generalize to other graphs with more noise and low autocorrelation. (2) The two proposed methods (GSNN-M, GSNN-A) seem to provide similar results, and it is unclear which one is better for which graphs, i.e., for graphs with different levels of noise in attributes and structure. It would be good to explore this with synthetic experiments and add a discussion of the findings.
NIPS | Title
Graph Stochastic Neural Networks for Semi-supervised Learning
Abstract
Graph Neural Networks (GNNs) have achieved remarkable performance in the task of the semi-supervised node classification. However, most existing models learn a deterministic classification function, which lack sufficient flexibility to explore better choices in the presence of kinds of imperfect observed data such as the scarce labeled nodes and noisy graph structure. To improve the rigidness and inflexibility of deterministic classification functions, this paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which aims to model the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function. Specifically, we introduce a learnable graph neural network coupled with a high-dimensional latent variable to model the distribution of the classification function, and further adopt the amortised variational inference to approximate the intractable joint posterior for missing labels and the latent variable. By maximizing the lower-bound of the likelihood for observed node labels, the instantiated models can be trained in an end-to-end manner effectively. Extensive experiments on three real-world datasets show that GSNN achieves substantial performance gain in different scenarios compared with state-of-the-art baselines.
1 Introduction
Graphs are essential tools to represent complex relationships among entities in various domains, such as social networks, citation networks, biological networks and physical networks. Analyzing graph data has become one of the most important topics in the machine learning community. As an abstraction of many graph mining tasks, semi-supervised node classification, which aims to predict the labels of unlabeled nodes given the graph structure, node features and labels of partial nodes, has received significant attention in recent years. Graph Neural Networks (GNNs), in particular, have achieved impressive performance in the graph-based semi-supervised learning task [1, 2, 3, 4, 5].
Most existing GNN models are designed to learn a deterministic classification function. This kind of design makes them look simple and artistic, but the other side of the coin is that the deterministic classification function makes these GNN models lack sufficient flexibility to cater for kinds of imperfect observed data. For example, in many real situations, the ground-truth labels of nodes
∗Equal Contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
are often expensive or difficult to obtain, which leads to the sparseness of the labeled nodes. The insufficient supervision information can easily lead to the overfitting of the deterministic classification functions, especially when there are no additional labeled nodes as the validation set for earlystopping. Another example is the noise in the graph structure. Arisen in nature or injected deliberately by attackers, noise is prone to affect the neighbor information aggregation and misleads the learning of deterministic classification functions. The rigidness and inflexibility of deterministic classification functions make it difficult for them to bypass these similar issues and explore better choices.
In line of the aforementioned observations, this paper proposes a novel Graph Stochastic Neural Network (GSNN for short) to model the uncertainty of GNN classification functions. GSNN aims to learn simultaneously a family of classification functions rather than fitting a deterministic function. This empowers GNNs the flexibility to handle the imperfection or noise in graph data, and further bypass the traps caused by sparse labeled data and unreliable graph structure in real applications.
Specifically, we treat the classification function to be learned as a stochastic function and integrate it into the process of label inference. To model the distribution of the stochastic function, we introduce a learnable neural network, which is coupled with a high-dimensional latent variable and takes the message-passing form. To infer the missing labels, we need to obtain the joint posterior distribution of labels for unlabeled nodes and the classification function, of which the exact form is intractable in general. To solve the problem, we adopt the amortised variational inference [6, 7] to approximate the intractable posterior distribution with the other two types of neural networks. By maximizing the lower-bound of the likelihood for observed node labels, we could optimize all parameters effectively in an end-to-end manner. We conduct extensive experiments on three real-world datasets. The results show that compared with state-of-the-art baselines, GSNN not only achieves comparable or better performance in the standard experimental scenario with early-stopping, but also shows substantial performance gain when labeled nodes are scarce (no early-stopping) and there are deliberate edge perturbations in the graph structure.
2 Related Work
Graph Neural Networks for Graph-based Semi-supervised Learning: Recently, GNNs have been attracting considerable attention [8, 9, 10]. The early ideas are to derive different forms of the graph convolution in the spectral domain based on the graph spectral theory [1, 11, 2, 3, 12]. Bruna et al. [1] propose the first generation spectral-based GNN. To reduce the computational complexity, Defferrard et al. [2] propose to use a K-order Chebyshev polynomial to approximate the convolutional filter, which avoids intense calculations of eigendecomposition of the normalized graph Laplacian. Kipf and Welling [3] further simplify the graph convolution by the first-order approximation. which reduces the number of parameters and improves the performance. Another line of research is to directly perform graph convolution in the spatial domain [13, 4, 14, 15, 16]. Gilmer et al. [13] generalize spatial-based methods as a message-passing mechanism. Hamilton et al. [4] propose a general inductive framework, which could learn an embedding function that generalizes to unseen nodes. Veličković et al. [5] further introduce the attention mechanism, which assigns different weights to neighbor nodes and aggregate features with discrimination. Besides, other works also demonstrate that considering edge attributes [17], adding jumping connections [18] and modeling the outcome dependency [19] would be beneficial. However, these models generally learn a deterministic classification function, which lack sufficient flexibility to handle imperfect observed data such as the scarce labeled nodes and noisy graph structure.
Uncertainty Modeling for Graph-based Semi-supervised Learning: There are also some works using uncertainty modeling for graph-based semi-supervised learning, which are related to this paper [20, 21, 22]. Ng et al. [22] introduce Gaussian processes to model the semi-supervised learning problem on graphs, which mitigates the over-fitting to some extent. Zhang et al. [21] treat the observed graph as a realization from a parametric family of random graphs and propose bayesian graph convolutional neural networks to incorporate the uncertain graph information. Ma et al [20] further propose a flexible generative framework to model the joint distribution of the graph structure and the node labels. Most of these works typically model the uncertainty of the observed data (e.g., graph structure). Different from them, in this paper, we view the classification function as a stochastic function and straightly model its distribution, which brings better performance in many scenarios.
3 Our Solution
In this paper, we define an undirected graph as G = (V,E), where V = {v1, ..., vN} represents a set of N nodes and E ⊆ V × V is the set of edges. Let A ∈ {0, 1}N×N denote the binary adjacency matrix, i.e., Au,v = 1 if and only if (u, v) ∈ E. Let X ∈ RN×F be the node attribute matrix, where F is the feature dimension and the feature vector of node v is expressed as xv . Each node is labeled with one class in C = {c1, ..., c|C|}. In practice, only partial nodes come with labels. The set of these labeled nodes is denoted as VL and the set of unlabeled nodes is denoted as VU := V \ VL. For the task of semi-supervised node classification, given A, X and the label information of VL, the goal is to infer the labels of nodes in VU by learning a classification function f . The classification results can be denoted as Y := {yv1 , ...,yvN } where each y· is a |C|-dimension probability distribution on C. Most existing GNN models typically aim to learn a deterministic classification function, which lack sufficient flexibility to cater for kinds of imperfect observed data. For example, they are easy to overfit or be misled when labeled nodes are scarce or there exists noise in the graph structure. Therefore, instead of fitting a deterministic function, we here aim to learn a family of classification functions, which can be organized as a stochastic function F with the distribution denoted as p(f). Under this setting, the distribution of Y can be formalized as follows:
p(Y |A,X) , ∫ p(f)p ( Y |f(A,X) ) df = ∫ p(f) ∏ v∈V p ( yv|f(A,X) ) df (1)
where we use p ( Y |f(A,X) ) to denote the distribution of Y corresponding to the classification function f . Eq. (1) assumes that the label inference for each node is conditionally independent, given a selected classification function f , the adjacency matrix A and the attribute matrix X .
3.1 Framework for GSNN
In order to model the uncertainty of the classification function in Eq. (1), we here approximate the stochastic function F using a learnable function gϕ (e.g., a neural network with parameters ϕ) with a random latent vector Z involved as below:
F(A,X) , gϕ(A,X;Z) (2)
where the prior distribution of Z is p(z) defined as multivariate standard normal, i.e., p(z) = N (z;0, I). Note that the randomness of F is induced by Z and the expression capacity of F is captured by the structure of gϕ(.; .). Combined Eq. (2) with Eq. (1), the distribution p(Y |A,X) can be rewritten as follows:
p(Y |A,X) = ∫ p(z) ∏ v∈V p ( yv|gϕ(A,X; z) ) dz (3)
where p ( yv|gϕ(A,X; z) ) is a distribution on C of node v. In the semi-supervised transductive setting, the label information for labeled nodes in VL is also known. Denote YL := {yv}v∈VL and YU := Y \ YL. Under the above setting, the conditional distribution of YU , given A, X and YL, can be formalized as follows:
p(YU |A,X, YL) , ∫ p(z|A,X, YL) ∏ v∈VU p ( yv|gϕYL (A,X; z) ) dz (4)
where p(z|A,X, YL) is the posterior distribution of the latent vector Z and ϕYL are parameters to be learned when YL is taken into consideration.
We assume that the distributions p(z|A,X, YL) and p ( yv|gϕYL (A,X; z) ) in Eq. (4) can be modeled by parametric families of distributions pθ(z|A,X, YL) and pθ(yv|A,X, z) respectively, whose probability density function is differentiable almost everywhere w.r.t. θ. To predict YU via modeling the distribution of the classification function, we need to obtain an intractable joint posterior pθ(YU , z|A,X, YL). To solve the problem, we adopt the variational inference. We introduce a variational distribution qφ(YU , z|A,X, YL) parameterized by φ to approximate the true posterior pθ(YU , z|A,X, YL). To learn the model parameters φ and θ, we aim to optimize the evidence lower bound (ELBO) of the log-likelihood function for the observed node labels, i.e., log pθ(YL|A,X).
Following the standard derivation of the variational inference, the ELBO objective function can be obtained as follows:
log pθ(YL|A,X) ≥ Eqφ(YU ,z|A,X,YL) ( log pθ(Y |A,X, z) + log
p(z)
qφ(YU , z|A,X, YL) ) , LELBO(θ, φ) (5)
The variational joint posterior could be further factorized as qφ(YU , z|A,X, YL) = qφ(YU |A,X, YL)qφ(z|A,X, Y ) noting that Y = YL ∪ YU . From a sampling perspective, it can be explained that the distribution of the random latent vector Z depends on the observed data and YU sampled from the approximate posterior distribution qφ(YU |A,X, YL). On this basis, the ELBO objective function can be rewritten as follows:
LELBO(θ, φ) = Eqφ(YU |A,X,YL)Eqφ(z|A,X,Y ) log pθ(YL|A,X, z)− Eqφ(YU |A,X,YL)KL ( qφ(z|A,X, Y ) || p(z) ) − (6)
Eqφ(YU |A,X,YL) ( log qφ(YU |A,X, YL)− Eqφ(z|A,X,Y ) log pθ(YU |A,X, z) ) where KL(.||.) represents the Kullback-Leibler divergence between two distributions. The first term of Eq. (6) is the opposite of the cross-entropy between ground-truth label vectors and the predicted class distributions for labeled nodes. The form of the third term is similar to the KL divergence, which characterizes the distribution difference between qφ(YU |A,X, YL) and pθ(YU |A,X, z). Based on the amortised variational inference [6, 7], qφ(YU |A,X, YL), qφ(z|A,X, Y ) and pθ(Y |A,X, z) can be fitted by different types of neural networks, which would be described in detail in Section 3.2. The overall framework is referred as graph stochastic neural networks (GSNN for short), whose overview is shown in Fig. 1.
3.2 Model Instantiation, Training and Inference
In this part, we instantiate qφ(YU |A,X, YL), qφ(z|A,X, Y ) and pθ(Y |A,X, z) with three neural networks (i.e., qnet1, qnet2 and pnet) respectively.
Instantiating qφ(YU |A,X,YL) with qnet1: The neural network qnet1 is designed into the form of message-passing. It consists of K layers to aggregate the features of neighbor nodes with the following layer-wise propagation rule:
hkv = ρ k−1 ( ∑ u∈Ne{v}∪{v} ak−1v,u h k−1 u W k−1 qnet1 ) , k = 1, ...,K
qφ(yv|A,X, YL) = Cat(yv|hKv ), v ∈ VU
(7)
where Ne{v} is the set of neighbor nodes of node v. hkv is the hidden representation for node v in the kth layer and h0v = xv. The parameter a k−1 v,u represents the aggregation coefficient between
node v and node u. The parameter matrix W k−1qnet1 represent the trainable parameters in the k th layer. The activation functions of the first K − 1 layers (i.e., ρ0, ..., ρK−2) are ReLU , and the activation function for the Kth layer is softmax, which constructs the categorical distribution Cat(.), i.e., qφ(YU |A,X, YL). Note that YL is not used as the input for qnet1, but as the supervision information for training qnet1 in the Eq. (10) below.
Instantiating qφ(z|A,X,Y ) with qnet2: The posterior distribution qφ(z|A,X, Y ) depends on four parts of information: A, X , YU and YL. Since qnet1 has involved A and X , we therefore directly use the hidden representations of the (K − 1)th layer in qnet1 to represent A and X . The unlabeled information YU could be obtained by sampling from the output of qnet1 and YL is directly taken as one of the input for qnet2. Inspired by variational auto-encoders [7], we let the variational posterior be a multivariate Gaussian with a diagonal covariance structure, which is flexible and could make the second item of the ELBO objection in Eq. (6) be computed analytically. Accordingly, qnet2 is designed as follows:
rv = MLP([hK−1v ||yv]), v ∈ V r = Readout({rv}v∈V ) qφ(z|A,X, Y ) = N ( z;µ(r), σ2(r)I)
(8)
where .||. is the concatenation operation, MLP represents the multi-layer perceptron, Readout(.) function summarizes all input vectors into a global vector, and the MLP functions µ(.) and σ2(.) convert r into the mean and standard deviation, which parameterise the distribution of qφ(z|A,X, Y ). Instantiating pθ(Y |A,X, z) with pnet: Given z sampled from qφ(z|A,X, Y ), pnet specifies an instance of the stochastic function F (i.e., function g defined in Eq. (2)). The network architecture of pnet is similar to that of qnet1, which takes the sampled global latent variable z as well as A and X as input, and outputs the probability distributions on C for all nodes. Assume that the hidden representation of node v in the Kth layer is denoted as eKv , and the initial latent representation of node v is defined as the concatenation between xv and z, i.e., e0v = xv||z. The predicted categorical distribution can be expressed as follows:
pθ(yv|A,X, z) = Cat(yv|eKv ), v ∈ V (9)
Model Training: To optimize the object function in Eq. (6), we adopt Monte Carlo estimation to approximate the expectations w.r.t qφ(YU |A,X, YL) and qφ(z|A,X, Y ). Specifically, we first sample m instances of YU from qφ(YU |A,X, YL). After that, for each instance of YU , we further sample n instances of z from qφ(z|A,X, Y ). With these sampled instances, we could approximately estimate the object functionLELBO(θ, φ). We leverage reparameterization to calculate the derivatives w.r.t the parameters in qnet1, qnet2 and pnet. Since z is continuous and qφ(z|A,X, Y ) takes on a Gaussian form, the reparameterization trick of variational auto-encoders [7] can be directly used here. While YU is discrete, we adopt the Gumbel-Softmax reparametrization [23] for gradient backpropogation. As we mentioned above, YL could be used as the supervised information to guide the parameter update of qnet1. Therefore, we additionally introduce a supervised object function Ls(φ) = log qφ(YL|A,X). The overall objective function is given as follows: L(θ, φ) = LELBO(θ, φ) + Ls(φ) (10) The model can be optimized effectively in an end-to-end manner and the optimal parameters are denoted by θ∗ and φ∗, i.e., θ∗, φ∗ = argmax
θ,φ L(θ, φ).
Model Inference: After the above training, p(YU |A,X, YL) can be seen as the expectation of pθ∗(YU |A,X, z) w.r.t. qφ∗(z|A,X, Y ). We first sample L instances of YU from qφ∗(YU |A,X, YL), and then for each sampled instance of YU , we sample a instance of z. from qφ∗(z|A,X, Y ). We use Monte Carlo estimation for approximate inference, formulated as follows:
p(YU |A,X, YL) ≈ 1
L L∑ i=1 pθ∗(YU |A,X, zi) (11)
This approximation can also be derived from Eq. (4) with the proof in the supplemental material.
3.3 Algorithm Complexity Analysis
Because qnet1 and pnet share the similar message-passing model structure, the computational complexity of them isO(|E|), where |E| represents the number of edges in the graph. The computational
complexity of qnet2 is O(N), where N is the number of nodes. On this basis, during the training phase, the overall computational complexity is O(|E|+mN +mn|E|), where m and n are respectively the number of sampled instances of YU and z. In our experiments, we find that one sample (i.e., m = n = 1) could achieve comparable results with multiple samples. For efficiency, we only sample once for both YU and z. During the inference phase, the calculation only involves pnet. Therefore, the overall computational complexity is O(L|E|), where L is the number of sample instances from p(z). We can see that the complexity is linear to the scale of the graph. The pseudo-code of the algorithm is provided in the supplemental material.
4 Experiments
In this section, we empirically evaluate the performance of GSNN on the task of semi-supervised node classification in different scenarios: (1) the standard experimental scenario with the validation set for early-stopping, (2) the scarce labeled nodes scenario (no validation set for early-stopping), and (3) the adversarial attack scenario. Note that we mainly consider the noise injected by adversarial attack methods, since they can incur obvious impact on the performance of many existing GNNs. [24, 25, 26, 27]. Our reproducible code is available at https://github.com/GSNN/GSNN.
4.1 Experimental Settings
Datasets. We conduct experiments on three commonly used benchmark datasets: Cora, Citeseer and Pubmed [25, 28], where nodes represent documents and edges represent citation relationships. Each node is associated with a bag-of-words feature vector and a ground-truth label. Detailed statistics for the three datasets are provided in the supplemental material. In different experiment scenarios, we will adopt different dataset setup (e.g., dataset partition method) following the standard practice, which would be described when presenting experimental results in corresponding sections.
Baselines. When we evaluate the performance in the standard experimental settings and the scarce labeled nodes settings, we compared with six state-of-the-art models, three of which are GCN [3], GraphSAGE [4] and Graph Attention Networks (GAT) [5]. The other three adopt uncertainty modeling for graph-based semi-supervised learning. They are Bayesian Graph Convolutional Neural Networks (BGCN) [21], G3NN [20] and Graph Gaussian Processes (GGP) [22] respectively. BGCN and G3NN model the uncertainty of the graph structure, and GGP introduces the Gaussian processes to prevent from over-fitting. When we evaluate the performance in the adversarial attack settings, in addition to the above six baselines, we also compare with Robust Graph Convolutional Networks (RGCN) [29], which is a state-of-the-art method against adversarial attacks. More detailed description about the baselines are provided in the supplemental material.
Our Model. For the proposed GSNN framework, we could adopt different information aggregation mechanisms for qnet1 and pnet to instantiate the models. In this paper, we implement two variants, whose aggregation mechanisms are consistent with GCN (i.e., mean aggregation) [3] and GAT (i.e., attention-based aggregation) [5] respectively. Note that other advanced information aggregation mechanisms can also be involved here to improve the performance. The two variants are termed as GSNN-M and GSNN-A.
Parameter Settings. For all baselines, we adopt the default parameter settings reported in corresponding papers. For our proposed two models (i.e., GSNN-M and GSNN-A), in qnet1 and pnet, we employ two information aggregation layers, and other settings related to hidden layers are consistent with GCN [3] and GAT [5] respectively. For example, the number of hidden units for GSNN-M is set to 16 and that for GSNN-A is set to 64. Besides, GSNN-A also employs the multi-head attention mechanism in the first hidden layer with 8 attention heads. For both GSNN-M and GSNN-A, the dimension of the hidden variable z is set to 16. In qnet2, we first employ a two-layer MLP to generate the representation rv for each node v, whose dimension is 16. After that, we summarize all representations into a vector and use two fully-connected networks to convert it into the mean and covariance matrix for the multivariate Gaussian distribution. As mentioned in Section 3.3, both the numbers of sampled instances of YU and z are set to 1 for efficiency purpose. We use the Adam optimizer [30] during training, with the learning rate as 0.01 and weight decay as 5× 10−4, and set the epoch number as 200. During the inference phase, the sampling number L in Eq. (11) is set to 40.
In the experiments, we train our models and baselines for 50 times and record the mean classification accuracy and standard deviation.
4.2 Standard Experimental Scenario
In this section, we evaluate the performance of GSNN and baselines under the standard experimental scenario used in the work [3]. Specifically, in each dataset, 20 nodes per class are used for training, 1000 nodes are used for evaluation and another 500 nodes are used for validation and early-stopping.
The experimental results (mean and standard deviation) are summarized in Table 1. We can see that under the standard experimental scenario, BGCN, G3NN and GGP do not show obvious advantages
and perform even worse than the deterministic GNN-based models (i.e., GCN, GAT and GraphSAGE) in many cases. The reason behind is that the validation set could help these GNN-based models find relatively good classification functions, which can prevent the model from overfitting to a large extent. Both BGCN and G3NN attempt to model the uncertainty of the graph structure. However, the potential distributions of different graph data may vary greatly, which limits the performance of these two methods on some datasets (e.g., Pubmed). GGP
adopts Gaussian processes to model the node classification task, of which the fitting capacity is not as good as neural networks that could effectively learn the node representations. Therefore, the performance of it is not ideal.
Compared with baselines, our models achieve comparable or better performance in standard experimental scenario. Note that GSNN-M and GSNN-A adopt the consistent aggregation mechanism with GCN and GAT respectively, while the results show that the two proposed models outperform GCN and GAT on all datasets, which demonstrates the effectiveness of modeling the uncertainty of the classification function.
4.3 Label-Scarce Scenario
In general, labeled nodes are difficult or expensive to obtain. A more practical scenario is that we only have a very small proportion of labeled nodes for training and no additional labeled nodes for early-stopping. In this section, we evaluate the performance of GSNN and the baselines when labeled nodes are scarce. Specifically, in each dataset, we randomly select a certain percentage of labeled nodes for training, and the rest of the nodes are used for evaluation. Note that the number of labeled nodes in each class could differ under this dataset partition setting.
For Cora and Citeseer, we set the percentage of labeled nodes for training from 1% to 5%, while for Pubmed, we set the percentage from 0.1% to 0.5% because the total number of nodes in Pubmed is about an order of magnitude higher than in the other two datasets. The experimental results are shown in Table 2. We observe that, compared with the baselines, GSNN-M and GSNN-A achieve substantial performance gains, which demonstrates that modeling the uncertainty of the classification function can effectively alleviate the overfitting problem on complex graph data. BGCN models the uncertainty of the graph structure, which improves the performance of the deterministic GNN-based models on Cora and Citeseer to some extent. However, its performance does not generalize to Pubmed because the underlying graph structure differs across datasets. Although G3NN also models the distribution of the graph structure, its complex model structure makes it easy to overfit without early-stopping. Therefore, modeling the distribution of the classification function provides more flexibility and better copes with the label-scarce scenario.
4.4 Adversarial Attack Scenario
In this section, we employ three state-of-the-art global adversarial attack methods (i.e., Meta-Train [25], Meta-Self [25] and min-max attack [26]), which aim at reducing the overall classification accuracy, to inject noise edges into the graph structure, and further evaluate the performance of GSNN and the baselines under these attacks. A detailed description of the three attack methods is provided in the supplemental material. The experimental settings for the adversarial attacks and the dataset partition follow the work [25]. The attack budget, i.e., the ratio of perturbed edges to all clean edges, is set to 0.05. Without loss of generality, all three attack methods are performed based on the vanilla GCN [3], which means that they mainly affect the mean aggregation mechanism. For each poisoned graph, 10% of the nodes are used for training and the rest are used for evaluation.
We conduct experiments on Cora. The experimental results are shown in Table 3. Here we add a robust GNN model (i.e., RGCN [29]) as a baseline. We make the following observations: (1) Under the three attack methods, the performance of GCN drops drastically because it serves as the surrogate model of the attacks. Meanwhile, the attacks transfer to other deterministic GNN-based models (i.e., GraphSAGE and GAT). However, GSNN can effectively alleviate the impact of the attacks by modeling the uncertainty of the classification function. We can see that GSNN-M and GSNN-A significantly improve over the performance of GCN and GAT, and also outperform RGCN, a state-of-the-art method against adversarial attacks. Note that although the attack methods mainly affect the mean aggregation mechanism, GSNN-M still maintains good performance. (2) BGCN and G3NN can capture the underlying structure that exists in graph data and therefore have the capacity to improve robustness against adversarial attacks. Compared with them, GSNN does not need to modify the graph structure, which gives it more flexibility, and it achieves better or comparable performance.
5 Conclusion
In this paper, we propose GSNN, a novel framework for semi-supervised learning on graph data, which aims to model the uncertainty of the classification function by simultaneously learning a family of functions. To model the distribution of the classification function, we introduce a learnable graph neural network coupled with a high-dimensional random latent vector, and further adopt amortised variational inference to approximate the intractable joint posterior of the missing labels and the latent variable. Extensive experimental results show that GSNN outperforms state-of-the-art baselines on different datasets and shows great potential in label-scarce and adversarial attack scenarios. This paper focuses on the uncertainty of the GNN classification function; how to integrate more information, such as label dependency and structure uncertainty, into the framework for inference is an interesting direction for future work.
Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the NSFC under Grant No. 11688101 and No. 61872360, the National Key Research and Development Program of China under Grant No. 2020YFE0200500, the ARC DECRA under Grant No. DE200100964, and the Youth Innovation Promotion Association CAS under Grant No. 2017210. Chuan Zhou, Jia Wu, Shirui Pan and Jilong Wang are corresponding authors.
Broader Impact
Our work could bring the following positive impacts. (1) The proposed framework, which models the uncertainty of the classification function, provides a new idea for semi-supervised learning on graph data. (2) In practice, labeled nodes are generally scarce and expensive to obtain. GSNN could effectively alleviate the overfitting problem and improve the performance. (3) Noise could render deterministic GNN-based models vulnerable, while GSNN could alleviate the negative impacts of noise to a large extent. Many real-world applications, especially the risk-sensitive applications (e.g., financial transaction), would benefit from it.
Like many other GNNs, one potential issue of our model is that it provides limited interpretation of its predictions. We encourage further research on improving the interpretability of modern GNN architectures, so as to make GNNs applicable in more risk-sensitive applications. | 1. What is the main contribution of the paper, and how does it address an unresolved problem in GNNs?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and soundness?
3. Are there any weaknesses or areas for improvement in the paper, such as the need for a toy motivation example or typos?
4. How does the reviewer assess the relevance and potential impact of the paper on the field of graph-based semi-supervised learning? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper attempts to address an unresolved but meaningful problem. To improve the inflexibility of GNNs in the face of imperfect observed data, the paper proposes a novel framework GSNN to model the uncertainty of classification function by simultaneously learning a family of functions. GSNN treats the classification function as a stochastic function, and uses a learnable graph neural network parameterized by a high-dimensional latent variable to model its distribution. To infer the missing labels by classification function with uncertainty, GSNN wisely adopts variational inference technology to approximate the intractable joint posterior for missing labels and the latent variable. The extensive experimental results show that GSNN achieves substantial performance gain in different scenarios, such as the label-scarce scenario and adversarial attack scenario.
Strengths
Significance and novelty: Most GNN-based models learn a deterministic classification function, which makes them lack sufficient flexibility to cope with various kinds of imperfect observed data, such as scarce labels or deliberate noise in the graph structure. To solve these problems, this paper proposes to model the uncertainty of the classification function and simultaneously learn a family of functions, which is well-motivated. The idea is novel and differs from previous works, providing a new perspective on the graph-based semi-supervised learning problem. Soundness of the claims: The authors skillfully formalize the problem of modeling the uncertainty of the classification function. They treat the classification function to be learned as a stochastic function and further combine GNN models and a high-dimensional latent variable to model its distribution. The variational inference technology makes the missing labels inferable. The overall solutions of this paper, including theoretical analysis, practical model design and experimental evaluation, are technically sound. Extensive experimental results also show significant performance gains compared with state-of-the-art baselines, which further demonstrates the effectiveness of the proposed method. Relevance: This paper has the potential to attract wide attention at NeurIPS 2020.
Weaknesses
1. Theoretically, this work fills the gap between the deterministic classification function and the stochastic classification function. A toy motivating example in the introduction is encouraged; by doing this, the proposed GSNN work will reach a broader audience. 2. Some typos in the paper.
NIPS | Title
Myersonian Regression
Abstract
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function $f : \mathbb{R}^n \to \mathbb{R}$ that well approximates a set of points $(x_i, v_i) \in \mathbb{R}^n \times [0, 1]$ in the following sense: we receive a loss of $v_i$ when $f(x_i) > v_i$ and a loss of $v_i - f(x_i)$ when $f(x_i) \le v_i$. This arises naturally in the economic application of designing a pricing policy for differentiated items (where the loss is the gap between the performance of our policy and the optimal Myerson prices). We show that Myersonian regression is NP-hard to solve exactly and furthermore that no fully polynomial-time approximation scheme exists for Myersonian regression conditioned on the Exponential Time Hypothesis being true. In contrast to this, we demonstrate a polynomial-time approximation scheme for Myersonian regression that obtains an $\epsilon m$ additive approximation to the optimal possible revenue and can be computed in time $O(\exp(\mathrm{poly}(1/\epsilon))\,\mathrm{poly}(m, n))$. We show that this algorithm is stable and generalizes well over distributions of samples.
1 Introduction
In economics, the Myerson price of a distribution is the price that maximizes the revenue when selling to a buyer whose value is drawn from that distribution. Mathematically, if F is the cdf of the distribution, then the Myerson price is
$$p^* = \arg\max_p \; p \cdot (1 - F(p))$$
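As a quick numerical illustration (ours, not part of the original paper), the Myerson price can be estimated from samples by maximizing the empirical revenue curve:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=2_000)  # bids v ~ F; Uniform[0,1] as an example

# Empirical revenue of posting price p: p times the fraction of buyers with v >= p.
prices = np.sort(values)
revenue = prices * (values[None, :] >= prices[:, None]).mean(axis=1)
print(prices[np.argmax(revenue)])  # close to 0.5, the Myerson price of Uniform[0,1]
```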
In many modern applications such as online marketplaces and advertising, the seller doesn't just set one price $p$ but must instead price a variety of differentiated products. In these settings, a seller must design a policy to price items based on their features in order to optimize revenue. Thus, in this paper we study the contextual learning version of Myersonian pricing. More formally, we get to observe a training dataset $\{(x^t, v^t)\}_{t=1..m}$ representing the bids of a buyer on differentiated products. We will assume that the bids $v^t \in [0, 1]$ come from a truthful auction and hence represent the maximum value a buyer is willing to pay for the product. Each product is represented by a vector of features $x^t \in \mathbb{R}^n$ normalized such that $\|x^t\|_2 \le 1$. The goal of the learner is to design a policy that suggests a price $\pi(x^t)$ for each product $x^t$ with the goal of maximizing the revenue on the underlying distribution $D$ from which the pairs $(x^t, v^t)$ are drawn. In practice, one would train a pricing policy on historical bids (training) and apply this policy on future products (testing).
Mathematically, we want to solve
$$\max_{\pi \in \mathcal{P}} \mathbb{E}_{(x,v) \sim D}[\mathrm{REV}(\pi(x); v)] \qquad \text{(PP)}$$
where $\mathcal{P}$ is a class of pricing policies and REV is the revenue function (see Figure 1)
$$\mathrm{REV}(p; v) = \max(p, 0) \cdot \mathbf{1}\{p \le v\}$$
having only access to samples of D.
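In code, the revenue function and the empirical objective of a linear pricing policy are straightforward (a sketch of ours, not from the paper):

```python
import numpy as np

def rev(p, v):
    """Revenue of posting price p against bid v: p if 0 <= p <= v, else 0."""
    return max(p, 0.0) * (p <= v)

def empirical_revenue(w, X, v):
    """Total revenue of the linear policy x -> <w, x> on a dataset (X, v)."""
    prices = X @ w
    return sum(rev(p, vt) for p, vt in zip(prices, v))
```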
Medina and Mohri [2014a] establish that if the class of policies P has good generalization properties (defined in terms of Rademacher complexity) then it is enough to solve the problem on the empirical distribution given by the samples. The policy that optimizes over the empirical distribution is typically called Empirical Risk Minimization (ERM).
The missing piece in this puzzle is the algorithm, i.e. how to solve the ERM problem. Previous papers (Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]) approached this problem by designing heuristics for ERM and giving conditions on the data under which the heuristics perform well. In this paper we give the first provable approximation algorithm for the ERM problem without assumptions on the data. We also establish hardness of approximation that complements our algorithmic results. We believe these are the first hardness results for this problem. Even establishing whether exactly solving ERM was NP-hard for a reasonable class of pricing policies was open prior to this work.
Myersonian regression  We now define formally the ERM problem for linear pricing policies¹, which we call Myersonian regression. Recall that the dataset is of the form $\{(x^t, v^t)\}_{t=1..m}$ with $x^t \in \mathbb{R}^n$, $\|x^t\|_2 \le 1$ and $v^t \in [0, 1]$. The goal is to find a linear pricing policy $x \mapsto \langle w, x \rangle$ with $\|w\|_2 \le 1$ that maximizes the revenue on the dataset, i.e.
$$\max_{w \in \mathbb{R}^n;\ \|w\|_2 \le 1}\ \sum_{t=1}^{m} \mathrm{REV}(\langle w, x^t \rangle; v^t) \qquad \text{(MR)}$$
It is worth noting that we restrict ourselves to 1-Lipschitz pricing policies by only considering policies with $\|w\|_2 \le 1$. Bounding the Lipschitz constant of the pricing policy is important to ensure that the problem is stable and hence generalizable. We will contrast it with the unregularized version of (MR) in which the constraint $\|w\|_2 \le 1$ is omitted:
$$R^* = \max_{w \in \mathbb{R}^n}\ \sum_{t=1}^{m} \mathrm{REV}(\langle w, x^t \rangle; v^t) \qquad \text{(UMR)}$$
Without the Lipschitz constraint it is possible to come up with arbitrarily close datasets, in the sense that $\|x^t - \tilde{x}^t\| \le \epsilon$ and $|v^t - \tilde{v}^t| \le \epsilon$, generating vastly different revenue even as $\epsilon \to 0$. We will also show that (UMR) is APX-hard, i.e. it is NP-hard to approximate within $1 - \epsilon_0$ for some constant $\epsilon_0 > 0$.
Our Results Our main result is a polynomial time approximation scheme (PTAS) using dimensionality reduction. We present two versions of the same algorithm.
The first version of the PTAS has running time $O(e^{\mathrm{poly}(1/\epsilon)} \cdot \mathrm{poly}(n, m))$ and outputs an $L$-Lipschitz pricing policy with $L = O(\epsilon \sqrt{n})$ that is an $\epsilon m$-additive approximation of the optimal 1-Lipschitz pricing policy.

¹The choice of linear function is actually not very restrictive. A common trick in machine learning is to map the features to a different space and train a linear model on $\phi(x)$. For example if $d = 2$, the features are $(x_1, x_2)$. By mapping $\phi(x) = (1, x_1, x_2, x_1^2, x_2^2, x_1 x_2) \in \mathbb{R}^6$, and training a linear function on $\phi(x)$, we are actually optimizing over all quadratic functions on the original features. Similarly, we can optimize over any polynomial of degree $k$ or even more complex functions with an adequate mapping.
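As a small concrete instance of the feature mapping $\phi$ from the footnote (our illustration):

```python
import numpy as np

def quadratic_features(x):
    """phi(x) = (1, x1, x2, x1^2, x2^2, x1*x2) for a 2-dimensional x."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

# A linear pricing policy on quadratic_features(x) is a quadratic policy on x.
```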
The second version of the PTAS has running time
$O(n^{\mathrm{poly}(1/\epsilon)} \cdot \mathrm{poly}(n, m))$
and outputs a 1-Lipschitz pricing policy that is an $\epsilon m$-additive approximation of the optimal 1-Lipschitz pricing policy.
We complement this result by showing that the Myersonian regression problem (MR) is NP-hard using a reduction from 1-IN-3-SAT. While it is not surprising that solving Myersonian regression exactly is NP-hard given the discontinuity in the reward function, this had actually been left open by several previous works. In fact, the same reduction implies that under the Exponential Time Hypothesis (ETH) any algorithm approximating it within an $\epsilon m$ additive factor must run in time at least $e^{\Omega(\mathrm{poly}(1/\epsilon))}$, therefore ruling out a fully-polynomial time approximation scheme (FPTAS) for the problem. This hardness of approximation perfectly complements our algorithmic results, showing that our guarantees are essentially the best that one can hope for.
Finally we discuss stability and generalization of the problem. We show that (UMR) is unstable in the sense that arbitrarily small perturbations in the input can lead to completely different solutions. On the other hand (MR) is stable in the sense that the optimal solution varies continuously with the input.
We also discuss the setting in which there is an underlying distribution D on datapoints (x, v) and while we optimize on samples from D, we care about the loss with respect to the underlying distribution. We also discuss stability of our algorithms and how to extend them to other loss functions. Due to space constraints, most proofs are deferred to the Supplementary Material.
Related work  Our work is in the broad area of learning for revenue optimization. The papers in this area can be categorized along two axes: online vs batch learning and contextual vs non-contextual. In the online non-contextual setting, Kleinberg and Leighton [2003] give the optimal algorithm for a single buyer, which was later extended to optimal reserve pricing in auctions by Cesa-Bianchi et al. [2013]. In the online contextual setting there is a stream of recent work deriving optimal regret bounds for pricing (Amin et al. [2014], Cohen et al. [2016], Javanmard and Nazerzadeh [2016], Javanmard [2017], Lobel et al. [2017], Mao et al. [2018], Leme and Schneider [2018], Shah et al. [2019]). For batch learning in non-contextual settings there is a long line of work establishing tight sample complexity bounds for revenue optimization (Cole and Roughgarden [2014], Morgenstern and Roughgarden [2015, 2016]) as well as approximation algorithms for reserve price optimization (Paes Leme et al. [2016], Roughgarden and Wang [2019], Derakhshan et al. [2019]).
Our paper is in the setting of contextual batch learning. Medina and Mohri [2014a] started the work on this setting by showing generalization bounds via Rademacher complexity. They also observe that the loss function is discontinuous and non-convex and propose the use of a surrogate loss. They bound the difference between the pricing loss and the surrogate loss and design algorithms for minimizing the surrogate loss. Medina and Vassilvitskii [2017] design a pricing algorithm based on clustering, where first features are clustered and then a non-contextual pricing algorithm is used on each cluster. Shen et al. [2019] replace the pricing loss by a convex loss function derived from the theory of market equilibrium and argue that the clearing price is a good approximation of the optimal price in real datasets. A common theme in the previous papers is to replace the pricing loss by a more amenable loss function and give conditions under which the new loss approximates the pricing loss. Instead, here we study the pricing loss directly. We give the first hardness proof in this setting and also give a $(1-\epsilon)$-approximation without any conditions on the data other than bounded norm. Our approximation algorithms for this problem work by projecting down to a lower-dimensional linear subspace and solving the problem on this subspace. In this way, they are reminiscent of the area of compressed learning (Calderbank et al. [2009]), which studies whether it is possible to learn directly in a projected ("compressed") space. More generally, our algorithm fits into a large body of work which leverages the Johnson-Lindenstrauss lemma for designing efficient algorithms (see e.g. Linial et al. [1995] and Har-Peled et al. [2012]).
Hardness-of-approximation results have been established for non-contextual pricing problems with multiple buyers, e.g. Paes Leme et al. [2016], Roughgarden and Wang [2019]. Such hardness results hinge on the interaction between different buyers and don't translate to single-buyer settings. The hardness result in our paper is of a different nature.
2 Approximation Algorithms
The main ingredient in the design of our algorithms will be the Johnson-Lindenstrauss lemma:

Lemma 2.1 (Johnson-Lindenstrauss). Given a vector $x \in \mathbb{R}^n$ with $\|x\|_2 = 1$, if $\tilde{J}$ is a $k \times n$ matrix formed by taking $k$ random orthogonal vectors as rows for $k = O(\epsilon^{-2} \log(1/\delta))$ and $J = \sqrt{n/k} \cdot \tilde{J}$, then:
$$\Pr\big(\big|\|Jx\|_2 - 1\big| > \epsilon\big) \le \delta$$
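A sketch (ours) of such a projection in NumPy, obtaining $k$ random orthonormal rows via a QR decomposition:

```python
import numpy as np

def jl_projection(n, k, rng):
    """Random k x n matrix with orthonormal rows, scaled by sqrt(n/k)."""
    G = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(G)          # columns of Q: k orthonormal vectors in R^n
    return np.sqrt(n / k) * Q.T     # rows are the scaled orthogonal vectors

rng = np.random.default_rng(0)
x = rng.standard_normal(1000); x /= np.linalg.norm(x)
J = jl_projection(1000, 50, rng)
print(np.linalg.norm(J @ x))        # typically close to 1
```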
The following is a direct consequence of the JL lemma:

Lemma 2.2. Let $J$ be the JL-projection with $k = O(\epsilon^{-2} \log(1/\epsilon))$, let $w^*$ be the optimal solution to (MR), and let $x^t$ be a point in the dataset with $\langle w^*, x^t \rangle \ge \epsilon$. Then with probability at least $1 - \epsilon$ the following inequalities hold:
$$(1 - \epsilon) \cdot \|x^t\|_2 \le \|Jx^t\|_2 \le (1 + \epsilon) \cdot \|x^t\|_2$$
$$(1 - \epsilon) \cdot \langle w^*, x^t \rangle \le \langle Jw^*, Jx^t \rangle \le (1 + \epsilon) \cdot \langle w^*, x^t \rangle$$
PTAS - Version 1: For the first version of the algorithm, we randomly sample $1/\epsilon$ JL-projections $J$ with $k = O(\epsilon^{-2} \log(1/\epsilon))$ and search over an $\epsilon$-net of the projected space. For each projection, we define a set of discretized vectors as:
$$D = \{\hat{w} :\ \hat{w} = \epsilon^5 z \text{ for } z \in \mathbb{Z}^k,\ \|\hat{w}\|_2 \le 1 + \epsilon\}$$
Then we search for the vector $\hat{w} \in D$ that maximizes
$$\sum_{t=1}^{m} \mathrm{REV}(\langle \hat{w}, Jx^t \rangle; v^t) \qquad (1)$$
Over all projections, we output the vector $w = J^\top \hat{w}$ that maximizes the revenue.

Theorem 2.3. There is an algorithm with running time $O(e^{\mathrm{poly}(1/\epsilon)}\,\mathrm{poly}(n, m))$ that outputs a vector $w$ with $\|w\|_2 \le O(\epsilon \cdot \sqrt{n})$ such that:
$$\mathbb{E}\Big[\sum_{t} \mathrm{REV}(\langle w, x^t \rangle; v^t)\Big] \ge R^* - O(\epsilon m)$$
where $R^* = \sum_{t} \mathrm{REV}(\langle w^*, x^t \rangle; v^t)$ for the optimal $w^*$ with $\|w^*\|_2 \le 1$.
Proof. The running time follows from the fact that $|D| \le (1/\epsilon)^{O(k)} = e^{O(\mathrm{poly}(1/\epsilon))}$. We show the approximation guarantee in three steps:

Step 1: defining good points. Let $w^*$ be the optimal solution to (MR). Say that a datapoint $(x^t, v^t)$ is good if $\epsilon \le \langle w^*, x^t \rangle \le v^t$ and the event in Lemma 2.2 happens. If $G$ is the set of indices $t$ corresponding to good datapoints, then with at least $1/2$ probability:
$$\sum_{t \in G} \langle w^*, x^t \rangle \ge R^* - 2\epsilon m$$
This is true since the points with $\langle w^*, x^t \rangle < \epsilon$ can only affect the revenue by at most $\epsilon$ each, and for the remaining $m'$ points, each can fail to be good with probability at most $\epsilon$. The revenue loss in expectation is at most $m'\epsilon$, so by Markov's inequality it is at most $2m'\epsilon$ with $1/2$ probability.
Step 2: projection of the optimal solution. Define $w' = (1 - 2\epsilon) \cdot Jw^*$ and define $\hat{w}$ to be the vector in $D$ obtained by rounding all coordinates of $w'$ to the nearest multiple of $\epsilon^5$. For any good index $t \in G$ we have:
$$\langle \hat{w}, Jx^t \rangle = \langle \hat{w} - w', Jx^t \rangle + \langle w', Jx^t \rangle \le (1 + \epsilon)\epsilon^5 \sqrt{k} + (1 - 2\epsilon)\langle Jw^*, Jx^t \rangle \le (1 + \epsilon)\epsilon^5 \sqrt{k} + (1 - \epsilon)\langle w^*, x^t \rangle \le v^t$$
and hence that datapoint generates revenue since the price is below the value. And:
$$\langle \hat{w}, Jx^t \rangle = \langle \hat{w} - w', Jx^t \rangle + \langle w', Jx^t \rangle \ge -(1 + \epsilon)\epsilon^5 \sqrt{k} + (1 - 2\epsilon)\langle Jw^*, Jx^t \rangle \ge -(1 + \epsilon)\epsilon^5 \sqrt{k} + (1 - 5\epsilon)\langle w^*, x^t \rangle$$
Step 3: bounding the revenue. Finally, note that $\langle w, x^t \rangle = \langle J^\top \hat{w}, x^t \rangle = \langle \hat{w}, Jx^t \rangle$, so:
$$\sum_{t} \mathrm{REV}(\langle w, x^t \rangle; v^t) = \sum_{0 \le \langle \hat{w}, Jx^t \rangle \le v^t} \langle \hat{w}, Jx^t \rangle \ge \sum_{t \in G} \langle \hat{w}, Jx^t \rangle \ge (1 - 5\epsilon) \sum_{t \in G} \langle w^*, x^t \rangle - O(\epsilon m) \ge (1 - 5\epsilon)(R^* - 2m\epsilon) - O(\epsilon m) = R^* - O(\epsilon m)$$
Since we sample $1/\epsilon$ independent JL projections and for each, we find an $O(\epsilon m)$ additive approximation with probability at least $1/2$, our algorithm achieves expected revenue $R^* - O(\epsilon m)$, as desired.
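For illustration only (ours, not the authors' code), here is a direct transcription of PTAS Version 1; to keep the enumeration tractable we use a much coarser grid step than the $\epsilon^5$ required by the analysis:

```python
import itertools
import numpy as np

def rev(p, v):
    return max(p, 0.0) * (p <= v)

def ptas_v1(X, v, eps, rng):
    """Sketch of PTAS Version 1: random JL projections + grid search."""
    n = X.shape[1]
    k = max(1, int(np.ceil(eps ** -2 * np.log(1 / eps))))
    step = eps  # the analysis uses eps**5; eps keeps the net small here
    best_rev, best_w = -np.inf, None
    for _ in range(int(np.ceil(1 / eps))):
        G = rng.standard_normal((n, k))
        Q, _ = np.linalg.qr(G)
        J = np.sqrt(n / k) * Q.T            # JL projection with orthogonal rows
        JX = X @ J.T                        # projected features
        # Enumerate the eps-net of the ball of radius 1 + eps in R^k.
        r = int(np.floor((1 + eps) / step))
        for z in itertools.product(range(-r, r + 1), repeat=k):
            w_hat = step * np.array(z, dtype=float)
            if np.linalg.norm(w_hat) > 1 + eps:
                continue
            total = sum(rev(p, vt) for p, vt in zip(JX @ w_hat, v))
            if total > best_rev:
                best_rev, best_w = total, J.T @ w_hat   # w = J^T w_hat
    return best_w, best_rev

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20)); X /= np.linalg.norm(X, axis=1, keepdims=True)
v = rng.uniform(0.0, 1.0, size=50)
w, r = ptas_v1(X, v, eps=0.5, rng=rng)
```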
PTAS – Version 2  The main drawback of the first version of the PTAS is that we output an $\epsilon\sqrt{n}$-Lipschitz pricing policy that is an approximation to the optimal 1-Lipschitz pricing policy. With an increase in running time, it is possible to obtain the same approximation with a 1-Lipschitz pricing policy (i.e. $\|w\|_2 \le 1$). For that we will increase the dimension of the JL projection to $k = O(\epsilon^{-2} \log(n/\epsilon))$. This will allow us to have the following conditions hold simultaneously for all datapoints with probability at least $1 - \epsilon$:
$$(1 - \epsilon) \cdot \|x^t\|_2 \le \|Jx^t\|_2 \le (1 + \epsilon) \cdot \|x^t\|_2$$
$$\langle w^*, x^t \rangle - \epsilon^2 \le \langle Jw^*, Jx^t \rangle \le \langle w^*, x^t \rangle + \epsilon^2$$
This follows from the same argument as in Lemma 2.2, taking the union bound over all points. Now we repeat the following process $(1/\epsilon)^{O(k \log(1/\epsilon))}$ times:
Choose a random point $\hat{w}$ in the unit ball in $\mathbb{R}^k$. For each such $\hat{w}$ we define the important set as $t \in \hat{G}(\hat{w})$ if $10\epsilon \le \langle \hat{w}, Jx^t \rangle \le v^t$. Now, we check (by solving a convex program) if there exists a vector $w \in \mathbb{R}^n$ with $\|w\|_2 \le 1$ such that:
$$\frac{\langle \hat{w}, Jx^t \rangle}{1 + 5\epsilon} \le \langle w, x^t \rangle \le v^t, \qquad \forall t \in \hat{G}(\hat{w})$$
If it exists, call it $w(\hat{w})$; otherwise discard $\hat{w}$. Over all $(1/\epsilon)^{O(k \log(1/\epsilon))}$ iterations, for all vectors $\hat{w}$ that weren't discarded, choose the one maximizing the objective (1) and output $w(\hat{w})$.
Theorem 2.4. There is an algorithm with running time $O(n^{\mathrm{poly}(1/\epsilon)}\,\mathrm{poly}(n, m))$ that outputs a vector $w$ with $\|w\|_2 \le 1$ such that:
$$\mathbb{E}\Big[\sum_{t} \mathrm{REV}(\langle w, x^t \rangle; v^t)\Big] \ge R^* - O(\epsilon m)$$
where $R^* = \sum_{t} \mathrm{REV}(\langle w^*, x^t \rangle; v^t)$ for the optimal $w^*$ with $\|w^*\|_2 \le 1$.
Proof. Step 1: When $\hat{w}$ lies close to the projection of the optimum, the convex program is feasible.

Let $w' = (1 - 2\epsilon) \cdot Jw^*$. If $\|\hat{w} - w'\| \le \epsilon^5$ we will show that the convex program is solvable. For $t \in \hat{G}(\hat{w})$ we have
$$\langle w^*, x^t \rangle \le \frac{1}{1 - 2\epsilon}\langle w', Jx^t \rangle + \epsilon^2 \le (1 + 3\epsilon)(\langle \hat{w}, Jx^t \rangle + (1 + \epsilon)\epsilon^5) + \epsilon^2 \le (1 + 5\epsilon) v^t$$
and
$$\langle w^*, x^t \rangle \ge \frac{1}{1 - 2\epsilon}\langle w', Jx^t \rangle - \epsilon^2 \ge (1 + 2\epsilon)\langle w', Jx^t \rangle - \epsilon^2 \ge (1 + 2\epsilon)(\langle \hat{w}, Jx^t \rangle - (1 + \epsilon)\epsilon^5) - \epsilon^2 > \langle \hat{w}, Jx^t \rangle$$
Thus $\frac{1}{1 + 5\epsilon} \cdot w^*$ is a solution to the convex program.

Step 2: When $\hat{w}$ lies close to the projection of the optimum, any solution to the convex program achieves a good approximation.
If $\|\hat{w} - w'\| \le \epsilon^5$ then for each data point $x^t$ with $t \in \hat{G}(\hat{w})$:
$$\langle \hat{w}, Jx^t \rangle = \langle \hat{w} - w', Jx^t \rangle + \langle w', Jx^t \rangle \ge -(1 + \epsilon)\epsilon^5 + (1 - 2\epsilon)\langle Jw^*, Jx^t \rangle \ge -(1 + \epsilon)\epsilon^5 + (1 - 5\epsilon)\langle w^*, x^t \rangle$$
Note the last step holds because $\langle w^*, x^t \rangle \ge \langle \hat{w}, Jx^t \rangle \ge 10\epsilon$ and $\langle Jw^*, Jx^t \rangle \ge \langle w^*, x^t \rangle - \epsilon^2$. Next, we deal with the datapoints with $t \notin \hat{G}(\hat{w})$. For these datapoints, either $\langle \hat{w}, Jx^t \rangle < 10\epsilon$, in which case
$$\langle w^*, x^t \rangle \le (1 + 5\epsilon)\langle w', Jx^t \rangle + \epsilon^2 \le (1 + 5\epsilon)(\langle \hat{w}, Jx^t \rangle + (1 + \epsilon)\epsilon^5) + \epsilon^2 \le 11\epsilon$$
or $\langle \hat{w}, Jx^t \rangle > v^t \ge 10\epsilon$, in which case
$$\langle w^*, x^t \rangle \ge \frac{1}{1 - 2\epsilon}\langle w', Jx^t \rangle - \epsilon^2 \ge (1 + 2\epsilon)\langle w', Jx^t \rangle - \epsilon^2 \ge (1 + 2\epsilon)(\langle \hat{w}, Jx^t \rangle - (1 + \epsilon)\epsilon^5) - \epsilon^2 > (1 + 2\epsilon)(v^t - (1 + \epsilon)\epsilon^5) - \epsilon^2 > v^t$$
Thus, the total revenue achieved by $w(\hat{w})$ is at least
$$\frac{1}{1 + 5\epsilon} \sum_{t \in \hat{G}(\hat{w})} \Big( -2\epsilon^5 + (1 - 5\epsilon)\,\mathrm{REV}(\langle w^*, x^t \rangle; v^t) \Big) \ge -2\epsilon^5 m + (1 - 10\epsilon) \sum_{t \in \hat{G}(\hat{w})} \mathrm{REV}(\langle w^*, x^t \rangle; v^t)$$
$$\ge -2\epsilon^5 m + (1 - 10\epsilon)\Big( \sum_{t} \mathrm{REV}(\langle w^*, x^t \rangle; v^t) - 11\epsilon m \Big) \ge \sum_{t} \mathrm{REV}(\langle w^*, x^t \rangle; v^t) - 25\epsilon m$$
Step 3: The algorithm finds a good approximation with probability $1 - O(\epsilon)$. It suffices to show that our algorithm will choose some $\hat{w}$ such that $\|\hat{w} - w'\| \le \epsilon^5$ with probability $1 - O(\epsilon)$. Note $\|w'\|_2 \le (1 - 2\epsilon)(1 + \epsilon) \le 1 - \epsilon$. Thus the probability that a single $\hat{w}$ lands within distance $\epsilon^5$ of $w'$ is at least $\epsilon^{5k}$. Since we choose $(1/\epsilon)^{O(k \log(1/\epsilon))}$ different points $\hat{w}$ independently at random, the probability that at least one of them lands within distance $\epsilon^5$ of $w'$ is at least $1 - \epsilon$.
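The feasibility check at the heart of Version 2 is a second-order cone program; the following is a sketch (ours) using cvxpy, where X_G and v_G denote the feature rows and bids of the important set and lower plays the role of $\langle \hat{w}, Jx^t \rangle / (1 + 5\epsilon)$:

```python
import cvxpy as cp

def feasibility_check(X_G, v_G, lower):
    """Find w with ||w||_2 <= 1 and lower_t <= <w, x^t> <= v^t for all t in G-hat.

    Returns a feasible w, or None if the convex program is infeasible.
    """
    n = X_G.shape[1]
    w = cp.Variable(n)
    constraints = [cp.norm(w, 2) <= 1, X_G @ w >= lower, X_G @ w <= v_G]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return w.value if problem.status == cp.OPTIMAL else None
```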
3 Hardness of approximation
Unlike $\ell_2$ and $\ell_1$ regression, Myersonian regression is NP-hard. We prove two hardness results. First we show that without the assumption $\|w\|_2 \le 1$, achieving a constant factor approximation is NP-hard. Then we show that under the Exponential Time Hypothesis (ETH), any algorithm that achieves an $\epsilon m$-additive approximation for Myersonian regression must run in time at least $\exp(\Omega(\epsilon^{-1/6}))$.
1-in-3-SAT  We will rely on reductions from the 1-IN-3-SAT problem, which is NP-complete. The input to 1-IN-3-SAT is an expression in conjunctive normal form with 3 literals per clause (i.e. a collection of clauses of the type $X_i \vee X_j \vee X_k$). The problem is to determine if there is a truth assignment such that exactly one literal in each clause is true (and the remaining are false).
GAP 1-in-3-SAT  We will need a slightly stronger hardness result: 1-in-3-SAT is not only hard to solve exactly, but it is hard to approximate the maximum number of clauses that can be satisfied. In particular, there are constants $0 < c_1 < c_2 \le 1$ such that given a 1-in-3-SAT instance, it is NP-hard to distinguish the following two cases:
• At most a $c_1$-fraction of the clauses can be satisfied
• At least a $c_2$-fraction of the clauses can be satisfied
ETH  The Exponential Time Hypothesis says that 3-SAT with $N$ variables can't be solved in time $O(2^{cN}\mathrm{poly}(N))$ for some constant $c > 0$. Since there is a linear time reduction between 3-SAT and 1-IN-3-SAT and 1-IN-3-SAT is NP-complete, ETH implies that there is no $O(2^{cN}\mathrm{poly}(N))$ time algorithm for 1-IN-3-SAT.
Lemma 3.1. There exists a constant $\epsilon > 0$ for which it is possible to reduce (in poly-time) an instance of $(c_1, c_2)$-GAP 1-in-3-SAT to computing a $(1 - \epsilon)$-approximation for an instance of the unregularized Myersonian regression problem (UMR).

Theorem 3.2. There is some constant $\epsilon > 0$ for which obtaining a $(1 - \epsilon)$-approximation for the unregularized Myersonian regression problem (UMR) is NP-hard.
The proof follows directly from Lemma 3.1 and the NP-hardness of GAP-1-IN-3-SAT. The previous result rules out a PTAS for (UMR). In contrast, we will see that while (MR) is still NP-hard to solve exactly, it admits a PTAS; however, runtime that is superpolynomial in $1/\epsilon$ is necessary.

Lemma 3.3. It is possible to transform (in poly-time) an instance of 1-IN-3-SAT with $N$ variables into an instance of Myersonian regression with the promise $\|w\|_2 \le 1$ and $n = O(N)$ and $m = O(N^5)$, in such a way that a satisfiable 1-IN-3-SAT instance will map to an instance of Myersonian regression with revenue $R$ (where $R = O(N^{2.5})$), while any unsatisfiable instance will map to an instance with revenue at most $R - 0.5N^{0.5}$.
If we assume ETH, we obtain a bound on the runtime of any approximation algorithm:

Theorem 3.4. Under ETH, any algorithm that achieves an $\epsilon m$-additive (or $(1 - \epsilon)$-multiplicative) approximation for Myersonian regression must run in time at least $2^{\Omega(\epsilon^{-1/6})}\,\mathrm{poly}(n, m)$.
Proof. Assume towards contradiction that there is an approximation algorithm for Myersonian regression with running time $O(2^{c\,\epsilon^{-1/6}}\mathrm{poly}(n, m))$, for the constant $c$ in the definition of ETH.

Then, for an instance of 1-IN-3-SAT with $N$ variables, consider the transformation in Lemma 3.3 and apply the approximation algorithm with $\epsilon = O(1/N^6)$. Such an approximation algorithm would run in time $O(2^{cN}\mathrm{poly}(N))$ and distinguish between the satisfiable and unsatisfiable cases of 1-IN-3-SAT, contradicting ETH.
4 Stability, Generalization and Extensions
We start by commenting on the importance of the constraint $\|w\|_2 \le 1$ imposed on the problem (MR), which is closely related to stability and generalization.
Offset term  It will be convenient to allow a constant term in the pricing loss, i.e. we will look at pricing functions of the type:
$$x \mapsto w_1 + \sum_{i=2}^{n} w_i x_i$$
This is equivalent to assuming that all the datapoints have $x^t_1 = 1$ and $\|x^t\|_2 \le \sqrt{2}$. We renormalize such that we still have $\sum_{i=2}^{n} (x^t_i)^2 \le 1$. We will make this assumption for the rest of this section.
We note that this assumption doesn't affect the results in the previous sections. The positive results remain unchanged since we don't have any assumption on the data other than the norm being bounded by a constant. Our hardness results can be easily adapted to the setting with an offset term: we can essentially force the constant term to be very small by adding $\Omega(N^{103})$ data points with $v^t = 1/N^{100}$, $x^t_1 = 1$ and all other coordinates 0.
Stability  We start by discussing the constraint $\|w\|_2 \le 1$ imposed on the problem (MR). Without this constraint, it is possible to completely change the objective function with a tiny perturbation in the problem data. Let $R^*$ be the optimal revenue in the unregularized Myersonian regression (UMR) for some instance $(x^t, v^t)$. A natural upper bound on $R^*$ is the maximum welfare, given by $W = \sum_{t=1}^{m} v^t$. Typically $R^* < W$. Consider such an instance. For any fixed $\delta > 0$ consider the following two instances:
• $\tilde{x}^t = (x^t, 0) \in \mathbb{R}^{n+1}$
• $\bar{x}^t = (x^t, \delta v^t) \in \mathbb{R}^{n+1}$
The instances $(\tilde{x}^t, v^t)_{t=1..m}$ and $(\bar{x}^t, v^t)_{t=1..m}$ are very close to each other, in the sense that the labels are the same and the features satisfy $\|\tilde{x}^t - \bar{x}^t\| \le \delta$ for all $t$. However, the optimal revenue of $(\tilde{x}^t, v^t)_{t=1..m}$ under (UMR) is $R^*$, while the optimal revenue of $(\bar{x}^t, v^t)_{t=1..m}$ is $W$, by choosing $w = (0, \ldots, 0, 1/\delta)$. This is true even as $\delta \to 0$. On the other hand, the solution of the regularized problem (MR) is Lipschitz-continuous in the data.

Theorem 4.1. Consider two instances $(\tilde{x}^t, \tilde{v}^t)_{t=1..m}$ and $(\bar{x}^t, \bar{v}^t)_{t=1..m}$ such that $\|\tilde{x}^t - \bar{x}^t\| \le \delta$ and $|\tilde{v}^t - \bar{v}^t| \le \delta$ for all $t$. Then, if $\tilde{R}$ and $\bar{R}$ are the respective solutions to (MR):
$$|\tilde{R} - \bar{R}| \le O(\delta m)$$
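A small numeric check of this instability on a synthetic instance (our construction, with $\delta = 10^{-6}$):

```python
import numpy as np

def rev_total(w, X, v):
    p = X @ w
    return float(np.sum(np.where((p >= 0) & (p <= v), p, 0.0)))

rng = np.random.default_rng(1)
m, n, delta = 100, 5, 1e-6
X = rng.standard_normal((m, n)); X /= np.linalg.norm(X, axis=1, keepdims=True)
v = rng.uniform(0.1, 1.0, size=m)

X_bar = np.hstack([X, delta * v[:, None]])      # append the delta * v^t coordinate
w = np.zeros(n + 1); w[-1] = 1.0 / delta        # unbounded-norm policy
print(rev_total(w, X_bar, v), v.sum())          # extracts the full welfare W
```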
Uniform Convergence and Generalization  To understand generalization, we are concerned with the performance of the algorithm on a distribution $D$ that generates datapoints $(x^t, v^t)$. We will sample $m$ points from this distribution and obtain a dataset $S = \{(x^t, v^t);\ t = 1..m\}$. We want to compare, across all pricing policies $w$, the objective function on the sample:
$$F_S(w) = \frac{1}{m} \sum_{t=1}^{m} \mathrm{REV}(\langle w, x^t \rangle; v^t)$$
with the performance on the original distribution:
$$F_D(w) = \mathbb{E}_{(x,v) \sim D}\big[\mathrm{REV}(\langle w, x \rangle; v)\big]$$
Medina and Mohri [2014a] provide bounds for $|F_S(w) - F_D(w)|$ by studying the empirical Rademacher complexity of the pricing function. The following statement follows directly from Theorem 3 in their paper. Note that while their theorem bounds only one direction, the same proof also works for the other direction.

Theorem 4.2 (Medina and Mohri [2014a]). For any $\delta > 0$ it holds with probability $1 - \delta$ over the choice of a sample $S$ of size $m$ that:
$$|F_S(w) - F_D(w)| \le O\left(\sqrt{\frac{n \log(m/n) + \log(1/\delta)}{m}}\right)$$
Corollary 4.3. Let $w_S$ be the output of the ERM algorithm on a sample $S$ of size $m = O(\epsilon^{-2}[n \log(n/\delta) + \log(1/\delta)])$. Then with probability $1 - \delta$ we have:
$$F_D(w_S) \ge \max_{\|w\|_2 \le 1} F_D(w) - O(\epsilon)$$
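A quick Monte-Carlo illustration of this uniform-convergence behavior on a synthetic distribution (our setup, not from the paper):

```python
import numpy as np

def F(w, X, v):
    p = X @ w
    return float(np.mean(np.where((p >= 0) & (p <= v), p, 0.0)))

rng = np.random.default_rng(2)
n, m, big = 10, 500, 200_000
w = rng.standard_normal(n); w /= np.linalg.norm(w)

def draw(size):
    X = rng.standard_normal((size, n)); X /= np.linalg.norm(X, axis=1, keepdims=True)
    v = rng.uniform(0.0, 1.0, size=size)
    return X, v

X_S, v_S = draw(m)            # sample objective F_S(w)
X_D, v_D = draw(big)          # Monte-Carlo proxy for F_D(w)
print(abs(F(w, X_S, v_S) - F(w, X_D, v_D)))   # small, shrinking with m
```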
Extensions to other loss functions  While our results are phrased in terms of the pricing loss, they hold for any lower-semi-Lipschitz reward function, i.e. any function such that:
$$R(p - \epsilon) \ge R(p) - \epsilon$$
An important example studied in Medina and Mohri [2014a], Shen et al. [2019] is the revenue of a second price auction with reserve price $p$. Given the two highest bids $v_1 \ge v_2$, the revenue function is written as:
$$\mathrm{SPA}(p; v_1, v_2) = \max(v_2, p) \cdot \mathbf{1}\{p \le v_1\}$$
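In code, this reward is just as easy to plug into the empirical objective as REV (a sketch of ours):

```python
def spa_rev(p, v1, v2):
    """Second-price auction with reserve p; v1 >= v2 are the two highest bids."""
    return max(v2, p) * (p <= v1)
```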
5 Conclusion
We give the first approximation algorithm for learning a linear pricing function without any assumption on the data other than normalization. This provides a key missing component to the field of learning for revenue optimization, where ERM was shown to be optimal in Medina and Mohri [2014a] but there were no algorithms with provable guarantees for it.
Our algorithm is polynomial in the number of feature dimensions $n$ and in the number of datapoints $m$, but exponential in the accuracy parameter $\epsilon$. We show that this exponential dependency on $\epsilon$ is necessary.
In this paper we assume that the bids in the dataset represent the buyer's true willingness to pay, as in Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]. An interesting avenue of investigation for future work is to understand how strategic buyers would change their bids in response to a contextual batch learning algorithm and how to design algorithms that are aware of strategic responses. This is a well studied problem in non-contextual online learning (Amin et al. [2013], Medina and Mohri [2014b], Drutsa [2017], Vanunts and Drutsa [2019], Nedelec et al. [2019]) as well as in online contextual learning (Amin et al. [2014], Golrezaei et al. [2019]). Formulating a model of strategic response to batch learning algorithms is itself open.
Broader Impact Statement
While our work is largely theoretical, we feel it can have downstream impact in the design of better marketplaces such as those for internet advertisement. Better pricing can increase both the efficiency of the market and the revenue of the platform. The latter is important since the revenue of platforms keeps such services (e.g. online newspapers) free for most users.
Acknowledgments and Disclosure of Funding
No funding to disclose. The authors would like to thank Andrés Muñoz Medina for helpful discussions. | 1. What is the focus and contribution of the paper regarding learning optimal contextual prices?
2. What are the strengths of the proposed approach, particularly in terms of its formulation, hardness, and approximation?
3. What are the weaknesses of the paper, especially regarding its assumptions and benchmark choices?
4. Do you have any concerns about the terminology used in the paper, such as "Myersonian Regression"?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This work considers the problem of learning optimal contextual prices from a set of allocation/bid samples that would arise from a user's bids within an auction. Prior work has addressed this problem significantly, and shown that when generalizable, optimal prices could be computed based on the empirical distribution provided, but has not shown how to do this efficiently without assumptions on data. The work presents an approach (Myersonian Regression) that represents more direct learning of estimated losses relating to revenue in auctions, at least relative to the benchmark of Lipschitz-smooth pricing policies. The work accomplishes this through Johnson-Lindenstrauss dimensionality reduction and sampling, then solving optimally on the smaller sample. The benchmark used is the optimal 1-Lipschitz pricing policy, which, due to the resulting smoothness of the policy, becomes an easier target for dimensionality reduction through Johnson-Lindenstrauss.
Strengths
The work covers formulation, hardness and approximation, with fewer distributional assumptions than prior work. The approach of moving the smoothness assumptions from the distribution to the benchmark is interesting in settings where either, for practical reasons, there are complexity restrictions on the actual policy, or it can be shown that there is usually little loss from simplicity or smoothness assumptions on the mechanism.
Weaknesses
The terminology "Myersonian Regression" suggests a more direct tie-in to Myersonian virtual values / amortization of expected revenue (x(v - (1-F)/f)). I do not think this approach is leveraging that amortization - if it is leveraging it and I missed the connection, more discussion of its contribution should be shown. The results are direct consequences of the assumption of using 1-Lipschitz pricing policies as the benchmark (for which the leveraged Johnson-Lindenstrauss sampling approach works). However, this is a very strong assumption. The optimal auction in simple settings is often a reservation price, which is not a 1-Lipschitz pricing policy (and is never once discussed in the work). The work would be made stronger with empirical justification that optimal 1-Lipschitz pricing policies are close to optimal pricing policies in the given setting. This work claims to have fewer distributional assumptions than other work, but that is really because the distributional assumptions have been moved from the algorithm into the benchmark, without a full discussion of the benchmark performance vs the full optimal benchmark. AFTER RESPONSE: The authors addressed my (misplaced) concern over applicability of 1-Lipschitz pricing policies.
NIPS | Title
Myersonian Regression
Abstract
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function f : R ! R that well approximates a set of points (xi, vi) 2 R ⇥ [0, 1] in the following sense: we receive a loss of vi when f(xi) > vi and a loss of vi f(xi) when f(xi) vi. This arises naturally in the economic application of designing a pricing policy for differentiated items (where the loss is the gap between the performance of our policy and the optimal Myerson prices). We show that Myersonian regression is NP-hard to solve exactly and furthermore that no fully polynomial-time approximation scheme exists for Myersonian regression conditioned on the Exponential Time Hypothesis being true. In contrast to this, we demonstrate a polynomial-time approximation scheme for Myersonian regression that obtains an ✏m additive approximation to the optimal possible revenue and can be computed in time O(exp(poly(1/✏))poly(m,n)). We show that this algorithm is stable and generalizes well over distributions of samples.
1 Introduction
In economics, the Myerson price of a distribution is the price that maximizes the revenue when selling to a buyer whose value is drawn from that distribution. Mathematically, if F is the cdf of the distribution, then the Myerson price is
p ⇤ = argmax
p p · (1 F (p))
In many modern applications such as online marketplaces and advertising, the seller doesn’t just set one price p but must instead price a variety of differentiated products. In these settings, a seller must design a policy to price items based on their features in order to optimize revenue. Thus, in this paper we study the contextual learning version of Myersonian pricing. More formally, we get to observe a training dataset {(xt, vt)}t=1..m representing the bids of a buyer on differentiated products. We will assume that the bids vt 2 [0, 1] come from a truthful auction and hence represent the maximum value a buyer is willing to pay for the product. Each product is represented by a vector of features xt 2 Rn normalized such that kxtk2 1. The goal of the learner is to design a policy that suggests a price (xt) for each product xt with the goal of maximizing the revenue on the underlying distribution D from which the pairs (xt, vt) are drawn. In practice, one would train a pricing policy on historical bids (training) and apply this policy on future products (testing).
Mathematically, we want to solve max 2P E(x,v)⇠D[REV( (x); v)] (PP)
where P is a class of pricing policies and REV is the revenue function (see Figure 1) REV(p; v) = max(p, 0) · 1{p v}
having only access to samples of D.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Medina and Mohri [2014a] establish that if the class of policies P has good generalization properties (defined in terms of Rademacher complexity) then it is enough to solve the problem on the empirical distribution given by the samples. The policy that optimizes over the empirical distribution is typically called Empirical Risk Minimization (ERM).
The missing piece in this puzzle is the algorithm, i.e. how to solve the ERM problem. Previous papers (Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]) approached this problem by designing heuristics for ERM and giving conditions on the data under which the heuristics perform well. In this paper we give the first provable approximation algorithm for the ERM problem without assumptions on the data. We also establish hardness of approximation that complements our algorithmic results. We believe these are the first hardness results for this problem. Even establishing whether exactly solving ERM was NP-hard for a reasonable class of pricing policies was open prior to this work.
Myersonian regression We now define formally the ERM problem for linear pricing policies1, which we call Myersonian regression. Recall that the dataset is of the form {(xt, vt)}t=1..m with x t 2 Rn, kxtk2 1 and vt 2 [0, 1]. The goal is to find a linear pricing policy x 7! hw, xi with kwk2 1 that maximizes the revenue on the dataset, i.e.
max w2Rn;kwk21
mX
t=1
REV(hw, xti; vt) (MR)
It is worth noting that we restrict ourselves to 1-Lipschitz pricing policies by only considering policies with kwk2 1. Bounding the Lipschitz constant of the pricing policy is important to ensure that the problem is stable and hence generalizable. We will contrast it with the unregularized version of (MR) in which the constraint kwk2 1 is omitted:
R ⇤ = max
w2Rn
mX
t=1
REV(hw, xti; vt) (UMR)
Without the Lipschitz constraint it is possible to come up with arbitrarily close datasets in the sense that kxt x̃tk ✏ and |vt ṽt| ✏ generating vastly different revenue even as ✏ ! 0. We will also show that (UMR) is APX-hard, i.e. it is NP-hard to approximate within 1 ✏0 for some constant ✏0 > 0.
Our Results Our main result is a polynomial time approximation scheme (PTAS) using dimensionality reduction. We present two versions of the same algorithm.
The first version of the PTAS has running time
O(epoly(1/✏) · poly(n,m)) 1The choice of linear function is actually not very restrictive. A common trick in machine learning is to map the features to a different space and train a linear model on (x). For example if d = 2, the features are (x1, x2). By mapping (x) = (1, x1, x2, x21, x22, x1x2) 2 R6, and training a linear function on (x), we are actually optimizing over all quadratic functions on the original features. Similarly, we can optimize over any polynomial of degree k or even more complex functions with an adequate mapping.
and outputs an L-Lipschitz pricing policy with L = O(✏ p n) that is an ✏m-additive approximation of the optimal 1-Lipschitz pricing policy.
The second version of the PTAS has running time
O(npoly(1/✏) · poly(n,m))
and outputs a 1-Lipschitz pricing policy that is an ✏m-additive approximation of the optimal 1- Lipschitz pricing policy.
We complement this result by showing that the Myersonian regression problem (MR) is NP-hard using a reduction from 1-IN-3-SAT. While it is not surprising that solving Myersonian regression exactly is NP-hard given the discontinuity in the reward function, this has actually been left open by several previous works. In fact, the same reduction implies that under the Exponential Time Hypothesis (ETH) any algorithm approximating it within an ✏m additive factor must run in time at least e⌦(poly(1/✏)), therefore ruling out a fully-polynomial time approximation scheme (FPTAS) for the problem. This hardness of approximation perfectly complements our algorithmic results, showing that our guarantees are essentially the best that one can hope for.
Finally we discuss stability and generalization of the problem. We show that (UMR) is unstable in the sense that arbitrarily small perturbations in the input can lead to completely different solutions. On the other hand (MR) is stable in the sense that the optimal solution varies continuously with the input.
We also discuss the setting in which there is an underlying distribution D on datapoints (x, v) and while we optimize on samples from D, we care about the loss with respect to the underlying distribution. We also discuss stability of our algorithms and how to extend them to other loss functions. Due to space constraints, most proofs are deferred to the Supplementary Material.
Related work Our work is in the broad area of learning for revenue optimization. The papers in this area can be categorized along two axis: online vs batch learning and contextual vs non-contextual. In the online non-contextual setting, Kleinberg and Leighton [2003] give the optimal algorithm for a single buyer which was later extended to optimal reserve pricing in auctions in Cesa-Bianchi et al. [2013]. In the online contextual setting there is a stream of recent work deriving optimal regret bounds for pricing (Amin et al. [2014], Cohen et al. [2016], Javanmard and Nazerzadeh [2016], Javanmard [2017], Lobel et al. [2017], Mao et al. [2018], Leme and Schneider [2018], Shah et al. [2019]). For batch learning in non-contextual settings there is a long line of work establishing tight sample complexity bounds for revenue optimization (Cole and Roughgarden [2014], Morgenstern and Roughgarden [2015, 2016]) as well as approximation algorithms to reserve price optimization (Paes Leme et al. [2016], Roughgarden and Wang [2019], Derakhshan et al. [2019]).
Our paper is in the setting of contextual batch learning. Medina and Mohri [2014a] started the work on this setting by showing generalization bounds via Rademacher complexity. They also observe that the loss function is discontinuous and non-convex and propose the use of a surrogate loss. They bound the difference between the pricing loss and the surrogate loss and design algorithms for minimizing the surrogate loss. Medina and Vassilvitskii [2017] design a pricing algorithm based on clustering, where first features are clustered and then a non-contextual pricing algorithm is used on each cluster. Shen et al. [2019] replaces the pricing loss by a convex loss function derived from the theory of market equilibrium and argue that the clearing price is a good approximation of the optimal price in real datasets. A common theme in the previous papers is to replace the pricing loss by a more amenable loss function and give conditions under which the new loss approximates the pricing loss. Instead here we study the pricing loss directly. We give the first hardness proof in this setting and also give a (1 ✏)-approximation without any conditions on the data other than bounded norm. Our approximation algorithms for this problem works by projecting down to a lower-dimensional linear subspace and solving the problem on this subspace. In this way, it is reminiscent of the area of compressed learning (Calderbank et al. [2009]), which studies if it is possible to learn directly in a projected (“compressed”) space. More generally, our algorithm fits into a large body of work which leverages the Johnson-Lindenstrauss lemma for designing efficient algorithms (see e.g. Linial et al. [1995] and Har-Peled et al. [2012]).
Hardness of approximation have been established for non-contextual pricing problems with multiple buyers, e.g Paes Leme et al. [2016], Roughgarden and Wang [2019]. Such hardness results hinge on
the interaction between different buyers and don’t translate to single-buyer settings. The hardness result in our paper is of a different nature.
2 Approximation Algorithms
The main ingredient in the design of our algorithms will be the Johnson-Lindenstrauss lemma: Lemma 2.1 (Johnson-Lindenstrauss). Given a vector x 2 Rn with kxk2 = 1, if J̃ is a k ⇥ n matrix formed by taking k random orthogonal vectors as rows for k = O(✏ 2 log 1) and J = p n/k · J̃ ,
then:
Pr(|kJxk2 1| > ✏)
The following is a direct consequence of the JL lemma: Lemma 2.2. Let J be the JL-projection with k = O(✏ 2 log(1/✏)), w⇤ be the optimal solution to (MR) and xt is a point in the dataset with hw⇤, xti ✏ then with probability at least 1 ✏ the following inequalities hold:
(1 ✏) · kxtk2 kJxtk2 (1 + ✏) · kxtk2 (1 ✏) · hw⇤, xti hJw⇤, Jxti (1 + ✏) · hw⇤, xti
PTAS - Version 1: For the first version of the algorithm, we randomly sample 1/✏ JL-projections J with k = O(✏ 2 log(1/✏)) and search over an ✏-net of the projected space. For each projection, we define a set of discretized vectors as:
D = {ŵ; ŵ = ✏5z for z 2 Zk, kŵk2 1 + ✏} Then we search for the vector ŵ 2 D that maximizes
mX
t=1
REV(hŵ, Jxti; vt) (1)
Over all projections, we output the vector w = J>ŵ that maximizes the revenue. Theorem 2.3. There is an algorithm with running time O(epoly(1/✏)poly(n,m)) that outputs a vector w with kwk2 O(✏ · p n) such that:
E " X
t
REV(hw, xti; vt) # R⇤ O(✏m)
where R⇤ = P
t REV(hw⇤, xti; vt) for the optimal w⇤ with kw⇤k2 1.
Proof. The running time follows from the fact that |D| (1/✏)O(k) = eO(poly(1/✏)). We show the approximation guarantee in three steps:
Step 1: defining good points. Let w⇤ be the optimal solution to (MR). Say that a datapoint (xt, vt) is good if ✏ hw⇤, xti vt and the event in Lemma 2.2 happens. If G is the set of indices t corresponding to good datapoints, then with at least 1/2 probability:
X t2G hw⇤, xti R⇤ 2✏m
This is true since the points with hw⇤, xti < ✏ can only affect the revenue by at most ✏ each and for the remaining m0 points, each can fail to be good with probability at most ✏. The revenue loss in expectation is at most m0✏, so by Markov’s inequality it is at most 2m0✏ with 1/2 probability.
Step 2: projection of the optimal solution. Define w0 = (1 2✏) · Jw⇤ and define ŵ to be the vector in D obtained by rounding all coordinates of w0 to the nearest multiple of ✏5. For any good index t 2 G we have:
hŵ, Jxti = hŵ w0, Jxti+ hw0, Jxti (1 + ✏)✏5 p k + (1 2✏)hJw⇤, Jxti
(1 + ✏)✏5 p k + (1 ✏)hw⇤, xti vt
and hence that datapoint generates revenue since the price is below the value. And:
hŵ, Jxti = hŵ w0, Jxti+ hw0, Jxti (1 + ✏)✏5 p k + (1 2✏)hJw⇤, Jxti
(1 + ✏)✏5 p k + (1 5✏)hw⇤, xti
Step 3: bounding the revenue. Finally, note that
hw, xti = hJ>ŵ, xti = hŵ, Jxti
so: X
t
REV(hw, xti; vt) = X
0hŵ,Jxtivt hŵ, Jxti
X t2G hŵ, Jxti (1 5✏) X t2G hw⇤, xti O(✏)
(1 5✏)(R⇤ 2m✏) O(✏m) = R⇤ O(✏m)
Since we sample 1/✏ independent JL projections and for each, we find an O(✏m) additive approximation with probability at least 1/2, our algorithm achieves expected revenue R⇤ O(✏m), as desired.
PTAS – Version 2 The main drawback of the first version of the PTAS is that we output an ✏ p nLipschitz pricing policy that is an approximation to the optimal 1-Lipschitz pricing policy. With an increase in running time, it is possible to obtain the same approximation with an 1-Lipschitz pricing policy (i.e. kwk2 1). For that we will increase the dimension of the JL projection to k = O(✏ 2 log(n/✏)). This will allow us to have the following conditions hold simultaneously for all datapoints with probability at least 1 ✏:
(1 ✏) · kxtk2 kJxtk2 (1 + ✏) · kxtk2 hw⇤, xti ✏2 hJw⇤, Jxti hw⇤, xti+ ✏2
This follows from the same argument in Lemma 2.2, taking the Union Bound over all points. Now we repeat the following process (1/✏)O(k log(1/✏)) times:
Choose a random point ŵ in the unit ball in Rk. For each such ŵ we define the important set as t 2 Ĝ(ŵ) if 10✏ hŵ, Jxti vt. Now, we check (by solving a convex program) if there exists a vector w 2 Rn with kwk2 1 such that:
hŵ, Jxti 1 + 5✏ hw, xti vt, 8t 2 Ĝ(ŵ)
If it exists, call it w(ŵ) otherwise discard ŵ. Over all (1/✏)O(k log(1/✏)) iterations, for all vectors ŵ that weren’t discarded, choose the one maximizing the objective (1) and output w(ŵ).
Theorem 2.4. There is an algorithm with running time O(npoly(1/✏)poly(n,m)) that outputs a vector w with kwk2 1 such that:
E " X
t
REV(hw, xti; vt) # R⇤ O(✏m)
where R⇤ = P
t REV(hw⇤, xti; vt) for the optimal w⇤ with kw⇤k2 1.
Proof. Step 1: When ŵ lies close to the projection of the optimum, the convex program is feasible
Let w0 = (1 2✏) · Jw⇤. If ||ŵ w0|| ✏5 we will show that the convex program is solvable. For t 2 Ĝ(ŵ) we have
hw⇤, xti 1 1 2✏ hw 0 , Jx ti+ ✏2 (1 + 3✏)(hŵ, Jxti+ (1 + ✏)✏5) + ✏2 (1 + 5✏)vt
and hw⇤, xti 1
(1 2✏) hw 0 , Jx ti ✏2 (1 + 2✏)hw0, Jxti ✏2
(1 + 2✏)(hŵ, Jxti (1 + ✏)✏5) ✏2 > hŵ, Jxti
Thus 1/(1 + 5✏) · w⇤ is a solution to the convex program. Step 2: When ŵ lies close to the projection of the optimum, any solution to the convex program
achieves a good approximation
If ||ŵ w0|| ✏5 then for each data point xt with t 2 Ĝ(ŵ) hŵ, Jxti = hŵ w0, Jxti+ hw0, Jxti (1 + ✏)✏5 + (1 2✏)hJw⇤, Jxti
(1 + ✏)✏5 + (1 5✏)hw⇤, xti Note the last step holds because
hw⇤, xti hŵ, Jxti 10✏ and hJw⇤, Jxti hw⇤, xti ✏2. Next, we deal with the datapoints with t /2 Ĝ(ŵ). For these datapoints, either hŵ, Jxti < 10✏ in which case
hw⇤, xti (1 + 5✏)hw0, Jxti+ ✏2
(1 + 5✏)(hŵ, Jxti+ (1 + ✏)✏5) + ✏2 11✏ or hŵ, Jxti > vt 10✏ in which case
hw⇤, xti 1 (1 2✏) hw 0 , Jx ti ✏2 (1 + 2✏)hw0, Jxti ✏2
(1 + 2✏)(hŵ, Jxti (1 + ✏)✏5) ✏2 > (1 + 2✏)(vt (1 + ✏)✏5) ✏2 > vt
Thus, the total revenue achieved by w(ŵ) is at least 1
1 + 5✏
X
t2Ĝ(ŵ)
2✏5 + (1 5✏)REV(hw⇤, xti; vt)
2✏5m+ (1 10✏) X
t2Ĝ(ŵ)
REV(hw⇤, xti; vt)
2✏5m+ (1 10✏) X
t
REV(hw⇤, xti; vt) 11✏m !
X
t
REV(hw⇤, xti; vt) 25✏m
Step 3: The algorithm finds a good approximation with probability 1 O(✏) It suffices to show that our algorithm will choose some ŵ such that ||ŵ w0|| ✏5 with probability 1 O(✏). Note ||w0||2 (1 2✏)(1 + ✏) 1 ✏. Thus the probability that ŵ lands within distance ✏5 of w0 is ✏5k. Since we choose (1/✏)O(k log(1/✏)) different points ŵ independently at random, the probability that at least one of them lands within distance ✏5 of w0 is at least 1 ✏.
3 Hardness of approximation
Unlike `2 and `1 regression, Myersonian regression is NP-hard. We prove two hardness results. First we show that without the assumption ||w||2 1, achieving a constant factor approximation is NPhard. Then we show that under the Exponential Time Hypothesis (ETH), any algorithm that achieves a ✏m-additive approximation for Myersonian regression must run in time at least exp(O ✏ 1/6 ).
1-in-3-SAT We will rely on reductions from the 1-IN-3-SAT problem, which is NP-complete. The input to 1-IN-3-SAT is an expression in conjunctive normal form with each expression having 3 literals per clause (i.e. a collection of expression of the type Xi _ Xj _ Xk). The problem is to determine if there is a truth assignment such that exactly one literal in each clause is true (and the remaining are false).
GAP 1-in-3-SAT We will need a slightly stronger hardness result that 1-in-3-SAT is not only hard to solve exactly, but it is hard to approximate the maximum number of clauses that can be satisfied. In particular, there are constants 0 < c1 < c2 1 such that given a 1-in-3-SAT instance, it is NP-hard to distinguish the following two cases
• At most c1-fraction of the clauses can be satisfied • At least c2-fraction of the clauses can be satisfied
ETH The Exponential Time Hypothesis says that 3-SAT with N variables can’t be solved in time O(2cNpoly(N)) for some constant c > 0. Since there is a linear time reduction between 3-SAT and 1-IN-3-SAT and 1-IN-3-SAT is NP-complete, then ETH implies that there is no O(2cNpoly(N)) time algorithm for 1-IN-3-SAT.
Lemma 3.1. There exists a constant ✏ > 0 for which it is possible to reduce (in poly-time) an instance of (c1, c2)-GAP 1-in-3-SAT to computing a (1 ✏)-approximation for an instance of the unregularized Myersonian regression problem (UMR). Theorem 3.2. There is some constant ✏ > 0 for which obtaining a (1 ✏)-approximation for the unregularized Myersonian regression problem (UMR) is NP-hard.
The proof follows directly from Lemma 3.1 and the NP-hardness of GAP 1-IN-3-SAT. The previous result rules out a PTAS for (UMR). In contrast, we will see that while (MR) is still NP-hard to solve exactly, it admits a PTAS; however, runtime that is superpolynomial in 1/ε is necessary.

Lemma 3.3. It is possible to transform (in poly-time) an instance of 1-IN-3-SAT with N variables into an instance of Myersonian regression with the promise ‖w‖₂ ≤ 1, n = O(N) and m = O(N^5), in such a way that a satisfiable 1-IN-3-SAT instance will map to an instance of Myersonian regression with revenue R, for some R = O(N^{2.5}), while any unsatisfiable instance will map to an instance with revenue at most R − 0.5N^{0.5}.
If we assume ETH, we obtain a bound on the runtime of any approximation algorithm:

Theorem 3.4. Under ETH, any algorithm that achieves an εm-additive (or (1 − ε)-multiplicative) approximation for Myersonian regression must run in time at least 2^{Ω(ε^{−1/6})} · poly(n, m).
Proof. Assume there is an approximation algorithm for Myersonian regression with running time 2^{o(ε^{−1/6})} · poly(n, m).

Then for an instance of 1-IN-3-SAT with N variables, consider the transformation in Lemma 3.3 and apply the approximation algorithm with ε = O(1/N^6). Such an approximation algorithm would run in time O(2^{cN} poly(N)), for the constant c in the definition of ETH, and would distinguish between the satisfiable and unsatisfiable cases of 1-IN-3-SAT, contradicting ETH.
4 Stability, Generalization and Extensions
We start by commenting on the importance of the constraint ‖w‖₂ ≤ 1 imposed on the problem (MR), which is closely related to stability and generalization.
Offset term It will be convenient to allow a constant term in the pricing loss, i.e., we will look at pricing functions of the type:

x^t ↦ w₁ + Σ_{i=2}^n w_i x_i^t

This is equivalent to assuming that all the datapoints have x₁^t = 1 and ‖x^t‖₂ ≤ √2. We renormalize such that we still have Σ_{i=2}^n (x_i^t)² ≤ 1. We will make this assumption for the rest of this section.

We note that this assumption doesn't affect the results in the previous sections. The positive results remain unchanged since we don't have any assumption on the data other than the norm being bounded by a constant. Our hardness results can be easily adapted to the setting with an offset term: we can essentially force the constant term to be very small by adding Ω(N^{103}) data points with v^t = 1/N^{100}, x₁^t = 1, and all other coordinates 0.
Stability We start by discussing the constraint ‖w‖₂ ≤ 1 imposed on the problem (MR). Without this constraint, it is possible to completely change the objective function with a tiny perturbation in the problem data. Let R* be the optimal revenue in the unregularized Myersonian regression (UMR) for some instance (x^t, v^t). A natural upper bound on R* is the maximum welfare, given by W = Σ_{t=1}^m v^t. Typically R* < W. Consider such an instance. For any fixed δ > 0 consider the following two instances:

• x̃^t = (x^t, 0) ∈ ℝ^{n+1}

• x̄^t = (x^t, δv^t) ∈ ℝ^{n+1}

The instances (x̃^t, v^t)_{t=1..m} and (x̄^t, v^t)_{t=1..m} are very close to each other in the sense that the labels are the same and the features satisfy ‖x̃^t − x̄^t‖ ≤ δ for all t. However, the optimal revenue of (x̃^t, v^t)_{t=1..m} under (UMR) is R*, while the optimal revenue of (x̄^t, v^t)_{t=1..m} is W, obtained by choosing w = (0, 1/δ). This is true even as δ → 0. On the other hand, the solution of the regularized problem (MR) is Lipschitz-continuous in the data.

Theorem 4.1. Consider two instances (x̃^t, ṽ^t)_{t=1..m} and (x̄^t, v̄^t)_{t=1..m} such that ‖x̃^t − x̄^t‖ ≤ δ and |ṽ^t − v̄^t| ≤ δ for all t. If R̃ and R̄ are the respective optimal values of (MR), then:

|R̃ − R̄| ≤ O(δm)
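A quick numeric check of the (UMR) instability described above (a self-contained toy of ours in numpy; the data are synthetic):

```python
import numpy as np

def umr_revenue(w, X, v):
    """Empirical revenue: price <w, x_t> is collected iff 0 < price <= v_t."""
    prices = X @ w
    return float(np.sum(np.where((prices <= v) & (prices > 0), prices, 0.0)))

rng = np.random.default_rng(0)
m = 5
X = rng.uniform(0.1, 1.0, size=(m, 1))     # one original feature per point
v = rng.uniform(0.1, 1.0, size=m)          # values in (0, 1]

delta = 1e-6
X_tilde = np.hstack([X, np.zeros((m, 1))])   # appended feature = 0
X_bar = np.hstack([X, delta * v[:, None]])   # appended feature = delta * v_t

w_bar = np.array([0.0, 1.0 / delta])         # unbounded norm, allowed by (UMR)
print(umr_revenue(w_bar, X_bar, v), "vs welfare", v.sum())  # full welfare W
print(umr_revenue(w_bar, X_tilde, v))                       # zero revenue
```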
Uniform Convergence and Generalization To understand generalization, we are concerned with the performance of the algorithm on a distribution D that generates datapoints (x^t, v^t). We sample m points from this distribution and obtain a dataset S = {(x^t, v^t); t = 1..m}. We want to compare, across all pricing policies w, the objective function on the sample:

F_S(w) = (1/m) Σ_{t=1}^m REV(⟨w, x^t⟩; v^t)

with the performance on the original distribution:

F_D(w) = E_{(x,v)∼D}[REV(⟨w, x⟩; v)]

Medina and Mohri [2014a] provide bounds for |F_S(w) − F_D(w)| by studying the empirical Rademacher complexity of the pricing function. The following statement follows directly from Theorem 3 in their paper. Note that while their theorem bounds only one direction, the same proof also works for the other direction.

Theorem 4.2 (Medina and Mohri [2014a]). For any δ > 0 it holds with probability 1 − δ over the choice of a sample S of size m that:

|F_S(w) − F_D(w)| ≤ O(√((n log(m/n) + log(1/δ)) / m))

Corollary 4.3. Let w_S be the output of the ERM algorithm on a sample S of size m = O(ε^{−2}[n log(n/δ) + log(1/δ)]). Then with probability 1 − δ we have:

F_D(w_S) ≥ max_{‖w‖₂≤1} F_D(w) − O(ε)
Extensions to other loss functions While our results are phrased in terms of pricing, they hold for any lower-semi-Lipschitz reward function, i.e., any function such that:

R(p − ε) ≥ R(p) − ε

An important example studied in Medina and Mohri [2014a], Shen et al. [2019] is the revenue of a second price auction with reserve price p. Given the two highest bids v₁ ≥ v₂, the revenue function is written as:

SPA(p; v₁, v₂) = max(v₂, p) · 1{p ≤ v₁}
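A direct implementation of this revenue function (a minimal sketch of ours; the lower-semi-Lipschitz property can be spot-checked against it):

```python
def spa_revenue(p: float, v1: float, v2: float) -> float:
    """Second-price auction with reserve p and two highest bids v1 >= v2.
    The item sells iff the reserve is at most the highest bid; the winner
    pays the larger of the reserve and the second-highest bid."""
    return max(v2, p) if p <= v1 else 0.0

assert spa_revenue(0.5, v1=0.9, v2=0.3) == 0.5   # reserve binds
assert spa_revenue(0.2, v1=0.9, v2=0.3) == 0.3   # second bid binds
assert spa_revenue(1.0, v1=0.9, v2=0.3) == 0.0   # no sale
```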
5 Conclusion
We give the first approximation algorithm for learning a linear pricing function without any assumption on the data other than normalization. This provides a key missing component to the field of learning for revenue optimization, where ERM was shown to be optimal in Medina and Mohri [2014a] but there were no algorithms with provable guarantees for it.
Our algorithm is polynomial in the number of feature dimensions n and in the number of datapoints m, but exponential in poly(1/ε) for accuracy parameter ε. We show that this exponential dependency on 1/ε is necessary.
In this paper we assume that the bids in the dataset represent the buyer's true willingness to pay, as in Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]. An interesting avenue of investigation for future work is to understand how strategic buyers would change their bids in response to a contextual batch learning algorithm, and how to design algorithms that are aware of strategic responses. This is a well studied problem in non-contextual online learning (Amin et al. [2013], Medina and Mohri [2014b], Drutsa [2017], Vanunts and Drutsa [2019], Nedelec et al. [2019]) as well as in online contextual learning (Amin et al. [2014], Golrezaei et al. [2019]). Formulating a model of strategic response to batch learning algorithms is itself open.
Broader Impact Statement
While our work is largely theoretical, we feel it can have downstream impact in the design of better marketplaces such as those for internet advertisement. Better pricing can increase both the efficiency of the market and the revenue of the platform. The latter is important since the revenue of platforms keeps such services (e.g. online newspapers) free for most users.
Acknowledgments and Disclosure of Funding
No funding to disclose. The authors would like to thank Andrés Muñoz Medina for helpful discussions. | 1. What is the main contribution of the paper regarding the Myersonian loss function?
2. What are the strengths of the paper, particularly in its presentation of a PTAS and the characterization of the problem's difficulty?
3. What are some weaknesses or areas for improvement in the paper, such as the construction of the revenue function, literature review, and motivation? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper presents the Myersonian loss function (based on the Myerson price), shows that finding the optimal hypothesis exactly is NP-hard, and presents a PTAS that yields an additive approximation of the optimal revenue. Depending on a normalization constraint added to the ERM problem (MR vs. UMR), the authors show stability of (normalized) MR solutions and possible instability of UMR solutions. Note: Throughout, I use the citation [MM14] to reference A. M. Medina and M. Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 262–270, 2014a
Strengths
The presentation of a PTAS (and lack of FPTAS) to learn the approximate Myerson price (MP) is a strong result, and contributes to a seemingly limited literature on learning MP in a learning theory setting. Moreover, by showing NP-hardness yet giving a PTAS via a JL projection for learning the Myerson price, the work gives a pretty tight characterization of the difficulty of the problem. - Having the clear explanation of "the quadrants" of online vs. offline and contextual vs. non-contextual settings was very helpful for me to situate this work.
Weaknesses
SUGGESTIONS FOR IMPROVING SCORE
--------------------------------
- Develop and motivate a bit further the construction of the revenue function at the bottom of page 1, and whether surrogates could be used.
- Flesh out the literature review and spend more time contextualizing the results.
- Share how the NP-hardness result is surprising, if it is.
- The broader impacts section mentions pricing for online marketplaces; mentioning this in the motivation for *why* one might want to learn the Myerson price would strengthen the motivation of the paper.
================= AFTER RESPONSE =================
- Thank you for the elaboration re: the contextualization of the hardness result.
NIPS | Title
Myersonian Regression
Abstract
Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function f : ℝⁿ → ℝ that well approximates a set of points (x_i, v_i) ∈ ℝⁿ × [0, 1] in the following sense: we receive a loss of v_i when f(x_i) > v_i and a loss of v_i − f(x_i) when f(x_i) ≤ v_i. This arises naturally in the economic application of designing a pricing policy for differentiated items (where the loss is the gap between the performance of our policy and the optimal Myerson prices). We show that Myersonian regression is NP-hard to solve exactly and furthermore that no fully polynomial-time approximation scheme exists for Myersonian regression conditioned on the Exponential Time Hypothesis being true. In contrast to this, we demonstrate a polynomial-time approximation scheme for Myersonian regression that obtains an εm additive approximation to the optimal possible revenue and can be computed in time O(exp(poly(1/ε)) poly(m, n)). We show that this algorithm is stable and generalizes well over distributions of samples.
1 Introduction
In economics, the Myerson price of a distribution is the price that maximizes the revenue when selling to a buyer whose value is drawn from that distribution. Mathematically, if F is the cdf of the distribution, then the Myerson price is
p* = argmax_p  p · (1 − F(p))
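As a standard illustration (added here for concreteness): if the buyer's value is uniform on [0, 1], then F(p) = p, the objective p·(1 − F(p)) = p(1 − p) is maximized at p* = 1/2, and the expected revenue is 1/4.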
In many modern applications such as online marketplaces and advertising, the seller doesn't just set one price p but must instead price a variety of differentiated products. In these settings, a seller must design a policy to price items based on their features in order to optimize revenue. Thus, in this paper we study the contextual learning version of Myersonian pricing. More formally, we get to observe a training dataset {(x^t, v^t)}_{t=1..m} representing the bids of a buyer on differentiated products. We will assume that the bids v^t ∈ [0, 1] come from a truthful auction and hence represent the maximum value a buyer is willing to pay for the product. Each product is represented by a vector of features x^t ∈ ℝⁿ normalized such that ‖x^t‖₂ ≤ 1. The goal of the learner is to design a policy that suggests a price φ(x^t) for each product x^t with the goal of maximizing the revenue on the underlying distribution D from which the pairs (x^t, v^t) are drawn. In practice, one would train a pricing policy on historical bids (training) and apply this policy on future products (testing).
Mathematically, we want to solve

max_{φ∈P} E_{(x,v)∼D}[REV(φ(x); v)]     (PP)

where P is a class of pricing policies and REV is the revenue function (see Figure 1):

REV(p; v) = max(p, 0) · 1{p ≤ v}

having only access to samples of D.
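Concretely, the empirical version of this objective for a linear pricing policy can be written as follows (a minimal numpy sketch; the names are ours):

```python
import numpy as np

def rev(p: np.ndarray, v: np.ndarray) -> np.ndarray:
    """REV(p; v) = max(p, 0) * 1{p <= v}, applied elementwise."""
    return np.maximum(p, 0.0) * (p <= v)

def empirical_revenue(w: np.ndarray, X: np.ndarray, v: np.ndarray) -> float:
    """Total revenue of the linear pricing policy x -> <w, x> on a dataset (X, v)."""
    return float(rev(X @ w, v).sum())
```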
Medina and Mohri [2014a] establish that if the class of policies P has good generalization properties (defined in terms of Rademacher complexity) then it is enough to solve the problem on the empirical distribution given by the samples. The policy that optimizes over the empirical distribution is typically called Empirical Risk Minimization (ERM).
The missing piece in this puzzle is the algorithm, i.e. how to solve the ERM problem. Previous papers (Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]) approached this problem by designing heuristics for ERM and giving conditions on the data under which the heuristics perform well. In this paper we give the first provable approximation algorithm for the ERM problem without assumptions on the data. We also establish hardness of approximation that complements our algorithmic results. We believe these are the first hardness results for this problem. Even establishing whether exactly solving ERM was NP-hard for a reasonable class of pricing policies was open prior to this work.
Myersonian regression We now define formally the ERM problem for linear pricing policies¹, which we call Myersonian regression. Recall that the dataset is of the form {(x^t, v^t)}_{t=1..m} with x^t ∈ ℝⁿ, ‖x^t‖₂ ≤ 1 and v^t ∈ [0, 1]. The goal is to find a linear pricing policy x ↦ ⟨w, x⟩ with ‖w‖₂ ≤ 1 that maximizes the revenue on the dataset, i.e.,

max_{w∈ℝⁿ: ‖w‖₂≤1} Σ_{t=1}^m REV(⟨w, x^t⟩; v^t)     (MR)
It is worth noting that we restrict ourselves to 1-Lipschitz pricing policies by only considering policies with ‖w‖₂ ≤ 1. Bounding the Lipschitz constant of the pricing policy is important to ensure that the problem is stable and hence generalizable. We will contrast it with the unregularized version of (MR) in which the constraint ‖w‖₂ ≤ 1 is omitted:

R* = max_{w∈ℝⁿ} Σ_{t=1}^m REV(⟨w, x^t⟩; v^t)     (UMR)
Without the Lipschitz constraint it is possible to come up with arbitrarily close datasets, in the sense that ‖x^t − x̃^t‖ ≤ ε and |v^t − ṽ^t| ≤ ε, generating vastly different revenue even as ε → 0. We will also show that (UMR) is APX-hard, i.e., it is NP-hard to approximate within 1 − ε₀ for some constant ε₀ > 0.
Our Results Our main result is a polynomial time approximation scheme (PTAS) using dimensionality reduction. We present two versions of the same algorithm.
The first version of the PTAS has running time

O(e^{poly(1/ε)} · poly(n, m))

and outputs an L-Lipschitz pricing policy with L = O(ε√n) that is an εm-additive approximation of the optimal 1-Lipschitz pricing policy.

¹The choice of linear functions is actually not very restrictive. A common trick in machine learning is to map the features to a different space and train a linear model on φ(x). For example if d = 2, the features are (x₁, x₂). By mapping φ(x) = (1, x₁, x₂, x₁², x₂², x₁x₂) ∈ ℝ⁶ and training a linear function on φ(x), we are actually optimizing over all quadratic functions of the original features. Similarly, we can optimize over any polynomial of degree k, or even more complex functions, with an adequate mapping.
The second version of the PTAS has running time

O(n^{poly(1/ε)} · poly(n, m))

and outputs a 1-Lipschitz pricing policy that is an εm-additive approximation of the optimal 1-Lipschitz pricing policy.
We complement this result by showing that the Myersonian regression problem (MR) is NP-hard using a reduction from 1-IN-3-SAT. While it is not surprising that solving Myersonian regression exactly is NP-hard given the discontinuity in the reward function, this had actually been left open by several previous works. In fact, the same reduction implies that under the Exponential Time Hypothesis (ETH) any algorithm approximating it within an εm additive factor must run in time at least e^{Ω(poly(1/ε))}, therefore ruling out a fully polynomial-time approximation scheme (FPTAS) for the problem. This hardness of approximation perfectly complements our algorithmic results, showing that our guarantees are essentially the best that one can hope for.
Finally we discuss stability and generalization of the problem. We show that (UMR) is unstable in the sense that arbitrarily small perturbations in the input can lead to completely different solutions. On the other hand (MR) is stable in the sense that the optimal solution varies continuously with the input.
We also discuss the setting in which there is an underlying distribution D on datapoints (x, v) and while we optimize on samples from D, we care about the loss with respect to the underlying distribution. We also discuss stability of our algorithms and how to extend them to other loss functions. Due to space constraints, most proofs are deferred to the Supplementary Material.
Related work Our work is in the broad area of learning for revenue optimization. The papers in this area can be categorized along two axis: online vs batch learning and contextual vs non-contextual. In the online non-contextual setting, Kleinberg and Leighton [2003] give the optimal algorithm for a single buyer which was later extended to optimal reserve pricing in auctions in Cesa-Bianchi et al. [2013]. In the online contextual setting there is a stream of recent work deriving optimal regret bounds for pricing (Amin et al. [2014], Cohen et al. [2016], Javanmard and Nazerzadeh [2016], Javanmard [2017], Lobel et al. [2017], Mao et al. [2018], Leme and Schneider [2018], Shah et al. [2019]). For batch learning in non-contextual settings there is a long line of work establishing tight sample complexity bounds for revenue optimization (Cole and Roughgarden [2014], Morgenstern and Roughgarden [2015, 2016]) as well as approximation algorithms to reserve price optimization (Paes Leme et al. [2016], Roughgarden and Wang [2019], Derakhshan et al. [2019]).
Our paper is in the setting of contextual batch learning. Medina and Mohri [2014a] started the work on this setting by showing generalization bounds via Rademacher complexity. They also observe that the loss function is discontinuous and non-convex and propose the use of a surrogate loss. They bound the difference between the pricing loss and the surrogate loss and design algorithms for minimizing the surrogate loss. Medina and Vassilvitskii [2017] design a pricing algorithm based on clustering, where first features are clustered and then a non-contextual pricing algorithm is used on each cluster. Shen et al. [2019] replaces the pricing loss by a convex loss function derived from the theory of market equilibrium and argue that the clearing price is a good approximation of the optimal price in real datasets. A common theme in the previous papers is to replace the pricing loss by a more amenable loss function and give conditions under which the new loss approximates the pricing loss. Instead here we study the pricing loss directly. We give the first hardness proof in this setting and also give a (1 ✏)-approximation without any conditions on the data other than bounded norm. Our approximation algorithms for this problem works by projecting down to a lower-dimensional linear subspace and solving the problem on this subspace. In this way, it is reminiscent of the area of compressed learning (Calderbank et al. [2009]), which studies if it is possible to learn directly in a projected (“compressed”) space. More generally, our algorithm fits into a large body of work which leverages the Johnson-Lindenstrauss lemma for designing efficient algorithms (see e.g. Linial et al. [1995] and Har-Peled et al. [2012]).
Hardness of approximation have been established for non-contextual pricing problems with multiple buyers, e.g Paes Leme et al. [2016], Roughgarden and Wang [2019]. Such hardness results hinge on
the interaction between different buyers and don’t translate to single-buyer settings. The hardness result in our paper is of a different nature.
2 Approximation Algorithms
The main ingredient in the design of our algorithms will be the Johnson–Lindenstrauss lemma:

Lemma 2.1 (Johnson–Lindenstrauss). Given a vector x ∈ ℝⁿ with ‖x‖₂ = 1, if J̃ is a k × n matrix formed by taking k random orthogonal vectors as rows, for k = O(ε^{−2} log(1/δ)), and J = √(n/k) · J̃, then:

Pr(| ‖Jx‖₂ − 1 | > ε) ≤ δ

The following is a direct consequence of the JL lemma:

Lemma 2.2. Let J be the JL-projection with k = O(ε^{−2} log(1/ε)), let w* be the optimal solution to (MR), and let x^t be a point in the dataset with ⟨w*, x^t⟩ ≥ ε. Then with probability at least 1 − ε the following inequalities hold:

(1 − ε)·‖x^t‖₂ ≤ ‖Jx^t‖₂ ≤ (1 + ε)·‖x^t‖₂
(1 − ε)·⟨w*, x^t⟩ ≤ ⟨Jw*, Jx^t⟩ ≤ (1 + ε)·⟨w*, x^t⟩
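A minimal way to sample such a projection (a sketch assuming numpy; the QR decomposition of a Gaussian matrix is one standard way to obtain random orthonormal rows):

```python
import numpy as np

def jl_projection(n: int, k: int, rng: np.random.Generator) -> np.ndarray:
    """Sample J = sqrt(n/k) * J_tilde, where the rows of J_tilde are
    k random orthonormal vectors in R^n."""
    gaussian = rng.standard_normal((n, k))
    q, _ = np.linalg.qr(gaussian)      # q has orthonormal columns
    return np.sqrt(n / k) * q.T        # k x n with orthonormal rows

rng = np.random.default_rng(0)
x = rng.standard_normal(1000); x /= np.linalg.norm(x)
J = jl_projection(n=1000, k=200, rng=rng)
print(np.linalg.norm(J @ x))           # close to 1 with high probability
```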
PTAS - Version 1: For the first version of the algorithm, we randomly sample 1/ε JL-projections J with k = O(ε^{−2} log(1/ε)) and search over an ε-net of the projected space. For each projection, we define a set of discretized vectors as:

D = {ŵ : ŵ = ε^5 z for z ∈ ℤ^k, ‖ŵ‖₂ ≤ 1 + ε}

Then we search for the vector ŵ ∈ D that maximizes

Σ_{t=1}^m REV(⟨ŵ, Jx^t⟩; v^t)     (1)

Over all projections, we output the vector w = J^⊤ ŵ that maximizes the revenue.

Theorem 2.3. There is an algorithm with running time O(e^{poly(1/ε)} poly(n, m)) that outputs a vector w with ‖w‖₂ ≤ O(ε·√n) such that:

E[ Σ_t REV(⟨w, x^t⟩; v^t) ] ≥ R* − O(εm)

where R* = Σ_t REV(⟨w*, x^t⟩; v^t) for the optimal w* with ‖w*‖₂ ≤ 1.
Proof. The running time follows from the fact that |D| ≤ (1/ε)^{O(k)} = e^{O(poly(1/ε))}. We show the approximation guarantee in three steps:

Step 1: defining good points. Let w* be the optimal solution to (MR). Say that a datapoint (x^t, v^t) is good if ε ≤ ⟨w*, x^t⟩ ≤ v^t and the event in Lemma 2.2 happens. If G is the set of indices t corresponding to good datapoints, then with at least 1/2 probability:

Σ_{t∈G} ⟨w*, x^t⟩ ≥ R* − 2εm

This is true since the points with ⟨w*, x^t⟩ < ε can only affect the revenue by at most ε each, and each of the remaining m′ points can fail to be good with probability at most ε. The revenue loss in expectation is at most m′ε, so by Markov's inequality it is at most 2m′ε with 1/2 probability.

Step 2: projection of the optimal solution. Define w′ = (1 − 2ε)·Jw* and define ŵ to be the vector in D obtained by rounding all coordinates of w′ to the nearest multiple of ε^5. For any good index t ∈ G we have:

⟨ŵ, Jx^t⟩ = ⟨ŵ − w′, Jx^t⟩ + ⟨w′, Jx^t⟩ ≤ (1+ε)ε^5 √k + (1−2ε)⟨Jw*, Jx^t⟩ ≤ (1+ε)ε^5 √k + (1−ε)⟨w*, x^t⟩ ≤ v^t

and hence that datapoint generates revenue since the price is below the value. And:

⟨ŵ, Jx^t⟩ = ⟨ŵ − w′, Jx^t⟩ + ⟨w′, Jx^t⟩ ≥ −(1+ε)ε^5 √k + (1−2ε)⟨Jw*, Jx^t⟩ ≥ −(1+ε)ε^5 √k + (1−5ε)⟨w*, x^t⟩

Step 3: bounding the revenue. Finally, note that

⟨w, x^t⟩ = ⟨J^⊤ ŵ, x^t⟩ = ⟨ŵ, Jx^t⟩

so:

Σ_t REV(⟨w, x^t⟩; v^t) = Σ_{t: 0 ≤ ⟨ŵ,Jx^t⟩ ≤ v^t} ⟨ŵ, Jx^t⟩ ≥ Σ_{t∈G} ⟨ŵ, Jx^t⟩ ≥ (1−5ε) Σ_{t∈G} ⟨w*, x^t⟩ − O(εm) ≥ (1−5ε)(R* − 2mε) − O(εm) = R* − O(εm)

Since we sample 1/ε independent JL projections and, for each, we find an O(εm)-additive approximation with probability at least 1/2, our algorithm achieves expected revenue R* − O(εm), as desired.
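A compact illustrative rendering of Version 1 (our own sketch, reusing `jl_projection` from the earlier snippet; the net enumeration is exponential in k, exactly as the theorem's running time suggests, so this is only practical for tiny k):

```python
import itertools
import numpy as np

def ptas_v1(X, v, eps, rng):
    """Search the eps^5-grid of the projected ball, as in Theorem 2.3.
    X: (m, n) array with ||x_t||_2 <= 1; v: (m,) values in [0, 1]."""
    m, n = X.shape
    k = max(1, round((1 / eps ** 2) * np.log(1 / eps)))
    best_w, best_rev = None, -np.inf
    for _ in range(int(np.ceil(1 / eps))):        # 1/eps independent projections
        J = jl_projection(n, k, rng)
        XJ = X @ J.T                              # rows are J x_t
        grid = np.arange(-1 - eps, 1 + eps, eps ** 5)
        for coords in itertools.product(grid, repeat=k):   # the net D
            w_hat = np.array(coords)
            if np.linalg.norm(w_hat) > 1 + eps:
                continue
            prices = XJ @ w_hat
            revenue = np.where((prices >= 0) & (prices <= v), prices, 0.0).sum()
            if revenue > best_rev:
                best_rev, best_w = revenue, J.T @ w_hat    # output w = J^T w_hat
    return best_w, best_rev
```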
PTAS – Version 2 The main drawback of the first version of the PTAS is that we output an O(ε√n)-Lipschitz pricing policy that is an approximation to the optimal 1-Lipschitz pricing policy. With an increase in running time, it is possible to obtain the same approximation with a 1-Lipschitz pricing policy (i.e., ‖w‖₂ ≤ 1). For that we will increase the dimension of the JL projection to k = O(ε^{−2} log(n/ε)). This will allow us to have the following conditions hold simultaneously for all datapoints with probability at least 1 − ε:

(1 − ε)·‖x^t‖₂ ≤ ‖Jx^t‖₂ ≤ (1 + ε)·‖x^t‖₂
⟨w*, x^t⟩ − ε² ≤ ⟨Jw*, Jx^t⟩ ≤ ⟨w*, x^t⟩ + ε²

This follows from the same argument as in Lemma 2.2, taking a union bound over all points. Now we repeat the following process (1/ε)^{O(k log(1/ε))} times:

Choose a random point ŵ in the unit ball in ℝ^k. For each such ŵ we define the important set as t ∈ Ĝ(ŵ) if 10ε ≤ ⟨ŵ, Jx^t⟩ ≤ v^t. Now, we check (by solving a convex program) whether there exists a vector w ∈ ℝⁿ with ‖w‖₂ ≤ 1 such that:

⟨ŵ, Jx^t⟩/(1 + 5ε) ≤ ⟨w, x^t⟩ ≤ v^t,  ∀t ∈ Ĝ(ŵ)

If it exists, call it w(ŵ); otherwise discard ŵ. Over all (1/ε)^{O(k log(1/ε))} iterations, among all vectors ŵ that weren't discarded, choose the one maximizing the objective (1) and output w(ŵ).

Theorem 2.4. There is an algorithm with running time O(n^{poly(1/ε)} poly(n, m)) that outputs a vector w with ‖w‖₂ ≤ 1 such that:

E[ Σ_t REV(⟨w, x^t⟩; v^t) ] ≥ R* − O(εm)

where R* = Σ_t REV(⟨w*, x^t⟩; v^t) for the optimal w* with ‖w*‖₂ ≤ 1.
Proof. Step 1: When ŵ lies close to the projection of the optimum, the convex program is feasible.

Let w′ = (1 − 2ε)·Jw*. If ‖ŵ − w′‖ ≤ ε^5 we will show that the convex program is solvable. For t ∈ Ĝ(ŵ) we have

⟨w*, x^t⟩ ≤ (1/(1−2ε)) ⟨w′, Jx^t⟩ + ε² ≤ (1+3ε)(⟨ŵ, Jx^t⟩ + (1+ε)ε^5) + ε² ≤ (1+5ε) v^t

and

⟨w*, x^t⟩ ≥ (1/(1−2ε)) ⟨w′, Jx^t⟩ − ε² ≥ (1+2ε)⟨w′, Jx^t⟩ − ε² ≥ (1+2ε)(⟨ŵ, Jx^t⟩ − (1+ε)ε^5) − ε² > ⟨ŵ, Jx^t⟩.

Thus w*/(1+5ε) is a solution to the convex program.

Step 2: When ŵ lies close to the projection of the optimum, any solution to the convex program achieves a good approximation.

If ‖ŵ − w′‖ ≤ ε^5, then for each data point x^t with t ∈ Ĝ(ŵ),

⟨ŵ, Jx^t⟩ = ⟨ŵ − w′, Jx^t⟩ + ⟨w′, Jx^t⟩ ≥ −(1+ε)ε^5 + (1−2ε)⟨Jw*, Jx^t⟩ ≥ −(1+ε)ε^5 + (1−5ε)⟨w*, x^t⟩.

Note the last step holds because ⟨w*, x^t⟩ ≥ ⟨ŵ, Jx^t⟩ ≥ 10ε and ⟨Jw*, Jx^t⟩ ≥ ⟨w*, x^t⟩ − ε². Next, we deal with the datapoints with t ∉ Ĝ(ŵ). For these datapoints, either ⟨ŵ, Jx^t⟩ < 10ε, in which case

⟨w*, x^t⟩ ≤ (1+5ε)⟨w′, Jx^t⟩ + ε² ≤ (1+5ε)(⟨ŵ, Jx^t⟩ + (1+ε)ε^5) + ε² ≤ 11ε,

or ⟨ŵ, Jx^t⟩ > v^t, in which case

⟨w*, x^t⟩ ≥ (1/(1−2ε)) ⟨w′, Jx^t⟩ − ε² ≥ (1+2ε)⟨w′, Jx^t⟩ − ε² ≥ (1+2ε)(⟨ŵ, Jx^t⟩ − (1+ε)ε^5) − ε² > (1+2ε)(v^t − (1+ε)ε^5) − ε² > v^t.

Thus, the total revenue achieved by w(ŵ) is at least

(1/(1+5ε)) · Σ_{t∈Ĝ(ŵ)} (−2ε^5 + (1−5ε) REV(⟨w*, x^t⟩; v^t))
  ≥ −2ε^5 m + (1−10ε) Σ_{t∈Ĝ(ŵ)} REV(⟨w*, x^t⟩; v^t)
  ≥ −2ε^5 m + (1−10ε) (Σ_t REV(⟨w*, x^t⟩; v^t) − 11εm)
  ≥ Σ_t REV(⟨w*, x^t⟩; v^t) − 25εm.

Step 3: The algorithm finds a good approximation with probability 1 − O(ε). It suffices to show that the algorithm will choose some ŵ such that ‖ŵ − w′‖ ≤ ε^5 with probability 1 − O(ε). Note that ‖w′‖₂ ≤ (1−2ε)(1+ε) ≤ 1 − ε, so a ball of radius ε^5 around w′ is contained in the unit ball. Thus the probability that a uniformly random ŵ lands within distance ε^5 of w′ is at least ε^{5k}. Since we choose (1/ε)^{O(k log(1/ε))} different points ŵ independently at random, the probability that at least one of them lands within distance ε^5 of w′ is at least 1 − ε.
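The feasibility check at the heart of Version 2 is a small convex program. A sketch, assuming the cvxpy library (any convex solver would do; the helper and its signature are ours):

```python
import cvxpy as cp
import numpy as np

def recover_w(w_hat, J, X, v, eps):
    """Find w with ||w||_2 <= 1 and
    <w_hat, J x_t>/(1 + 5*eps) <= <w, x_t> <= v_t on the important set."""
    proj_prices = X @ J.T @ w_hat            # <w_hat, J x_t> for every t
    important = (proj_prices >= 10 * eps) & (proj_prices <= v)
    w = cp.Variable(X.shape[1])
    constraints = [cp.norm(w, 2) <= 1]
    if important.any():
        constraints += [X[important] @ w >= proj_prices[important] / (1 + 5 * eps),
                        X[important] @ w <= v[important]]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return w.value if problem.status == cp.OPTIMAL else None
```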
3 Hardness of approximation
Unlike ℓ₂ and ℓ₁ regression, Myersonian regression is NP-hard. We prove two hardness results. First we show that without the assumption ‖w‖₂ ≤ 1, achieving a constant-factor approximation is NP-hard. Then we show that under the Exponential Time Hypothesis (ETH), any algorithm that achieves an εm-additive approximation for Myersonian regression must run in time at least exp(Ω(ε^{−1/6})).
1-in-3-SAT We will rely on reductions from the 1-IN-3-SAT problem, which is NP-complete. The input to 1-IN-3-SAT is an expression in conjunctive normal form with 3 literals per clause (i.e., a collection of clauses of the type X_i ∨ X_j ∨ X_k). The problem is to determine if there is a truth assignment such that exactly one literal in each clause is true (and the remaining are false).
GAP 1-in-3-SAT We will need a slightly stronger hardness result: 1-in-3-SAT is not only hard to solve exactly, but it is also hard to approximate the maximum number of clauses that can be satisfied. In particular, there are constants 0 < c₁ < c₂ ≤ 1 such that given a 1-in-3-SAT instance, it is NP-hard to distinguish the following two cases:

• At most a c₁-fraction of the clauses can be satisfied.
• At least a c₂-fraction of the clauses can be satisfied.
ETH The Exponential Time Hypothesis says that 3-SAT with N variables can't be solved in time O(2^{cN} poly(N)) for some constant c > 0. Since there is a linear-time reduction between 3-SAT and 1-IN-3-SAT, and 1-IN-3-SAT is NP-complete, ETH implies that there is no O(2^{cN} poly(N))-time algorithm for 1-IN-3-SAT.
Lemma 3.1. There exists a constant ε > 0 for which it is possible to reduce (in poly-time) an instance of (c₁, c₂)-GAP 1-in-3-SAT to computing a (1 − ε)-approximation for an instance of the unregularized Myersonian regression problem (UMR).

Theorem 3.2. There is some constant ε > 0 for which obtaining a (1 − ε)-approximation for the unregularized Myersonian regression problem (UMR) is NP-hard.
The proof follows directly from Lemma 3.1 and the NP-hardness of GAP 1-IN-3-SAT. The previous result rules out a PTAS for (UMR). In contrast, we will see that while (MR) is still NP-hard to solve exactly, it admits a PTAS; however, runtime that is superpolynomial in 1/ε is necessary.

Lemma 3.3. It is possible to transform (in poly-time) an instance of 1-IN-3-SAT with N variables into an instance of Myersonian regression with the promise ‖w‖₂ ≤ 1, n = O(N) and m = O(N^5), in such a way that a satisfiable 1-IN-3-SAT instance will map to an instance of Myersonian regression with revenue R, for some R = O(N^{2.5}), while any unsatisfiable instance will map to an instance with revenue at most R − 0.5N^{0.5}.
If we assume ETH, we obtain a bound on the runtime of any approximation algorithm:

Theorem 3.4. Under ETH, any algorithm that achieves an εm-additive (or (1 − ε)-multiplicative) approximation for Myersonian regression must run in time at least 2^{Ω(ε^{−1/6})} · poly(n, m).
Proof. Assume there is an approximation algorithm for Myersonian regression with running time 2^{o(ε^{−1/6})} · poly(n, m).

Then for an instance of 1-IN-3-SAT with N variables, consider the transformation in Lemma 3.3 and apply the approximation algorithm with ε = O(1/N^6). Such an approximation algorithm would run in time O(2^{cN} poly(N)), for the constant c in the definition of ETH, and would distinguish between the satisfiable and unsatisfiable cases of 1-IN-3-SAT, contradicting ETH.
4 Stability, Generalization and Extensions
We start by commenting on the importance of the constraint ‖w‖₂ ≤ 1 imposed on the problem (MR), which is closely related to stability and generalization.
Offset term It will be convenient to allow a constant term in the pricing loss, i.e., we will look at pricing functions of the type:

x^t ↦ w₁ + Σ_{i=2}^n w_i x_i^t

This is equivalent to assuming that all the datapoints have x₁^t = 1 and ‖x^t‖₂ ≤ √2. We renormalize such that we still have Σ_{i=2}^n (x_i^t)² ≤ 1. We will make this assumption for the rest of this section.

We note that this assumption doesn't affect the results in the previous sections. The positive results remain unchanged since we don't have any assumption on the data other than the norm being bounded by a constant. Our hardness results can be easily adapted to the setting with an offset term: we can essentially force the constant term to be very small by adding Ω(N^{103}) data points with v^t = 1/N^{100}, x₁^t = 1, and all other coordinates 0.
Stability We start by discussing the constraint ‖w‖₂ ≤ 1 imposed on the problem (MR). Without this constraint, it is possible to completely change the objective function with a tiny perturbation in the problem data. Let R* be the optimal revenue in the unregularized Myersonian regression (UMR) for some instance (x^t, v^t). A natural upper bound on R* is the maximum welfare, given by W = Σ_{t=1}^m v^t. Typically R* < W. Consider such an instance. For any fixed δ > 0 consider the following two instances:

• x̃^t = (x^t, 0) ∈ ℝ^{n+1}

• x̄^t = (x^t, δv^t) ∈ ℝ^{n+1}

The instances (x̃^t, v^t)_{t=1..m} and (x̄^t, v^t)_{t=1..m} are very close to each other in the sense that the labels are the same and the features satisfy ‖x̃^t − x̄^t‖ ≤ δ for all t. However, the optimal revenue of (x̃^t, v^t)_{t=1..m} under (UMR) is R*, while the optimal revenue of (x̄^t, v^t)_{t=1..m} is W, obtained by choosing w = (0, 1/δ). This is true even as δ → 0. On the other hand, the solution of the regularized problem (MR) is Lipschitz-continuous in the data.

Theorem 4.1. Consider two instances (x̃^t, ṽ^t)_{t=1..m} and (x̄^t, v̄^t)_{t=1..m} such that ‖x̃^t − x̄^t‖ ≤ δ and |ṽ^t − v̄^t| ≤ δ for all t. If R̃ and R̄ are the respective optimal values of (MR), then:

|R̃ − R̄| ≤ O(δm)
Uniform Convergence and Generalization To understand generalization, we are concerned with the performance of the algorithm on a distribution D that generates datapoints (x^t, v^t). We sample m points from this distribution and obtain a dataset S = {(x^t, v^t); t = 1..m}. We want to compare, across all pricing policies w, the objective function on the sample:

F_S(w) = (1/m) Σ_{t=1}^m REV(⟨w, x^t⟩; v^t)

with the performance on the original distribution:

F_D(w) = E_{(x,v)∼D}[REV(⟨w, x⟩; v)]

Medina and Mohri [2014a] provide bounds for |F_S(w) − F_D(w)| by studying the empirical Rademacher complexity of the pricing function. The following statement follows directly from Theorem 3 in their paper. Note that while their theorem bounds only one direction, the same proof also works for the other direction.

Theorem 4.2 (Medina and Mohri [2014a]). For any δ > 0 it holds with probability 1 − δ over the choice of a sample S of size m that:

|F_S(w) − F_D(w)| ≤ O(√((n log(m/n) + log(1/δ)) / m))

Corollary 4.3. Let w_S be the output of the ERM algorithm on a sample S of size m = O(ε^{−2}[n log(n/δ) + log(1/δ)]). Then with probability 1 − δ we have:

F_D(w_S) ≥ max_{‖w‖₂≤1} F_D(w) − O(ε)
Extensions to other loss functions While our results are phrased in terms of pricing, they hold for any lower-semi-Lipschitz reward function, i.e., any function such that:

R(p − ε) ≥ R(p) − ε

An important example studied in Medina and Mohri [2014a], Shen et al. [2019] is the revenue of a second price auction with reserve price p. Given the two highest bids v₁ ≥ v₂, the revenue function is written as:

SPA(p; v₁, v₂) = max(v₂, p) · 1{p ≤ v₁}
5 Conclusion
We give the first approximation algorithm for learning a linear pricing function without any assumption on the data other than normalization. This provides a key missing component to the field of learning for revenue optimization, where ERM was shown to be optimal in Medina and Mohri [2014a] but there were no algorithms with provable guarantees for it.
Our algorithm is polynomial in the number of feature dimensions n and in the number of datapoints m, but exponential in poly(1/ε) for accuracy parameter ε. We show that this exponential dependency on 1/ε is necessary.
In this paper we assume that the bids in the dataset represent the buyer's true willingness to pay, as in Medina and Mohri [2014a], Medina and Vassilvitskii [2017], Shen et al. [2019]. An interesting avenue of investigation for future work is to understand how strategic buyers would change their bids in response to a contextual batch learning algorithm, and how to design algorithms that are aware of strategic responses. This is a well studied problem in non-contextual online learning (Amin et al. [2013], Medina and Mohri [2014b], Drutsa [2017], Vanunts and Drutsa [2019], Nedelec et al. [2019]) as well as in online contextual learning (Amin et al. [2014], Golrezaei et al. [2019]). Formulating a model of strategic response to batch learning algorithms is itself open.
Broader Impact Statement
While our work is largely theoretical, we feel it can have downstream impact in the design of better marketplaces such as those for internet advertisement. Better pricing can increase both the efficiency of the market and the revenue of the platform. The latter is important since the revenue of platforms keeps such services (e.g. online newspapers) free for most users.
Acknowledgments and Disclosure of Funding
No funding to disclose. The authors would like to thank Andrés Muñoz Medina for helpful discussions. | 1. What is the focus of the paper in regard to Myersonian regression?
2. What are the strengths of the paper, particularly in its relevance to the NeurIPS community and its completeness in terms of sets of results?
3. What are the weaknesses of the paper, especially regarding its lack of concrete examples or applications and its restrictive assumptions? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies Myersonian regression, which is a variant of linear regression. The input is a set of samples representing the bids of a buyer for different products. The goal is to compute the (Myerson) pricing policy which maximizes the expected revenue, for any possible distribution from which the samples could be drawn. The authors study the complexity of this problem: they show that it is NP-hard, that there is no FPTAS, and they present a PTAS that obtains an additive approximation of the optimal possible revenue, under normalization assumptions.
Strengths
The paper is definitely relevant to the NeurIPS community, and is almost complete in terms of sets of results: the authors have proved both the hardness of the problem (which was missing from previous work) and also presented a PTAS for it.
Weaknesses
A concrete example/application is missing to motivate the study of the problem. Also, the assumption that the samples come from truthful auctions seems quite restrictive (since auctions used in practice are typically not truthful). The authors do mention this as a future direction though.
NIPS | Title
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Abstract
Adversarial Transferability is an intriguing property – adversarial perturbation crafted against one model is also effective against another model, while these models are from different model families or training processes. To better protect ML systems against adversarial attacks, several questions are raised: what are the sufficient conditions for adversarial transferability and how to bound it? Is there a way to reduce the adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that only promoting the orthogonality between gradients of base models is not enough to ensure low transferability; in the meantime, the model smoothness is an important factor to control the transferability. We also provide the lower and upper bounds of adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
1 Introduction
Machine learning systems, especially those based on deep neural networks (DNNs), have been widely applied in numerous applications [27, 18, 46, 10]. However, recent studies show that DNNs are vulnerable to adversarial examples, which are able to mislead DNNs by adding small magnitude of perturbations to the original instances [47, 17, 54, 52]. Several attack strategies have been proposed so far to generate such adversarial examples in both digital and physical environments [36, 32, 51, 53, 15, 28]. Intriguingly, though most attacks require access to the target models (whitebox attacks), several studies show that adversarial examples generated against one model are able to transferably
∗The authors contributed equally.
attack another target model with high probability, giving rise to blackbox attacks [39, 41, 31, 30, 57]. This property of adversarial transferability poses great threat to DNNs.
Some work have been conducted to understand adversarial transferability [48, 33, 12]. However, a rigorous theoretical analysis or explanation for transferability is still lacking in the literature. In addition, although developing robust ensemble models to limit transferability shows great potential towards practical robust learning systems, only empirical observations have been made in this line of research [38, 23, 56]. Can we deepen our theoretical understanding on transferability? Can we take advantage of rigorous theoretical understanding to reduce the adversarial transferability and therefore generate robust ensemble ML models?
In this paper, we focus on these two questions. From the theoretical side, we are interested in the sufficient conditions under which the adversarial transferability can be lower bounded and upper bounded. Our theoretical arguments provides the first theoretical interpretation for the sufficient conditions of transferability. Intuitively, as illustrated in Figure 1, we show that the commonly used gradient orthogonality (low cosine similarity) between learning models [12] cannot directly imply low adversarial transferability; on the other hand, orthogonal and smoothed models would limit the transferability. In particular, we prove that the gradient similarity and model smoothness are the key factors that both contribute to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
Under an empirical lens, inspired by our theoretical analysis, we propose a simple yet effective approach, Transferability Reduced Smooth (TRS) ensemble to limit adversarial transferability between base models
within an ensemble and therefore improve its robustness. In particular, we reduce the loss gradient similarity between models as well as enforce the smoothness of models to introduce global model orthogonality.
We conduct extensive experiments to evaluate TRS in terms of the model robustness against different strong white-box and blackbox attacks following the robustness evaluation procedures [5, 6, 49], as well as its ability to limit transferability across the base models. We compare the proposed TRS with existing state-of-the-art baseline ensemble approaches such as ADP [38], GAL [23], and DVERGE [56] on MNIST, CIFAR-10, and CIFAR-100 datasets, and we show that (1) TRS achieves the state-of-the-art ensemble robustness, outperforming others by a large margin; (2) TRS achieves efficient training; (3) TRS effectively reduces the transferability among base models within an ensemble which indicates its robustness against whitebox and blackbox attacks; (4) Both loss terms in TRS contribute to the ensemble robustness by constraining different sufficient conditions of adversarial transferability.
Contributions. In this paper, we make the first attempt towards theoretical understanding of adversarial transferability, and provide practical approach for developing robust ML ensembles. (1) We provide a general theoretical analysis framework for adversarial transferability. We prove
the lower and upper bounds of adversarial transferability. Both bounds show that the gradient similarity and model smoothness are the key factors contributing to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
(2) We propose a simple yet effective approach TRS to train a robust ensemble by jointly reducing the loss gradient similarity between base models and enforcing the model smoothness. The code is publicly available².
(3) We conduct extensive experiments to evaluate TRS in terms of model robustness under different attack settings, showing that TRS achieves the state-of-the-art ensemble robustness and outperforms other baselines by a large margin. We also conduct ablation studies to further understand the contribution of different loss terms and verify our theoretical findings.
²https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Related Work
The adversarial transferability between different ML models is an intriguing research direction. Papernot et al. [40] explored the limitation of adversarial examples and showed that, while some instances are more difficult to manipulate than the others, these adversarial examples usually transfer from one model to another. Demontis et al. [12] later analyzed transferability for both evasion and poisoning attacks. Tramèr et al. [48] empirically investigated the subspace of adversarial examples that enables transferability between different models: though their results provide a non-zero probability guarantee on the transferability, they did not quantify the probability of adversarial transferability.
Leveraging the transferability, different blackbox attacks have been proposed [41, 28, 15, 9]. To defend against these transferability based attacks, Pang et al. [38] proposed a class entropy based adaptive diversity promoting approach to enhance the ML ensemble robustness. Recently, Yang et al. [56] proposed DVERGE, a robust ensemble training approach that diversifies the non-robust features of base models via an adversarial training objective function. However, these approaches do not provide theoretical justification for adversarial transferability, and there is still room to improve the ML ensemble robustness based on in-depth understanding on the sufficient conditions of transferability. In this paper, we aim to provide a theoretical understanding of transferability, and empirically compare the proposed robust ML ensemble inspired by our theoretical analysis with existing approaches to push for a tighter empirical upper bound for the ensemble robustness.
2 Transferability of Adversarial Perturbation
In this section, we first introduce preliminaries, and then provide the upper and lower bounds of adversarial transferability by connecting adversarial transferability with different characteristics of models theoretically, which, in the next section, will allow us to explicitly minimize transferability by enforcing (or rewarding) certain properties of models.
Notations. We consider neural networks for classification tasks. Assume there are C classes, and let X be the input space of the model with Y = {1, 2, . . . , C} the set of prediction classes (i.e., labels). We model the neural network by a mapping function F : X → Y . We will study the transferability between two models F and G. For brevity, hereinafter we mainly show the derived notations for F and notations for G are similar. Let the benign data (x, y) follow an unknown distribution D supported on (X ,Y), and PX denote the marginal distribution on X . For a given input x ∈ X , the classification model F first predicts the confidence score for each label y ∈ Y , denoted as fy(x). These confidence scores sum up to 1, i.e., ∑ y∈Y fy(x) = 1,∀x ∈ X . The model F will predicts the label with highest confidence score: F(x) = argmaxy∈Y fy(x). For modelF , there is usually a model-dependent loss function `F : X×Y → R+, which is the composition of a differentiable training loss (e.g., cross-entropy loss) ` and the model’s confidence score f(·): `F (x, y) := `(f(x), y), (x, y) ∈ (X ,Y). We further assume that F(x) = argminy∈Y `F (x, y), i.e., the model predicts the label with minimum loss. This holds for common training losses.
In this paper, by default we will focus on models that are well-trained on the benign dataset, and such models are the most commonly encountered in practice, so their robustness is paramount. This means we will focus on the low risk classifiers, which we will formally define in Section 2.1.
How should we define an adversarial attack? For the threat model, we consider an attacker that adds an ℓp-norm bounded perturbation to a data instance x ∈ X. In practice, there are two types of attacks, untargeted attacks and targeted attacks. The definition of adversarial transferability is slightly different under these attacks [33], and we consider both in our analysis.

Definition 1 (Adversarial Attack). Given an input x ∈ X with true label y ∈ Y, F(x) = y. (1) An untargeted attack crafts A_U(x) = x + δ to maximize ℓ_F(x + δ, y) where ‖δ‖_p ≤ ε. (2) A targeted attack with target label y_t ∈ Y crafts A_T(x) = x + δ to minimize ℓ_F(x + δ, y_t) where ‖δ‖_p ≤ ε.

In this definition, ε is a pre-defined attack radius that limits the power of the attacker. We may refer to {δ : ‖δ‖_p ≤ ε} as the perturbation ball. The goal of the untargeted attack is to maximize the loss of the target model against its true label y. The goal of the targeted attack is to minimize the loss towards its adversarial target label y_t.
How do we formally define that an attack is effective?
Definition 2 ((α, F)-Effective Attack). Consider an input x ∈ X with true label y ∈ Y. An attack is (α, F)-effective in the untargeted scenario if Pr(F(A_U(x)) ≠ y) ≥ 1 − α. An attack is (α, F)-effective in the targeted scenario (with class target y_t) if Pr(F(A_T(x)) = y_t) ≥ 1 − α.
This definition captures the requirement that an adversarial instance generated by an effective attack strategy is able to mislead the target classification model (e.g. F) with certain probability (1− α). The smaller the α is, the more effective the attack is. In practice, this implies that on a finite sample of targets, the attack success is frequent but not absolute. Note that the definition is general for both whitebox [1, 12, 5] and blackbox attacks [42, 4].
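For reference, an ℓ∞ PGD instantiation of the untargeted attack in Definition 1 (a standard sketch assuming PyTorch; the step size and radius below are illustrative, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def pgd_untargeted(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Maximize the cross-entropy loss within the l_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project onto the ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep a valid image
    return x_adv.detach()
```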
2.1 Model Characteristics
Given two models F and G, what are the characteristics of F and G that have impact on transferability under a given attack strategy? Intuitively, the more similar these two classifiers are, the larger the transferability would be. However, how can we define "similar" and how can we rigorously connect it to transferability? To answer these questions, we will first define the risk and empirical risk for a given model to measure its performance on benign test data. Then, as the DNNs are differentiable, we will define model similarity based on their gradients. We will then derive the lower and upper bounds of adversarial transferability based on the defined model risk and similarity measures.

Definition 3 (Risk and Empirical Risk). For a given model F, we let ℓ_F be its model-dependent loss function. Its risk is defined as η_F = Pr(F(x) ≠ y); and its empirical risk is defined as ξ_F = E[ℓ_F(x, y)].
The risk represents the model's error rate on benign test data, while the empirical risk is a non-negative value that also indicates the inaccuracy. For both of them, a higher value means worse performance on the benign test data. The difference is that the risk has a more intuitive meaning, while the empirical risk is differentiable and is actually used during model training.

Definition 4 (Loss Gradient Similarity). The lower loss gradient similarity S and the upper loss gradient similarity S̄ between two differentiable loss functions ℓ_F and ℓ_G are defined as:

S(ℓ_F, ℓ_G) = inf_{x∈X, y∈Y} (∇_x ℓ_F(x, y) · ∇_x ℓ_G(x, y)) / (‖∇_x ℓ_F(x, y)‖₂ · ‖∇_x ℓ_G(x, y)‖₂),
S̄(ℓ_F, ℓ_G) = sup_{x∈X, y∈Y} (∇_x ℓ_F(x, y) · ∇_x ℓ_G(x, y)) / (‖∇_x ℓ_F(x, y)‖₂ · ‖∇_x ℓ_G(x, y)‖₂).

S(ℓ_F, ℓ_G) (resp. S̄(ℓ_F, ℓ_G)) is the minimum (resp. maximum) cosine similarity between the gradients of the two loss functions for an input x drawn from X with any label y ∈ Y. Besides the loss gradient similarity, in our analysis we will also show that the model smoothness is another key characteristic of ML models that affects the model transferability.
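The quantity inside the inf/sup of Definition 4 can be estimated empirically; a sketch of ours assuming PyTorch models (the inf/sup over X × Y is replaced by per-example values on a batch):

```python
import torch

def grad_cosine(model_f, model_g, loss_fn, x, y):
    """Per-example cosine similarity between the two models' loss gradients w.r.t. x."""
    units = []
    for model in (model_f, model_g):
        x_in = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_in), y)
        g = torch.autograd.grad(loss, x_in)[0].flatten(start_dim=1)
        units.append(g / g.norm(dim=1, keepdim=True).clamp_min(1e-12))
    return (units[0] * units[1]).sum(dim=1)   # values in [-1, 1]
```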
Definition 5. We call a model F β-smooth if

sup_{x₁, x₂ ∈ X, y ∈ Y} ‖∇_x ℓ_F(x₁, y) − ∇_x ℓ_F(x₂, y)‖₂ / ‖x₁ − x₂‖₂ ≤ β.
This smoothness definition is commonly used in the deep learning theory and optimization literature [3, 2], and is also named curvature bounds in the certified robustness literature [44]. It can be interpreted as the Lipschitz bound for the model's loss function gradient. We remark that larger β indicates that the model is less smooth, while smaller β means the model is smoother. In particular, when β = 0, the model is linear in the input space X.
2.2 Definition of Adversarial Transferability
Based on the model characteristics we explored above, next we will ask: Given two models, what is the natural and precise definition of adversarial transferability?
Definition 6 (Transferability). Consider an adversarial instance A_U(x) or A_T(x) constructed against a surrogate model F. With a given benign input x ∈ X, the transferability Tr between F and a target model G is defined as follows (adversarial target y_t ∈ Y):
• Untargeted: Tr(F ,G, x) = I[F(x) = G(x) = y ∧ F(AU (x)) 6= y ∧ G(AU (x)) 6= y].
• Targeted: Tr(F ,G, x, yt) = I[F(x) = G(x) = y ∧ F(AT (x)) = G(AT (x)) = yt].
Here we define the transferability at instance level, showing several conditions that must be satisfied for a transferable instance. For the untargeted attack, it requires that: (1) both the surrogate model
and target model make correct prediction on the benign input; and (2) both of them make incorrect predictions on the adversarial inputAU (x). The AU (x) is generated via the untargeted attack against the surrogate model F . For the targeted attack, it requires that: (1) both the surrogate and target model make correct prediction on benign input; and (2) both output the adversarial target yt ∈ Y on the adversarial input AT (x). The AT (x) is crafted against the surrogate model F . The predicates themselves do not require AU and AT to be explicitly constructed against the surrogate model F . It will be implied by attack effectiveness (Definition 2) on F in theorem statements. Note that the definition here is a predicate for a specific input x, and in the following analysis we will mainly use its distributional version: Pr (Tr(F ,G, x) = 1) and Pr (Tr(F ,G, x, yt) = 1).
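The untargeted predicate of Definition 6, written out for a batch as an empirical estimate of Pr(Tr(F, G, x) = 1) (a helper of ours, reusing an attack routine such as the PGD sketch above):

```python
def untargeted_transfer_rate(model_f, model_g, x, y, attack):
    """Tr = 1 iff both models are correct on x and both are fooled on A_U(x)."""
    x_adv = attack(model_f, x, y)             # adversarial examples crafted on F
    pred = lambda model, inp: model(inp).argmax(dim=1)
    both_correct = (pred(model_f, x) == y) & (pred(model_g, x) == y)
    both_fooled = (pred(model_f, x_adv) != y) & (pred(model_g, x_adv) != y)
    return (both_correct & both_fooled).float().mean().item()
```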
2.3 Lower Bound of Adversarial Transferability
Based on the general definition of transferability, in this section we will analyze how to lower bound the transferability for targeted attacks. The analysis for untargeted attacks has a similar form and is deferred to Theorem 3 in Appendix A.

Theorem 1 (Lower Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth. Let A_T be an (α, F)-effective targeted attack with perturbation ball ‖δ‖₂ ≤ ε and target label y_t ∈ Y. The transferability can be lower bounded by

Pr(Tr(F, G, x, y_t) = 1) ≥ (1−α) − (η_F + η_G) − ((1+α)ε + c_F(1−α)ε)/(c_G + ε) − ((1−α)ε)/(c_G + ε) · √(2 − 2S(ℓ_F, ℓ_G)),

where

c_F = max_{x∈X} min_{y∈Y} (ℓ_F(A_T(x), y) − ℓ_F(x, y_t) + βε²/2) / ‖∇_x ℓ_F(x, y_t)‖₂,
c_G = min_{x∈X} min_{y∈Y} (ℓ_G(A_T(x), y) − ℓ_G(x, y_t) − βε²/2) / ‖∇_x ℓ_G(x, y_t)‖₂.
Here ηF , ηG are the risks of models F and G respectively.
We defer the complete proof to Appendix C. In the proof, we first use a Taylor expansion to introduce the gradient terms, then relate the dot product to the cosine similarity of the loss gradients, and finally use Markov's inequality to derive the misclassification probability of G to complete the proof.
Implications. In Theorem 1, the only term which correlates both F and G is S(ℓ_F, ℓ_G), while all other terms depend on the individual models F or G. Thus, we study the relation between S(ℓ_F, ℓ_G) and Pr(Tr(F, G, x, y_t) = 1). Note that since βε²/2 is small compared with the perturbation radius ε, and the gradient magnitude ‖∇_x ℓ_G‖₂ in the denominator is relatively large, the quantity c_G is small. Moreover, 1 − α is large since the attack is typically effective against F. Thus, Pr(Tr(F, G, x, y_t) = 1) has the form C − k·√(1 − S(ℓ_F, ℓ_G)), where C and k are both positive constants. We can easily observe the positive correlation between the loss gradient similarity S(ℓ_F, ℓ_G) and the lower bound of adversarial transferability Pr(Tr(F, G, x, y_t) = 1). In the meantime, note that when β increases (i.e., the model becomes less smooth), in the transferability lower bound C − k·√(1 − S(ℓ_F, ℓ_G)) the C decreases and k increases. As a result, the lower bound in Theorem 1 decreases, which implies that when the model becomes less smooth (i.e., β becomes larger), the transferability lower bounds become looser for both targeted and untargeted attacks. In other words, when the model becomes smoother, the correlation between loss gradient similarity and the lower bound of transferability becomes stronger, which motivates us to constrain the model smoothness to increase the effect of limiting loss gradient similarity.
In addition to the ℓp-bounded attacks, we also derive a transferability lower bound for general attacks whose magnitude is bounded by the total variation distance of data distributions. We defer the detailed analysis and discussion to Appendix B.
2.4 Upper Bound of Adversarial Transferability
We next aim to upper bound the adversarial transferability. The upper bound for target attack is shown below; and the one for untargeted attack has a similar form in Theorem 4 in Appendix A. Theorem 2 (Upper Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth with gradient magnitude bounded by B, i.e., ‖∇x`F (x, y)‖ ≤ B and ‖∇x`G(x, y)‖ ≤ B for any x ∈ X , y ∈ Y . LetAT be an (α,F)-effective targeted attack with perturbation ball ‖δ‖2 ≤
and target label yt ∈ Y . When the attack radius is small such that `min− B ( 1 + √ 1+S(`F ,`G)
2
) −
β 2 > 0, the transferability can be upper bounded by
Pr (Tr(F ,G, x, yt) = 1) ≤ ξF + ξG `min − B ( 1 + √ 1+S(`F ,`G)
2
) − β 2 ,
where `min = min x∈X (`F (x, yt), `G(x, yt)). Here ξF and ξG are the empirical risks of models F and G respectively, defined relative to a differentiable loss.
We defer the complete proof to Appendix D. In the proof, we first take a Taylor expansion of the loss function at (x, y), then use the fact that the attack direction must be dissimilar to at least one of the models' gradients to upper bound the transferability probability.
Implications. In Theorem 2, we observe that as $\overline{S}(\ell_\mathcal{F},\ell_\mathcal{G})$ increases, the denominator decreases and hence the upper bound increases. Therefore, the upper loss gradient similarity $\overline{S}(\ell_\mathcal{F},\ell_\mathcal{G})$ and the upper bound on the transferability probability are positively correlated. This tendency is the same as for the lower bound. Note that α does not appear in the upper bound, since only completely successful attacks (α = 0) need to be considered when upper bounding the transferability.
Meanwhile, when the model becomes smoother (i.e., β decreases), the transferability upper bound decreases and becomes tighter. This implication again motivates us to constrain model smoothness. We further observe that a smaller gradient magnitude bound B also helps to tighten the upper bound. We will regularize both B and β to strengthen the effect of constraining the loss gradient similarity.
Note that the lower and upper bounds jointly show that a smaller β leads to a reduced gap between them and thus a stronger correlation between loss gradient similarity and transferability. Therefore, it is important to both constrain gradient similarity and increase model smoothness (decrease β) in order to reduce model transferability and improve ensemble robustness.
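As a companion to the lower-bound sketch above, the following evaluates the Theorem 2 upper bound; again, all numbers are illustrative assumptions. Note how the bound loosens as either the upper similarity S̄ or the smoothness constant β grows, and becomes vacuous once the small-radius condition fails.

```python
import math

def targeted_upper_bound(S_bar, xi_f, xi_g, l_min, eps, B, beta):
    """Theorem 2 upper bound; returns None when the small-radius
    condition (a positive denominator) does not hold."""
    denom = l_min - eps * B * (1 + math.sqrt((1 + S_bar) / 2)) - beta * eps ** 2
    if denom <= 0:
        return None  # the bound does not apply at this attack radius
    return min(1.0, (xi_f + xi_g) / denom)

for S_bar in (0.0, 0.5, 1.0):
    print(S_bar, targeted_upper_bound(S_bar, xi_f=0.1, xi_g=0.1,
                                      l_min=2.0, eps=0.03, B=10.0, beta=1.0))
```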
3 Improving Ensemble Robustness via Transferability Minimization
Motivated by our theoretical analysis, we propose a lightweight yet effective robust ensemble training approach, Transferability Reduced Smooth (TRS), to reduce the transferability among base models by enforcing low loss gradient similarity and model smoothness at the same time.
3.1 TRS Regularizer
In practice, it is challenging to regularize the model smoothness directly. Fortunately, inspired by deep learning theory and optimization [14, 37, 45], a succinct ℓ₂ regularization on the gradient terms ‖∇_x ℓ_F‖₂ and ‖∇_x ℓ_G‖₂ can reduce the magnitude of the gradients and thus improve model smoothness. For example, for common neural networks, the smoothness can be upper bounded by bounding the ℓ₂ magnitude of the gradients [45, Corollary 4]. An intuitive explanation is that ℓ₂ regularization on the gradient terms reduces the magnitude of the model's weights and thus limits its rate of change when non-linear activation functions are applied in the network. However, we find that directly regularizing the loss gradient magnitude with the ℓ₂ norm is not enough: a vanilla ℓ₂ regularizer such as ‖∇_x ℓ_F‖₂ only acts on the local region at the data point x, whereas our theoretical analysis requires model smoothness over a large decision region in order to control the adversarial transferability.
To address this challenge, we propose a min-max framework that searches for the "support" instance x̂ with the "worst" smoothness in the neighborhood of the data point x and regularizes the gradient magnitude there, resulting in the following model smoothness loss:
$$\mathcal{L}_{\mathrm{smooth}}(\mathcal{F},\mathcal{G},x,\delta)=\max_{\|\hat{x}-x\|_\infty\le\delta}\|\nabla_{\hat{x}}\ell_\mathcal{F}\|_2+\|\nabla_{\hat{x}}\ell_\mathcal{G}\|_2 \qquad (1)$$
where δ refers to the radius of the ℓ∞ ball around instance x within which we aim to ensure the model is smooth. In practice, we leverage projected gradient descent (PGD) to search for the support instances x̂. This model smoothness loss can be viewed as promoting margin-wise smoothness, i.e., improving the margin between non-smooth decision boundaries and the data point x. Another option is to promote point-wise smoothness, which only requires the loss landscape at the data point x itself to be smooth. In Section 4, we compare the ensemble robustness of the proposed min-max framework, which promotes margin-wise smoothness, with the naïve baseline that directly applies ℓ₂ regularization to each model's loss gradient to promote point-wise smoothness (i.e., Cos-ℓ₂).
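As a concrete illustration, here is a minimal PyTorch-style sketch of this inner maximization: an ℓ∞-ball PGD search for the support instance x̂ that maximizes the summed gradient norms in Eq. (1). The radius, step size, step count, and the use of cross-entropy as the per-model loss are our illustrative assumptions; the exact training configuration is given in Algorithm 1 of Appendix F.

```python
import torch
import torch.nn.functional as F

def grad_norm_sum(models, x, y):
    # Sum over models of the l2 norm of each loss gradient w.r.t. the input.
    # create_graph=True keeps the graph so this value stays differentiable.
    x = x.clone().requires_grad_(True)
    total = x.new_zeros(())
    for model in models:
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x, create_graph=True)[0]
        total = total + grad.flatten(1).norm(dim=1).sum()
    return total

def smoothness_loss(models, x, y, delta=8 / 255, steps=5, step_size=2 / 255):
    # PGD search for the "support" instance x_hat with the worst smoothness
    # inside the l_inf ball of radius delta around x, as in Eq. (1).
    x_hat = (x + torch.empty_like(x).uniform_(-delta, delta)).detach()
    for _ in range(steps):
        x_hat.requires_grad_(True)
        ascent = torch.autograd.grad(grad_norm_sum(models, x_hat, y), x_hat)[0]
        x_hat = (x_hat + step_size * ascent.sign()).detach()
        x_hat = torch.max(torch.min(x_hat, x + delta), x - delta)
    # Re-evaluate at x_hat so gradients flow back to the model parameters.
    return grad_norm_sum(models, x_hat, y)
```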
Given trained "smoothed" base models, we also decrease the model loss gradient similarity to reduce the overall adversarial transferability between base models. Among the various metrics that measure the similarity between the loss gradients of base models F and G, we find that the vanilla cosine similarity metric, which is also used in [23], may raise certain concerns. By minimizing the cosine similarity between ∇_x ℓ_F and ∇_x ℓ_G, the optimal case implies ∇_x ℓ_F = −∇_x ℓ_G, which means the two models behave in contradictory (rather than diverse) ways on instance x and thus leads to unstable model functionality. Considering this challenge, we instead use the absolute value of the cosine similarity between ∇_x ℓ_F and ∇_x ℓ_G as the similarity loss L_sim, whose optimal case implies orthogonal loss gradient vectors. For simplicity, we will always use the absolute value of the gradient cosine similarity as the indicator of gradient similarity in the description and evaluation below.
Based on our theoretical analysis, and particularly the model loss gradient similarity and model smoothness optimization above, we propose the TRS regularizer for a model pair (F, G) on input x as:
$$\mathcal{L}_{\mathrm{TRS}}(\mathcal{F},\mathcal{G},x,\delta)=\lambda_a\cdot\mathcal{L}_{\mathrm{sim}}+\lambda_b\cdot\mathcal{L}_{\mathrm{smooth}}=\lambda_a\cdot\left|\frac{(\nabla_x\ell_\mathcal{F})^\top(\nabla_x\ell_\mathcal{G})}{\|\nabla_x\ell_\mathcal{F}\|_2\cdot\|\nabla_x\ell_\mathcal{G}\|_2}\right|+\lambda_b\cdot\left[\max_{\|\hat{x}-x\|_\infty\le\delta}\|\nabla_{\hat{x}}\ell_\mathcal{F}\|_2+\|\nabla_{\hat{x}}\ell_\mathcal{G}\|_2\right].$$
Here $\nabla_x\ell_\mathcal{F}$ and $\nabla_x\ell_\mathcal{G}$ refer to the loss gradient vectors of base models F and G on input x, and λ_a, λ_b are the weight-balancing parameters.
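The full pairwise regularizer then combines the absolute gradient cosine similarity with the smoothness term. The sketch below builds on the previous block (reusing its imports and the smoothness_loss helper); the small eps constant for numerical stability is an implementation detail we assume.

```python
def similarity_loss(model_f, model_g, x, y):
    # Absolute cosine similarity between the two loss gradients at x;
    # its minimum (zero) corresponds to orthogonal loss gradients.
    x = x.clone().requires_grad_(True)
    g_f = torch.autograd.grad(F.cross_entropy(model_f(x), y), x,
                              create_graph=True)[0].flatten(1)
    g_g = torch.autograd.grad(F.cross_entropy(model_g(x), y), x,
                              create_graph=True)[0].flatten(1)
    return F.cosine_similarity(g_f, g_g, dim=1, eps=1e-12).abs().mean()

def trs_regularizer(model_f, model_g, x, y, delta, lam_a=1.0, lam_b=1.0):
    return (lam_a * similarity_loss(model_f, model_g, x, y)
            + lam_b * smoothness_loss([model_f, model_g], x, y, delta=delta))
```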
In Section 4, backed up by extensive empirical evaluation, we will systematically show that the local min-max training and the absolute value of the cosine similarity between the model loss gradients significantly improve the ensemble model robustness with negligible performance drop on benign accuracy, as well as reduce the adversarial transferability among base models.
3.2 TRS Training
We integrate the proposed TRS regularizer with a standard ensemble training loss, such as the Ensemble Cross-Entropy (ECE) loss, to maintain both the ensemble model's classification utility and its robustness by varying the balancing parameters λ_a and λ_b. Specifically, for an ensemble model consisting of N base models $\{\mathcal{F}_i\}_{i=1}^N$, given an input (x, y), our final training loss $\mathcal{L}_{\mathrm{train}}$ is defined as:
$$\mathcal{L}_{\mathrm{train}}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}_{\mathrm{CE}}(\mathcal{F}_i(x),y)+\frac{2}{N(N-1)}\sum_{i=1}^{N}\sum_{j=i+1}^{N}\mathcal{L}_{\mathrm{TRS}}(\mathcal{F}_i,\mathcal{F}_j,x,\delta)$$
where $\mathcal{L}_{\mathrm{CE}}(\mathcal{F}_i(x), y)$ refers to the cross-entropy loss between $\mathcal{F}_i(x)$, the output vector of model $\mathcal{F}_i$ given x, and the ground-truth label y. The weight of the $\mathcal{L}_{\mathrm{TRS}}$ regularizer can be adjusted by tuning λ_a and λ_b internally. We present one-epoch training pseudocode in Algorithm 1 of Appendix F. The detailed hyper-parameter settings and training criteria are discussed in Appendix F.
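Putting the pieces together, one training step over an ensemble might look like the following sketch (again building on the helpers above; the optimizer handling is our assumption, and Algorithm 1 in Appendix F is the authoritative procedure).

```python
import itertools

def trs_training_step(models, optimizer, x, y, delta, lam_a, lam_b):
    n = len(models)
    # Ensemble cross-entropy term, averaged over the N base models.
    ece = sum(F.cross_entropy(model(x), y) for model in models) / n
    # Pairwise TRS regularizer; 2 / (N (N - 1)) equals 1 / #pairs.
    pairs = list(itertools.combinations(models, 2))
    trs = sum(trs_regularizer(mf, mg, x, y, delta, lam_a, lam_b)
              for mf, mg in pairs) / len(pairs)
    loss = ece + trs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```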
4 Experimental Evaluation
In this section, we evaluate the robustness of the proposed TRS-ensemble model under both strong whitebox attacks, as well as blackbox attacks considering the gradient obfuscation concern [1]. We compare TRS with six state-of-the-art ensemble approaches. In addition, we evaluate the adversarial transferability among base models within an ensemble and empirically show that the TRS regularizer can indeed reduce transferability effectively. We also conduct extensive ablation studies to explore the effectiveness of different loss terms in TRS, as well as visualize the trained decision boundaries of different ensemble models to provide intuition on the model properties. We open source the code3 and provide a large-scale benchmark.
4.1 Experimental Setup
Datasets. We conduct our experiments on widely used image datasets, including the handwritten digit dataset MNIST [29] and the colour image datasets CIFAR-10 and CIFAR-100 [26].
3https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Baseline ensemble approaches. We mainly consider the standard ensemble, as well as state-of-the-art robust ensemble methods that claim to be resilient against adversarial attacks. Specifically, we consider the following baseline ensemble methods, which aim to promote diversity between base models: AdaBoost [19]; GradientBoost [16]; CKAE [25]; ADP [38]; GAL [23]; DVERGE [56]. Detailed descriptions of these approaches are in Appendix E. DVERGE, which to our best knowledge has achieved the state-of-the-art ensemble robustness, serves as the strongest baseline.
Whitebox robustness evaluation. We consider the following adversarial attacks to measure the ensembles' whitebox robustness: Fast Gradient Sign Method (FGSM) [17]; Basic Iterative Method (BIM) [34]; Momentum Iterative Method (MIM); Projected Gradient Descent (PGD); Auto-PGD (APGD); Carlini & Wagner Attack (CW); and Elastic-net Attack (EAD) [8]. We defer the detailed descriptions and parameter configurations of these attacks to Appendix E. We use Robust Accuracy as our evaluation metric for the whitebox setting, defined as the fraction of the whole test dataset on which the adversarial examples generated by a given attack are still predicted correctly.
Blackbox robustness evaluation. We also conduct a blackbox robustness analysis in our evaluation, since recent studies have shown that robust models which obfuscate gradients can still be fragile under blackbox attacks [1]. In the blackbox attack setting, we assume the attacker has no knowledge about the target ensemble, including the model architecture and parameters. In this case, the attacker can only craft adversarial examples based on several surrogate models and transfer them to the target victim ensemble. We follow the same blackbox attack evaluation setting as [56]: we choose three ensembles consisting of 3, 5, and 8 base models, trained with the standard Ensemble Cross-Entropy (ECE) loss, as our surrogate models. We apply a 50-step PGD attack with three random starts and two different loss functions (cross-entropy and CW loss) on each surrogate model to generate adversarial instances (i.e., each instance yields 18 attack attempts). For each instance, as long as at least one of these attempts successfully attacks the victim model, we count the instance as successfully attacked. We again use Robust Accuracy as our evaluation metric, here defined as the fraction of instances on which all attack attempts fail. We also consider three additional strong blackbox attacks designed to enhance transferability (i.e., ILA [21], DI²-FGSM [55], IRA [50]) in Appendix J, which lead to similar observations.
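For concreteness, this aggregation can be computed as in the sketch below, where success is a boolean matrix over instances and attack attempts (the variable names are ours).

```python
import numpy as np

def blackbox_robust_accuracy(success):
    # success: bool array of shape (num_instances, num_attempts), where
    # success[i, j] is True if attempt j fooled the victim on instance i.
    # An instance counts as robust only if every one of its attempts fails.
    attacked = success.any(axis=1)
    return 1.0 - attacked.mean()

# Example: 4 instances x 18 attempts, none succeed -> robust accuracy 1.0.
print(blackbox_robust_accuracy(np.zeros((4, 18), dtype=bool)))
```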
4.2 Experimental Results
In this section, we present both whitebox and blackbox robustness evaluation results, examine the adversarial transferability, and explore the impact of the different loss terms in TRS. Furthermore, in Appendix I.1 we visualize the decision boundary; in Appendix I.2 we show results of further improving the robustness of the TRS ensemble by integrating adversarial training; in Appendix I.3 we study the impact of each regularization term, L_sim and L_smooth; in Appendix I.4 we show the convergence of robust accuracy under large numbers of attack iterations to demonstrate the stability of the TRS ensemble's robustness; in Appendix I.5 we analyze the trade-off between the training cost and robustness of TRS by varying the PGD step size and the total number of steps in the L_smooth approximation. Whitebox robustness. Table 1 presents the Robust Accuracy of different ensembles against a range of whitebox attacks on the MNIST and CIFAR-10 datasets. We defer the results on CIFAR-100 to Appendix K, and measure the statistical stability of the reported robust accuracy in Appendix H. The results show that the proposed TRS ensemble significantly outperforms the other baselines, including the state-of-the-art DVERGE, against a range of attacks and perturbation budgets, and this performance gap becomes even larger under stronger attacks (e.g., the PGD attack). We note that the TRS ensemble is slightly less robust than DVERGE under small perturbations with the weak FGSM attack. We investigate this through the decision boundary analysis in Appendix I.1 and find that DVERGE tends to be more robust along the gradient direction, and is thus more robust against weak attacks that only follow the gradient direction (e.g., FGSM); TRS, in contrast, yields a model that is smoother along different directions, leading to more consistent predictions within a larger neighborhood of an input and thus more robustness against strong iterative attacks (e.g., PGD). This may be because DVERGE essentially performs adversarial training for the different base models and therefore protects the adversarial (gradient) direction, while TRS optimizes for a smooth ensemble with diverse base models. We also analyze the convergence of the attack algorithms in Appendix I.4, showing that when the number of attack iterations is large, both the ADP and GAL ensembles reach much lower robust accuracy against such iterative attacks, while both DVERGE and TRS remain robust.
Blackbox robustness. Figure 2 shows the Robust Accuracy of TRS compared with the baseline ensembles under different perturbation budgets ε. As we can see, the TRS ensemble achieves robust accuracy competitive with DVERGE when ε is very small, and TRS beats all the baselines significantly when ε is large. Specifically, the TRS ensemble achieves over 85% robust accuracy against the transfer attack with ε = 0.4 on MNIST, while the second-best ensemble (DVERGE) only achieves 20.2%. On CIFAR-10, the TRS ensemble achieves over 25% robust accuracy against the transfer attack when ε = 0.06, while all the other baseline ensembles achieve robust accuracy lower than 6%. This implies that our proposed TRS ensemble generalizes better in terms of robustness against large adversarial perturbations than the other ensembles. We provide further details on the robust accuracy under blackbox attacks in Appendix G.
Adversarial transferability. Figure 3 shows the adversarial transferability matrix of different ensembles against a 50-step PGD attack with ε = 0.3 for MNIST and ε = 0.04 for CIFAR-10. Cell (i, j) with i ≠ j reports the transfer attack success rate evaluated on the j-th base model using the i-th base model as the surrogate. A lower number in a cell indicates lower transferability and thus potentially higher ensemble robustness. The diagonal cell (i, i) is the i-th base model's attack success rate, which reflects the vulnerability of the single model. From these figures, we can see that while the base models are themselves vulnerable to adversarial attacks, only the DVERGE and TRS ensembles achieve low adversarial transferability among base models. We should also note that although GAL applies a gradient cosine similarity loss similar to our loss term L_sim, GAL still cannot achieve low adversarial transferability, owing to its lack of model smoothness enforcement, which is one of our key contributions in this paper.
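Such a matrix can be assembled with a few lines of code, sketched below; pgd_attack stands for any attack implementation with the indicated signature (a hypothetical helper), and, for simplicity, this version counts all misclassifications rather than restricting to initially correctly classified inputs.

```python
import torch

def transfer_matrix(models, x, y, pgd_attack):
    # Entry (i, j): success rate on model j of adversarial examples
    # crafted against surrogate model i; the diagonal is whitebox success.
    n = len(models)
    mat = torch.zeros(n, n)
    for i, surrogate in enumerate(models):
        x_adv = pgd_attack(surrogate, x, y)  # e.g., a 50-step PGD attack
        for j, victim in enumerate(models):
            pred = victim(x_adv).argmax(dim=1)
            mat[i, j] = (pred != y).float().mean()
    return mat
```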
Gradient similarity only vs. TRS. To further verify our theoretical analysis that model smoothness is part of the sufficient condition for low transferability, we consider applying only the similarity loss L_sim without the model smoothness loss L_smooth in TRS (i.e., λ_b = 0). The result is shown as the "Cos-only" method in Table 1. We observe that the resulting whitebox robustness is much worse than that of standard TRS. This matches our theoretical analysis: minimizing the gradient similarity alone cannot guarantee low adversarial transferability among base models, and thus leads to low ensemble robustness. In Appendix I.3 we investigate the impact of L_sim and L_smooth thoroughly, and show that although L_smooth contributes slightly more, both terms are critical to the final ensemble robustness.

ℓ₂ regularizer only vs. min-max model smoothing. To emphasize the importance of our proposed min-max training loss for promoting margin-wise model smoothness, we train a variant of the TRS ensemble, Cos-ℓ₂, in which we directly apply ℓ₂ regularization to ‖∇_x ℓ_F‖₂ and ‖∇_x ℓ_G‖₂. The results are shown as "Cos-ℓ₂" in Table 1. We observe that Cos-ℓ₂ achieves lower robust accuracy than TRS, which implies the necessity of regularizing the gradient magnitude not only at the local training points but also over their neighborhood regions to ensure overall model smoothness.
5 Conclusion
In this paper, we deliver an in-depth understanding of adversarial transferability. Theoretically, we provide both lower and upper bounds on transferability, which show that smooth models together with low loss gradient similarity guarantee low transferability. Inspired by our analysis, we propose TRS ensemble training to empirically reduce transferability by reducing loss gradient similarity and promoting model smoothness, yielding a significant improvement in ensemble robustness.
Acknowledgments and Disclosure of Funding
This work is partially supported by the NSF grant No.1910100, NSF CNS 20-46726 CAR, the Amazon Research Award, and the joint CATCH MURI-AUSMURI. | 1. What is the focus of the paper regarding adversarial attacks and transferability?
2. What are the strengths of the proposed approach, particularly in reducing model smoothness and loss gradient similarity?
3. What are the weaknesses of the paper, especially regarding experiment clarity and real-world applicability?
4. How does the reviewer assess the novelty and effectiveness of the proposed loss functions?
5. Do you have any concerns about the paper's assumptions or approximations? | Summary Of The Paper
Review | Summary Of The Paper
The manuscript describes the problem of adversarial transferability, the process of adversarial attacks that can be transferred across models, which can help scale adversarial attack strategies. As a guardrail against such attacks, it is desired that models are robust and protected against such attacks. Towards solving this, the authors first derive a lower and upper bound on a transferability metric that they define. The similarity between gradient losses was found to be the salient cross-model term affecting adversarial transferability. Subsequently, this pointed to a strategy of reducing model smoothness and loss gradient similarity across models as a way to reduce adversarial transferability.
Review
Motivated by reducing model smoothness, the authors propose a Transferability Reduced Smoothness (TRS) approach. The authors define adversarial attacks, both untargeted (goal is to mislabel) and targeted (goal is to specify an incorrect label). The definitions 1-5 and derivations are clear. The two central metrics driving the developments in the paper are the loss gradient similarity, which describes similarity between the gradient of losses for two models, and a beta-smoothness, which defines when a model is beta-smooth.
The two loss functions proposed, for smoothness and similarities, are fairly novel and appear to work well for the benchmarks. My two big concerns are that the experiments do not have the same level of clarity and detail as the theoretical counterpart, and the real-world use case of preventing adversarial attacks from an unknown source model are not addressed. My detailed comments below:
One key weakness in the utility of such a framework is that the training is across pairs of models while in reality the adversarial transferability has to be against unknown source models. I believe the authors should also look at source model agnostic solutions to make their proposed work useful in real-world settings.
“Luckily, inspired from deep learning theory and optimization [14, 36], succinct l2 regularization on the gradient terms can reduce the magnitude of gradients and thus improve model smoothness.“ This statement is not well supported and appears to be a very crude approximation to the beta-smoothness metric (Definition 4). Equation 1 only has weak similarity between the two models F and G. Which of Lsim or Lsmooth is contributing more to the losses?
The real meat of the paper starts only on page 5. I think the authors should condense pages 1-4 to within three pages to get to the new developments of the paper faster.
Experimental results section is rushed and comprises only 1 page. It is concerning that in Fig. 3, the results of the proposed method for some model pairs are significantly lower than the DVERGE baseline method.
I have read the authors' rebuttals as well as the other reviews. |
NIPS | Title
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Abstract
Adversarial Transferability is an intriguing property – an adversarial perturbation crafted against one model is often also effective against another model, even when the two models come from different model families or training processes. To better protect ML systems against adversarial attacks, several questions arise: what are the sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; we then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that promoting orthogonality between the gradients of base models alone is not enough to ensure low transferability; model smoothness is another important factor controlling transferability. We also provide lower and upper bounds on adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
1 Introduction
Machine learning systems, especially those based on deep neural networks (DNNs), have been widely applied in numerous applications [27, 18, 46, 10]. However, recent studies show that DNNs are vulnerable to adversarial examples, which are able to mislead DNNs by adding perturbations of small magnitude to the original instances [47, 17, 54, 52]. Several attack strategies have been proposed so far to generate such adversarial examples in both digital and physical environments [36, 32, 51, 53, 15, 28]. Intriguingly, though most attacks require access to the target models (whitebox attacks), several studies show that adversarial examples generated against one model are able to transferably
attack another target model with high probability, giving rise to blackbox attacks [39, 41, 31, 30, 57]. This property of adversarial transferability poses a great threat to DNNs.
Some work has been conducted to understand adversarial transferability [48, 33, 12]. However, a rigorous theoretical analysis or explanation of transferability is still lacking in the literature. In addition, although developing robust ensemble models to limit transferability shows great potential for practical robust learning systems, only empirical observations have been made in this line of research [38, 23, 56]. Can we deepen our theoretical understanding of transferability? Can we take advantage of such a rigorous theoretical understanding to reduce adversarial transferability and thereby build robust ensemble ML models?
In this paper, we focus on these two questions. From the theoretical side, we are interested in the sufficient conditions under which adversarial transferability can be lower and upper bounded. Our theoretical analysis provides the first theoretical interpretation of the sufficient conditions for transferability. Intuitively, as illustrated in Figure 1, we show that the commonly used gradient orthogonality (low cosine similarity) between learning models [12] does not directly imply low adversarial transferability; on the other hand, orthogonal and smooth models do limit transferability. In particular, we prove that gradient similarity and model smoothness are the key factors that both contribute to adversarial transferability, and that smooth models with orthogonal gradients guarantee low transferability.
Under an empirical lens, inspired by our theoretical analysis, we propose a simple yet effective approach, the Transferability Reduced Smooth (TRS) ensemble, to limit adversarial transferability between base models
within an ensemble and therefore improve its robustness. In particular, we reduce the loss gradient similarity between models as well as enforce the smoothness of models to introduce global model orthogonality.
We conduct extensive experiments to evaluate TRS in terms of model robustness against different strong whitebox and blackbox attacks, following established robustness evaluation procedures [5, 6, 49], as well as its ability to limit transferability across the base models. We compare the proposed TRS with existing state-of-the-art baseline ensemble approaches such as ADP [38], GAL [23], and DVERGE [56] on the MNIST, CIFAR-10, and CIFAR-100 datasets, and we show that (1) TRS achieves the state-of-the-art ensemble robustness, outperforming the others by a large margin; (2) TRS achieves efficient training; (3) TRS effectively reduces the transferability among the base models of an ensemble, which indicates its robustness against whitebox and blackbox attacks; (4) both loss terms in TRS contribute to the ensemble robustness by constraining different sufficient conditions of adversarial transferability.
Contributions. In this paper, we make the first attempt towards a theoretical understanding of adversarial transferability, and provide a practical approach for developing robust ML ensembles. (1) We provide
the lower and upper bounds of adversarial transferability. Both bounds show that the gradient similarity and model smoothness are the key factors contributing to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
(2) We propose a simple yet effective approach TRS to train a robust ensemble by jointly reducing the loss gradient similarity between base models and enforcing the model smoothness. The code is publicly available2.
(3) We conduct extensive experiments to evaluate TRS in terms of model robustness under different attack settings, showing that TRS achieves the state-of-the-art ensemble robustness and outperforms other baselines by a large margin. We also conduct ablation studies to further understand the contribution of different loss terms and verify our theoretical findings.
2https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Related Work
The adversarial transferability between different ML models is an intriguing research direction. Papernot et al. [40] explored the limitation of adversarial examples and showed that, while some instances are more difficult to manipulate than the others, these adversarial examples usually transfer from one model to another. Demontis et al. [12] later analyzed transferability for both evasion and poisoning attacks. Tramèr et al. [48] empirically investigated the subspace of adversarial examples that enables transferability between different models: though their results provide a non-zero probability guarantee on the transferability, they did not quantify the probability of adversarial transferability.
Leveraging transferability, different blackbox attacks have been proposed [41, 28, 15, 9]. To defend against these transferability-based attacks, Pang et al. [38] proposed a class-entropy-based adaptive diversity promoting approach to enhance ML ensemble robustness. Recently, Yang et al. [56] proposed DVERGE, a robust ensemble training approach that diversifies the non-robust features of base models via an adversarial training objective. However, these approaches do not provide theoretical justification for adversarial transferability, and there is still room to improve ML ensemble robustness based on an in-depth understanding of the sufficient conditions for transferability. In this paper, we aim to provide a theoretical understanding of transferability, and empirically compare the proposed robust ML ensemble inspired by our theoretical analysis with existing approaches to push for a tighter empirical upper bound on ensemble robustness.
2 Transferability of Adversarial Perturbation
In this section, we first introduce preliminaries, and then provide the upper and lower bounds of adversarial transferability by connecting adversarial transferability with different characteristics of models theoretically, which, in the next section, will allow us to explicitly minimize transferability by enforcing (or rewarding) certain properties of models.
Notations. We consider neural networks for classification tasks. Assume there are C classes, and let X be the input space of the model with Y = {1, 2, ..., C} the set of prediction classes (i.e., labels). We model the neural network by a mapping function F : X → Y. We will study the transferability between two models F and G. For brevity, hereinafter we mainly show the notation for F; the notation for G is analogous. Let the benign data (x, y) follow an unknown distribution D supported on (X, Y), and let P_X denote the marginal distribution on X. For a given input x ∈ X, the classification model F first predicts the confidence score for each label y ∈ Y, denoted as f_y(x). These confidence scores sum to 1, i.e., $\sum_{y\in\mathcal{Y}} f_y(x) = 1$ for all x ∈ X. The model F predicts the label with the highest confidence score: $\mathcal{F}(x) = \arg\max_{y\in\mathcal{Y}} f_y(x)$. For model F, there is usually a model-dependent loss function $\ell_\mathcal{F}: \mathcal{X}\times\mathcal{Y}\to\mathbb{R}_+$, which is the composition of a differentiable training loss (e.g., the cross-entropy loss) ℓ and the model's confidence scores f(·): $\ell_\mathcal{F}(x, y) := \ell(f(x), y)$, $(x, y) \in (\mathcal{X},\mathcal{Y})$. We further assume that $\mathcal{F}(x) = \arg\min_{y\in\mathcal{Y}} \ell_\mathcal{F}(x, y)$, i.e., the model predicts the label with minimum loss. This holds for common training losses.
In this paper, by default we will focus on models that are well-trained on the benign dataset, and such models are the most commonly encountered in practice, so their robustness is paramount. This means we will focus on the low risk classifiers, which we will formally define in Section 2.1.
How should we define an adversarial attack? For the threat model, we consider an attacker that adds an ℓ_p-norm-bounded perturbation to a data instance x ∈ X. In practice, there are two types of attacks, untargeted attacks and targeted attacks. The definition of adversarial transferability is slightly different under these attacks [33], and we consider both in our analysis. Definition 1 (Adversarial Attack). Given an input x ∈ X with true label y ∈ Y, F(x) = y. (1) An untargeted attack crafts $\mathcal{A}_U(x) = x + \delta$ to maximize $\ell_\mathcal{F}(x+\delta, y)$, where ‖δ‖_p ≤ ε. (2) A targeted attack with target label y_t ∈ Y crafts $\mathcal{A}_T(x) = x + \delta$ to minimize $\ell_\mathcal{F}(x+\delta, y_t)$, where ‖δ‖_p ≤ ε.
In this definition, ε is a pre-defined attack radius that limits the power of the attacker. We may refer to {δ : ‖δ‖_p ≤ ε} as the perturbation ball. The goal of the untargeted attack is to maximize the loss of the target model against the true label y. The goal of the targeted attack is to minimize the loss towards the adversarial target label y_t.
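As a concrete instance of Definition 1, the sketch below implements an untargeted ℓ∞ PGD attack in PyTorch; the radius, step size, step count, and the [0, 1] input range are illustrative assumptions on our part.

```python
import torch
import torch.nn.functional as F

def pgd_untargeted(model, x, y, eps=8 / 255, step_size=2 / 255, steps=50):
    # Maximize l_F(x + delta, y) subject to ||delta||_inf <= eps,
    # keeping the perturbed input inside the valid [0, 1] pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step_size * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```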
How do we formally define that an attack is effective?
Definition 2 ((α,F)-Effective Attack). Consider an input x ∈ X with true label y ∈ Y. An attack is (α,F)-effective in the untargeted scenario if Pr(F(A_U(x)) ≠ y) ≥ 1 − α. An attack is (α,F)-effective in the targeted scenario (with target class y_t) if Pr(F(A_T(x)) = y_t) ≥ 1 − α.
This definition captures the requirement that an adversarial instance generated by an effective attack strategy is able to mislead the target classification model (e.g., F) with probability at least 1 − α. The smaller α is, the more effective the attack is. In practice, this implies that on a finite sample of targets, the attack succeeds frequently but not always. Note that the definition is general for both whitebox [1, 12, 5] and blackbox attacks [42, 4].
2.1 Model Characteristics
Given two models F and G, what characteristics of F and G affect transferability under a given attack strategy? Intuitively, the more similar these two classifiers are, the larger the transferability will be. However, how can we define "similar", and how can we rigorously connect it to transferability? To answer these questions, we first define the risk and empirical risk of a given model to measure its performance on benign test data. Then, as DNNs are differentiable, we define model similarity based on their gradients. We then derive the lower and upper bounds of adversarial transferability based on the defined model risk and similarity measures. Definition 3 (Risk and Empirical Risk). For a given model F, let ℓ_F be its model-dependent loss function. Its risk is defined as η_F = Pr(F(x) ≠ y), and its empirical risk is defined as ξ_F = E[ℓ_F(x, y)].
The risk represents the model's error rate on benign test data, while the empirical risk is a non-negative value that also indicates the inaccuracy. For both, a higher value means worse performance on benign test data. The difference is that the risk has a more intuitive meaning, while the empirical risk is differentiable and is the quantity actually used during model training. Definition 4 (Loss Gradient Similarity). The lower loss gradient similarity $\underline{S}$ and upper loss gradient similarity $\overline{S}$ between two differentiable loss functions $\ell_\mathcal{F}$ and $\ell_\mathcal{G}$ are defined as:
$$\underline{S}(\ell_\mathcal{F},\ell_\mathcal{G})=\inf_{x\in\mathcal{X},\,y\in\mathcal{Y}}\frac{\nabla_x\ell_\mathcal{F}(x,y)\cdot\nabla_x\ell_\mathcal{G}(x,y)}{\|\nabla_x\ell_\mathcal{F}(x,y)\|_2\cdot\|\nabla_x\ell_\mathcal{G}(x,y)\|_2},\qquad \overline{S}(\ell_\mathcal{F},\ell_\mathcal{G})=\sup_{x\in\mathcal{X},\,y\in\mathcal{Y}}\frac{\nabla_x\ell_\mathcal{F}(x,y)\cdot\nabla_x\ell_\mathcal{G}(x,y)}{\|\nabla_x\ell_\mathcal{F}(x,y)\|_2\cdot\|\nabla_x\ell_\mathcal{G}(x,y)\|_2}.$$
$\underline{S}(\ell_\mathcal{F},\ell_\mathcal{G})$ ($\overline{S}(\ell_\mathcal{F},\ell_\mathcal{G})$) is the minimum (maximum) cosine similarity between the gradients of the two loss functions over inputs x drawn from X with any label y ∈ Y. Besides the loss gradient similarity, our analysis will also show that model smoothness is another key characteristic of ML models that affects model transferability.
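On finite data, the inf/sup in Definition 4 can only be approximated, e.g., by the min/max of per-example gradient cosine similarities over a dataset, as in this sketch (the batch-based approximation is our assumption, not part of the definition).

```python
import torch
import torch.nn.functional as F

def gradient_cosine(model_f, model_g, x, y):
    # Per-example cosine similarity between the two loss gradients at x.
    x1 = x.clone().requires_grad_(True)
    g_f = torch.autograd.grad(F.cross_entropy(model_f(x1), y), x1)[0].flatten(1)
    x2 = x.clone().requires_grad_(True)
    g_g = torch.autograd.grad(F.cross_entropy(model_g(x2), y), x2)[0].flatten(1)
    return F.cosine_similarity(g_f, g_g, dim=1)

def similarity_bounds(model_f, model_g, loader):
    # Empirical estimates of (S_lower, S_upper) over a data loader.
    cos = torch.cat([gradient_cosine(model_f, model_g, x, y)
                     for x, y in loader])
    return cos.min().item(), cos.max().item()
```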
Definition 5. We call a model F β-smooth if
$$\sup_{x_1,x_2\in\mathcal{X},\,y\in\mathcal{Y}}\frac{\|\nabla_x\ell_\mathcal{F}(x_1,y)-\nabla_x\ell_\mathcal{F}(x_2,y)\|_2}{\|x_1-x_2\|_2}\le\beta.$$
This smoothness definition is commonly used in the deep learning theory and optimization literature [3, 2], and is also called a curvature bound in the certified robustness literature [44]. It can be interpreted as a Lipschitz bound on the gradient of the model's loss function. We remark that a larger β indicates that the model is less smooth, while a smaller β means the model is smoother. In particular, when β = 0, the model is linear in the input space X.
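An empirical lower estimate of β can be obtained by sampling pairs of nearby inputs and taking the largest observed gradient-difference ratio; this is a heuristic probe we add for illustration, not part of the paper's method.

```python
import torch
import torch.nn.functional as F

def loss_grad(model, x, y):
    x = x.clone().requires_grad_(True)
    return torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

def estimate_beta(model, x, y, num_pairs=100, radius=1e-2):
    # Empirical lower estimate of beta (Definition 5): the largest observed
    # ratio ||grad(x1) - grad(x2)|| / ||x1 - x2|| over random nearby pairs.
    best = 0.0
    for _ in range(num_pairs):
        x1 = x + radius * torch.randn_like(x)
        x2 = x + radius * torch.randn_like(x)
        diff = loss_grad(model, x1, y) - loss_grad(model, x2, y)
        ratio = diff.flatten(1).norm(dim=1) / (x1 - x2).flatten(1).norm(dim=1)
        best = max(best, ratio.max().item())
    return best
```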
2.2 Definition of Adversarial Transferability
Based on the model characteristics we explored above, next we will ask: Given two models, what is the natural and precise definition of adversarial transferability?
Definition 6 (Transferability). Consider an adversarial instance A_U(x) or A_T(x) constructed against a surrogate model F. For a given benign input x ∈ X, the transferability Tr between F and a target model G is defined as follows (adversarial target y_t ∈ Y):
• Untargeted: $\mathrm{Tr}(\mathcal{F},\mathcal{G},x)=\mathbb{I}[\mathcal{F}(x)=\mathcal{G}(x)=y \,\wedge\, \mathcal{F}(\mathcal{A}_U(x))\neq y \,\wedge\, \mathcal{G}(\mathcal{A}_U(x))\neq y]$.

• Targeted: $\mathrm{Tr}(\mathcal{F},\mathcal{G},x,y_t)=\mathbb{I}[\mathcal{F}(x)=\mathcal{G}(x)=y \,\wedge\, \mathcal{F}(\mathcal{A}_T(x))=\mathcal{G}(\mathcal{A}_T(x))=y_t]$.
Here we define transferability at the instance level; the predicate makes explicit the conditions a transferable instance must satisfy. For the untargeted attack, it requires that: (1) both the surrogate model
and target model make correct prediction on the benign input; and (2) both of them make incorrect predictions on the adversarial inputAU (x). The AU (x) is generated via the untargeted attack against the surrogate model F . For the targeted attack, it requires that: (1) both the surrogate and target model make correct prediction on benign input; and (2) both output the adversarial target yt ∈ Y on the adversarial input AT (x). The AT (x) is crafted against the surrogate model F . The predicates themselves do not require AU and AT to be explicitly constructed against the surrogate model F . It will be implied by attack effectiveness (Definition 2) on F in theorem statements. Note that the definition here is a predicate for a specific input x, and in the following analysis we will mainly use its distributional version: Pr (Tr(F ,G, x) = 1) and Pr (Tr(F ,G, x, yt) = 1).
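Definition 6 translates directly into code; below is a sketch of the untargeted predicate evaluated on a batch (the helper names are ours).

```python
import torch

def untargeted_transfer(model_f, model_g, x, x_adv, y):
    # Indicator of Definition 6 (untargeted), evaluated per example:
    # both models correct on x, and both fooled on the adversarial x_adv.
    pf, pg = model_f(x).argmax(1), model_g(x).argmax(1)
    pf_adv, pg_adv = model_f(x_adv).argmax(1), model_g(x_adv).argmax(1)
    return (pf == y) & (pg == y) & (pf_adv != y) & (pg_adv != y)

# Pr(Tr = 1) is then estimated by untargeted_transfer(...).float().mean().
```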
This work is partially supported by the NSF grant No.1910100, NSF CNS 20-46726 CAR, the Amazon Research Award, and the joint CATCH MURI-AUSMURI. | 1. What is the main contribution of the paper regarding adversarial transferability?
2. What are the strengths of the paper, particularly in terms of its theoretical analysis and experimental results?
3. Do you have any concerns or suggestions regarding the paper's weaknesses, such as inconsistent terminology, lack of statistical comparison, or limitations in practical applications? | Summary Of The Paper
Review | Summary Of The Paper
This work first theoretically analyzes and outlines sufficient conditions for adversarial transferability between models, then proposes a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. They also provide the lower and upper bounds of adversarial transferability under certain conditions and propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability. Extensive experiments demonstrate that the proposed TRS outperforms all baselines significantly.
Review
[Strengths] +They make the first attempt towards a theoretical understanding of adversarial transferability, and provide an approach for developing robust ML ensembles. +A theoretical analysis is provided, which helps us to understand the transferability of adversarial examples between different models. +Extensive experiments demonstrate that the proposed TRS outperforms all baselines significantly. Ablation studies are provided.
[Weaknesses] -“white-box and blackbox attacks” is used in line 62, but “whitebox and blackbox attacks” is used in line 68, please write in the same way. “theoretical understanding” -> “a theoretical understanding”. -TRS requires smaller or comparable training time (line 65). But there is no statistical comparison of training times between different methods. -‖∇_x̂ ℓ_F‖_2 and ‖∇_x̂ ℓ_G‖_2 are used in Eq. (1). As I understand, it means that for training a robust model B, model A is needed to calculate the smooth loss. In the end, model B may be “robust” to transferable adversarial examples from model A, but vulnerable to adversarial examples generated from other models. This type of training is limited in practical application. |
NIPS | Title
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Abstract
Adversarial Transferability is an intriguing property – adversarial perturbation crafted against one model is also effective against another model, while these models are from different model families or training processes. To better protect ML systems against adversarial attacks, several questions are raised: what are the sufficient conditions for adversarial transferability and how to bound it? Is there a way to reduce the adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that only promoting the orthogonality between gradients of base models is not enough to ensure low transferability; in the meantime, the model smoothness is an important factor to control the transferability. We also provide the lower and upper bounds of adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
1 Introduction
Machine learning systems, especially those based on deep neural networks (DNNs), have been widely applied in numerous applications [27, 18, 46, 10]. However, recent studies show that DNNs are vulnerable to adversarial examples, which are able to mislead DNNs by adding small-magnitude perturbations to the original instances [47, 17, 54, 52]. Several attack strategies have been proposed so far to generate such adversarial examples in both digital and physical environments [36, 32, 51, 53, 15, 28]. Intriguingly, though most attacks require access to the target models (whitebox attacks), several studies show that adversarial examples generated against one model are able to transferably
∗The authors contributed equally.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
attack another target model with high probability, giving rise to blackbox attacks [39, 41, 31, 30, 57]. This property of adversarial transferability poses a great threat to DNNs.
Some work has been conducted to understand adversarial transferability [48, 33, 12]. However, a rigorous theoretical analysis or explanation for transferability is still lacking in the literature. In addition, although developing robust ensemble models to limit transferability shows great potential towards practical robust learning systems, only empirical observations have been made in this line of research [38, 23, 56]. Can we deepen our theoretical understanding of transferability? Can we take advantage of rigorous theoretical understanding to reduce the adversarial transferability and therefore generate robust ensemble ML models?
In this paper, we focus on these two questions. From the theoretical side, we are interested in the sufficient conditions under which the adversarial transferability can be lower bounded and upper bounded. Our theoretical arguments provide the first theoretical interpretation for the sufficient conditions of transferability. Intuitively, as illustrated in Figure 1, we show that the commonly used gradient orthogonality (low cosine similarity) between learning models [12] cannot directly imply low adversarial transferability; on the other hand, orthogonal and smoothed models would limit the transferability. In particular, we prove that the gradient similarity and model smoothness are the key factors that both contribute to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
Under an empirical lens, inspired by our theoretical analysis, we propose a simple yet effective approach, Transferability Reduced Smooth (TRS) ensemble to limit adversarial transferability between base models
within an ensemble and therefore improve its robustness. In particular, we reduce the loss gradient similarity between models as well as enforce the smoothness of models to introduce global model orthogonality.
We conduct extensive experiments to evaluate TRS in terms of the model robustness against different strong white-box and blackbox attacks following the robustness evaluation procedures [5, 6, 49], as well as its ability to limit transferability across the base models. We compare the proposed TRS with existing state-of-the-art baseline ensemble approaches such as ADP [38], GAL [23], and DVERGE [56] on MNIST, CIFAR-10, and CIFAR-100 datasets, and we show that (1) TRS achieves the state-of-the-art ensemble robustness, outperforming others by a large margin; (2) TRS achieves efficient training; (3) TRS effectively reduces the transferability among base models within an ensemble which indicates its robustness against whitebox and blackbox attacks; (4) Both loss terms in TRS contribute to the ensemble robustness by constraining different sufficient conditions of adversarial transferability.
Contributions. In this paper, we make the first attempt towards a theoretical understanding of adversarial transferability, and provide a practical approach for developing robust ML ensembles. (1) We provide a general theoretical analysis framework for adversarial transferability. We prove
the lower and upper bounds of adversarial transferability. Both bounds show that the gradient similarity and model smoothness are the key factors contributing to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
(2) We propose a simple yet effective approach TRS to train a robust ensemble by jointly reducing the loss gradient similarity between base models and enforcing the model smoothness. The code is publicly available2.
(3) We conduct extensive experiments to evaluate TRS in terms of model robustness under different attack settings, showing that TRS achieves the state-of-the-art ensemble robustness and outperforms other baselines by a large margin. We also conduct ablation studies to further understand the contribution of different loss terms and verify our theoretical findings.
2https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Related Work
The adversarial transferability between different ML models is an intriguing research direction. Papernot et al. [40] explored the limitation of adversarial examples and showed that, while some instances are more difficult to manipulate than the others, these adversarial examples usually transfer from one model to another. Demontis et al. [12] later analyzed transferability for both evasion and poisoning attacks. Tramèr et al. [48] empirically investigated the subspace of adversarial examples that enables transferability between different models: though their results provide a non-zero probability guarantee on the transferability, they did not quantify the probability of adversarial transferability.
Leveraging the transferability, different blackbox attacks have been proposed [41, 28, 15, 9]. To defend against these transferability based attacks, Pang et al. [38] proposed a class entropy based adaptive diversity promoting approach to enhance the ML ensemble robustness. Recently, Yang et al. [56] proposed DVERGE, a robust ensemble training approach that diversifies the non-robust features of base models via an adversarial training objective function. However, these approaches do not provide theoretical justification for adversarial transferability, and there is still room to improve the ML ensemble robustness based on in-depth understanding on the sufficient conditions of transferability. In this paper, we aim to provide a theoretical understanding of transferability, and empirically compare the proposed robust ML ensemble inspired by our theoretical analysis with existing approaches to push for a tighter empirical upper bound for the ensemble robustness.
2 Transferability of Adversarial Perturbation
In this section, we first introduce preliminaries, and then provide the upper and lower bounds of adversarial transferability by connecting adversarial transferability with different characteristics of models theoretically, which, in the next section, will allow us to explicitly minimize transferability by enforcing (or rewarding) certain properties of models.
Notations. We consider neural networks for classification tasks. Assume there are C classes, and let X be the input space of the model with Y = {1, 2, . . . , C} the set of prediction classes (i.e., labels). We model the neural network by a mapping function F : X → Y. We will study the transferability between two models F and G. For brevity, hereinafter we mainly show the derived notations for F; the notations for G are similar. Let the benign data (x, y) follow an unknown distribution D supported on (X, Y), and let PX denote the marginal distribution on X. For a given input x ∈ X, the classification model F first predicts the confidence score for each label y ∈ Y, denoted as fy(x). These confidence scores sum up to 1, i.e., Σ_{y∈Y} fy(x) = 1, ∀x ∈ X. The model F predicts the label with the highest confidence score: F(x) = argmax_{y∈Y} fy(x). For model F, there is usually a model-dependent loss function ℓF : X × Y → R+, which is the composition of a differentiable training loss (e.g., cross-entropy loss) ℓ and the model's confidence scores f(·): ℓF(x, y) := ℓ(f(x), y), (x, y) ∈ (X, Y). We further assume that F(x) = argmin_{y∈Y} ℓF(x, y), i.e., the model predicts the label with minimum loss. This holds for common training losses.
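To fix ideas, the notation above maps onto code as follows; a minimal PyTorch sketch where `net` is a stand-in classifier of our choosing (the paper does not prescribe an architecture), and argmin of the cross-entropy loss coincides with argmax of the softmax scores, as assumed in the text.

```python
import torch

# Stand-in for F: any network returning C logits (here a toy linear model).
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

def confidence_scores(x):
    """f_y(x) for every y: softmax scores that sum to 1 over the C classes."""
    return torch.softmax(net(x), dim=1)

def predict(x):
    """F(x) = argmax_y f_y(x)."""
    return confidence_scores(x).argmax(dim=1)

def model_loss(x, y):
    """l_F(x, y) = l(f(x), y), with l the cross-entropy training loss."""
    return torch.nn.functional.cross_entropy(net(x), y, reduction='none')
```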
In this paper, by default we will focus on models that are well-trained on the benign dataset, and such models are the most commonly encountered in practice, so their robustness is paramount. This means we will focus on the low risk classifiers, which we will formally define in Section 2.1.
How should we define an adversarial attack? For the threat model, we consider the attacker that adds an ℓp-norm bounded perturbation to a data instance x ∈ X. In practice, there are two types of attacks, untargeted attacks and targeted attacks. The definition of adversarial transferability is slightly different under these attacks [33], and we consider both in our analysis. Definition 1 (Adversarial Attack). Given an input x ∈ X with true label y ∈ Y, F(x) = y. (1) An untargeted attack crafts AU(x) = x + δ to maximize ℓF(x + δ, y), where ‖δ‖p ≤ ε. (2) A targeted attack with target label yt ∈ Y crafts AT(x) = x + δ to minimize ℓF(x + δ, yt), where ‖δ‖p ≤ ε.
In this definition, ε is a pre-defined attack radius that limits the power of the attacker. We may refer to {δ : ‖δ‖p ≤ ε} as the perturbation ball. The goal of the untargeted attack is to maximize the loss of the target model against its true label y. The goal of the targeted attack is to minimize the loss towards its adversarial target label yt.
How do we formally define that an attack is effective?
Definition 2 ((α,F)-Effective Attack). Consider an input x ∈ X with true label y ∈ Y. An attack is (α,F)-effective in the untargeted scenario if Pr(F(AU(x)) ≠ y) ≥ 1 − α. An attack is (α,F)-effective in the targeted scenario (with class target yt) if Pr(F(AT(x)) = yt) ≥ 1 − α.
This definition captures the requirement that an adversarial instance generated by an effective attack strategy is able to mislead the target classification model (e.g. F) with certain probability (1− α). The smaller the α is, the more effective the attack is. In practice, this implies that on a finite sample of targets, the attack success is frequent but not absolute. Note that the definition is general for both whitebox [1, 12, 5] and blackbox attacks [42, 4].
2.1 Model Characteristics
Given two models F and G, what are the characteristics of F and G that have an impact on transferability under a given attack strategy? Intuitively, the more similar these two classifiers are, the larger the transferability would be. However, how can we define “similar”, and how can we rigorously connect it to transferability? To answer these questions, we will first define the risk and empirical risk for a given model to measure its performance on benign test data. Then, as DNNs are differentiable, we will define model similarity based on their gradients. We will then derive the lower and upper bounds of adversarial transferability based on the defined model risk and similarity measures. Definition 3 (Risk and Empirical Risk). For a given model F, we let ℓF be its model-dependent loss function. Its risk is defined as ηF = Pr(F(x) ≠ y); and its empirical risk is defined as ξF = E[ℓF(x, y)].
The risk represents the model's error rate on benign test data, while the empirical risk is a non-negative value that also indicates the inaccuracy. For both of them, a higher value means worse performance on the benign test data. The difference is that the risk has a more intuitive meaning, while the empirical risk is differentiable and is actually used during model training. Definition 4 (Loss Gradient Similarity). The lower loss gradient similarity S̲ and the upper loss gradient similarity S̄ between two differentiable loss functions ℓF and ℓG are defined as:
S̲(ℓF, ℓG) = inf_{x∈X, y∈Y} [∇xℓF(x, y) · ∇xℓG(x, y)] / [‖∇xℓF(x, y)‖2 · ‖∇xℓG(x, y)‖2],
S̄(ℓF, ℓG) = sup_{x∈X, y∈Y} [∇xℓF(x, y) · ∇xℓG(x, y)] / [‖∇xℓF(x, y)‖2 · ‖∇xℓG(x, y)‖2].
S̲(ℓF, ℓG) (resp. S̄(ℓF, ℓG)) is the minimum (resp. maximum) cosine similarity between the gradients of the two loss functions for an input x drawn from X with any label y ∈ Y. Besides the loss gradient similarity, in our analysis we will also show that the model smoothness is another key characteristic of ML models that affects the model transferability.
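Empirically, these quantities can be probed on data: the sketch below (our own illustration, not the paper's code) computes the per-example gradient cosine similarity for two models; its minimum and maximum over a dataset then approximate S̲ and S̄.

```python
import torch

def loss_grad(model, x, y):
    """Per-example gradient of the cross-entropy loss w.r.t. the input x."""
    x = x.clone().detach().requires_grad_(True)
    # 'sum' reduction so each example's input gradient equals the gradient of its own loss.
    loss = torch.nn.functional.cross_entropy(model(x), y, reduction='sum')
    return torch.autograd.grad(loss, x)[0].flatten(start_dim=1)

def grad_cosine(model_f, model_g, x, y):
    """Per-example cosine similarity between grad l_F and grad l_G."""
    gf, gg = loss_grad(model_f, x, y), loss_grad(model_g, x, y)
    return torch.nn.functional.cosine_similarity(gf, gg, dim=1)
```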
Definition 5. We call a model F β-smooth if sup_{x1,x2∈X, y∈Y} ‖∇xℓF(x1, y) − ∇xℓF(x2, y)‖2 / ‖x1 − x2‖2 ≤ β.
This smoothness definition is commonly used in the deep learning theory and optimization literature [3, 2], and is also named curvature bounds in the certified robustness literature [44]. It can be interpreted as the Lipschitz bound for the model's loss function gradient. We remark that a larger β indicates that the model is less smooth, while a smaller β means the model is smoother. In particular, when β = 0, the model is linear in the input space X.
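For a trained network, β is rarely known in closed form, but the supremum in Definition 5 can be lower-bounded by sampling; a hedged diagnostic sketch of ours, where the sampling radius and trial count are arbitrary illustrative choices:

```python
import torch

def input_grad(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y, reduction='sum')
    return torch.autograd.grad(loss, x)[0].flatten(start_dim=1)

def probe_beta(model, x, y, radius=1e-2, trials=10):
    """Any sampled ratio ||grad(x1) - grad(x2)|| / ||x1 - x2|| lower-bounds beta."""
    best = 0.0
    for _ in range(trials):
        x1 = x + radius * torch.randn_like(x)
        x2 = x + radius * torch.randn_like(x)
        num = (input_grad(model, x1, y) - input_grad(model, x2, y)).norm(dim=1)
        den = (x1 - x2).flatten(start_dim=1).norm(dim=1)
        best = max(best, (num / den).max().item())
    return best
```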
2.2 Definition of Adversarial Transferability
Based on the model characteristics we explored above, next we will ask: Given two models, what is the natural and precise definition of adversarial transferability?
Definition 6 (Transferability). Consider an adversarial instance AU(x) or AT(x) constructed against a surrogate model F. With a given benign input x ∈ X, the transferability Tr between F and a target model G is defined as follows (adversarial target yt ∈ Y):
• Untargeted: Tr(F, G, x) = I[F(x) = G(x) = y ∧ F(AU(x)) ≠ y ∧ G(AU(x)) ≠ y].
• Targeted: Tr(F ,G, x, yt) = I[F(x) = G(x) = y ∧ F(AT (x)) = G(AT (x)) = yt].
Here we define transferability at the instance level, showing that several conditions must be satisfied for an instance to be transferable. For the untargeted attack, it requires that: (1) both the surrogate model and the target model make correct predictions on the benign input; and (2) both of them make incorrect predictions on the adversarial input AU(x). The AU(x) is generated via the untargeted attack against the surrogate model F. For the targeted attack, it requires that: (1) both the surrogate and target models make correct predictions on the benign input; and (2) both output the adversarial target yt ∈ Y on the adversarial input AT(x). The AT(x) is crafted against the surrogate model F. The predicates themselves do not require AU and AT to be explicitly constructed against the surrogate model F; this will be implied by attack effectiveness (Definition 2) on F in the theorem statements. Note that the definition here is a predicate for a specific input x, and in the following analysis we will mainly use its distributional version: Pr(Tr(F, G, x) = 1) and Pr(Tr(F, G, x, yt) = 1).
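The untargeted predicate translates directly into an evaluation routine; a sketch under our own conventions, where `x_adv` is assumed to have been crafted against `model_f` elsewhere by some attack A_U:

```python
import torch

@torch.no_grad()
def untargeted_transfer_rate(model_f, model_g, x, x_adv, y):
    """Empirical Pr(Tr(F, G, x) = 1): both correct on x, both fooled on A_U(x)."""
    pf, pg = model_f(x).argmax(dim=1), model_g(x).argmax(dim=1)
    qf, qg = model_f(x_adv).argmax(dim=1), model_g(x_adv).argmax(dim=1)
    tr = (pf == y) & (pg == y) & (qf != y) & (qg != y)
    return tr.float().mean().item()
```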
2.3 Lower Bound of Adversarial Transferability
Based on the general definition of transferability, in this section we analyze how to lower bound the transferability for targeted attacks. The analysis for untargeted attacks has a similar form and is deferred to Theorem 3 in Appendix A. Theorem 1 (Lower Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth. Let AT be an (α,F)-effective targeted attack with perturbation ball ‖δ‖2 ≤ ε and target label yt ∈ Y. The transferability can be lower bounded by

Pr(Tr(F, G, x, yt) = 1) ≥ (1 − α) − (ηF + ηG) − [(1 + α)ε + cF(1 − α)] / (cG + ε) − [(1 − α)ε / (cG + ε)] · √(2 − 2S̲(ℓF, ℓG)),

where

cF = max_{x∈X} min_{y∈Y} [ℓF(AT(x), y) − ℓF(x, yt) + βε²/2] / ‖∇xℓF(x, yt)‖2,
cG = min_{x∈X} min_{y∈Y} [ℓG(AT(x), y) − ℓG(x, yt) − βε²/2] / ‖∇xℓG(x, yt)‖2.

Here ηF, ηG are the risks of models F and G respectively.
We defer the complete proof to Appendix C. In the proof, we first use a Taylor expansion to introduce the gradient terms, then relate the dot product to the cosine similarity of the loss gradients, and finally use Markov's inequality to derive the misclassification probability of G to complete the proof.
Implications. In Theorem 1, the only term that correlates both F and G is S̲(ℓF, ℓG), while all other terms depend on the individual models F or G. Thus, we study the relation between S̲(ℓF, ℓG) and Pr(Tr(F, G, x, yt) = 1). Note that since β is small compared with the perturbation radius ε, and the gradient magnitude ‖∇xℓG‖2 in the denominator is relatively large, the quantity cG is small. Moreover, 1 − α is large since the attack is typically effective against F. Thus, Pr(Tr(F, G, x, yt) = 1) has the form C − k√(1 − S̲(ℓF, ℓG)), where C and k are both positive constants. We can easily observe the positive correlation between the loss gradient similarity S̲(ℓF, ℓG) and the lower bound of adversarial transferability Pr(Tr(F, G, x, yt) = 1). In the meantime, note that when β increases (i.e., the model becomes less smooth), in the transferability lower bound C − k√(1 − S̲(ℓF, ℓG)), C decreases and k increases. As a result, the lower bound in Theorem 1 decreases, which implies that when the model becomes less smooth (i.e., β becomes larger), the transferability lower bounds become looser for both targeted and untargeted attacks. In other words, when the model becomes smoother, the correlation between the loss gradient similarity and the lower bound of transferability becomes stronger, which motivates us to constrain the model smoothness to increase the effect of limiting the loss gradient similarity.
In addition to the ℓp-bounded attacks, we also derive a transferability lower bound for general attacks whose magnitude is bounded by the total variation distance of data distributions. We defer the detailed analysis and discussion to Appendix B.
2.4 Upper Bound of Adversarial Transferability
We next aim to upper bound the adversarial transferability. The upper bound for targeted attacks is shown below; the one for untargeted attacks has a similar form in Theorem 4 in Appendix A. Theorem 2 (Upper Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth with gradient magnitude bounded by B, i.e., ‖∇xℓF(x, y)‖ ≤ B and ‖∇xℓG(x, y)‖ ≤ B for any x ∈ X, y ∈ Y. Let AT be an (α,F)-effective targeted attack with perturbation ball ‖δ‖2 ≤ ε and target label yt ∈ Y. When the attack radius ε is small such that ℓmin − εB(1 + √((1 + S̄(ℓF, ℓG))/2)) − βε² > 0, the transferability can be upper bounded by

Pr(Tr(F, G, x, yt) = 1) ≤ (ξF + ξG) / [ℓmin − εB(1 + √((1 + S̄(ℓF, ℓG))/2)) − βε²],

where ℓmin = min_{x∈X} (ℓF(x, yt), ℓG(x, yt)). Here ξF and ξG are the empirical risks of models F and G respectively, defined relative to a differentiable loss.
We defer the complete proof to Appendix D. In the proof, we first take a Taylor expansion of the loss function at (x, y), then use the fact that the attack direction will be dissimilar to at least one of the model gradients to upper bound the transferability probability.
Implications. In Theorem 2, we observe that as S̄(ℓF, ℓG) increases, the denominator decreases and hence the upper bound increases. Therefore, the upper loss gradient similarity S̄(ℓF, ℓG) and the upper bound of the transferability probability are positively correlated. This tendency is the same as that in the lower bound. Note that α does not appear in the upper bound since only completely successful attacks (α = 0%) need to be considered here to upper bound the transferability.
Meanwhile, when the model becomes smoother (i.e., β decreases), the transferability upper bound decreases and becomes tighter. This implication again motivates us to constrain the model smoothness. We further observe that smaller magnitude of gradient, i.e., B, also helps to tighten the upper bound. We will regularize both B and β to increase the effect of constraining loss gradients similarity.
Note that the lower bound and upper bound jointly show that a smaller β leads to a reduced gap between the lower and upper bounds and thus a stronger correlation between loss gradient similarity and transferability. Therefore, it is important to both constrain gradient similarity and increase model smoothness (decrease β) to reduce model transferability and improve ensemble robustness.
3 Improving Ensemble Robustness via Transferability Minimization
Motivated by our theoretical analysis, we propose a lightweight yet effective robust ensemble training approach, Transferability Reduced Smooth (TRS), to reduce the transferability among base models by enforcing low loss gradient similarity and model smoothness at the same time.
3.1 TRS Regularizer
In practice, it is challenging to directly regularize the model smoothness. Luckily, inspired by deep learning theory and optimization [14, 37, 45], a succinct ℓ2 regularization on the gradient terms ‖∇xℓF‖2 and ‖∇xℓG‖2 can reduce the magnitude of gradients and thus improve model smoothness. For example, for common neural networks, the smoothness can be upper bounded via bounding the ℓ2 magnitude of gradients [45, Corollary 4]. An intuitive explanation is that the ℓ2 regularization on the gradient terms reduces the magnitude of the model's weights, and thus limits its changing rate when non-linear activation functions are applied to the neural network model. However, we find that directly regularizing the loss gradient magnitude with the ℓ2 norm is not enough, since a vanilla ℓ2 regularizer such as ‖∇xℓF‖2 only focuses on the local region at the data point x, while it is necessary to ensure model smoothness over a large decision region to control the adversarial transferability based on our theoretical analysis.
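For reference, the naive point-wise penalty just described can be written as follows; this is our own sketch of the style of regularization used by the Cos-ℓ2 baseline evaluated in Section 4, not the released implementation.

```python
import torch

def pointwise_grad_penalty(model_f, model_g, x, y):
    """Vanilla penalty ||grad l_F||_2 + ||grad l_G||_2, evaluated only at x itself."""
    x = x.clone().detach().requires_grad_(True)
    gf = torch.autograd.grad(torch.nn.functional.cross_entropy(model_f(x), y),
                             x, create_graph=True)[0]  # keep graph so weights get gradients
    gg = torch.autograd.grad(torch.nn.functional.cross_entropy(model_g(x), y),
                             x, create_graph=True)[0]
    return gf.flatten(1).norm(dim=1).mean() + gg.flatten(1).norm(dim=1).mean()
```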
To address this challenge, we propose a min-max framework to regularize the “support” instance x̂ with “worst” smoothness in the neighborhood region of data point x, which results in the following model smoothness loss:
Lsmooth(F ,G, x, δ) = max ‖x̂−x‖∞≤δ ‖∇x̂`F‖2 + ‖∇x̂`G‖2 (1)
where δ refers to the radius of the ℓ∞ ball around instance x within which we aim to ensure the model is smooth. In practice, we leverage projected gradient descent optimization to search for the support instances x̂. This model smoothness loss can be viewed as promoting margin-wise smoothness, i.e., improving the margin between non-smooth decision boundaries and the data point x. Another option is to promote point-wise smoothness, which only requires the loss landscape at the data point x itself to be smooth. In Section 4, we compare the ensemble robustness of the proposed min-max framework, which promotes margin-wise smoothness, with the naïve baseline that directly applies ℓ2 regularization on each model's loss gradient terms to promote point-wise smoothness (i.e., Cos-ℓ2).
Given trained “smoothed” base models, we also decrease the model loss gradient similarity to reduce the overall adversarial transferability between base models. Among the various metrics that measure the similarity between the loss gradients of base models F and G, we find that the vanilla cosine similarity metric, which is also used in [23], may lead to certain concerns. By minimizing the cosine similarity between ∇xℓF and ∇xℓG, the optimal case implies ∇xℓF = −∇xℓG, which means the two models behave contradictorily (rather than diversely) on instance x and thus results in turbulent model functionality. Considering this challenge, we leverage the absolute value of the cosine similarity between ∇xℓF and ∇xℓG as the similarity loss Lsim; its optimal case implies orthogonal loss gradient vectors. For simplicity, we will always use the absolute value of the gradient cosine similarity as the indicator of gradient similarity in our later description and evaluation.
Based on our theoretical analysis, and particularly the model loss gradient similarity and model smoothness optimization above, we propose the TRS regularizer for a model pair (F, G) on input x as:

LTRS(F, G, x, δ) = λa · Lsim + λb · Lsmooth = λa · |(∇xℓF)⊤(∇xℓG)| / (‖∇xℓF‖2 · ‖∇xℓG‖2) + λb · [ max_{‖x̂−x‖∞ ≤ δ} (‖∇x̂ℓF‖2 + ‖∇x̂ℓG‖2) ].

Here ∇xℓF and ∇xℓG refer to the loss gradient vectors of base models F and G on input x, and λa, λb are the weight balancing parameters.
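A minimal PyTorch sketch of L_TRS might look as follows; the inner maximization is approximated with a few PGD-style ascent steps on the support instance x̂, and the step schedule and default hyper-parameters shown are our illustrative choices rather than the configuration of Algorithm 1 in Appendix F.

```python
import torch
import torch.nn.functional as F

def grad_norm_sum(model_f, model_g, x_hat, y):
    """||grad l_F||_2 + ||grad l_G||_2 at x_hat; create_graph=True keeps the graph
    so the result stays differentiable w.r.t. both x_hat and the model weights."""
    gf = torch.autograd.grad(F.cross_entropy(model_f(x_hat), y), x_hat, create_graph=True)[0]
    gg = torch.autograd.grad(F.cross_entropy(model_g(x_hat), y), x_hat, create_graph=True)[0]
    return gf.flatten(1).norm(dim=1).mean() + gg.flatten(1).norm(dim=1).mean()

def trs_regularizer(model_f, model_g, x, y, delta=0.03, lam_a=1.0, lam_b=1.0, steps=5):
    # L_sim: absolute cosine similarity between the two loss gradients at x.
    x_req = x.clone().detach().requires_grad_(True)
    gf = torch.autograd.grad(F.cross_entropy(model_f(x_req), y), x_req, create_graph=True)[0]
    gg = torch.autograd.grad(F.cross_entropy(model_g(x_req), y), x_req, create_graph=True)[0]
    l_sim = F.cosine_similarity(gf.flatten(1), gg.flatten(1), dim=1).abs().mean()

    # L_smooth: a few ascent steps approximate the inner max over the l_inf ball
    # of radius delta, searching for the least smooth support instance x_hat.
    x_hat = x.clone().detach()
    for _ in range(steps):
        x_hat.requires_grad_(True)
        ascent = torch.autograd.grad(grad_norm_sum(model_f, model_g, x_hat, y), x_hat)[0]
        x_hat = x_hat.detach() + (delta / steps) * ascent.sign()
        x_hat = x + (x_hat - x).clamp(-delta, delta)  # project back into the ball
    l_smooth = grad_norm_sum(model_f, model_g, x_hat.detach().requires_grad_(True), y)

    return lam_a * l_sim + lam_b * l_smooth
```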
In Section 4, backed up by extensive empirical evaluation, we will systematically show that the local min-max training and the absolute value of the cosine similarity between the model loss gradients significantly improve the ensemble model robustness with negligible performance drop on benign accuracy, as well as reduce the adversarial transferability among base models.
3.2 TRS Training
We integrate the proposed TRS regularizer with the standard ensemble training loss, such as the Ensemble Cross-Entropy (ECE) loss, to maintain both the ensemble model's classification utility and robustness by varying the balancing parameters λa and λb. Specifically, for an ensemble model consisting of N base models {Fi}_{i=1}^{N}, given an input (x, y), our final training loss Ltrain is defined as:

Ltrain = (1/N) Σ_{i=1}^{N} LCE(Fi(x), y) + (2/(N(N − 1))) Σ_{i=1}^{N} Σ_{j=i+1}^{N} LTRS(Fi, Fj, x, δ)

where LCE(Fi(x), y) refers to the cross-entropy loss between Fi(x), the output vector of model Fi given x, and the ground-truth label y. The weight of the LTRS regularizer can be adjusted by tuning λa and λb internally. We present one-epoch training pseudocode in Algorithm 1 of Appendix F. The detailed hyper-parameter settings and training criterion are discussed in Appendix F.
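One optimization step over this objective could then be sketched as below; `trs_regularizer` is assumed to follow the signature from the previous sketch, and the optimizer and data pipeline are placeholders of ours.

```python
import itertools
import torch
import torch.nn.functional as F

def train_step(models, optimizer, x, y, delta, lam_a, lam_b, trs_regularizer):
    """One step on L_train: mean cross-entropy over base models plus the
    averaged TRS regularizer over all base-model pairs (a sketch)."""
    n = len(models)
    ece = sum(F.cross_entropy(m(x), y) for m in models) / n
    pairs = sum(trs_regularizer(fi, fj, x, y, delta, lam_a, lam_b)
                for fi, fj in itertools.combinations(models, 2))
    loss = ece + (2.0 / (n * (n - 1))) * pairs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```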
4 Experimental Evaluation
In this section, we evaluate the robustness of the proposed TRS ensemble model under both strong whitebox attacks and blackbox attacks, considering the gradient obfuscation concern [1]. We compare TRS with six state-of-the-art ensemble approaches. In addition, we evaluate the adversarial transferability among base models within an ensemble and empirically show that the TRS regularizer can indeed reduce transferability effectively. We also conduct extensive ablation studies to explore the effectiveness of different loss terms in TRS, as well as visualize the trained decision boundaries of different ensemble models to provide intuition on the model properties. We open source the code3 and provide a large-scale benchmark.
4.1 Experimental Setup
Datasets. We conduct our experiments on widely-used image datasets, including the hand-written digit dataset MNIST [29] and the colour image datasets CIFAR-10 and CIFAR-100 [26].
3https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Baseline ensemble approaches. We mainly consider the standard ensemble, as well as the state-of-the-art robust ensemble methods that claim to be resilient against adversarial attacks. Specifically, we consider the following baseline ensemble methods, which aim to promote the diversity between base models: AdaBoost [19]; GradientBoost [16]; CKAE [25]; ADP [38]; GAL [23]; DVERGE [56]. Detailed descriptions of these approaches are in Appendix E. DVERGE, which has achieved the state-of-the-art ensemble robustness to the best of our knowledge, serves as the strongest baseline.
Whitebox robustness evaluation. We consider the following adversarial attacks to measure the ensembles' whitebox robustness: Fast Gradient Sign Method (FGSM) [17]; Basic Iterative Method (BIM) [34]; Momentum Iterative Method (MIM); Projected Gradient Descent (PGD); Auto-PGD (APGD); Carlini & Wagner Attack (CW); Elastic-net Attack (EAD) [8]. We leave the detailed description and parameter configuration of these attacks to Appendix E. We use Robust Accuracy as our evaluation metric for the whitebox setting, defined as the ratio of correctly predicted adversarial examples generated by different attacks among the whole test dataset.
Blackbox robustness evaluation. We also conduct a blackbox robustness analysis in our evaluation, since recent studies have shown that robust models which obfuscate gradients could still be fragile under blackbox attacks [1]. In the blackbox attack setting, we assume the attacker has no knowledge about the target ensemble, including the model architecture and parameters. In this case, the attacker is only able to craft adversarial examples based on several surrogate models and transfer them to the target victim ensemble. We follow the same blackbox attack evaluation setting as in [56]: we choose three ensembles consisting of 3, 5, and 8 base models, trained with the standard Ensemble Cross-Entropy (ECE) loss, as our surrogate models. We apply a 50-step PGD attack with three random starts and two different loss functions (CrossEntropy and CW loss) on each surrogate model to generate adversarial instances (i.e., for each instance we will have 18 attack attempts). For each instance, as long as one of these attack attempts successfully attacks the victim model, we count it as a successful attack. In this case, we use Robust Accuracy as our evaluation metric, defined as the number of unsuccessful attack attempts divided by the number of all attacks. We also consider three additional strong blackbox attacks designed to boost transferability (i.e., ILA [21], DI2-FGSM [55], IRA [50]) in Appendix J, which leads to similar observations.
4.2 Experimental Results
In this section, we present both whitebox and blackbox robustness evaluation results, examine the adversarial transferability, and explore the impacts of different loss terms in TRS. Furthermore, in Appendix I.1, we visualize the decision boundary; in Appendix I.2, we show results of further improving the robustness of the TRS ensemble by integrating adversarial training; in Appendix I.3, we study the impacts of each of the regularization terms Lsim and Lsmooth; in Appendix I.4, we show the convergence of robust accuracy under large attack iterations to demonstrate the robustness stability of the TRS ensemble; in Appendix I.5, we analyze the trade-off between the training cost and robustness of TRS by varying the PGD step size and the total number of steps within the Lsmooth approximation.

Whitebox robustness. Table 1 presents the Robust Accuracy of different ensembles against a range of whitebox attacks on the MNIST and CIFAR-10 datasets. We defer results on CIFAR-100 to Appendix K, and measure the statistical stability of our reported robust accuracy in Appendix H. Results show that the proposed TRS ensemble significantly outperforms other baselines, including the state-of-the-art DVERGE, against a range of attacks and perturbation budgets, and the performance gap can be even larger under stronger adversarial attacks (e.g., the PGD attack). We note that the TRS ensemble is slightly less robust than DVERGE under small perturbations with the weak FGSM attack. We investigate this based on the decision boundary analysis in Appendix I.1, and find that DVERGE tends to be more robust along the gradient direction and thus more robust against weak attacks which only focus on the gradient direction (e.g., FGSM), while TRS yields a smoother model along different directions, leading to more consistent predictions within a larger neighborhood of an input and thus more robustness against strong iterative attacks (e.g., PGD). This may be because DVERGE essentially performs adversarial training for different base models and therefore protects the adversarial (gradient) direction, while TRS optimizes to train a smooth ensemble with diverse base models. We also analyze the convergence of attack algorithms in Appendix I.4, showing that when the number of attack iterations is large, both the ADP and GAL ensembles achieve much lower robust accuracy against such iterative attacks, while both DVERGE and TRS remain robust.
Blackbox robustness. Figure 2 shows the Robust Accuracy performance of TRS compared with different baseline ensembles under different perturbation budgets ε. As we can see, the TRS ensemble achieves competitive robust accuracy with DVERGE when ε is very small, and TRS beats all the baselines significantly when ε is large. Precisely speaking, the TRS ensemble achieves over 85% robust accuracy against transfer attacks with ε = 0.4 on MNIST, while the second-best ensemble (DVERGE) only achieves 20.2%. Also on CIFAR-10, the TRS ensemble achieves over 25% robust accuracy against transfer attacks when ε = 0.06, while all the other baseline ensembles achieve robust accuracy lower than 6%. This implies that our proposed TRS ensemble has stronger generalization ability in terms of robustness against large adversarial attacks compared with other ensembles. We provide more details of the robust accuracy under blackbox attacks in Appendix G.
Adversarial transferability. Figure 3 shows the adversarial transferability matrix of different ensembles against a 50-step PGD attack with ε = 0.3 for MNIST and ε = 0.04 for CIFAR-10. Cell (i, j), where i ≠ j, reports the transfer attack success rate evaluated on the j-th base model using the i-th base model as the surrogate model. A lower number in each cell indicates lower transferability and thus potentially higher ensemble robustness. The diagonal cell (i, i) reports the i-th base model's attack success rate, which reflects the vulnerability of a single model. From these figures, we can see that while the base models show their vulnerability against adversarial attacks, only the DVERGE and TRS ensembles achieve low adversarial transferability among base models. We should also note that although GAL applies a gradient cosine similarity loss similar to our loss term Lsim, GAL still cannot achieve low adversarial transferability due to the lack of model smoothness enforcement, which is one of our key contributions in this paper.
Gradient similarity only vs. TRS. To further verify our theoretical analysis that model smoothness is part of the sufficient condition for low transferability, we consider applying only the similarity loss Lsim without the model smoothness loss Lsmooth in TRS (i.e., λb = 0). The result is shown as the “Cos-only” method in Table 1. We observe that the resulting whitebox robustness is much worse than that of standard TRS. This matches our theoretical analysis that only minimizing the gradient similarity cannot guarantee low adversarial transferability among base models and thus leads to low ensemble robustness. In Appendix I.3, we investigate the impacts of Lsim and Lsmooth thoroughly, and we show that though Lsmooth contributes slightly more, both terms are critical to the final ensemble robustness.

ℓ2 regularizer only vs. min-max model smoothing. To emphasize the importance of our proposed min-max training loss for promoting margin-wise model smoothness, we train a variant of the TRS ensemble, Cos-ℓ2, where we directly apply the ℓ2 regularization on ‖∇xℓF‖2 and ‖∇xℓG‖2. The results are shown as “Cos-ℓ2” in Table 1. We observe that Cos-ℓ2 achieves lower robust accuracy compared with TRS, which implies the necessity of regularizing the gradient magnitude not only at the local training points but also over their neighborhood regions to ensure overall model smoothness.
5 Conclusion
In this paper, we deliver an in-depth understanding of adversarial transferability. Theoretically, we provide both lower and upper bounds on transferability, which show that smooth models together with low loss gradient similarity guarantee low transferability. Inspired by our analysis, we propose TRS ensemble training to empirically reduce transferability by reducing loss gradient similarity and promoting model smoothness, yielding a significant improvement in ensemble robustness.
Acknowledgments and Disclosure of Funding
This work is partially supported by the NSF grant No.1910100, NSF CNS 20-46726 CAR, the Amazon Research Award, and the joint CATCH MURI-AUSMURI. | 1. What is the focus and contribution of the paper regarding ensemble model robustness?
2. What are the strengths of the proposed method, particularly in terms of theoretical analysis and experimental support?
3. Do you have any concerns about the method's potential effectiveness in improving model performance?
4. How does the reviewer assess the significance of the proposed approach in comparison to prior works on promoting base model diversity?
5. What are some of the other issues mentioned by the reviewer, such as testing against stronger black-box attacks and reporting standard deviations? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to enforce the smoothness of the base models in addition to reducing the loss gradient similarity between them to improve ensemble model robustness. This is motivated by theoretical analysis for adversarial transferability and verified by extensive experiments.
Review
To the best of my knowledge, the theoretical analysis and proposed method are novel. The experiments also support the central claim about improving ensemble model robustness.
However, the central concern I have is: Is the proposed method essentially trading model performance for the "robustness"? After all, two completely different random-guess models probably cannot transfer to each other, but such models are also useless. The proposed objective optimizes the base models to have smooth orthogonal decision boundaries, so that fewer adversarial examples are transferable between them. But this probably also makes at least one of them have much worse performance with such different decision boundaries, as shown in figure 1. Then how can the ensemble still maintain the high performance? On the other hand, previous works on promoting base model diversity to improve robustness still maintain that base models make the same predictions, i.e. having similar decision boundaries, but potentially different confidence score distributions, such as in Improving Adversarial Robustness via Promoting Ensemble Diversity.
There are also several other issues
It would be more convincing to test against stronger black-box attacks, such as
ILA (Enhancing adversarial example transferability with an intermediate level attack)
DI2-FGSM (Improving Transferability of Adversarial Examples with Input Diversity)
Interaction-Reduced attack (A Unified Approach to Interpreting and Boosting Adversarial Transferability)
Standard deviations are not reported in table 1, but the error bar is claimed to be reported in the checklist.
In Definition 4, the text on line 159 suggests that x and y should be a paired input and true label. But the notation in the formula could be interpreted otherwise. This should be clarified, maybe by qualifying (x, y) as a pair under the inf and sup. Similarly for other applicable places. In general, it would be better to use a different symbol for the qualified y that is potentially not the same as the ground-truth label, such as the ones in Theorem 1.
On line 326, DIVERGE -> DVERGE |
NIPS | Title
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Abstract
Adversarial Transferability is an intriguing property – adversarial perturbation crafted against one model is also effective against another model, while these models are from different model families or training processes. To better protect ML systems against adversarial attacks, several questions are raised: what are the sufficient conditions for adversarial transferability and how to bound it? Is there a way to reduce the adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that only promoting the orthogonality between gradients of base models is not enough to ensure low transferability; in the meantime, the model smoothness is an important factor to control the transferability. We also provide the lower and upper bounds of adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
1 Introduction
Machine learning systems, especially those based on deep neural networks (DNNs), have been widely applied in numerous applications [27, 18, 46, 10]. However, recent studies show that DNNs are vulnerable to adversarial examples, which are able to mislead DNNs by adding small-magnitude perturbations to the original instances [47, 17, 54, 52]. Several attack strategies have been proposed so far to generate such adversarial examples in both digital and physical environments [36, 32, 51, 53, 15, 28]. Intriguingly, though most attacks require access to the target models (whitebox attacks), several studies show that adversarial examples generated against one model are able to transferably
∗The authors contributed equally.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
attack another target model with high probability, giving rise to blackbox attacks [39, 41, 31, 30, 57]. This property of adversarial transferability poses a great threat to DNNs.
Some work has been conducted to understand adversarial transferability [48, 33, 12]. However, a rigorous theoretical analysis or explanation for transferability is still lacking in the literature. In addition, although developing robust ensemble models to limit transferability shows great potential towards practical robust learning systems, only empirical observations have been made in this line of research [38, 23, 56]. Can we deepen our theoretical understanding of transferability? Can we take advantage of rigorous theoretical understanding to reduce the adversarial transferability and therefore generate robust ensemble ML models?
In this paper, we focus on these two questions. From the theoretical side, we are interested in the sufficient conditions under which the adversarial transferability can be lower bounded and upper bounded. Our theoretical arguments provide the first theoretical interpretation for the sufficient conditions of transferability. Intuitively, as illustrated in Figure 1, we show that the commonly used gradient orthogonality (low cosine similarity) between learning models [12] cannot directly imply low adversarial transferability; on the other hand, orthogonal and smoothed models would limit the transferability. In particular, we prove that the gradient similarity and model smoothness are the key factors that both contribute to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
Under an empirical lens, inspired by our theoretical analysis, we propose a simple yet effective approach, Transferability Reduced Smooth (TRS) ensemble to limit adversarial transferability between base models
within an ensemble and therefore improve its robustness. In particular, we reduce the loss gradient similarity between models as well as enforce the smoothness of models to introduce global model orthogonality.
We conduct extensive experiments to evaluate TRS in terms of the model robustness against different strong white-box and blackbox attacks following the robustness evaluation procedures [5, 6, 49], as well as its ability to limit transferability across the base models. We compare the proposed TRS with existing state-of-the-art baseline ensemble approaches such as ADP [38], GAL [23], and DVERGE [56] on MNIST, CIFAR-10, and CIFAR-100 datasets, and we show that (1) TRS achieves the state-of-the-art ensemble robustness, outperforming others by a large margin; (2) TRS achieves efficient training; (3) TRS effectively reduces the transferability among base models within an ensemble which indicates its robustness against whitebox and blackbox attacks; (4) Both loss terms in TRS contribute to the ensemble robustness by constraining different sufficient conditions of adversarial transferability.
Contributions. In this paper, we make the first attempt towards a theoretical understanding of adversarial transferability, and provide a practical approach for developing robust ML ensembles. (1) We provide a general theoretical analysis framework for adversarial transferability. We prove
the lower and upper bounds of adversarial transferability. Both bounds show that the gradient similarity and model smoothness are the key factors contributing to the adversarial transferability, and smooth models with orthogonal gradients can guarantee low transferability.
(2) We propose a simple yet effective approach TRS to train a robust ensemble by jointly reducing the loss gradient similarity between base models and enforcing the model smoothness. The code is publicly available2.
(3) We conduct extensive experiments to evaluate TRS in terms of model robustness under different attack settings, showing that TRS achieves the state-of-the-art ensemble robustness and outperforms other baselines by a large margin. We also conduct ablation studies to further understand the contribution of different loss terms and verify our theoretical findings.
2https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Related Work
The adversarial transferability between different ML models is an intriguing research direction. Papernot et al. [40] explored the limitation of adversarial examples and showed that, while some instances are more difficult to manipulate than the others, these adversarial examples usually transfer from one model to another. Demontis et al. [12] later analyzed transferability for both evasion and poisoning attacks. Tramèr et al. [48] empirically investigated the subspace of adversarial examples that enables transferability between different models: though their results provide a non-zero probability guarantee on the transferability, they did not quantify the probability of adversarial transferability.
Leveraging the transferability, different blackbox attacks have been proposed [41, 28, 15, 9]. To defend against these transferability based attacks, Pang et al. [38] proposed a class entropy based adaptive diversity promoting approach to enhance the ML ensemble robustness. Recently, Yang et al. [56] proposed DVERGE, a robust ensemble training approach that diversifies the non-robust features of base models via an adversarial training objective function. However, these approaches do not provide theoretical justification for adversarial transferability, and there is still room to improve the ML ensemble robustness based on in-depth understanding on the sufficient conditions of transferability. In this paper, we aim to provide a theoretical understanding of transferability, and empirically compare the proposed robust ML ensemble inspired by our theoretical analysis with existing approaches to push for a tighter empirical upper bound for the ensemble robustness.
2 Transferability of Adversarial Perturbation
In this section, we first introduce preliminaries, and then provide the upper and lower bounds of adversarial transferability by connecting adversarial transferability with different characteristics of models theoretically, which, in the next section, will allow us to explicitly minimize transferability by enforcing (or rewarding) certain properties of models.
Notations. We consider neural networks for classification tasks. Assume there are C classes, and let X be the input space of the model with Y = {1, 2, . . . , C} the set of prediction classes (i.e., labels). We model the neural network by a mapping function F : X → Y. We will study the transferability between two models F and G. For brevity, hereinafter we mainly show the derived notations for F; the notations for G are similar. Let the benign data (x, y) follow an unknown distribution D supported on (X, Y), and let PX denote the marginal distribution on X. For a given input x ∈ X, the classification model F first predicts the confidence score for each label y ∈ Y, denoted as fy(x). These confidence scores sum up to 1, i.e., Σ_{y∈Y} fy(x) = 1, ∀x ∈ X. The model F predicts the label with the highest confidence score: F(x) = argmax_{y∈Y} fy(x). For model F, there is usually a model-dependent loss function ℓF : X × Y → R+, which is the composition of a differentiable training loss (e.g., cross-entropy loss) ℓ and the model's confidence scores f(·): ℓF(x, y) := ℓ(f(x), y), (x, y) ∈ (X, Y). We further assume that F(x) = argmin_{y∈Y} ℓF(x, y), i.e., the model predicts the label with minimum loss. This holds for common training losses.
In this paper, by default we will focus on models that are well-trained on the benign dataset, and such models are the most commonly encountered in practice, so their robustness is paramount. This means we will focus on the low risk classifiers, which we will formally define in Section 2.1.
How should we define an adversarial attack? For the threat model, we consider the attacker that adds an ℓp-norm bounded perturbation to a data instance x ∈ X. In practice, there are two types of attacks, untargeted attacks and targeted attacks. The definition of adversarial transferability is slightly different under these attacks [33], and we consider both in our analysis. Definition 1 (Adversarial Attack). Given an input x ∈ X with true label y ∈ Y, F(x) = y. (1) An untargeted attack crafts AU(x) = x + δ to maximize ℓF(x + δ, y), where ‖δ‖p ≤ ε. (2) A targeted attack with target label yt ∈ Y crafts AT(x) = x + δ to minimize ℓF(x + δ, yt), where ‖δ‖p ≤ ε.
In this definition, ε is a pre-defined attack radius that limits the power of the attacker. We may refer to {δ : ‖δ‖p ≤ ε} as the perturbation ball. The goal of the untargeted attack is to maximize the loss of the target model against its true label y. The goal of the targeted attack is to minimize the loss towards its adversarial target label yt.
How do we formally define that an attack is effective?
Definition 2 ((α,F)-Effective Attack). Consider an input x ∈ X with true label y ∈ Y. An attack is (α,F)-effective in the untargeted scenario if Pr(F(AU(x)) ≠ y) ≥ 1 − α. An attack is (α,F)-effective in the targeted scenario (with class target yt) if Pr(F(AT(x)) = yt) ≥ 1 − α.
This definition captures the requirement that an adversarial instance generated by an effective attack strategy is able to mislead the target classification model (e.g. F) with certain probability (1− α). The smaller the α is, the more effective the attack is. In practice, this implies that on a finite sample of targets, the attack success is frequent but not absolute. Note that the definition is general for both whitebox [1, 12, 5] and blackbox attacks [42, 4].
2.1 Model Characteristics
Given two models F and G, what are the characteristics of F and G that have an impact on transferability under a given attack strategy? Intuitively, the more similar these two classifiers are, the larger the transferability would be. However, how can we define “similar”, and how can we rigorously connect it to transferability? To answer these questions, we will first define the risk and empirical risk for a given model to measure its performance on benign test data. Then, as DNNs are differentiable, we will define model similarity based on their gradients. We will then derive the lower and upper bounds of adversarial transferability based on the defined model risk and similarity measures. Definition 3 (Risk and Empirical Risk). For a given model F, we let ℓF be its model-dependent loss function. Its risk is defined as ηF = Pr(F(x) ≠ y); and its empirical risk is defined as ξF = E[ℓF(x, y)].
The risk represents the model's error rate on benign test data, while the empirical risk is a non-negative value that also indicates the inaccuracy. For both of them, a higher value means worse performance on the benign test data. The difference is that the risk has a more intuitive meaning, while the empirical risk is differentiable and is actually used during model training. Definition 4 (Loss Gradient Similarity). The lower loss gradient similarity S̲ and the upper loss gradient similarity S̄ between two differentiable loss functions ℓF and ℓG are defined as:
S̲(ℓF, ℓG) = inf_{x∈X, y∈Y} [∇xℓF(x, y) · ∇xℓG(x, y)] / [‖∇xℓF(x, y)‖2 · ‖∇xℓG(x, y)‖2],
S̄(ℓF, ℓG) = sup_{x∈X, y∈Y} [∇xℓF(x, y) · ∇xℓG(x, y)] / [‖∇xℓF(x, y)‖2 · ‖∇xℓG(x, y)‖2].
S̲(ℓF, ℓG) (resp. S̄(ℓF, ℓG)) is the minimum (resp. maximum) cosine similarity between the gradients of the two loss functions for an input x drawn from X with any label y ∈ Y. Besides the loss gradient similarity, in our analysis we will also show that the model smoothness is another key characteristic of ML models that affects the model transferability.
Definition 5. We call a model F β-smooth if sup_{x1,x2∈X, y∈Y} ‖∇xℓF(x1, y) − ∇xℓF(x2, y)‖2 / ‖x1 − x2‖2 ≤ β.
This smoothness definition is commonly used in the deep learning theory and optimization literature [3, 2], and is also named curvature bounds in the certified robustness literature [44]. It can be interpreted as the Lipschitz bound for the model's loss function gradient. We remark that a larger β indicates that the model is less smooth, while a smaller β means the model is smoother. In particular, when β = 0, the model is linear in the input space X.
2.2 Definition of Adversarial Transferability
Based on the model characteristics we explored above, next we will ask: Given two models, what is the natural and precise definition of adversarial transferability?
Definition 6 (Transferability). Consider an adversarial instance AU(x) or AT(x) constructed against a surrogate model F. Given a benign input x ∈ X, the transferability Tr between F and a target model G is defined as follows (adversarial target yt ∈ Y):
• Untargeted: Tr(F, G, x) = I[F(x) = G(x) = y ∧ F(AU(x)) ≠ y ∧ G(AU(x)) ≠ y].
• Targeted: Tr(F ,G, x, yt) = I[F(x) = G(x) = y ∧ F(AT (x)) = G(AT (x)) = yt].
Here we define transferability at the instance level; several conditions must hold for an instance to be transferable. For the untargeted attack, it requires that: (1) both the surrogate model and the target model make correct predictions on the benign input; and (2) both make incorrect predictions on the adversarial input AU(x), which is generated via the untargeted attack against the surrogate model F. For the targeted attack, it requires that: (1) both the surrogate and target models make correct predictions on the benign input; and (2) both output the adversarial target yt ∈ Y on the adversarial input AT(x), which is crafted against the surrogate model F. The predicates themselves do not require AU and AT to be explicitly constructed against the surrogate model F; this will be implied by attack effectiveness (Definition 2) on F in the theorem statements. Note that the definition here is a predicate for a specific input x; in the following analysis we will mainly use its distributional version: Pr (Tr(F, G, x) = 1) and Pr (Tr(F, G, x, yt) = 1).
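The instance-level predicates of Definition 6 translate directly into code. The sketch below assumes `x_adv` was crafted against the surrogate `model_f`, as the definition requires; model and tensor names are placeholders.

```python
def transferability(model_f, model_g, x, y, x_adv, y_target=None):
    """Instance-level transferability predicate from Definition 6."""
    pred_f, pred_g = model_f(x).argmax(1), model_g(x).argmax(1)
    adv_f, adv_g = model_f(x_adv).argmax(1), model_g(x_adv).argmax(1)
    both_correct = (pred_f == y) & (pred_g == y)        # condition (1)
    if y_target is None:                                # untargeted, condition (2)
        return both_correct & (adv_f != y) & (adv_g != y)
    return both_correct & (adv_f == y_target) & (adv_g == y_target)
```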
2.3 Lower Bound of Adversarial Transferability
Based on the general definition of transferability, in this section we analyze how to lower bound the transferability of the targeted attack. The analysis for the untargeted attack has a similar form and is deferred to Theorem 3 in Appendix A.
Theorem 1 (Lower Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth. Let AT be an (α,F)-effective targeted attack with perturbation ball ‖δ‖₂ ≤ ε and target label yt ∈ Y. The transferability can be lower bounded by
Pr (Tr(F, G, x, yt) = 1) ≥ (1 − α) − (ηF + ηG) − [ε(1 + α) + cF(1 − α)] / (cG + ε) − [(1 − α)ε / (cG + ε)] · √(2 − 2 S(ℓF, ℓG)),

where

cF = max_{x∈X} min_{y∈Y} [ℓF(AT(x), y) − ℓF(x, yt) + βε²/2] / ‖∇xℓF(x, yt)‖₂,   cG = min_{x∈X} min_{y∈Y} [ℓG(AT(x), y) − ℓG(x, yt) − βε²/2] / ‖∇xℓG(x, yt)‖₂.

Here ηF, ηG are the risks of models F and G respectively.
We defer the complete proof to Appendix C. In the proof, we first use a Taylor expansion to introduce the gradient terms, then relate the dot product to the cosine similarity of the loss gradients, and finally use Markov's inequality to derive the misclassification probability of G.
Implications. In Theorem 1, the only term that couples F and G is S(ℓF, ℓG); all other terms depend on the individual models F or G. Thus, we study the relation between S(ℓF, ℓG) and Pr (Tr(F, G, x, yt) = 1). Note that since the smoothness term βε²/2 is small compared with the perturbation radius ε, and the gradient magnitude ‖∇xℓG‖₂ in the denominator is relatively large, the quantity cG is small. Moreover, 1 − α is large since the attack is typically effective against F. Thus, Pr (Tr(F, G, x, yt) = 1) has the form C − k√(1 − S(ℓF, ℓG)), where C and k are both positive constants. We can readily observe the positive correlation between the loss gradient similarity S(ℓF, ℓG) and the lower bound on adversarial transferability Pr (Tr(F, G, x, yt) = 1). Meanwhile, note that when β increases (i.e., the model becomes less smooth), in the transferability lower bound C − k√(1 − S(ℓF, ℓG)), C decreases and k increases. As a result, the lower bound in Theorem 1 decreases, which implies that when the model becomes less smooth (i.e., β becomes larger), the transferability lower bounds become looser for both targeted and untargeted attacks. In other words, when the model becomes smoother, the correlation between loss gradient similarity and the lower bound of transferability becomes stronger, which motivates us to constrain model smoothness to strengthen the effect of limiting loss gradient similarity.
In addition to the ℓp-bounded attacks, we also derive a transferability lower bound for general attacks whose magnitude is bounded by the total variation distance of data distributions. We defer the detailed analysis and discussion to Appendix B.
2.4 Upper Bound of Adversarial Transferability
We next aim to upper bound the adversarial transferability. The upper bound for the targeted attack is shown below; the one for the untargeted attack has a similar form in Theorem 4 in Appendix A.
Theorem 2 (Upper Bound on Targeted Attack Transferability). Assume both models F and G are β-smooth with gradient magnitude bounded by B, i.e., ‖∇xℓF(x, y)‖₂ ≤ B and ‖∇xℓG(x, y)‖₂ ≤ B for any x ∈ X, y ∈ Y. Let AT be an (α,F)-effective targeted attack with perturbation ball ‖δ‖₂ ≤ ε and target label yt ∈ Y. When the attack radius ε is small enough that ℓmin − εB(1 + √((1 + S̄(ℓF, ℓG))/2)) − βε² > 0, the transferability can be upper bounded by

Pr (Tr(F, G, x, yt) = 1) ≤ (ξF + ξG) / (ℓmin − εB(1 + √((1 + S̄(ℓF, ℓG))/2)) − βε²),

where ℓmin = min_{x∈X} min(ℓF(x, yt), ℓG(x, yt)). Here ξF and ξG are the empirical risks of models F and G respectively, defined relative to a differentiable loss.
We defer the complete proof to Appendix D. In the proof, we first take a Taylor expansion of the loss function at (x, y), then use the fact that the attack direction will be dissimilar to at least one of the model gradients to upper bound the transferability probability.
Implications. In Theorem 2, we observe that as S̄(ℓF, ℓG) increases, the denominator decreases and hence the upper bound increases. Therefore, the upper loss gradient similarity S̄(ℓF, ℓG) and the upper bound on the transferability probability are positively correlated. This tendency is the same as in the lower bound. Note that α does not appear in the upper bound since only completely successful attacks (α = 0%) need to be considered to upper bound the transferability.
Meanwhile, when the model becomes smoother (i.e., β decreases), the transferability upper bound decreases and becomes tighter. This implication again motivates us to constrain model smoothness. We further observe that a smaller gradient magnitude bound B also helps tighten the upper bound. We will regularize both B and β to strengthen the effect of constraining loss gradient similarity.
Note that the lower and upper bounds jointly show that a smaller β leads to a reduced gap between them, and thus a stronger correlation between loss gradient similarity and transferability. Therefore, it is important to both constrain gradient similarity and increase model smoothness (decrease β) to reduce model transferability and improve ensemble robustness.
3 Improving Ensemble Robustness via Transferability Minimization
Motivated by our theoretical analysis, we propose a lightweight yet effective robust ensemble training approach, Transferability Reduced Smooth (TRS), to reduce the transferability among base models by enforcing low loss gradient similarity and model smoothness at the same time.
3.1 TRS Regularizer
In practice, it is challenging to regularize model smoothness directly. Fortunately, inspired by deep learning theory and optimization [14, 37, 45], a succinct ℓ2 regularization on the gradient terms ‖∇xℓF‖₂ and ‖∇xℓG‖₂ can reduce the magnitude of the gradients and thus improve model smoothness. For example, for common neural networks, the smoothness can be upper bounded via bounding the ℓ2 magnitude of gradients [45, Corollary 4]. An intuitive explanation is that ℓ2 regularization on the gradient terms reduces the magnitude of the model's weights and thus limits its rate of change when non-linear activation functions are applied. However, we find that directly regularizing the loss gradient magnitude with the ℓ2 norm is not enough: a vanilla ℓ2 regularizer such as ‖∇xℓF‖₂ only focuses on the local region at the data point x, whereas our theoretical analysis requires model smoothness over a larger decision region to control adversarial transferability.
To address this challenge, we propose a min-max framework to regularize the “support” instance x̂ with “worst” smoothness in the neighborhood region of data point x, which results in the following model smoothness loss:
Lsmooth(F, G, x, δ) = max_{‖x̂−x‖∞ ≤ δ} ‖∇x̂ℓF‖₂ + ‖∇x̂ℓG‖₂   (1)
where δ is the radius of the ℓ∞ ball around instance x within which we aim to ensure that the model is smooth. In practice, we use projected gradient descent to search for the support instances x̂. This model smoothness loss can be viewed as promoting margin-wise smoothness, i.e., improving the margin between non-smooth decision boundaries and the data point x. Another option is to promote point-wise smoothness, which only requires the loss landscape at the data point x itself to be smooth. In Section 4, we compare the ensemble robustness of the proposed min-max framework, which promotes margin-wise smoothness, with the naïve baseline that directly applies ℓ2 regularization to each model's loss gradient to promote point-wise smoothness (i.e., Cos-ℓ2).
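A minimal sketch of the inner maximization in Eq. (1) follows, assuming cross-entropy base losses and PyTorch autograd; the random start, step size, and number of PGD steps are illustrative choices, not the paper's exact settings (those are in Appendix F).

```python
import torch
import torch.nn.functional as F

def smoothness_loss(model_f, model_g, x, y, delta=0.1, steps=5, step_size=0.04):
    # Sum of the two models' input-gradient norms at a given point.
    def grad_norm_sum(point):
        loss = F.cross_entropy(model_f(point), y) + F.cross_entropy(model_g(point), y)
        grad = torch.autograd.grad(loss, point, create_graph=True)[0]
        return grad.flatten(1).norm(dim=1).sum()

    # Random start inside the l_inf ball, then projected gradient ascent
    # to find the "worst" support instance x_hat.
    x_hat = (x + delta * (2 * torch.rand_like(x) - 1)).detach()
    for _ in range(steps):
        x_hat.requires_grad_(True)
        ascent = torch.autograd.grad(grad_norm_sum(x_hat), x_hat)[0].sign()
        x_hat = torch.max(torch.min(x_hat.detach() + step_size * ascent,
                                    x + delta), x - delta)  # project back
    x_hat = x_hat.detach().requires_grad_(True)
    return grad_norm_sum(x_hat)  # still differentiable w.r.t. model weights
```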
Given the trained “smoothed” base models, we also decrease the model loss gradient similarity to reduce the overall adversarial transferability between base models. Among the various metrics that measure the similarity between the loss gradients of base models F and G, we find that the vanilla cosine similarity metric, also used in [23], raises a concern: by minimizing the cosine similarity between ∇xℓF and ∇xℓG, the optimal case implies ∇xℓF = −∇xℓG, which means the two models behave contradictorily (rather than diversely) on instance x, degrading model functionality. Considering this, we use the absolute value of the cosine similarity between ∇xℓF and ∇xℓG as the similarity loss Lsim, whose optimum implies orthogonal loss gradient vectors. For simplicity, we will always use the absolute value of the gradient cosine similarity as the indicator of gradient similarity in the description and evaluation below.
Based on our theoretical analysis, and in particular the loss gradient similarity and model smoothness optimization above, we propose the TRS regularizer for a model pair (F, G) on input x as:
LTRS(F, G, x, δ) = λa · Lsim + λb · Lsmooth = λa · |(∇xℓF)ᵀ(∇xℓG) / (‖∇xℓF‖₂ · ‖∇xℓG‖₂)| + λb · [max_{‖x̂−x‖∞ ≤ δ} ‖∇x̂ℓF‖₂ + ‖∇x̂ℓG‖₂].
Here ∇xℓF and ∇xℓG are the loss gradient vectors of base models F and G on input x, and λa, λb are weight-balancing parameters.
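The following sketch assembles LTRS from the two terms, again assuming cross-entropy base losses. For brevity the smoothness term is evaluated at a single random support point inside the ℓ∞ ball; the multi-step PGD search sketched above is the closer match to Eq. (1).

```python
import torch
import torch.nn.functional as F

def trs_regularizer(model_f, model_g, x, y, lambda_a=1.0, lambda_b=1.0, delta=0.1):
    def input_grads(point):
        point = point.detach().requires_grad_(True)
        loss_f = F.cross_entropy(model_f(point), y)
        loss_g = F.cross_entropy(model_g(point), y)
        gf = torch.autograd.grad(loss_f, point, create_graph=True)[0].flatten(1)
        gg = torch.autograd.grad(loss_g, point, create_graph=True)[0].flatten(1)
        return gf, gg

    # L_sim: absolute cosine similarity between the loss gradients at x.
    gf, gg = input_grads(x)
    l_sim = F.cosine_similarity(gf, gg, dim=1).abs().mean()
    # L_smooth: gradient-norm sum at one random support point in the l_inf ball.
    x_hat = x + delta * (2 * torch.rand_like(x) - 1)
    gf_h, gg_h = input_grads(x_hat)
    l_smooth = (gf_h.norm(dim=1) + gg_h.norm(dim=1)).mean()
    return lambda_a * l_sim + lambda_b * l_smooth
```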
In Section 4, backed up by extensive empirical evaluation, we will systematically show that the local min-max training and the absolute value of the cosine similarity between the model loss gradients significantly improve the ensemble model robustness with negligible performance drop on benign accuracy, as well as reduce the adversarial transferability among base models.
3.2 TRS Training
We integrate the proposed TRS regularizer with a standard ensemble training loss, such as the Ensemble Cross-Entropy (ECE) loss, to maintain both the ensemble model's classification utility and its robustness by varying the balancing parameters λa and λb. Specifically, for an ensemble model consisting of N base models {Fi}Ni=1, given an input (x, y), our final training loss Ltrain is defined as:
Ltrain = (1/N) Σ_{i=1}^{N} LCE(Fi(x), y) + (2/(N(N−1))) Σ_{i=1}^{N} Σ_{j=i+1}^{N} LTRS(Fi, Fj, x, δ)
where LCE(Fi(x), y) is the cross-entropy loss between Fi(x), the output vector of model Fi given x, and the ground-truth label y. The weight of the LTRS regularizer can be adjusted by tuning λa and λb internally. We present one-epoch training pseudocode in Algorithm 1 of Appendix F. The detailed hyper-parameter settings and training criteria are discussed in Appendix F.
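A sketch of the overall loss follows, assuming the `trs_regularizer` helper from the previous sketch (with λa, λb folded inside it); the model list and `delta` are placeholders.

```python
import torch.nn.functional as F

def trs_training_loss(models, x, y, delta=0.1):
    n = len(models)
    # Ensemble Cross-Entropy term, averaged over the N base models.
    ece = sum(F.cross_entropy(m(x), y) for m in models) / n
    # Pairwise TRS regularizer, averaged over the N(N-1)/2 model pairs.
    reg, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            reg = reg + trs_regularizer(models[i], models[j], x, y, delta=delta)
            pairs += 1
    return ece + reg / max(pairs, 1)
```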
4 Experimental Evaluation
In this section, we evaluate the robustness of the proposed TRS-ensemble model under both strong whitebox attacks, as well as blackbox attacks considering the gradient obfuscation concern [1]. We compare TRS with six state-of-the-art ensemble approaches. In addition, we evaluate the adversarial transferability among base models within an ensemble and empirically show that the TRS regularizer can indeed reduce transferability effectively. We also conduct extensive ablation studies to explore the effectiveness of different loss terms in TRS, as well as visualize the trained decision boundaries of different ensemble models to provide intuition on the model properties. We open source the code3 and provide a large-scale benchmark.
4.1 Experimental Setup
Datasets. We conduct our experiments on widely used image datasets, including the handwritten-digit dataset MNIST [29] and the colour image datasets CIFAR-10 and CIFAR-100 [26].
3https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble
Baseline ensemble approaches. We mainly consider the standard ensemble, as well as state-of-the-art robust ensemble methods that claim to be resilient against adversarial attacks. Specifically, we consider the following baseline ensemble methods, which aim to promote diversity between base models: AdaBoost [19]; GradientBoost [16]; CKAE [25]; ADP [38]; GAL [23]; DVERGE [56]. Detailed descriptions of these approaches are in Appendix E. DVERGE, which has achieved the state-of-the-art ensemble robustness to the best of our knowledge, serves as the strongest baseline.
Whitebox robustness evaluation. We consider the following adversarial attacks to measure ensembles' whitebox robustness: Fast Gradient Sign Method (FGSM) [17]; Basic Iterative Method (BIM) [34]; Momentum Iterative Method (MIM); Projected Gradient Descent (PGD); Auto-PGD (APGD); Carlini & Wagner Attack (CW); Elastic-net Attack (EAD) [8]; we defer the detailed descriptions and parameter configurations of these attacks to Appendix E. We use Robust Accuracy as our evaluation metric in the whitebox setting, defined as the ratio of correctly predicted adversarial examples generated by different attacks over the whole test dataset.
Blackbox robustness evaluation. We also conduct a blackbox robustness analysis in our evaluation, since recent studies have shown that robust models which obfuscate gradients can still be fragile under blackbox attacks [1]. In the blackbox attack setting, we assume the attacker has no knowledge about the target ensemble, including the model architecture and parameters. In this case, the attacker is only able to craft adversarial examples based on several surrogate models and transfer them to the target victim ensemble. We follow the same blackbox attack evaluation setting as [56]: we choose three ensembles consisting of 3, 5, and 8 base models trained with the standard Ensemble Cross-Entropy (ECE) loss as our surrogate models. We apply a 50-step PGD attack with three random starts and two different loss functions (cross-entropy and CW loss) on each surrogate model to generate adversarial instances (i.e., for each instance we have 18 attack attempts). For each instance, as long as one of these attack attempts succeeds against the victim model, we count it as a successful attack. We again use Robust Accuracy as our evaluation metric, defined as the number of unsuccessful attack attempts divided by the number of all attacks. We also consider three additional strong blackbox attacks designed to enhance transferability (i.e., ILA [21], DI2-SGSM [55], IRA [50]) in Appendix J, which lead to similar observations.
4.2 Experimental Results
In this section, we present both whitebox and blackbox robustness evaluation results, examine the adversarial transferability, and explore the impacts of different loss terms in TRS. Furthermore, in Appendix I.1, we visualize the decision boundary; in Appendix I.2, we show results of further improving the robustness of the TRS ensemble by integrating adversarial training; in Appendix I.3, we study the impacts of each regularization term Lsim and Lsmooth; in Appendix I.4, we show the convergence of robust accuracy under large numbers of attack iterations to demonstrate the robustness stability of the TRS ensemble; in Appendix I.5, we analyze the trade-off between training cost and robustness of TRS by varying the PGD step size and the total number of steps in the Lsmooth approximation.

Whitebox robustness. Table 1 presents the Robust Accuracy of different ensembles against a range of whitebox attacks on the MNIST and CIFAR-10 datasets. We defer results on CIFAR-100 to Appendix K, and measure the statistical stability of our reported robust accuracy in Appendix H. The results show that the proposed TRS ensemble outperforms the other baselines, including the state-of-the-art DVERGE, significantly against a range of attacks and perturbation budgets, and this performance gap can be even larger under stronger adversarial attacks (e.g., the PGD attack). We note that the TRS ensemble is slightly less robust than DVERGE under small perturbations with the weak FGSM attack. We investigate this based on the decision boundary analysis in Appendix I.1, and find that DVERGE tends to be more robust along the gradient direction and thus more robust against weak attacks that only follow the gradient direction (e.g., FGSM), while TRS yields a smoother model along different directions, leading to more consistent predictions within a larger neighborhood of an input and thus more robustness against strong iterative attacks (e.g., PGD). This may be because DVERGE essentially performs adversarial training for the different base models and therefore protects the adversarial (gradient) direction, while TRS optimizes for a smooth ensemble with diverse base models. We also analyze the convergence of the attack algorithms in Appendix I.4, showing that when the number of attack iterations is large, both the ADP and GAL ensembles achieve much lower robust accuracy against such iterative attacks, while both DVERGE and TRS remain robust.
Blackbox robustness. Figure 2 shows the Robust Accuracy of TRS compared with the different baseline ensembles under varying perturbation budget ε. As we can see, the TRS ensemble achieves robust accuracy competitive with DVERGE when ε is very small, and TRS beats all the baselines significantly when ε is large. Precisely speaking, the TRS ensemble achieves over 85% robust accuracy against the transfer attack with ε = 0.4 on MNIST, while the second-best ensemble (DVERGE) only achieves 20.2%. Also on CIFAR-10, the TRS ensemble achieves over 25% robust accuracy against the transfer attack when ε = 0.06, while all the other baseline ensembles achieve robust accuracy lower than 6%. This implies that our proposed TRS ensemble has stronger generalization ability in terms of robustness against large adversarial attacks compared with other ensembles. We provide more details of the robust accuracy under blackbox attacks in Appendix G.
Adversarial transferability. Figure 3 shows the adversarial transferability matrix of different ensembles against a 50-step PGD attack with ε = 0.3 for MNIST and ε = 0.04 for CIFAR-10. Cell (i, j) with i ≠ j represents the transfer attack success rate evaluated on the j-th base model using the i-th base model as the surrogate. A lower number in each cell indicates lower transferability and thus potentially higher ensemble robustness. The diagonal cell (i, i) is the i-th base model's attack success rate, which reflects the vulnerability of a single model. From these figures, we can see that while the base models show their vulnerability against adversarial attack, only the DVERGE and TRS ensembles achieve low adversarial transferability among base models. We should also note that although GAL applies a gradient cosine similarity loss similar to our loss term Lsim, GAL still cannot achieve low adversarial transferability due to the lack of model smoothness enforcement, which is one of our key contributions in this paper.
Gradient similarity only vs. TRS. To further verify our theoretical analysis that model smoothness is part of the sufficient condition for low transferability, we consider applying only the similarity loss Lsim without the model smoothness loss Lsmooth in TRS (i.e., λb = 0). The result is shown as the “Cos-only” method in Table 1. We observe that the resulting whitebox robustness is much worse than standard TRS. This matches our theoretical analysis: minimizing gradient similarity alone cannot guarantee low adversarial transferability among base models, and thus leads to low ensemble robustness. In Appendix I.3, we investigate the impacts of Lsim and Lsmooth thoroughly, and we show that though Lsmooth contributes slightly more, both terms are critical to the final ensemble robustness.

ℓ2 regularizer only vs. min-max model smoothing. To emphasize the importance of our proposed min-max training loss in promoting margin-wise model smoothness, we train a variant of the TRS ensemble, Cos-ℓ2, where we directly apply ℓ2 regularization on ‖∇xℓF‖₂ and ‖∇xℓG‖₂. The results are shown as “Cos-ℓ2” in Table 1. We observe that Cos-ℓ2 achieves lower robust accuracy compared with TRS, which implies the necessity of regularizing the gradient magnitude not only at the local training points but also in their neighborhood regions to ensure overall model smoothness.
5 Conclusion
In this paper, we deliver an in-depth understanding of adversarial transferability. Theoretically, we provide both lower and upper bounds on transferability which shows that smooth models together with low loss gradient similarity guarantee low transferability. Inspired by our analysis, we propose TRS ensemble training to empirically reduce transferability by reducing loss gradient similarity and promoting model smoothness, yielding a significant improvement on ensemble robustness.
Acknowledgments and Disclosure of Funding
This work is partially supported by the NSF grant No.1910100, NSF CNS 20-46726 CAR, the Amazon Research Award, and the joint CATCH MURI-AUSMURI. | 1. What is the focus of the paper regarding theoretical analysis and its contribution to deep learning models?
2. What are the strengths of the proposed TRS training method, particularly in terms of ensemble robustness?
3. What are the weaknesses of the paper, especially regarding additional training costs and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper that the reviewer would like to discuss further? | Summary Of The Paper
Review | Summary Of The Paper
This paper provides theoretical analysis of the sufficient condition on the adversarial transferability between ML models, and identifies the importance of both gradient orthogonality and model smoothness
This paper proposes a simple smoothness regularization term and merges it with a gradient diversity objective to form the TRS robust ensemble training algorithm
Thorough experiments are provided to show TRS achieves state-of-the-art ensemble robustness
Review
This paper provides solid theoretical analysis of the transfer robustness of deep learning models, and accordingly proposes an ensemble training method to reduce the transferability between sub-models and achieve a robust ensemble model. Overall I enjoy reading this paper and find it interesting. Here are the strengths and weaknesses of the paper:
Strengths:
The theoretical analysis of the transferability bound is solid, and it is the first work to identify the importance of model smoothness in diverse ensemble training, making the proposed TRS training method well motivated.
The experiment results support the theoretical claims by showing the ensemble trained with TRS can achieve SOTA transferability robustness and ensemble robustness.
The paper is well written and easy to follow.
Weaknesses:
There is no discussion of the additional training cost introduced by optimizing the smoothness loss. See the limitation section for details.
It should be noted that model smoothness has already been introduced in previous works on improving the robustness of deep learning models. One representative work is CURE [1], which also uses a form of gradient regularization to minimize the local curvature of deep learning models for robustness improvement. I think it would be important for the authors to cite this paper and discuss the difference in the smoothness objective.
Though I appreciate the finding of the importance of model smoothness, how exactly the smoothness regularization contributes to the overall ensemble robustness is unclear. The authors suggest the smoothness is helpful for achieving better model diversity. However, as shown in CURE, adding smoothness regularization is also an effective way to increase the robustness of an individual model. So it is likely that TRS achieves higher robustness against larger perturbations because each sub-model is more robust. This hypothesis could also be supported by the fact that the curve of TRS in Fig. 2 looks more like DVERGE+ADVT (lower at low perturbation and higher at strong perturbation; see Fig. 5 in the DVERGE paper). This may also explain why higher transferability between sub-models is shown in Fig. 3, as all of them learn robust features. Redoing Fig. 3 with a smaller perturbation strength may unveil the difference in sub-model robustness between TRS and DVERGE; a result with only the smoothness loss but not the cos loss would also be helpful.
[1] Moosavi-Dezfooli, Seyed-Mohsen, et al. "Robustness via curvature regularization, and vice versa." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
In summary, I think this is a novel and solid theoretical work supported by thorough experimental evaluation. Though there are concerns about whether the proposed smoothness objective is optimally designed and how exactly the smoothness contributes to the overall robustness, I do think this can be a good starting point with high potential to inspire future works investigating how smoothness can help achieve more robust ensemble models. So I would suggest accepting this submission, with the expectation that the authors can respond to my concerns in the discussion period.
NIPS | Title
Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift
Abstract
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph structural and temporal dynamics. However, the existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs, mainly because the patterns exploited by DyGNNs may be variant with respect to labels under distribution shifts. In this paper, we propose to handle spatio-temporal distribution shifts in dynamic graphs by discovering and utilizing invariant patterns, i.e., structures and features whose predictive abilities are stable across distribution shifts, which faces two key challenges: 1) How to discover the complex variant and invariant spatio-temporal patterns in dynamic graphs, which involve both time-varying graph structures and node features. 2) How to handle spatio-temporal distribution shifts with the discovered variant and invariant patterns. To tackle these challenges, we propose the Disentangled Intervention-based Dynamic graph Attention networks (DIDA). Our proposed method can effectively handle spatio-temporal distribution shifts in dynamic graphs by discovering and fully utilizing invariant spatio-temporal patterns. Specifically, we first propose a disentangled spatio-temporal attention network to capture the variant and invariant patterns. Then, we design a spatio-temporal intervention mechanism to create multiple interventional distributions by sampling and reassembling variant patterns across neighborhoods and time stamps to eliminate the spurious impacts of variant patterns. Lastly, we propose an invariance regularization term to minimize the variance of predictions in intervened distributions so that our model can make predictions based on invariant patterns with stable predictive abilities and therefore handle distribution shifts. Experiments on three real-world datasets and one synthetic dataset demonstrate the superiority of our method over state-of-the-art baselines under distribution shifts. Our work is the first study of spatio-temporal distribution shifts in dynamic graphs, to the best of our knowledge.
1 Introduction
Dynamic graphs widely exist in real-world applications, including financial networks [1, 2], social networks [3, 4], traffic networks [5, 6], etc. Distinct from static graphs, dynamic graphs can represent temporal structure and feature patterns, which are more complex yet common in reality. Dynamic graph neural networks (DyGNNs) have been proposed to tackle highly complex structural and temporal information over dynamic graphs, and have achieved remarkable progress in many predictive tasks [7, 8].
∗This work was done during author’s internship at Alibaba Group †Corresponding authors
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Nevertheless, the existing DyGNNs fail to handle spatio-temporal distribution shifts, which naturally exist in dynamic graphs for various reasons such as survivorship bias [9], selection bias [10, 11], trending [12], etc. For example, in financial networks, external factors like period or market would affect the correlations between the payment flows and transaction illegitimacy [13]. Trends or communities also affect interaction patterns in coauthor networks [14] and recommendation networks [15]. If DyGNNs highly rely on spatio-temporal patterns which are variant under distribution shifts, they will inevitably fail to generalize well to the unseen test distributions.
To address this issue, in this paper, we study the problem of handling spatio-temporal distribution shifts in dynamic graphs through discovering and utilizing invariant patterns, i.e., structures and features whose predictive abilities are stable across distribution shifts, which remain unexplored in the literature. However, this problem is highly non-trivial with the following challenges:
• How to discover the complex variant and invariant spatio-temporal patterns in dynamic graphs, which include both graph structures and node features varying through time?
• How to handle spatio-temporal distribution shifts in a principled manner with discovered variant and invariant patterns?
To tackle these challenges, we propose a novel DyGNN named Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA3). Our proposed method handles distribution shifts well by discovering and utilizing invariant spatio-temporal patterns with stable predictive abilities. Specifically, we first propose a disentangled spatio-temporal attention network to capture the variant and invariant patterns in dynamic graphs, which enables each node to attend to all its historical neighbors through a disentangled attention message-passing mechanism. Then, inspired by the causal inference literature [16, 17], we propose a spatio-temporal intervention mechanism to create multiple intervened distributions by sampling and reassembling variant patterns across neighborhoods and time, such that spurious impacts of variant patterns can be eliminated. To tackle the challenges that i) variant patterns are highly entangled across nodes and ii) directly generating and mixing up subsets of structures and features to do intervention is computationally expensive, we approximate the intervention process with summarized patterns obtained by the disentangled spatio-temporal attention network instead of original structures and features. Lastly, we propose an invariance regularization term to minimize prediction variance in multiple intervened distributions. In this way, our model can capture and utilize invariant patterns with stable predictive abilities to make predictions under distribution shifts. Extensive experiments on one synthetic dataset and three real-world datasets demonstrate the superiority of our proposed method over state-of-the-art baselines. The contributions of our work are summarized as follows:
• We propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA), which can handle spatio-temporal distribution shifts in dynamic graphs. This is the first study of spatio-temporal distribution shifts in dynamic graphs, to the best of our knowledge.
• We propose a disentangled spatio-temporal attention network to capture variant and invariant graph patterns. We further design a spatio-temporal intervention mechanism to create multiple intervened distributions and an invariance regularization term based on causal inference theory to enable the model to focus on invariant patterns under distribution shifts.
• Experiments on three real-world datasets and one synthetic dataset demonstrate the superiority of our method over state-of-the-art baselines.
2 Problem Formulation
In this section, we formulate the problem of spatio-temporal distribution shift in dynamic graphs.
Dynamic Graph. Consider a graph G with node set V and edge set E. A dynamic graph can be defined as G = ({Gt}_{t=1}^{T}), where T is the number of time stamps, Gt = (Vt, Et) is the graph slice at time stamp t, V = ∪_{t=1}^{T} Vt, and E = ∪_{t=1}^{T} Et. For simplicity, a graph slice is also denoted as Gt = (Xt, At), which includes the node features and adjacency matrix at time t. We use Gt to denote the random variable of Gt.
3Our codes are publicly available at https://github.com/wondergo2017/DIDA
Prediction tasks. For dynamic graphs, the prediction task can be summarized as using past graphs to make predictions, i.e., p(Yt | G1, G2, . . . , Gt) = p(Yt | G1:t), where the label Yt can be node properties or the occurrence of links between nodes at time t+1. In this paper, we mainly focus on node-level tasks, which are commonly adopted in the dynamic graph literature [7, 8]. Following [18, 19], we factorize the distribution of the graph trajectory into ego-graph trajectories, i.e., p(Yt | G1:t) = ∏_v p(yt | G1:tv). An ego-graph induced from node v at time t is defined as Gtv = (Xtv, Atv), where Atv is the adjacency matrix including all edges among node v's L-hop neighbors at time t, i.e., Ntv, and Xtv includes the features of nodes in Ntv. The optimization objective is to learn an optimal predictor with empirical risk minimization
min_θ E_{(yt, G1:tv) ∼ ptr(yt, G1:tv)} L(fθ(G1:tv), yt)   (1)
where fθ is a learnable dynamic graph neural network. We use boldface G1:tv, yt to denote the random variables of the ego-graph trajectory and its label, and G1:tv, yt to refer to the respective instances.

Spatio-temporal distribution shift. However, the optimal predictor trained on the training distribution may not generalize well to the test distribution when there exists a distribution shift. In the dynamic graph literature, researchers are devoted to capturing laws of network dynamics which are stable in systems [20, 21, 22, 23, 24]. Following them, we assume the conditional distribution is the same, ptr(Yt | G1:t) = pte(Yt | G1:t), and only consider the covariate shift problem where ptr(G1:t) ≠ pte(G1:t). Besides the temporal distribution shift which naturally exists in time-varying data [25, 12, 26, 27, 28] and the structural distribution shift in non-Euclidean data [29, 18, 30], there exists a much more complex spatio-temporal distribution shift in dynamic graphs. For example, the distribution of ego-graph trajectories may vary across periods or communities.
3 Method
In this section, we propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA) to handle spatio-temporal distribution shift in dynamic graphs. First, we propose a disentangled dynamic graph attention network to extract invariant and variant spatio-temporal patterns. Then we propose a spatio-temporal intervention mechanism to create multiple intervened data distributions. Finally, we optimize the model with invariance loss to make predictions relying on invariant patterns.
3.1 Handling Spatio-Temporal Distribution Shift
Spatio-Temporal Pattern. In recent decades of dynamic graph research, scholars have endeavored to summarize insightful patterns of network dynamics that reflect how real-world networks evolve through time [31, 32, 33, 34]. For example, the law of triadic closure describes that two nodes with common neighbors (the pattern) tend to have future interactions in social networks [35, 36, 23]. Besides structural information, node attributes are also an important part of the patterns; e.g., social interactions can also be affected by gender and age [37]. Instead of manually specifying patterns, we aim at learning the patterns with DyGNNs, so that more complex spatio-temporal patterns mixing features and structures can be mined from dynamic graphs. Therefore, we define the spatio-temporal pattern used for node-level prediction as a subset of the ego-graph trajectory
Pt(v) = mtv(G1:tv)   (2)
where mtv(·) selects structures and attributes from the ego-graph trajectory. In [23], for example, the pattern can be explained as an open triad with a similar neighborhood, and the model tends to make link predictions that close the triad, ŷt_{u,v} = fθ(Pt(u), Pt(v)), based on the law of triadic closure [38]. DyGNNs aim at exploiting predictive spatio-temporal patterns to boost prediction ability. However, the predictive power of some patterns may vary across periods or communities due to spatio-temporal distribution shift. Inspired by causal theory [16, 17], we make the following assumption
Assumption 1 For a given task, there exists a predictor f(·) such that, for samples (G1:tv, yt) from any distribution, there exists an invariant pattern PtI(v) and a variant pattern PtV(v) such that yt = f(PtI(v)) + ϵ and PtI(v) = G1:tv \ PtV(v), i.e., yt ⊥ PtV(v) | PtI(v).
Assumption 1 shows that invariant patterns PtI(v) are sufficiently predictive for label y t and can be exploited across periods and communities without adjusting the predictor, while the influence of variant patterns PtV (v) on y t is shielded by the invariant patterns.
Training Objective. Our main idea is that to obtain better generalization ability, the model should rely on invariant patterns instead of variant patterns, as the former are sufficient for prediction while the predictivity of the latter can change under distribution shift. Along this line, our objective can be transformed to
min_{θ1,θ2} E_{(yt, G1:tv) ∼ ptr(yt, G1:tv)} L(fθ1(P̃tI(v)), yt)

s.t. ϕθ2(G1:tv) = P̃tI(v),   yt ⊥ P̃tV(v) | P̃tI(v).   (3)
where fθ1(·) makes predictions based on the invariant patterns and ϕθ2(·) aims at finding the invariant patterns. Backed by causal theory [16, 17], this can be transformed into
min_{θ1,θ2} E_{(yt, G1:tv) ∼ ptr(yt, G1:tv)} L(fθ1(ϕθ2(G1:tv)), yt) + λ Var_{s∈S} ( E_{(yt, G1:tv) ∼ ptr(yt, G1:tv | do(PtV = s))} L(fθ1(ϕθ2(G1:tv)), yt) )   (4)
where ‘do’ denotes the do-calculus used to intervene on the original distribution [39, 17], S denotes the intervention set, and λ is a balancing hyperparameter. The idea can be informally described as follows: since, as in Eq. (3), the variant patterns PtV have no influence on the label yt given the invariant patterns PtI, the prediction should not vary if we intervene on the variant patterns while keeping the invariant patterns untouched. More details about the connections between objectives Eq. (3) and Eq. (4) can be found in Appendix.
Remark 1 Minimizing the variance term in Eq. (4) helps the model satisfy the constraint yt ⊥ P̃tV(v) | P̃tI(v) in Eq. (3), i.e., p(yt | P̃tI(v), P̃tV(v)) = p(yt | P̃tI(v)).
3.2 Disentangled Dynamic Graph Attention Networks
Dynamic Neighborhood. To simultaneously consider the spatio-temporal information, we define the dynamic neighborhood asN t(u) = {v : (u, v) ∈ Et}, which includes all nodes that have interactions with node u at time t.
Disentangled Spatio-temporal Graph Attention Layer. To capture spatio-temporal pattern for each node, we propose a spatio-temporal graph attention to enable each node to attend to its dynamic neighborhood simultaneously. For a node u at time stamp t and its neighbors v ∈ N t′(u),∀t′ ≤ t, we calculate the Query-Key-Value vectors as:
qtu = Wq(htu ‖ TE(t)),   kt′v = Wk(ht′v ‖ TE(t′)),   vt′v = Wv(ht′v ‖ TE(t′))   (5)
where htu denotes the representation of node u at time stamp t; q, k, v represent the query, key and value vectors, respectively; and we omit the bias terms for brevity. TE(t) denotes a temporal encoding technique to obtain the embedding of time t, so that the time of link occurrence can be considered inherently [40, 41]. Then, we can calculate the attention scores among nodes in the dynamic neighborhood to obtain the structural masks
mI = Softmax(q · kᵀ / √d),   mV = Softmax(−q · kᵀ / √d)   (6)
where d denotes the feature dimension, and mI and mV represent the masks of the invariant and variant structural patterns. In this way, dynamic neighbors with higher attention scores in the invariant pattern have lower attention scores in the variant one, i.e., the invariant and variant patterns are negatively correlated. To capture invariant featural patterns, we adopt a learnable featural mask mf = Softmax(wf) to select features from the messages of dynamic neighbors. Then the messages of the dynamic neighborhood can be summarized with the respective masks,
ztI(u) = AggI(mI, v ⊙ mf),   ztV(u) = AggV(mV, v)   (7)
where Agg(·) denotes aggregating and summarizing messages from the dynamic neighborhood. To further disentangle the invariant and variant patterns, we design different aggregation functions AggI(·) and AggV(·) to summarize the respective messages from the masked dynamic neighborhood. The pattern summarizations are then added up as hidden embeddings to be fed into subsequent layers.
htu ← ztI(u) + ztV (u) (8)
Overall Architecture. The overall architecture is a stacking of spatio-temporal graph attention layers. As in classic graph message-passing networks, this enables each node to access its high-order dynamic neighborhood indirectly, where ztI(u) and ztV(u) at the l-th layer summarize the invariant and variant patterns in the l-order dynamic neighborhood. In practice, the attention can easily be extended to multi-head attention [42] to stabilize the training process and model multi-faceted graph evolution [43].
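A minimal single-node sketch of one disentangled attention layer (Eqs. (5)-(8)) is given below. It assumes the temporal encodings are already concatenated to the hidden states (hence input width 2·dim) and uses linear layers for AggI and AggV; a faithful implementation would batch over nodes and restrict keys to each dynamic neighborhood.

```python
import torch
import torch.nn as nn

class DisentangledSTAttention(nn.Module):
    """Sketch of the disentangled spatio-temporal attention layer."""
    def __init__(self, dim):
        super().__init__()
        self.wq = nn.Linear(2 * dim, dim)
        self.wk = nn.Linear(2 * dim, dim)
        self.wv = nn.Linear(2 * dim, dim)
        self.mf = nn.Parameter(torch.zeros(dim))   # learnable featural mask w_f
        self.agg_i = nn.Linear(dim, dim)           # invariant aggregation Agg_I
        self.agg_v = nn.Linear(dim, dim)           # variant aggregation Agg_V

    def forward(self, h_query, h_neighbors):
        q = self.wq(h_query)                        # (1, d), Eq. (5)
        k = self.wk(h_neighbors)                    # (n, d)
        v = self.wv(h_neighbors)                    # (n, d)
        scores = q @ k.t() / k.size(-1) ** 0.5      # (1, n)
        m_i = scores.softmax(dim=-1)                # invariant mask, Eq. (6)
        m_v = (-scores).softmax(dim=-1)             # variant mask, Eq. (6)
        feat_mask = self.mf.softmax(dim=-1)         # featural mask m_f
        z_i = self.agg_i(m_i @ (v * feat_mask))     # Eq. (7), invariant pattern
        z_v = self.agg_v(m_v @ v)                   # Eq. (7), variant pattern
        return z_i + z_v, z_i, z_v                  # Eq. (8) plus both patterns
```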
3.3 Spatio-Temporal Intervention Mechanism
Direct Intervention. One way to intervene on the variant pattern distribution as in Eq. (4) is to directly generate and alter the variant patterns. However, this is infeasible in practice for the following reasons. First, since it has to intervene on the dynamic neighborhood and features node-wise, the computational complexity is prohibitive. Second, generating variant patterns, including time-varying structures and features, is itself an intractable problem.
Approximate Intervention. To tackle the problems mentioned above, we propose to approximate the patterns Pt with the summarized patterns zt found in Sec. 3.2. As ztI(u) and ztV(u) act as summarizations of the invariant and variant spatio-temporal patterns for node u at time t, we approximate the intervention process by sampling and replacing the variant pattern summarizations instead of altering the original structures and features with generated ones. To do spatio-temporal intervention, we collect the variant patterns of all nodes at all times, from which we sample one variant pattern to replace the variant patterns of other nodes across time. For example, we can use the variant pattern of node v at time t2 to replace the variant pattern of node u at time t1 as
zt1I(u), zt1V(u) ← zt1I(u), zt2V(v)   (9)
As the invariant pattern summarization is kept the same, the label should not change. Thanks to the disentangled spatio-temporal graph attention, we obtain variant patterns across neighborhoods and time, which act as natural intervention samples inside the data, so that the complexity of the generation problem can also be avoided. By applying Eq. (9) multiple times, we obtain multiple intervened data distributions for the subsequent optimization.
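The sketch below illustrates the approximate intervention of Eq. (9) on tensors of pattern summarizations; the (T, N, d) layout and the choice of sampling one variant pattern per intervention are our assumptions for illustration.

```python
import torch

def spatio_temporal_intervention(z_i, z_v, num_interventions):
    """Approximate intervention: swap variant summarizations, keep invariant ones.

    z_i, z_v: tensors of shape (T, N, d) holding invariant / variant pattern
    summarizations for all nodes and time stamps.
    """
    t, n, d = z_v.shape
    flat = z_v.reshape(t * n, d)               # all variant patterns, all times
    intervened = []
    for _ in range(num_interventions):
        idx = torch.randint(0, t * n, (1,))
        sample = flat[idx].expand(t, n, d)     # one variant pattern for all nodes
        intervened.append((z_i, sample))       # invariant part stays untouched
    return intervened
```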
3.4 Optimization with Invariance Loss
Based on the multiple intervened data distributions with different variant patterns, we can now optimize the model to focus on invariant patterns for prediction. Here, we introduce the invariance loss to instantiate Eq. (4). Let zI and zV be the summarized invariant and variant patterns; we calculate the task loss using only the invariant patterns
L = ℓ(f(zI), y)   (10)
where f(·) is the predictor. The task loss lets the model utilize the invariant patterns to make predictions. Then we calculate the mixed loss as
Lm = ℓ(g(zV, zI), y)   (11)
where another predictor g(·) makes predictions using both the invariant patterns zI and the variant patterns zV. The mixed loss measures the model's prediction ability when variant patterns are also exposed to the model. Then the invariance loss is calculated by
Ldo = Var_{si∈S}(Lm | do(PtV = si))   (12)
where ‘do’ denotes the intervention mechanism described in Section 3.3. The invariance loss measures the variance of the model's prediction ability under multiple intervened distributions. The final training objective is
min_θ L + λLdo   (13)
where the task loss L is minimized to exploit invariant patterns, while the invariance loss Ldo helps the model discover invariant and variant patterns; λ is a hyperparameter balancing the two objectives. After training, we only use the invariant patterns to make predictions at inference time. The overall algorithm is summarized in Algorithm 1.
Algorithm 1 Training pipeline for DIDA
Require: Training epochs L, number of intervention samples S, hyperparameter λ
1: for l = 1, . . . , L do
2:     Obtain ztV, ztI for each node and time as described in Section 3.2
3:     Calculate the task loss and mixed loss as in Eq. (10) and Eq. (11)
4:     Sample S variant patterns from the collection of ztV to construct the intervention set S
5:     for s in S do
6:         Replace the nodes' variant pattern summarizations with s as in Section 3.3
7:         Calculate the mixed loss as in Eq. (11)
8:     end for
9:     Calculate the invariance loss as in Eq. (12)
10:    Update the model according to Eq. (13)
11: end for
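To make the optimization concrete, the following sketch combines Eqs. (10)-(13); `criterion`, the prediction tensors, and `lam` are placeholders for the task-specific loss (e.g., binary cross-entropy for link prediction), the predictor outputs, and λ.

```python
import torch

def dida_objective(pred_inv, intervened_mixed_preds, y, criterion, lam):
    """pred_inv: predictions f(z_I) from invariant patterns only;
    intervened_mixed_preds: list of g(z_V, z_I) outputs, one per intervention."""
    task_loss = criterion(pred_inv, y)                        # Eq. (10)
    mixed_losses = torch.stack([criterion(p, y)               # Eq. (11), one loss
                                for p in intervened_mixed_preds])  # per intervention
    invariance_loss = mixed_losses.var()                      # Eq. (12)
    return task_loss + lam * invariance_loss                  # Eq. (13)
```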
4 Experiments
In this section, we conduct extensive experiments to verify that our framework can handle spatio-temporal distribution shifts by discovering and utilizing invariant patterns. More details of the settings and further results can be found in Appendix.
Baselines. We adopt several representative GNNs and Out-of-Distribution(OOD) generalization methods as our baselines:
• Static GNNs: GAE [44], a representative static GNN with stacking of graph convolutions; VGAE [44] further introduces variational variables into GAE.
• Dynamic GNNs: GCRN [45], a representative dynamic GNN that first adopts a GCN [44] to obtain node embeddings and then a GRU [46] to model the dynamics; EvolveGCN [13], which adopts an LSTM [47] or GRU [46] to flexibly evolve the GCN [44] parameters instead of directly learning the temporal node embeddings; DySAT [43], which models dynamic graphs using structural and temporal self-attention.
• OOD generalization methods: IRM [48] aims at learning an invariant predictor which minimizes the empirical risks for all training domains; GroupDRO [49] reduces differences in risk across training domains to reduce the model’s sensitivity to distributional shifts; V-REx [50] puts more weight on training domains with larger errors when minimizing empirical risk.
4.1 Real-world Datasets
Settings. We use 3 real-world dynamic graph datasets: COLLAB, Yelp and Transaction. We adopt the challenging inductive future link prediction task, where the model exploits past graphs to make link predictions in the next time step. Each dataset can be split into several partial dynamic graphs based on its field information. For brevity, we use ‘w/ DS’ and ‘w/o DS’ to denote test data with and without distribution shift, respectively. To measure model performance under spatio-temporal distribution shift, we choose one field as ‘w/ DS’, and the remaining fields are further split into training, validation and test data (‘w/o DS’) chronologically. Note that ‘w/o DS’ is a merged dynamic graph without field information and ‘w/ DS’ is unseen during training, which is more practical and challenging in real-world scenarios. More details on their spatio-temporal distribution shifts are provided in Appendix. Here we briefly introduce the real-world datasets as follows
• COLLAB [51]4 is an academic collaboration dataset with papers published during 1990-2006. Nodes and edges represent authors and coauthorship, respectively. Based on the field of the co-authored publication, each edge has field information from "Data Mining", "Database", "Medical Informatics", "Theory" and "Visualization". The time granularity is a year, with 16 time slices in total. We use "Data Mining" as ‘w/ DS’ and the rest as ‘w/o DS’.
• Yelp [43]5 is a business review dataset containing customer reviews of businesses. Nodes and edges represent customers/businesses and review behavior, respectively. We consider interactions in five categories of business, "Pizza", "American (New) Food", "Coffee & Tea", "Sushi Bars" and "Fast Food", from January 2019 to December 2020. The time granularity is a month, with 24 time slices in total. We use "Pizza" as ‘w/ DS’ and the rest as ‘w/o DS’.
• Transaction6 is a secondary-market transaction dataset, which records transaction behaviors of users from 10th April 2022 to 10th May 2022. Nodes and edges represent users and transactions, respectively. The transactions have 4 categories: "Pants", "Outwears", "Shirts" and "Hoodies". The time granularity is a day, with 30 time slices in total. We use "Pants" as ‘w/ DS’ and the rest as ‘w/o DS’.
Results. Based on the results on the real-world datasets in Table 1, we have the following observations:
• Baselines fail dramatically under distribution shift: 1) Although the DyGNN baselines perform well on test data without distribution shift, their performance drops greatly under distribution shift. In particular, the performance of DySAT, the best-performing DyGNN in ‘w/o DS’, drops by nearly 12%, 12% and 5% in ‘w/ DS’. On Yelp and Transaction, GCRN and EGCN even underperform the static GNNs, GAE and VGAE. This phenomenon shows that the existing DyGNNs may exploit variant patterns and thus fail to handle distribution shift. 2) Moreover, as the generalization baselines are not specially designed for spatio-temporal distribution shift in dynamic graphs, they only bring limited improvements on Yelp and Transaction. In particular, they rely on ground-truth environment labels to achieve OOD generalization, which are unavailable for real dynamic graphs. Their inferior performance indicates that they cannot generalize well without accurate environment labels, which verifies that the lack of environment labels is also a key challenge for handling distribution shifts in dynamic graphs.
• Our method can better handle distribution shift than the baselines, especially in stronger distribution shift. DIDA improves significantly over all baselines in ‘w/ DS’ for all datasets. Note that
4https://www.aminer.cn/collaboration. 5https://www.yelp.com/dataset 6Collected from Alibaba.com
4.2 Synthetic Dataset
Settings. To evaluate the model's generalization ability under spatio-temporal distribution shift, following [18], we introduce manually designed shifts into the dataset COLLAB with all fields merged. Denote the original features and structures as Xt1 ∈ R^{N×d} and At ∈ {0, 1}^{N×N}. For each time t, we uniformly sample p(t)|Et+1| positive links and (1 − p(t))|Et+1| negative links in At+1. These are then factorized into variant features Xt2 ∈ R^{N×d} with the property of structural preservation. The two portions of features are concatenated as Xt = [Xt1, Xt2] and used as input node features for training and inference. The sampling probability p(t) = clip(p + σ cos(t), 0, 1) controls the intensity of the shifts: variant features Xt2 constructed with higher p(t) have stronger correlations with the future links At+1. We set p_test = 0.1, σ_test = 0, σ_train = 0.05, and vary p_train from 0.4 to 0.8 for evaluation. Since the correlations between Xt2 and the label At+1 vary through time and neighborhood, patterns that include Xt2 are variant under distribution shifts. As static GNNs cannot support time-varying features, we omit their results.
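A small sketch of the link-sampling step under the shift schedule p(t) = clip(p + σ cos(t), 0, 1) follows; the list-based representation of links is an assumption, and the factorization into Xt2 is omitted.

```python
import math
import random

def sample_shifted_links(pos_links, neg_links, t, p_base, sigma):
    """Sample p(t)-fraction of positive and (1 - p(t))-fraction of negative links
    from A^{t+1}, which would then be factorized into the variant features."""
    p_t = min(max(p_base + sigma * math.cos(t), 0.0), 1.0)   # clip to [0, 1]
    k = len(pos_links)                                        # |E^{t+1}|
    sampled_pos = random.sample(pos_links, int(p_t * k))
    sampled_neg = random.sample(neg_links, int((1.0 - p_t) * k))
    return sampled_pos + sampled_neg
```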
Results. Based on the results on the synthetic dataset in Table 2, we have the following observations:
• Our method can better handle distribution shift than the baselines. Although the baselines achieve high performance during training, their performance drops drastically at test time, which shows that the existing DyGNNs fail to handle distribution shifts. In terms of test results, DIDA consistently outperforms the DyGNN baselines by a significantly large margin. In particular, DIDA surpasses the best-performing baseline by nearly 13%/10%/5% in test results for the different shift levels. As for the general OOD baselines, they reduce the variance in some cases but their improvements are not significant. In contrast, DIDA is specially designed for dynamic graphs and can exploit the invariant spatio-temporal patterns to handle distribution shift.
• Our method can exploit invariant patterns to consistently alleviate the harmful effects of variant patterns under different distribution-shift levels. As the shift level increases, almost all baselines improve on training results and decline on test results. This phenomenon shows that as the relationship between variant patterns and labels grows stronger, the existing DyGNNs become more dependent on the variant patterns during training, causing their failure at test time. In contrast, the rise in training results and drop in test results of DIDA are significantly smaller than those of the baselines, which demonstrates that DIDA can exploit invariant patterns and alleviate the harmful effects of variant patterns under distribution shift.
4.3 Complexity Analysis
We analyze the computational complexity of DIDA as follows. Denote |V| and |E| as the total number of nodes and edges in the graph, respectively, and d as the dimensionality of the hidden representation. The spatio-temporal aggregation has a time complexity of O(|E|d + |V|d²). The disentangled component adds a constant multiplier 2, which does not affect the time complexity of aggregation. Denote |Ep| as the number of edges to predict and |S| as the size of the intervention set. Our intervention mechanism has a time complexity of O(|Ep||S|d) in training, and adds no extra time complexity at inference. Therefore, the overall time complexity of DIDA is O(|E|d + |V|d² + |Ep||S|d). Notice that |S| is a hyper-parameter and is usually set as a small constant. In summary, DIDA has linear time complexity with respect to the number of nodes and edges, which is on par with the existing dynamic GNNs.
4.4 Ablation study
In this section, we conduct ablation studies to verify the effectiveness of the proposed spatio-temporal intervention mechanism and disentangled graph attention in DIDA.
Spatio-temporal intervention mechanism. We remove the intervention mechanism described in Sec. 3.3. From Figure 2, we can see that without spatio-temporal intervention, the model's performance drops significantly, especially on the synthetic dataset, which verifies that our intervention mechanism helps the model focus on invariant patterns to make predictions.
Disentangled graph attention. We further remove the disentangled attention described in Sec. 3.2. From Figure 2, we can see that disentangled attention is a critical component of the model design, especially on the Yelp dataset. Moreover, without the disentangled module, the model is unable to obtain variant and invariant patterns for the subsequent intervention.
5 Related Work
Dynamic Graph Neural Networks. To tackle the complex structural and temporal information in dynamic graphs, considerable research attention has been devoted to dynamic graph neural networks (DyGNNs) [7, 8]. One class of DyGNNs first adopts a GNN to aggregate structural information for the graph at each time step, followed by a sequence model like an RNN [52, 53, 54, 45] or temporal self-attention [43] to process temporal information. Another class of DyGNNs first introduces time-encoding techniques to represent each temporal link as a function of time, followed by a spatial module like a GNN or a memory module [20, 55, 40, 41] to process structural information. To obtain more fine-grained continuous node embeddings in dynamic graphs, some works further leverage neural interaction processes [56] and ordinary differential equations [57]. DyGNNs have been widely applied in real-world applications, including dynamic anomaly detection [58], event forecasting [59], dynamic recommendation [60], social character prediction [61], user modeling [62], temporal knowledge graph completion [63], etc. In this paper, we consider DyGNNs under spatio-temporal distribution shift, which remains unexplored in the dynamic graph neural network literature.
Out-of-Distribution Generalization. Most existing machine learning methods assume that the testing and training data are independent and identically distributed, which is not guaranteed to hold in many real-world scenarios [64]. In particular, there might be uncontrollable distribution shifts between the training and testing data distributions, which may lead to a sharp drop in model performance. To solve this problem, the Out-of-Distribution (OOD) generalization problem has recently become a central research topic in various areas [65, 64, 66]. Recently, several works attempt to handle distribution shift on graphs [67, 29, 18, 68, 11, 69, 70, 71, 72, 73]. Another class of OOD methods, most related to our work, handles distribution shifts on time-series data [25, 26, 12, 27, 28, 74]. Current works consider either only structural distribution shift for static graphs or only temporal distribution shift for time-series data. However, spatio-temporal distribution shifts in dynamic graphs are more complex yet remain unexplored. To the best of our knowledge, this is the first study of spatio-temporal distribution shifts in dynamic graphs.
Disentangled Representation Learning. Disentangled representation learning aims to characterize the multiple latent explanatory factors behind observed data, where the factors are represented by different vectors [75]. Besides its applications in computer vision [76, 77, 78, 79, 80] and recommendation [81, 82, 83, 84, 85, 86], several disentangled GNNs have recently been proposed to generalize disentangled representation learning to graph data. DisenGCN [87] and IPGDN [88] utilize a dynamic routing mechanism to disentangle latent factors for node representations. FactorGCN [89] decomposes the input graph into several interpretable factor graphs. DGCL [90, 91] aims to learn disentangled graph-level representations with self-supervision. Some works factorize deep generative models based on node, edge, static, and dynamic factors [92] or spatial, temporal, and graph factors [93] to achieve interpretable dynamic graph generation.
6 Conclusion
In this paper, we propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA) to handle spatio-temporal distribution shift in dynamic graphs. First, we propose a disentangled dynamic graph attention network to capture invariant and variant spatio-temporal patterns. Then, based on the causal inference literature, we design a spatio-temporal intervention mechanism to create multiple intervened distributions and propose an invariance regularization term to help the model focus on invariant patterns under distribution shifts. Extensive experiments on three real-world datasets and one synthetic dataset demonstrate that our method can better handle spatio-temporal distribution shift than state-of-the-art baselines. One limitation is that in this paper we mainly consider dynamic graphs in scenarios of discrete snapshots; we leave the study of spatio-temporal distribution shifts in continuous dynamic graphs for future exploration.
Acknowledgements
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300, the National Natural Science Foundation of China (No. 62250008, 62222209, 62102222, 62206149), the China National Postdoctoral Program for Innovative Talents No. BX20220185, and the China Postdoctoral Science Foundation No. 2022M711813. All opinions, findings, conclusions and recommendations in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.

1. What is the focus of the paper regarding dynamic graph neural networks?
2. What are the strengths of the proposed approach, particularly in addressing distribution shifts?
3. What are the weaknesses of the paper, especially in terms of experiment explanations and comparisons with other works?
4. Do you have any questions regarding the effectiveness of the proposed method or its advantages over other approaches?
Summary Of The Paper
This work studies the spatio-temporal distribution shift issue of dynamic graph neural networks. To pursue the robustness of DyGNNs, the authors proposed a specific invariant learning method and conducted experiments on both real-world and synthetic datasets.
Strengths And Weaknesses
Strong points:
This paper reveals the impact of distribution drift in DyGNNs, which forms a new research problem.
This paper presents a new method for training distributionally robust DyGNNs.
Extensive experiments validate the effectiveness of the proposed method.
Weak points:
The experiment results need more explanation. For instance, why do IRM and GroupDRO achieve inferior performance under the "w/ DS" setting? Why do the compared methods show different trends on the real-world and synthetic datasets, e.g., GCRN performs quite well on the synthetic dataset?
As compared to [18], what is the advantage of the proposed method?
Questions
The experiment results need more explanation. For instance, why do IRM and GroupDRO achieve inferior performance under the "w/ DS" setting? Why do the compared methods show different trends on the real-world and synthetic datasets, e.g., GCRN performs quite well on the synthetic dataset?
As compared to [18], what is the advantage of the proposed method?
Limitations
No. |
1. What is the focus of the paper regarding dynamic graph neural networks?
2. What are the strengths of the proposed approach, particularly in handling spatio-temporal distribution shifts?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper investigates graph neural networks on dynamic graphs, especially under spatio-temporal distribution shifts. The authors recognize that distribution shift is an important factor for dynamic graph embedding, which is not well-handled by the existing approaches. To address this, the authors propose a novel model named DIDA to handle spatio-temporal distribution shifts in dynamic graphs by discovering and fully utilizing invariant spatio-temporal patterns. Experiments on four datasets demonstrate the effectiveness of the proposed model.
Strengths And Weaknesses
Strengths:
The paper is well-written and easy to follow.
Dealing with spatio-temporal information on dynamic graphs from the perspective of discovering and utilizing invariant patterns is, I feel, an effective direction.
The experiments are sufficient to demonstrate the performance of the proposed model.
Weaknesses:
The related studies discussed in Related Work are not quite sufficient. I suggest the authors cite and discuss more.
It is better to give the details of the baselines (e.g., their differences from the proposed model), as well as more details of the datasets (e.g., a statistics table), in the main paper.
Questions
Please see the weaknesses.
Limitations
None |
NIPS | Title
Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift
Abstract
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph structural and temporal dynamics. However, the existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs, mainly because the patterns exploited by DyGNNs may be variant with respect to labels under distribution shifts. In this paper, we propose to handle spatio-temporal distribution shifts in dynamic graphs by discovering and utilizing invariant patterns, i.e., structures and features whose predictive abilities are stable across distribution shifts, which faces two key challenges: 1) How to discover the complex variant and invariant spatio-temporal patterns in dynamic graphs, which involve both time-varying graph structures and node features. 2) How to handle spatio-temporal distribution shifts with the discovered variant and invariant patterns. To tackle these challenges, we propose the Disentangled Intervention-based Dynamic graph Attention networks (DIDA). Our proposed method can effectively handle spatio-temporal distribution shifts in dynamic graphs by discovering and fully utilizing invariant spatio-temporal patterns. Specifically, we first propose a disentangled spatio-temporal attention network to capture the variant and invariant patterns. Then, we design a spatio-temporal intervention mechanism to create multiple interventional distributions by sampling and reassembling variant patterns across neighborhoods and time stamps to eliminate the spurious impacts of variant patterns. Lastly, we propose an invariance regularization term to minimize the variance of predictions in intervened distributions so that our model can make predictions based on invariant patterns with stable predictive abilities and therefore handle distribution shifts. Experiments on three real-world datasets and one synthetic dataset demonstrate the superiority of our method over state-of-the-art baselines under distribution shifts. Our work is the first study of spatio-temporal distribution shifts in dynamic graphs, to the best of our knowledge.
1 Introduction
Dynamic graphs widely exist in real-world applications, including financial networks [1, 2], social networks [3, 4], traffic networks [5, 6], etc. Distinct from static graphs, dynamic graphs can represent temporal structure and feature patterns, which are more complex yet common in reality. Dynamic graph neural networks (DyGNNs) have been proposed to tackle highly complex structural and temporal information over dynamic graphs, and have achieved remarkable progress in many predictive tasks [7, 8].
∗This work was done during the author's internship at Alibaba Group. †Corresponding authors.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Nevertheless, the existing DyGNNs fail to handle spatio-temporal distribution shifts, which naturally exist in dynamic graphs for various reasons such as survivorship bias [9], selection bias [10, 11], trending [12], etc. For example, in financial networks, external factors like period or market would affect the correlations between the payment flows and transaction illegitimacy [13]. Trends or communities also affect interaction patterns in coauthor networks [14] and recommendation networks [15]. If DyGNNs highly rely on spatio-temporal patterns which are variant under distribution shifts, they will inevitably fail to generalize well to the unseen test distributions.
To address this issue, in this paper, we study the problem of handling spatio-temporal distribution shifts in dynamic graphs through discovering and utilizing invariant patterns, i.e., structures and features whose predictive abilities are stable across distribution shifts, a problem that remains unexplored in the literature. However, this problem is highly non-trivial, with the following challenges:
• How to discover the complex variant and invariant spatio-temporal patterns in dynamic graphs, which include both graph structures and node features varying through time?
• How to handle spatio-temporal distribution shifts in a principled manner with discovered variant and invariant patterns?
To tackle these challenges, we propose a novel DyGNN named Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA3). Our proposed method handles distribution shifts well by discovering and utilizing invariant spatio-temporal patterns with stable predictive abilities. Specifically, we first propose a disentangled spatio-temporal attention network to capture the variant and invariant patterns in dynamic graphs, which enables each node to attend to all its historic neighbors through a disentangled attention message-passing mechanism. Then, inspired by the causal inference literature [16, 17], we propose a spatio-temporal intervention mechanism to create multiple intervened distributions by sampling and reassembling variant patterns across neighborhoods and time, such that spurious impacts of variant patterns can be eliminated. To tackle the challenges that i) variant patterns are highly entangled across nodes and ii) directly generating and mixing up subsets of structures and features to perform intervention is computationally expensive, we approximate the intervention process with summarized patterns obtained by the disentangled spatio-temporal attention network instead of the original structures and features. Lastly, we propose an invariance regularization term to minimize prediction variance over the multiple intervened distributions. In this way, our model can capture and utilize invariant patterns with stable predictive abilities to make predictions under distribution shifts. Extensive experiments on one synthetic dataset and three real-world datasets demonstrate the superiority of our proposed method over state-of-the-art baselines under distribution shifts. The contributions of our work are summarized as follows:
• We propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA), which can handle spatio-temporal distribution shifts in dynamic graphs. This is the first study of spatio-temporal distribution shifts in dynamic graphs, to the best of our knowledge.
• We propose a disentangled spatio-temporal attention network to capture variant and invariant graph patterns. We further design a spatio-temporal intervention mechanism to create multiple intervened distributions and an invariance regularization term based on causal inference theory to enable the model to focus on invariant patterns under distribution shifts.
• Experiments on three real-world datasets and one synthetic dataset demonstrate the superiority of our method over state-of-the-art baselines.
2 Problem Formulation
In this section, we formulate the problem of spatio-temporal distribution shift in dynamic graphs.
Dynamic Graph. Consider a graph $\mathcal{G}$ with the node set $\mathcal{V}$ and the edge set $\mathcal{E}$. A dynamic graph can be defined as $\mathcal{G} = (\{\mathcal{G}^t\}_{t=1}^{T})$, where $T$ is the number of time stamps, $\mathcal{G}^t = (\mathcal{V}^t, \mathcal{E}^t)$ is the graph slice at time stamp $t$, $\mathcal{V} = \bigcup_{t=1}^{T} \mathcal{V}^t$, and $\mathcal{E} = \bigcup_{t=1}^{T} \mathcal{E}^t$. For simplicity, a graph slice is also denoted as $\mathcal{G}^t = (\mathbf{X}^t, \mathbf{A}^t)$, which includes the node features and adjacency matrix at time $t$. We use $\mathbf{G}^t$ to denote the random variable of $\mathcal{G}^t$.
3Our codes are publicly available at https://github.com/wondergo2017/DIDA
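As a concrete illustration of this definition, the following minimal sketch stores a dynamic graph as an ordered list of snapshots $(\mathbf{X}^t, \mathbf{A}^t)$; the class and function names are ours, not the authors' released API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Snapshot:
    """One graph slice G^t = (X^t, A^t)."""
    X: np.ndarray  # node features at time t, shape [N, d]
    A: np.ndarray  # adjacency matrix at time t, shape [N, N]

def build_dynamic_graph(feature_list, adjacency_list):
    """A dynamic graph is simply the ordered list of its T snapshots."""
    assert len(feature_list) == len(adjacency_list)
    return [Snapshot(X, A) for X, A in zip(feature_list, adjacency_list)]
```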
Prediction tasks. For dynamic graphs, the prediction task can be summarized as using past graphs to make predictions, i.e., $p(\mathbf{Y}^t \mid \mathbf{G}^1, \mathbf{G}^2, \dots, \mathbf{G}^t) = p(\mathbf{Y}^t \mid \mathbf{G}^{1:t})$, where the label $\mathbf{Y}^t$ can be node properties or the occurrence of links between nodes at time $t+1$. In this paper, we mainly focus on node-level tasks, which are commonly adopted in the dynamic graph literature [7, 8]. Following [18, 19], we factorize the distribution of the graph trajectory into ego-graph trajectories, i.e., $p(\mathbf{Y}^t \mid \mathbf{G}^{1:t}) = \prod_v p(\mathbf{y}^t_v \mid \mathbf{G}^{1:t}_v)$. An ego-graph induced from node $v$ at time $t$ is defined as $\mathcal{G}^t_v = (\mathbf{X}^t_v, \mathbf{A}^t_v)$, where $\mathbf{A}^t_v$ is the adjacency matrix including all edges among node $v$'s $L$-hop neighbors at time $t$, i.e., $\mathcal{N}^t_v$, and $\mathbf{X}^t_v$ includes the features of the nodes in $\mathcal{N}^t_v$. The optimization objective is to learn an optimal predictor with empirical risk minimization
$$\min_{\theta} \; \mathbb{E}_{(y^t, \mathcal{G}^{1:t}_v) \sim p_{tr}(\mathbf{y}^t, \mathbf{G}^{1:t}_v)} \, \mathcal{L}\big(f_\theta(\mathcal{G}^{1:t}_v), y^t\big) \quad (1)$$
where $f_\theta$ is a learnable dynamic graph neural network. We use $\mathbf{G}^{1:t}_v, \mathbf{y}^t$ to denote the random variables of the ego-graph trajectory and its label, and $\mathcal{G}^{1:t}_v, y^t$ to refer to the respective instances.

Spatio-temporal distribution shift. However, the optimal predictor trained with the training distribution may not generalize well to the test distribution when there exists a distribution shift. In the dynamic graph literature, researchers are devoted to capturing laws of network dynamics which are stable in systems [20, 21, 22, 23, 24]. Following them, we assume the conditional distribution is the same, $p_{tr}(\mathbf{Y}^t \mid \mathbf{G}^{1:t}) = p_{te}(\mathbf{Y}^t \mid \mathbf{G}^{1:t})$, and only consider the covariate shift problem where $p_{tr}(\mathbf{G}^{1:t}) \neq p_{te}(\mathbf{G}^{1:t})$. Besides the temporal distribution shift which naturally exists in time-varying data [25, 12, 26, 27, 28] and the structural distribution shift in non-Euclidean data [29, 18, 30], there exists a much more complex spatio-temporal distribution shift in dynamic graphs. For example, the distribution of ego-graph trajectories may vary across periods or communities.
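The empirical risk minimization in Eq. (1) corresponds to a standard supervised training step. The sketch below shows one such step in PyTorch under the assumption that a batch holds ego-graph trajectories and their next-step labels; the field names are hypothetical placeholders, not the authors' API.

```python
import torch

def erm_train_step(model, batch, optimizer, loss_fn):
    """One empirical-risk-minimization step for Eq. (1).

    `batch["G_1t"]` is assumed to hold ego-graph trajectories up to time t
    and `batch["y_t"]` the corresponding labels; both keys are hypothetical.
    """
    optimizer.zero_grad()
    preds = model(batch["G_1t"])          # f_theta(G_v^{1:t})
    loss = loss_fn(preds, batch["y_t"])   # L(f_theta(G_v^{1:t}), y^t)
    loss.backward()
    optimizer.step()
    return loss.item()
```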
3 Method
In this section, we propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA) to handle spatio-temporal distribution shift in dynamic graphs. First, we propose a disentangled dynamic graph attention network to extract invariant and variant spatio-temporal patterns. Then we propose a spatio-temporal intervention mechanism to create multiple intervened data distributions. Finally, we optimize the model with the invariance loss so that predictions rely on invariant patterns.
3.1 Handling Spatio-Temporal Distribution Shift
Spatio-Temporal Pattern. In recent decades of development of dynamic graphs, some scholars endeavor to summarize insightful patterns of network dynamics to reflect how real-world networks evolve through time [31, 32, 33, 34]. For example, the laws of triadic closure describe that two nodes with common neighbors (patterns) tend to have future interactions in social networks [35, 36, 23]. Besides structural information, node attributes are also an important part of the patterns, e.g., social interactions can also be affected by gender and age [37]. Instead of manually summarizing patterns, we aim at learning the patterns using DyGNNs so that more complex spatio-temporal patterns with mixed features and structures can be mined in dynamic graphs. Therefore, we define the spatio-temporal pattern used for node-level prediction as a subset of the ego-graph trajectory

$$P^t(v) = m^t_v(\mathcal{G}^{1:t}_v) \quad (2)$$
where $m^t_v(\cdot)$ selects structures and attributes from the ego-graph trajectory. In [23], the pattern can be explained as an open triad with a similar neighborhood, and the model tends to make link predictions that close the triad, i.e., $\hat{y}^t_{u,v} = f_\theta(P^t(u), P^t(v))$, based on the laws of triadic closure [38]. DyGNNs aim at exploiting predictive spatio-temporal patterns to boost prediction ability. However, the predictive power of some patterns may vary across periods or communities due to spatio-temporal distribution shift. Inspired by causal theory [16, 17], we make the following assumption.
Assumption 1. For a given task, there exists a predictor $f(\cdot)$ such that for samples $(\mathcal{G}^{1:t}_v, y^t)$ from any distribution, there exist an invariant pattern $P^t_I(v)$ and a variant pattern $P^t_V(v)$ such that $y^t = f(P^t_I(v)) + \epsilon$ and $P^t_I(v) = \mathcal{G}^{1:t}_v \setminus P^t_V(v)$, i.e., $\mathbf{y}^t \perp \mathbf{P}^t_V(v) \mid \mathbf{P}^t_I(v)$.
Assumption 1 shows that invariant patterns $\mathbf{P}^t_I(v)$ are sufficiently predictive for the label $\mathbf{y}^t$ and can be exploited across periods and communities without adjusting the predictor, while the influence of variant patterns $\mathbf{P}^t_V(v)$ on $\mathbf{y}^t$ is shielded by the invariant patterns.
Training Objective. Our main idea is that, to obtain better generalization ability, the model should rely on invariant patterns instead of variant patterns, as the former are sufficient for prediction while the predictive power of the latter could vary under distribution shift. Along this line, our objective can be formulated as
$$\min_{\theta_1, \theta_2} \; \mathbb{E}_{(y^t, \mathcal{G}^{1:t}_v) \sim p_{tr}(\mathbf{y}^t, \mathbf{G}^{1:t}_v)} \mathcal{L}\big(f_{\theta_1}(\tilde{P}^t_I(v)), y^t\big) \quad \text{s.t.} \quad \phi_{\theta_2}(\mathcal{G}^{1:t}_v) = \tilde{P}^t_I(v), \;\; \mathbf{y}^t \perp \tilde{\mathbf{P}}^t_V(v) \mid \tilde{\mathbf{P}}^t_I(v) \quad (3)$$
where $f_{\theta_1}(\cdot)$ makes predictions based on the invariant patterns and $\phi_{\theta_2}(\cdot)$ aims at finding the invariant patterns. Backed by causal theory [16, 17], the objective can be transformed into
$$\min_{\theta_1, \theta_2} \; \mathbb{E}_{(y^t, \mathcal{G}^{1:t}_v) \sim p_{tr}(\mathbf{y}^t, \mathbf{G}^{1:t}_v)} \mathcal{L}\big(f_{\theta_1}(\phi_{\theta_2}(\mathcal{G}^{1:t}_v)), y^t\big) + \lambda \, \mathrm{Var}_{s \in \mathcal{S}}\Big( \mathbb{E}_{(y^t, \mathcal{G}^{1:t}_v) \sim p_{tr}(\mathbf{y}^t, \mathbf{G}^{1:t}_v \mid do(\mathbf{P}^t_V = s))} \mathcal{L}\big(f_{\theta_1}(\phi_{\theta_2}(\mathcal{G}^{1:t}_v)), y^t\big) \Big) \quad (4)$$
where ‘do’ denotes the do-calculus used to intervene on the original distribution [39, 17], $\mathcal{S}$ denotes the intervention set, and $\lambda$ is a balancing hyperparameter. The idea can be informally described as follows: since, by Eq. (3), the variant patterns $\mathbf{P}^t_V$ have no influence on the label $\mathbf{y}^t$ given the invariant patterns $\mathbf{P}^t_I$, the prediction should not vary if we intervene on the variant patterns while keeping the invariant patterns untouched. More details about the connection between the objectives in Eq. (3) and Eq. (4) can be found in the Appendix.
Remark 1. Minimizing the variance term in Eq. (4) helps the model satisfy the constraint $\mathbf{y}^t \perp \tilde{\mathbf{P}}^t_V(v) \mid \tilde{\mathbf{P}}^t_I(v)$ in Eq. (3), i.e., $p(\mathbf{y}^t \mid \tilde{\mathbf{P}}^t_I(v), \tilde{\mathbf{P}}^t_V(v)) = p(\mathbf{y}^t \mid \tilde{\mathbf{P}}^t_I(v))$.
3.2 Disentangled Dynamic Graph Attention Networks
Dynamic Neighborhood. To simultaneously consider spatial and temporal information, we define the dynamic neighborhood as $N^t(u) = \{v : (u, v) \in \mathcal{E}^t\}$, which includes all nodes that have interactions with node $u$ at time $t$.
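A minimal sketch of this definition, assuming each snapshot's edges are given as (u, v) pairs (with both orientations listed for undirected graphs):

```python
from collections import defaultdict

def dynamic_neighborhood(edges_t):
    """Build N^t(u) = {v : (u, v) in E^t} for one snapshot."""
    nbrs = defaultdict(set)
    for u, v in edges_t:
        nbrs[u].add(v)
    return nbrs
```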
Disentangled Spatio-temporal Graph Attention Layer. To capture spatio-temporal patterns for each node, we propose a spatio-temporal graph attention that enables each node to attend to its dynamic neighborhood simultaneously. For a node $u$ at time stamp $t$ and its neighbors $v \in N^{t'}(u), \forall t' \leq t$, we calculate the query-key-value vectors as:
$$\mathbf{q}^t_u = \mathbf{W}_q\big(\mathbf{h}^t_u \,\|\, \mathrm{TE}(t)\big), \quad \mathbf{k}^{t'}_v = \mathbf{W}_k\big(\mathbf{h}^{t'}_v \,\|\, \mathrm{TE}(t')\big), \quad \mathbf{v}^{t'}_v = \mathbf{W}_v\big(\mathbf{h}^{t'}_v \,\|\, \mathrm{TE}(t')\big) \quad (5)$$
where $\mathbf{h}^t_u$ denotes the representation of node $u$ at time stamp $t$; $\mathbf{q}$, $\mathbf{k}$, $\mathbf{v}$ represent the query, key and value vectors, respectively; and we omit the bias term for brevity. $\mathrm{TE}(t)$ denotes a temporal encoding technique used to obtain embeddings of time $t$ so that the time of link occurrence can be considered inherently [40, 41]. Then, we can calculate the attention scores among nodes in the dynamic neighborhood to obtain the structural masks
$$\mathbf{m}_I = \mathrm{Softmax}\!\left(\frac{\mathbf{q} \cdot \mathbf{k}^\top}{\sqrt{d}}\right), \quad \mathbf{m}_V = \mathrm{Softmax}\!\left(-\frac{\mathbf{q} \cdot \mathbf{k}^\top}{\sqrt{d}}\right) \quad (6)$$
where $d$ denotes the feature dimension, and $\mathbf{m}_I$ and $\mathbf{m}_V$ represent the masks of invariant and variant structural patterns. In this way, dynamic neighbors with higher attention scores in the invariant patterns will have lower attention scores in the variant ones, i.e., the invariant and variant patterns are negatively correlated. To capture invariant featural patterns, we adopt a learnable featural mask $\mathbf{m}_f = \mathrm{Softmax}(\mathbf{w}_f)$ to select features from the messages of dynamic neighbors. Then the messages of the dynamic neighborhood can be summarized with the respective masks,
$$\mathbf{z}^t_I(u) = \mathrm{Agg}_I\big(\mathbf{m}_I,\, \mathbf{v} \odot \mathbf{m}_f\big), \quad \mathbf{z}^t_V(u) = \mathrm{Agg}_V\big(\mathbf{m}_V,\, \mathbf{v}\big) \quad (7)$$
where $\mathrm{Agg}(\cdot)$ denotes aggregating and summarizing messages from the dynamic neighborhood. To further disentangle the invariant and variant patterns, we design different aggregation functions $\mathrm{Agg}_I(\cdot)$ and $\mathrm{Agg}_V(\cdot)$ to summarize the respective messages from the masked dynamic neighborhood. Then the pattern summarizations are added up as hidden embeddings to be fed into subsequent layers:
$$\mathbf{h}^t_u \leftarrow \mathbf{z}^t_I(u) + \mathbf{z}^t_V(u) \quad (8)$$
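The following is a minimal single-head sketch of one disentangled spatio-temporal attention layer (Eqs. (5)-(8)), assuming the temporal encodings TE(t) have already been concatenated into the node states and instantiating Agg_I and Agg_V as mask-weighted sums; the class and tensor layout are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DisentangledSTAttention(nn.Module):
    """One layer of Eqs. (5)-(8): opposite-sign softmax masks split the
    dynamic neighborhood into invariant and variant summarizations."""

    def __init__(self, dim):
        super().__init__()
        self.Wq = nn.Linear(dim, dim, bias=False)
        self.Wk = nn.Linear(dim, dim, bias=False)
        self.Wv = nn.Linear(dim, dim, bias=False)
        self.wf = nn.Parameter(torch.zeros(dim))   # learnable featural mask logits

    def forward(self, h_u, h_neigh):
        # h_u: [d] target-node state; h_neigh: [n, d] states of the dynamic
        # neighborhood across all t' <= t (TE already folded in).
        d = h_neigh.size(-1)
        q = self.Wq(h_u)                            # Eq. (5)
        k, v = self.Wk(h_neigh), self.Wv(h_neigh)
        scores = (k @ q) / d ** 0.5
        m_I = torch.softmax(scores, dim=0)          # Eq. (6), invariant mask
        m_V = torch.softmax(-scores, dim=0)         # Eq. (6), variant mask
        m_f = torch.softmax(self.wf, dim=0)         # featural mask
        z_I = (m_I.unsqueeze(-1) * (v * m_f)).sum(dim=0)  # Eq. (7), Agg_I
        z_V = (m_V.unsqueeze(-1) * v).sum(dim=0)          # Eq. (7), Agg_V
        return z_I, z_V, z_I + z_V                  # Eq. (8)
```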
Overall Architecture. The overall architecture is a stack of spatio-temporal graph attention layers. Like classic graph message-passing networks, this enables each node to access its high-order dynamic neighborhood indirectly, where $\mathbf{z}^t_I(u)$ and $\mathbf{z}^t_V(u)$ at the $l$-th layer can be viewed as summarizations of invariant and variant patterns in the $l$-order dynamic neighborhood. In practice, the attention can easily be extended to multi-head attention [42] to stabilize the training process and model multi-faceted graph evolution [43].
3.3 Spatio-Temporal Intervention Mechanism
Direct Intervention. One way to intervene on the variant pattern distribution, as required by Eq. (4), is to directly generate and alter the variant patterns. However, this is infeasible in practice for the following reasons. First, since the dynamic neighborhood and features have to be intervened on node-wise, the computational complexity is prohibitive. Second, generating variant patterns, including time-varying structures and features, is itself an intractable problem.
Approximate Intervention. To tackle the problems mentioned above, we propose to approximate the patterns $P^t$ with the summarized patterns $\mathbf{z}^t$ obtained in Sec. 3.2. As $\mathbf{z}^t_I(u)$ and $\mathbf{z}^t_V(u)$ act as summarizations of the invariant and variant spatio-temporal patterns of node $u$ at time $t$, we approximate the intervention process by sampling and replacing the variant pattern summarizations instead of altering the original structures and features with generated ones. To perform a spatio-temporal intervention, we collect the variant patterns of all nodes at all times, from which we sample one variant pattern to replace the variant patterns of other nodes across time. For example, we can use the variant pattern of node $v$ at time $t_2$ to replace the variant pattern of node $u$ at time $t_1$ as
$$\big(\mathbf{z}^{t_1}_I(u),\ \mathbf{z}^{t_1}_V(u)\big) \leftarrow \big(\mathbf{z}^{t_1}_I(u),\ \mathbf{z}^{t_2}_V(v)\big) \quad (9)$$
As the invariant pattern summarization is kept the same, the label should not change. Thanks to the disentangled spatio-temporal graph attention, we obtain variant patterns across neighborhoods and time, which act as natural intervention samples inside the data, so the complexity of the generation problem is also avoided. By applying Eq. (9) multiple times, we obtain multiple intervened data distributions for the subsequent optimization.
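A sketch of this approximate intervention is given below, assuming the summarizations of all nodes and times have been stacked into matrices; the function name and sampling details are our assumptions.

```python
import torch

def intervene_variants(z_I, z_V, num_samples, generator=None):
    """Approximate do(P_V = s) as in Eq. (9): sample variant summarizations
    from the pool collected over all nodes and times, and broadcast each one
    in place of every node's variant pattern.

    z_I, z_V: [num_nodes_times, d] stacked summarizations from Sec. 3.2.
    Yields (z_I, z_V_intervened) pairs, one per sampled pattern s.
    """
    n = z_V.size(0)
    idx = torch.randint(n, (num_samples,), generator=generator)
    for i in idx:
        yield z_I, z_V[i].expand_as(z_V)   # one variant pattern replaces all
```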
3.4 Optimization with Invariance Loss
Based on the multiple intervened data distributions with different variant patterns, we can next optimize the model to focus on invariant patterns when making predictions. Here, we introduce the invariance loss to instantiate Eq. (4). Let $\mathbf{z}_I$ and $\mathbf{z}_V$ be the summarized invariant and variant patterns; we calculate the task loss using only the invariant patterns,
$$\mathcal{L} = \ell(f(\mathbf{z}_I),\, \mathbf{y}) \quad (10)$$
where $f(\cdot)$ is the predictor. The task loss lets the model utilize the invariant patterns to make predictions. Then we calculate the mixed loss as
$$\mathcal{L}_m = \ell(g(\mathbf{z}_V, \mathbf{z}_I),\, \mathbf{y}) \quad (11)$$
where another predictor $g(\cdot)$ makes predictions using both the invariant patterns $\mathbf{z}_I$ and the variant patterns $\mathbf{z}_V$. The mixed loss measures the model's prediction ability when the variant patterns are also exposed to the model. The invariance loss is then calculated as
$$\mathcal{L}_{do} = \mathrm{Var}_{s_i \in \mathcal{S}}\big(\mathcal{L}_m \mid do(P^t_V = s_i)\big) \quad (12)$$
where ‘do’ denotes the intervention mechanism described in Section 3.3. The invariance loss measures the variance of the model's prediction ability under the multiple intervened distributions. The final training objective is
$$\min_{\theta}\; \mathcal{L} + \lambda\, \mathcal{L}_{do} \quad (13)$$
where the task loss $\mathcal{L}$ is minimized to exploit invariant patterns, the invariance loss $\mathcal{L}_{do}$ helps the model discover the invariant and variant patterns, and $\lambda$ is a hyperparameter balancing the two objectives. After training, we only use the invariant patterns to make predictions in the inference stage. The overall algorithm is summarized in Algorithm 1.
Algorithm 1 Training pipeline for DIDA
Require: Training epochs L, number of intervention samples S, hyperparameter λ
1: for l = 1, . . . , L do
2:   Obtain z^t_V, z^t_I for each node and time as described in Section 3.2
3:   Calculate the task loss and mixed loss as in Eq. (10) and Eq. (11)
4:   Sample S variant patterns from the collection of z^t_V to construct the intervention set S
5:   for s in S do
6:     Replace the nodes' variant pattern summarizations with s as in Section 3.3
7:     Calculate the mixed loss as in Eq. (11)
8:   end for
9:   Calculate the invariance loss as in Eq. (12)
10:  Update the model according to Eq. (13)
11: end for
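Putting the pieces together, a hypothetical driver for Algorithm 1 could look as follows; it reuses the `intervene_variants` sketch from Section 3.3 and assumes `model` returns the stacked summarizations and targets, so the signatures here are illustrative rather than the released code.

```python
import torch

def train_dida(model, graph, f, g, loss_fn, opt, epochs, num_samples, lam):
    """Sketch of Algorithm 1. `f` and `g` are the predictors of Eqs. (10)-(11)."""
    for _ in range(epochs):
        z_I, z_V, y = model(graph)                        # step 2
        task = loss_fn(f(z_I), y)                         # Eq. (10)
        mixed = [loss_fn(g(z_V_s, z_I_s), y)              # Eq. (11), steps 5-8
                 for z_I_s, z_V_s in intervene_variants(z_I, z_V, num_samples)]
        l_do = torch.stack(mixed).var(unbiased=False)     # Eq. (12)
        loss = task + lam * l_do                          # Eq. (13)
        opt.zero_grad()
        loss.backward()
        opt.step()                                        # step 10
```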
4 Experiments
In this section, we conduct extensive experiments to verify that our framework can handle spatio-temporal distribution shifts by discovering and utilizing invariant patterns. More details of the settings and further results can be found in the Appendix.
Baselines. We adopt several representative GNNs and Out-of-Distribution (OOD) generalization methods as our baselines:
• Static GNNs: GAE [44], a representative static GNN with stacked graph convolutions; VGAE [44] further introduces variational variables into GAE.
• Dynamic GNNs: GCRN [45], a representative dynamic GNN that first adopts a GCN [44] to obtain node embeddings and then a GRU [46] to model the dynamics; EvolveGCN [13] adopts an LSTM [47] or GRU [46] to flexibly evolve the GCN [44] parameters instead of directly learning the temporal node embeddings; DySAT [43] models dynamic graphs using structural and temporal self-attention.
• OOD generalization methods: IRM [48] aims at learning an invariant predictor which minimizes the empirical risks for all training domains; GroupDRO [49] reduces differences in risk across training domains to reduce the model’s sensitivity to distributional shifts; V-REx [50] puts more weight on training domains with larger errors when minimizing empirical risk.
4.1 Real-world Datasets
Settings. We use three real-world dynamic graph datasets: COLLAB, Yelp, and Transaction. We adopt the challenging inductive future link prediction task, where the model exploits past graphs to make link predictions for the next time step. Each dataset can be split into several partial dynamic graphs based on its field information. For brevity, we use ‘w/ DS’ and ‘w/o DS’ to denote test data with and without distribution shift, respectively. To measure the models' performance under spatio-temporal distribution shift, we choose one field as ‘w/ DS’, and the remaining fields are further split chronologically into training, validation, and test data (‘w/o DS’); a sketch of this protocol is given after the dataset descriptions below. Note that ‘w/o DS’ is a merged dynamic graph without field information and ‘w/ DS’ is unseen during training, which is more practical and challenging in real-world scenarios. More details on the spatio-temporal distribution shifts are provided in the Appendix. We briefly introduce the real-world datasets as follows:
• COLLAB [51]4 is an academic collaboration dataset with papers published during 1990-2006. Nodes and edges represent authors and coauthorships, respectively. Based on the field of the co-authored publication, each edge carries field information, one of "Data Mining", "Database", "Medical Informatics", "Theory" and "Visualization". The time granularity is year, with 16 time slices in total. We use "Data Mining" as ‘w/ DS’ and the rest as ‘w/o DS’.
• Yelp [43]5 is a business review dataset containing customer reviews of businesses. Nodes and edges represent customers/businesses and review behavior, respectively. We consider interactions in five categories of business, including "Pizza", "American (New) Food", "Coffee & Tea", "Sushi Bars" and "Fast Food", from January 2019 to December 2020. The time granularity is month, with 24 time slices in total. We use "Pizza" as ‘w/ DS’ and the rest as ‘w/o DS’.
• Transaction6 is a secondary-market transaction dataset, which records the transaction behaviors of users from 10th April 2022 to 10th May 2022. Nodes and edges represent users and transactions, respectively. The transactions have 4 categories: "Pants", "Outwears", "Shirts" and "Hoodies". The time granularity is day, with 30 time slices in total. We use "Pants" as ‘w/ DS’ and the rest as ‘w/o DS’.
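The sketch below is one possible realization of the split protocol referenced above; the 0.7/0.15 ratios and the list-of-slices representation are placeholder assumptions, not the paper's exact numbers.

```python
def chronological_split(merged_slices, shift_slices, train_ratio=0.7, val_ratio=0.15):
    """merged_slices: time-ordered graph slices with field info dropped
    ('w/o DS' pool); shift_slices: slices of the held-out field ('w/ DS').
    Returns train / val / test ('w/o DS') plus the unseen 'w/ DS' test set.
    """
    n = len(merged_slices)
    i = int(train_ratio * n)
    j = int((train_ratio + val_ratio) * n)
    return merged_slices[:i], merged_slices[i:j], merged_slices[j:], shift_slices
```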
Results. Based on the results on the real-world datasets in Table 1, we have the following observations:
• Baselines fail dramatically under distribution shift: 1) Although the DyGNN baselines perform well on test data without distribution shift, their performance drops greatly under distribution shift. In particular, the performance of DySAT, the best-performing DyGNN in ‘w/o DS’, drops by nearly 12%, 12% and 5% in ‘w/ DS’. On Yelp and Transaction, GCRN and EGCN even underperform the static GNNs, GAE and VGAE. This phenomenon shows that the existing DyGNNs may exploit variant patterns and thus fail to handle distribution shift. 2) Moreover, as the generalization baselines are not specifically designed to consider spatio-temporal distribution shift in dynamic graphs, they bring only limited improvements on Yelp and Transaction. In particular, they rely on ground-truth environment labels to achieve OOD generalization, which are unavailable for real dynamic graphs. Their inferior performance indicates that they cannot generalize well without accurate environment labels, which verifies that the lack of environment labels is also a key challenge for handling distribution shifts in dynamic graphs.
• Our method handles distribution shift better than the baselines, especially under stronger distribution shifts. DIDA improves significantly over all baselines in ‘w/ DS’ on all datasets.
4 https://www.aminer.cn/collaboration. 5 https://www.yelp.com/dataset. 6 Collected from Alibaba.com.
4.2 Synthetic Dataset
Settings. To evaluate the model's generalization ability under spatio-temporal distribution shift, following [18], we introduce manually designed shifts into the COLLAB dataset with all fields merged. Denote the original features and structures as $X^t_1 \in \mathbb{R}^{N \times d}$ and $A^t \in \{0, 1\}^{N \times N}$. For each time $t$, we uniformly sample $p(t)|\mathcal{E}^{t+1}|$ positive links and $(1 - p(t))|\mathcal{E}^{t+1}|$ negative links from $A^{t+1}$. These are then factorized into variant features $X^t_2 \in \mathbb{R}^{N \times d}$ with the property of structural preservation. The two feature portions are concatenated as $X^t = [X^t_1, X^t_2]$ and used as input node features for training and inference. The sampling probability $p(t) = \mathrm{clip}(p + \sigma \cos(t),\, 0,\, 1)$ controls the intensity of the shift: variant features $X^t_2$ constructed with higher $p(t)$ have stronger correlations with the future links $A^{t+1}$. We set $p_{test} = 0.1$, $\sigma_{test} = 0$, $\sigma_{train} = 0.05$, and vary $p_{train}$ from 0.4 to 0.8 for evaluation. Since the correlations between $X^t_2$ and the label $A^{t+1}$ vary through time and neighborhood, patterns that include $X^t_2$ are variant under distribution shift. As static GNNs cannot support time-varying features, we omit their results.
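A sketch of the shift construction, covering the sampling probability p(t) and the link sampling; the variant-feature factorization step is omitted, and the candidate edge arrays are placeholders standing in for the real edge sets of A^{t+1}.

```python
import numpy as np

def shift_intensity(t, p, sigma):
    """p(t) = clip(p + sigma * cos(t), 0, 1), the intensity of the shift."""
    return float(np.clip(p + sigma * np.cos(t), 0.0, 1.0))

def sample_shifted_links(pos_links, neg_links, t, p, sigma, rng):
    """Draw p(t)*|E^{t+1}| positive and (1 - p(t))*|E^{t+1}| negative links,
    assuming both candidate pools are large enough for the requested counts."""
    n = len(pos_links)
    k = int(round(shift_intensity(t, p, sigma) * n))
    pos_idx = rng.choice(len(pos_links), size=k, replace=False)
    neg_idx = rng.choice(len(neg_links), size=n - k, replace=False)
    return [pos_links[i] for i in pos_idx], [neg_links[i] for i in neg_idx]
```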
Results. Based on the results on the synthetic dataset in Table 2, we have the following observations:
• Our method handles distribution shift better than the baselines. Although the baselines achieve high performance during training, their performance drops drastically at test time, which shows that the existing DyGNNs fail to handle distribution shifts. In terms of test results, DIDA consistently outperforms the DyGNN baselines by a significantly large margin. In particular, DIDA surpasses the best-performing baseline by nearly 13%/10%/5% in test results for the different shift levels. The general OOD baselines reduce the variance in some cases, but their improvements are not significant. In contrast, DIDA is specifically designed for dynamic graphs and can exploit the invariant spatio-temporal patterns to handle distribution shift.
• Our method can exploit invariant patterns to consistently alleviate the harmful effects of variant patterns under different distribution shift levels. As the shift level increases, almost all baselines improve in training results and decline in test results. This phenomenon shows that as the relationship between the variant patterns and labels grows stronger, the existing DyGNNs become more dependent on the variant patterns during training, causing their failure at test time. In contrast, DIDA's rise in training results and drop in test results are significantly smaller than the baselines', which demonstrates that DIDA can exploit invariant patterns and alleviate the harmful effects of variant patterns under distribution shift.
4.3 Complexity Analysis
We analyze the computational complexity of DIDA as follows. Denote by $|V|$ and $|E|$ the total number of nodes and edges in the graph, respectively, and by $d$ the dimensionality of the hidden representations. The spatio-temporal aggregation has a time complexity of $O(|E|d + |V|d^2)$. The disentangled component adds a constant multiplier of 2, which does not affect the asymptotic complexity of the aggregation. Denote by $|E_p|$ the number of edges to predict and by $|\mathcal{S}|$ the size of the intervention set. Our intervention mechanism has a time complexity of $O(|E_p||\mathcal{S}|d)$ during training and adds no extra time complexity at inference. Therefore, the overall time complexity of DIDA is $O(|E|d + |V|d^2 + |E_p||\mathcal{S}|d)$. Note that $|\mathcal{S}|$ is a hyperparameter and is usually set to a small constant. In summary, DIDA has linear time complexity with respect to the number of nodes and edges, which is on par with existing dynamic GNNs.
4.4 Ablation study
In this section, we conduct ablation studies to verify the effectiveness of the proposed spatio-temporal intervention mechanism and disentangled graph attention in DIDA.
Spatio-temporal intervention mechanism. We remove the intervention mechanism described in Sec. 3.3. From Figure 2, we can see that without spatio-temporal intervention, the model's performance drops significantly, especially on the synthetic dataset, which verifies that our intervention mechanism helps the model focus on invariant patterns when making predictions.
Disentangled graph attention. We further remove the disentangled attention described in Sec. 3.2. From Figure 2, we can see that the disentangled attention is a critical component of the model design, especially on the Yelp dataset. Moreover, without the disentangled module, the model is unable to obtain the variant and invariant patterns required for the subsequent intervention.
5 Related Work
Dynamic Graph Neural Networks. To tackle the complex structural and temporal information in dynamic graphs, considerable research attention has been devoted to dynamic graph neural networks (DyGNNs) [7, 8]. One class of DyGNNs first adopts a GNN to aggregate structural information for the graph at each time step, followed by a sequence model like an RNN [52, 53, 54, 45] or temporal self-attention [43] to process the temporal information. Another class of DyGNNs first introduces time-encoding techniques to represent each temporal link as a function of time, followed by a spatial module like a GNN or memory module [20, 55, 40, 41] to process the structural information. To obtain more fine-grained continuous node embeddings in dynamic graphs, some works further leverage neural interaction processes [56] and ordinary differential equations [57]. DyGNNs have been widely applied in real-world applications, including dynamic anomaly detection [58], event forecasting [59], dynamic recommendation [60], social character prediction [61], user modeling [62], temporal knowledge graph completion [63], etc. In this paper, we consider DyGNNs under spatio-temporal distribution shift, which remains unexplored in the dynamic graph neural network literature.
Out-of-Distribution Generalization. Most existing machine learning methods assume that the testing and training data are independent and identically distributed, which is not guaranteed to hold in many real-world scenarios [64]. In particular, there might be uncontrollable distribution shifts between the training and testing data distributions, which may lead to a sharp drop in model performance. To solve this problem, the Out-of-Distribution (OOD) generalization problem has recently become a central research topic in various areas [65, 64, 66]. Recently, several works have attempted to handle distribution shift on graphs [67, 29, 18, 68, 11, 69, 70, 71, 72, 73]. Another class of OOD methods, most related to our work, handles distribution shifts on time-series data [25, 26, 12, 27, 28, 74]. Current works consider either only structural distribution shift for static graphs or only temporal distribution shift for time-series data. However, spatio-temporal distribution shifts in dynamic graphs are more complex yet remain unexplored. To the best of our knowledge, this is the first study of spatio-temporal distribution shifts in dynamic graphs.
Disentangled Representation Learning. Disentangled representation learning aims to characterize the multiple latent explanatory factors behind observed data, where the factors are represented by different vectors [75]. Besides its applications in computer vision [76, 77, 78, 79, 80] and recommendation [81, 82, 83, 84, 85, 86], several disentangled GNNs have been proposed recently to generalize disentangled representation learning to graph data. DisenGCN [87] and IPGDN [88] utilize a dynamic routing mechanism to disentangle latent factors for node representations. FactorGCN [89] decomposes the input graph into several interpretable factor graphs. DGCL [90, 91] aims to learn disentangled graph-level representations with self-supervision. Some works factorize deep generative models based on node, edge, static, and dynamic factors [92] or spatial, temporal, and graph factors [93] to achieve interpretable dynamic graph generation.
6 Conclusion
In this paper, we propose Disentangled Intervention-based Dynamic Graph Attention Networks (DIDA) to handle spatio-temporal distribution shift in dynamic graphs. First, we propose a disentangled dynamic graph attention network to capture invariant and variant spatio-temporal patterns. Then, based on the causal inference literature, we design a spatio-temporal intervention mechanism to create multiple intervened distributions and propose an invariance regularization term to help the model focus on invariant patterns under distribution shifts. Extensive experiments on three real-world datasets and one synthetic dataset demonstrate that our method handles spatio-temporal distribution shift better than state-of-the-art baselines. One limitation is that in this paper we mainly consider dynamic graphs given as discrete snapshots, and we leave studying spatio-temporal distribution shifts in continuous dynamic graphs for future exploration.
Acknowledgements
This work was supported in part by the National Key Research and Development Program of China No. 2020AAA0106300, National Natural Science Foundation of China (No. 62250008, 62222209, 62102222, 62206149), China National Postdoctoral Program for Innovative Talents No. BX20220185 and China Postdoctoral Science Foundation No. 2022M711813. All opinions, findings, conclusions and recommendations in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.

1. What is the focus and contribution of the paper on dynamic graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of attention layers and spatio-temporal information?
3. What are the weaknesses of the paper, especially regarding the lack of proof or illustration for spatio-temporal intervention and computational complexity?
4. Can you clarify the expression for m_i and m_v in equation (6)?
5. How do you define 'ego-graph', 'distribution shifts', 'invariant and variant structural patterns' in the context of the paper?
Summary Of The Paper
This paper introduces a dynamic graph neural network method with a spatio-temporal intervention mechanism.
Strengths And Weaknesses
Strengths: (1) The empirical study shows considerable improvement over existing methods. (2) Innovative use of attention layers to capture spatio-temporal information.
Weaknesses: (1) No strict proof or detailed illustration of why the spatio-temporal intervention works. (2) The computational complexity is not discussed in the main contents.
Questions
In equation (6), why are the expressions for m_I and m_V identical?
Limitations
I failed to find accurate definitions of 'ego-graph', 'distribution shifts', 'invariant and variant structural patterns', etc. As a result, it is not easy to understand this paper correctly without reading several previous papers.
1. What is the focus of the paper regarding spatio-temporal distribution shifts?
2. What are the strengths of the proposed solution, particularly in its technical aspects?
3. Do you have any concerns or suggestions regarding the assumption on the invariant pattern?
4. How could the distribution shift mechanism be improved for more practical scenarios?
5. Can the approach be generalized to continuous dynamic graphs?
6. What are the limitations of the paper that hinder its applicability?
Summary Of The Paper
This paper studies the problem of spatio-temporal distribution shift in dynamic graphs. By disentangling the patterns in dynamic graphs into invariant and variant ones, the invariant patterns are utilized for stable prediction and the impact of distribution shift can be reduced. Although distribution shift has been widely studied in the computer vision and natural language processing literature, the authors make early attempts on dynamic graphs. I am overall positive about this work.
Strengths And Weaknesses
Strengths
[+] The distribution shift problem in dynamic graphs is critical, and the authors make early attempts on this topic, which should be encouraged in the community.
[+] The proposed solution is technically sound; the disentanglement as well as the causal spatio-temporal intervention mechanism satisfy the requirements.
[+] The experiments are extensive and the results are encouraging.
Weaknesses
[-] The invariant pattern is assumed to be time-dependent. In my view, invariant patterns could be further divided into time-dependent and time-independent ones.
[-] The distribution shifts in the experimental datasets are manually constructed; it would be better to have an automatically designed mechanism.
Questions
Can the proposed model be generalized to continuous dynamic graphs?
What are the variant and invariant patterns in a dynamic graph? Is there a common understanding beyond specific graph types?
Limitations
The core definitions of the variant and invariant patterns are not well explained, which limits the generalization and scalability of the proposed method.
NIPS | Title
Consistent Kernel Mean Estimation for Functions of Random Variables
Abstract
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings. We show that for any continuous function f , consistent estimators of the mean embedding of a random variable X lead to consistent estimators of the mean embedding of f(X). For Matérn kernels and sufficiently smooth functions we also provide rates of convergence. Our results extend to functions of multiple random variables. If the variables are dependent, we require an estimator of the mean embedding of their joint distribution as a starting point; if they are independent, it is sufficient to have separate estimators of the mean embeddings of their marginal distributions. In either case, our results cover both mean embeddings based on i.i.d. samples as well as “reduced set” expansions in terms of dependent expansion points. The latter serves as a justification for using such expansions to limit memory resources when applying the approach as a basis for probabilistic programming.
1 Introduction
A common task in probabilistic modelling is to compute the distribution of f(X), given a measurable function f and a random variable X. In fact, the earliest instances of this problem date back at least to Poisson (1837). Sometimes this can be done analytically. For example, if f is linear and X is Gaussian, that is $f(x) = ax + b$ and $X \sim \mathcal{N}(\mu, \sigma)$, we have $f(X) \sim \mathcal{N}(a\mu + b, a\sigma)$. There exist various methods for obtaining such analytical expressions (Mathai, 1973), but outside a small subset of distributions and functions the formulae are either not available or too complicated to be practical.
An alternative to the analytical approach is numerical approximation, ideally implemented as a flexible software library. The need for such tools is recognised in the general programming languages community (McKinley, 2016), but no standards have been established so far. The main challenge is in finding a good approximate representation for random variables.
Distributions on integers, for example, are usually represented as lists of $(x_i, p(x_i))$ pairs. For real-valued distributions, integral transforms (Springer, 1979), mixtures of Gaussians (Milios, 2009), Laguerre polynomials (Williamson, 1989), and Chebyshev polynomials (Korzeń and Jaroszewicz, 2014) were proposed as convenient representations for numerical computation. For strings, probabilistic finite automata are often used. All those approaches have their merits, but they only work with a specific input type.
There is an alternative, based on Monte Carlo sampling (Kalos and Whitlock, 2008), which is to represent X by a (possibly weighted) sample $\{(x_i, w_i)\}_{i=1}^n$ (with $w_i \geq 0$). This representation has several advantages: (i) it works for any input type, (ii) the sample size controls the time-accuracy trade-off, and (iii) applying functions to random variables reduces to applying the functions pointwise
to the sample, i.e., $\{(f(x_i), w_i)\}$ represents f(X). Furthermore, expectations of functions of random variables can be estimated as $\mathbb{E}[f(X)] \approx \sum_i w_i f(x_i) / \sum_i w_i$, sometimes with guarantees for the convergence rate.
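As a concrete illustration of the weighted estimate $\sum_i w_i f(x_i) / \sum_i w_i$, here is a minimal Python sketch; the distribution, the function f, and the uniform weights are illustrative choices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: X ~ N(0, 1) and f(x) = x^2, so E[f(X)] = 1.
x = rng.normal(0.0, 1.0, size=10_000)  # sample points x_i
w = np.ones_like(x)                     # weights w_i >= 0 (uniform here)

# Weighted Monte Carlo estimate of E[f(X)]: sum_i w_i f(x_i) / sum_i w_i.
estimate = np.sum(w * x**2) / np.sum(w)
print(estimate)  # close to 1; the error shrinks like 1/sqrt(n)
```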
The flexibility of this Monte Carlo approach comes at a cost: without further assumptions on the underlying input space X, it is hard to quantify the accuracy of this representation. For instance, given two samples of the same size, $\{(x_i, w_i)\}_{i=1}^n$ and $\{(x'_i, w'_i)\}_{i=1}^n$, how can we tell which one is a better representation of X? More generally, how could we optimize a representation with predefined sample size?
There exists an alternative to the Monte Carlo approach, called Kernel Mean Embeddings (KME) (Berlinet and Thomas-Agnan, 2004; Smola et al., 2007). It also represents random variables as samples, but additionally defines a notion of similarity between sample points. As a result, (i) it keeps all the advantages of the Monte Carlo scheme, (ii) it includes the Monte Carlo method as a special case, (iii) it overcomes its pitfalls described above, and (iv) it can be tailored to focus on different properties of X , depending on the user’s needs and prior assumptions. The KME approach identifies both sample points and distributions with functions in an abstract Hilbert space. Internally the latter are still represented as weighted samples, but the weights can be negative and the straightforward Monte Carlo interpretation is no longer valid. Schölkopf et al. (2015) propose using KMEs as approximate representation of random variables for the purpose of computing their functions. However, they only provide theoretical justification for it in rather idealised settings, which do not meet practical implementation requirements.
In this paper, we build on this work and provide general theoretical guarantees for the proposed estimators. Specifically, we prove statements of the form "if $\{(x_i, w_i)\}_{i=1}^n$ provides a good estimate for the KME of X, then $\{(f(x_i), w_i)\}_{i=1}^n$ provides a good estimate for the KME of f(X)". Importantly, our results do not assume joint independence of the observations $x_i$ (and weights $w_i$). This makes them a powerful tool. For instance, imagine we are given data $\{(x_i, w_i)\}_{i=1}^n$ from a random variable X that we need to compress. Then our theorems guarantee that, whatever compression algorithm we use, as long as the compressed representation $\{(x'_j, w'_j)\}_{j=1}^n$ still provides a good estimate for the KME of X, the pointwise images $\{(f(x'_j), w'_j)\}_{j=1}^n$ provide good estimates of the KME of f(X).
In the remainder of this section we first introduce KMEs and discuss their merits. Then we explain why and how we extend the results of Schölkopf et al. (2015). Section 2 contains our main results. In Section 2.1 we show consistency of the relevant estimator in a general setting, and in Section 2.2 we provide finite sample guarantees when Matérn kernels are used. In Section 3 we show how our results apply to functions of multiple variables, both interdependent and independent. Section 4 concludes with a discussion.
1.1 Background on kernel mean embeddings
Let X be a measurable input space. We use a positive definite, bounded, and measurable kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ to represent random variables $X \sim P$ and weighted samples $\hat{X} := \{(x_i, w_i)\}_{i=1}^n$ as two functions $\mu^k_X$ and $\hat\mu^k_X$ in the corresponding Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}_k$ by defining

$$\mu^k_X := \int k(x, \cdot)\, dP(x) \qquad \text{and} \qquad \hat\mu^k_X := \sum_i w_i\, k(x_i, \cdot)\,.$$

These are guaranteed to exist, since we assume the kernel is bounded (Smola et al., 2007). When clear from the context, we omit the kernel k in the superscript. $\mu_X$ is called the KME of P, but we also refer to it as the KME of X. In this paper we focus on computing functions of random variables. For $f : \mathcal{X} \to \mathcal{Z}$, where $\mathcal{Z}$ is a measurable space, and for a positive definite bounded $k_z : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ we also write

$$\mu^{k_z}_{f(X)} := \int k_z(f(x), \cdot)\, dP(x) \qquad \text{and} \qquad \hat\mu^{k_z}_{f(X)} := \sum_i w_i\, k_z(f(x_i), \cdot)\,. \tag{1}$$
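The following minimal Python sketch (ours, not from the paper) makes the weighted expansion $\hat\mu_X = \sum_i w_i k(x_i, \cdot)$ concrete with a Gaussian kernel on $\mathbb{R}$: the squared RKHS distance between two expansions reduces to kernel matrix algebra via $\|\hat\mu_X - \hat\mu_Y\|^2 = w^\top K_{xx} w - 2\, w^\top K_{xy} v + v^\top K_{yy} v$.

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-(a_i - b_j)^2 / (2 sigma2)), 1-d inputs."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma2))

def kme_dist2(x, w, y, v, sigma2=1.0):
    """Squared RKHS distance between sum_i w_i k(x_i, .) and sum_j v_j k(y_j, .)."""
    return (w @ gauss_kernel(x, x, sigma2) @ w
            - 2.0 * w @ gauss_kernel(x, y, sigma2) @ v
            + v @ gauss_kernel(y, y, sigma2) @ v)

rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, 500), rng.normal(0, 1, 500)
w = np.full(500, 1 / 500)
print(kme_dist2(x, w, y, w))  # small: both expansions estimate the same mu_X
```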
The advantage of mapping random variables X and samples $\hat{X}$ to functions in the RKHS is that we may now say that $\hat{X}$ is a good approximation for X if the RKHS distance $\|\hat\mu_X - \mu_X\|$ is small. This distance depends on the choice of the kernel, and different kernels emphasise different information about X. For example, if on $\mathcal{X} := [a, b] \subset \mathbb{R}$ we choose $k(x, x') := x \cdot x' + 1$, then $\mu_X(x) = \mathbb{E}_{X \sim P}[X]\, x + 1$. Thus any two distributions and/or samples with equal means are mapped to the same function in $\mathcal{H}_k$, so the distance between them is zero. Therefore, using this particular k, we keep track only of the mean of the distributions. If instead we prefer to keep track of all first p moments, we may use the kernel $k(x, x') := (x \cdot x' + 1)^p$. And if we do not want to lose any information at all, we should choose k such that $\mu^k$ is injective over all probability measures on $\mathcal{X}$. Such kernels are called characteristic. For standard spaces, such as $\mathcal{X} = \mathbb{R}^d$, many widely used kernels were proven characteristic, such as Gaussian, Laplacian, and Matérn kernels (Sriperumbudur et al., 2010, 2011).
The Gaussian kernel $k(x, x') := e^{-\|x - x'\|_2^2 / (2\sigma^2)}$ may serve as another good illustration of the flexibility of this representation. Whatever the positive bandwidth $\sigma^2 > 0$, we do not lose any information about distributions, because k is characteristic. Nevertheless, if $\sigma^2$ grows, all distributions start looking the same, because their embeddings converge to a constant function 1. If, on the other hand, $\sigma^2$ becomes small, distributions look increasingly different and $\hat\mu_X$ becomes a function with bumps of height $w_i$ at every $x_i$. In the limit when $\sigma^2$ goes to zero, each point is only similar to itself, so $\hat\mu_X$ reduces to the Monte Carlo method. Choosing $\sigma^2$ can be interpreted as controlling the degree of smoothing in the approximation.
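A small sketch of this bandwidth effect, evaluating $\hat\mu_X(t)$ on a grid (all numbers are our own illustrative choices):

```python
import numpy as np

def embedding_on_grid(x, w, t, sigma2):
    """Evaluate mu_hat_X(t) = sum_i w_i exp(-(t - x_i)^2 / (2 sigma2)) at grid points t."""
    return (w[None, :] * np.exp(-(t[:, None] - x[None, :]) ** 2 / (2 * sigma2))).sum(axis=1)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 20)
w = np.full(20, 1 / 20)
t = np.linspace(-4, 4, 9)
for sigma2 in (0.01, 1.0, 100.0):
    # tiny sigma2: isolated bumps of height ~w_i; huge sigma2: nearly constant function
    print(sigma2, np.round(embedding_on_grid(x, w, t, sigma2), 3))
```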
1.2 Reduced set methods
An attractive feature when using KME estimators is the ability to reduce the number of expansion points (i.e., the size of the weighted sample) in a principled way. Specifically, if $\hat{X}' := \{(x'_j, 1/N)\}_{j=1}^N$, then the objective is to construct $\hat{X} := \{(x_i, w_i)\}_{i=1}^n$ that minimises $\|\hat\mu_{X'} - \hat\mu_X\|$ with $n < N$. Often the resulting $x_i$ are mutually dependent and the $w_i$ certainly depend on them. The algorithms for constructing such expansions are known as reduced set methods and have been studied by the machine learning community (Schölkopf and Smola, 2002, Chapter 18).
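As a hedged sketch of the simplest such method (random subsampling followed by weight optimization, the variant used for $\hat\mu_3$ in Section 1.3 below): minimizing $\|\hat\mu_{X'} - \sum_i w_i k(z_i, \cdot)\|^2$ over the weights is a linear-quadratic problem with a closed-form solution. The Gaussian kernel and the ridge term are our own illustrative choices for numerical stability.

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma2))

def reduced_set(x_full, n, sigma2=1.0, ridge=1e-8, rng=None):
    """Pick n of the N points at random, then solve for weights w minimizing
    ||mu_hat_{X'} - sum_i w_i k(z_i, .)||^2, i.e. (K_zz + ridge I) w = K_zx 1/N."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.choice(x_full, size=n, replace=False)
    K_zz = gauss_kernel(z, z, sigma2)
    target = gauss_kernel(z, x_full, sigma2).mean(axis=1)  # K_zx @ (1/N) vector
    w = np.linalg.solve(K_zz + ridge * np.eye(n), target)
    return z, w  # note: the optimized w_i may be negative

rng = np.random.default_rng(0)
z, w = reduced_set(rng.normal(0, 1, 1000), n=50, rng=rng)
```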
Although reduced set methods provide significant efficiency gains, their application raises certain concerns when it comes to computing functions of random variables. Let P, Q be the distributions of X and f(X) respectively. If $x'_j \sim_{\text{i.i.d.}} P$, then $f(x'_j) \sim_{\text{i.i.d.}} Q$, and so $\hat\mu_{f(X')} = \frac{1}{N} \sum_j k(f(x'_j), \cdot)$ reduces to the commonly used $\sqrt{N}$-consistent empirical estimator of $\mu_{f(X)}$ (Smola et al., 2007). Unfortunately, this is not the case after applying reduced set methods, and it is not known under which conditions $\hat\mu_{f(X)}$ is a consistent estimator for $\mu_{f(X)}$.
Schölkopf et al. (2015) advocate the use of reduced expansion set methods to save computational resources. They also provide some reasoning why this should be the right thing to do for characteristic kernels, but as they state themselves, their rigorous analysis does not cover practical reduced set methods. Motivated by this and other concerns listed in Section 1.4, we provide a generalised analysis of the estimator $\hat\mu_{f(X)}$, where we do not make assumptions on how the $x_i$ and $w_i$ were generated.
Before doing that, however, we first illustrate how the need for reduced set methods naturally emerges on a concrete problem.
1.3 Illustration with functions of two random variables
Suppose that we want to estimate $\mu_{f(X,Y)}$ given i.i.d. samples $\hat{X}' = \{(x'_i, 1/N)\}_{i=1}^N$ and $\hat{Y}' = \{(y'_j, 1/N)\}_{j=1}^N$ from two independent random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ respectively. Let Q be the distribution of $Z = f(X, Y)$.
The first option is to consider what we will call the diagonal estimator $\hat\mu_1 := \frac{1}{N} \sum_{i=1}^N k_z(f(x'_i, y'_i), \cdot)$. Since $f(x'_i, y'_i) \sim_{\text{i.i.d.}} Q$, $\hat\mu_1$ is $\sqrt{N}$-consistent (Smola et al., 2007). Another option is to consider the U-statistic estimator $\hat\mu_2 := \frac{1}{N^2} \sum_{i,j=1}^N k_z(f(x'_i, y'_j), \cdot)$, which is also known to be $\sqrt{N}$-consistent. Experiments show that $\hat\mu_2$ is more accurate and has lower variance than $\hat\mu_1$ (see Figure 1). However, the U-statistic estimator $\hat\mu_2$ needs $O(n^2)$ memory rather than $O(n)$. For this reason Schölkopf et al. (2015) propose to use a reduced set method both on $\hat{X}'$ and $\hat{Y}'$ to get new samples $\hat{X} = \{(x_i, w_i)\}_{i=1}^n$ and $\hat{Y} = \{(y_j, u_j)\}_{j=1}^n$ of size $n \ll N$, and then estimate $\mu_{f(X,Y)}$ using $\hat\mu_3 := \sum_{i,j=1}^n w_i u_j\, k_z(f(x_i, y_j), \cdot)$.
We ran experiments on synthetic data to show how accurately $\hat\mu_1$, $\hat\mu_2$ and $\hat\mu_3$ approximate $\mu_{f(X,Y)}$ with growing sample size N. We considered three basic arithmetic operations: multiplication $X \cdot Y$, division $X / Y$, and exponentiation $X^Y$, with $X \sim \mathcal{N}(3, 0.5)$ and $Y \sim \mathcal{N}(4, 0.5)$. As the true embedding $\mu_{f(X,Y)}$ is unknown, we approximated it by a U-statistic estimator based on a large sample (125 points). For $\hat\mu_3$, we used the simplest possible reduced set method: we randomly sampled subsets of size $n = 0.01 \cdot N$ of the $x_i$, and optimized the weights $w_i$ and $u_i$ to best approximate $\hat\mu_X$ and $\hat\mu_Y$. The results are summarised in Figure 1 and corroborate our expectations: (i) all estimators converge, (ii) $\hat\mu_2$ converges fastest and has the lowest variance, and (iii) $\hat\mu_3$ is worse than $\hat\mu_2$, but much better than the diagonal estimator $\hat\mu_1$. Note, moreover, that unlike the U-statistic estimator $\hat\mu_2$, the reduced set based estimator $\hat\mu_3$ can be used with a fixed storage budget even if we perform a sequence of function applications—a situation naturally appearing in the context of probabilistic programming.
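To make the comparison concrete, here is a minimal sketch for $f(X, Y) = X \cdot Y$ (our own illustration: a Gaussian kernel stands in for $k_z$, and a large i.i.d. diagonal sample stands in for the unknown $\mu_{f(X,Y)}$ instead of the large U-statistic reference used in the paper). Typically the U-statistic distance comes out smaller, mirroring Figure 1.

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma2))

def dist2(p, wp, q, wq):
    return (wp @ gauss_kernel(p, p) @ wp - 2 * wp @ gauss_kernel(p, q) @ wq
            + wq @ gauss_kernel(q, q) @ wq)

rng = np.random.default_rng(0)
N = 40
x, y = rng.normal(3, 0.5, N), rng.normal(4, 0.5, N)

z1 = x * y                                  # diagonal: N points f(x_i, y_i)
z2 = (x[:, None] * y[None, :]).ravel()      # U-statistic: all N^2 pairs f(x_i, y_j)

# Stand-in for mu_{f(X,Y)}: a much larger i.i.d. sample of Z = X * Y.
z_ref = rng.normal(3, 0.5, 2000) * rng.normal(4, 0.5, 2000)
u = np.full(z_ref.size, 1 / z_ref.size)

print("diagonal   :", dist2(z1, np.full(N, 1 / N), z_ref, u))
print("U-statistic:", dist2(z2, np.full(N * N, 1 / N**2), z_ref, u))
```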
Schölkopf et al. (2015) prove the consistency of $\hat\mu_3$ only for a rather limited case, when the points of the reduced expansions $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$ are i.i.d. copies of X and Y, respectively, and the weights $\{(w_i, u_i)\}_{i=1}^n$ are constants. Using our new results we will prove in Section 3.1 the consistency of $\hat\mu_3$ under fairly general conditions, even in the case when both expansion points and weights are interdependent random variables.
1.4 Other sources of non-i.i.d. samples
Although our discussion above focuses on reduced expansion set methods, there are other popular algorithms that produce KME expansions where the samples are not i.i.d. Here we briefly discuss several examples, emphasising that our selection is not comprehensive. They provide additional motivation for stating convergence guarantees in the most general setting possible.
An important notion in probability theory is that of a conditional distribution, which can also be represented using KME (Song et al., 2009). With this representation the standard laws of probability, such as sum, product, and Bayes’ rules, can be stated using KME (Fukumizu et al., 2013). Applying those rules results in KME estimators with strong dependencies between samples and their weights.
Another possibility is that even though i.i.d. samples are available, they may not produce the best estimator. Various approaches, such as kernel herding (Chen et al., 2010; Lacoste-Julien et al., 2015), attempt to produce a better KME estimator by actively generating pseudo-samples that are not i.i.d. from the underlying distribution.
2 Main results
This section contains our main results regarding consistency and finite sample guarantees for the estimator $\hat\mu_{f(X)}$ defined in (1). They are based on the convergence of $\hat\mu_X$ and avoid simplifying assumptions about its structure.
2.1 Consistency
If $k_x$ is $c_0$-universal (see Sriperumbudur et al. (2011)), consistency of $\hat\mu_{f(X)}$ can be shown in a rather general setting.

Theorem 1. Let $\mathcal{X}$ and $\mathcal{Z}$ be compact Hausdorff spaces equipped with their Borel $\sigma$-algebras, $f : \mathcal{X} \to \mathcal{Z}$ a continuous function, and $k_x, k_z$ continuous kernels on $\mathcal{X}, \mathcal{Z}$ respectively. Assume $k_x$ is $c_0$-universal and that there exists C such that $\sum_i |w_i| \leq C$ independently of n. The following holds: if $\hat\mu^{k_x}_X \to \mu^{k_x}_X$, then $\hat\mu^{k_z}_{f(X)} \to \mu^{k_z}_{f(X)}$ as $n \to \infty$.
Proof. Let P be the distribution of X and $\hat{P}_n = \sum_{i=1}^n w_i \delta_{x_i}$. Define a new kernel on $\mathcal{X}$ by $\tilde{k}_x(x_1, x_2) := k_z(f(x_1), f(x_2))$. $\mathcal{X}$ is compact and $\{\hat{P}_n \mid n \in \mathbb{N}\} \cup \{P\}$ is a bounded set (in total variation norm) of finite measures, because $\|\hat{P}_n\|_{TV} = \sum_{i=1}^n |w_i| \leq C$. Furthermore, $k_x$ is continuous and $c_0$-universal. Using Corollary 52 of Simon-Gabriel and Schölkopf (2016) we conclude that $\hat\mu^{k_x}_X \to \mu^{k_x}_X$ implies that $\hat{P}_n$ converges weakly to P. Now, $k_z$ and f being continuous, so is $\tilde{k}_x$. Thus, if $\hat{P}_n$ converges weakly to P, then $\hat\mu^{\tilde{k}_x}_X \to \mu^{\tilde{k}_x}_X$ (Simon-Gabriel and Schölkopf, 2016, Theorem 44, Points (1) and (iii)). Overall, $\hat\mu^{k_x}_X \to \mu^{k_x}_X$ implies $\hat\mu^{\tilde{k}_x}_X \to \mu^{\tilde{k}_x}_X$. We conclude the proof by showing that convergence in $\mathcal{H}_{\tilde{k}_x}$ leads to convergence in $\mathcal{H}_{k_z}$:

$$\left\| \hat\mu^{k_z}_{f(X)} - \mu^{k_z}_{f(X)} \right\|^2_{k_z} = \left\| \hat\mu^{\tilde{k}_x}_X - \mu^{\tilde{k}_x}_X \right\|^2_{\tilde{k}_x} \to 0\,.$$

For a detailed version of the above, see Appendix A.
The continuity assumption is rather unrestrictive. All kernels and functions defined on a discrete space are continuous with respect to the discrete topology, so the theorem applies in this case. For $\mathcal{X} = \mathbb{R}^d$, many kernels used in practice are continuous, including Gaussian, Laplacian, Matérn, and other radial kernels. The slightly limiting factor of this theorem is that $k_x$ must be $c_0$-universal, which can often be tricky to verify. However, most standard kernels—including all radial, non-constant kernels—are $c_0$-universal (see Sriperumbudur et al., 2011). The assumption that the input domain is compact is satisfied in most applications, since any measurements coming from physical sensors are contained in a bounded range. Finally, the assumption that $\sum_i |w_i| \leq C$ can be enforced, for instance, by applying a suitable regularization in reduced set methods.
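A quick numerical sanity check of Theorem 1 in the simplest i.i.d. case (our own illustration: Gaussian kernel, $f = \tanh$, and a large reference sample standing in for $\mu_{f(X)}$):

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma2))

def dist2(p, wp, q, wq):
    return (wp @ gauss_kernel(p, p) @ wp - 2 * wp @ gauss_kernel(p, q) @ wq
            + wq @ gauss_kernel(q, q) @ wq)

rng = np.random.default_rng(0)
f = np.tanh                              # a continuous f, as the theorem requires
ref = f(rng.normal(0, 1, 2000))          # stand-in for the true embedding mu_{f(X)}
wr = np.full(ref.size, 1 / ref.size)

for n in (50, 200, 800):
    x, w = rng.normal(0, 1, n), np.full(n, 1 / n)
    # If mu_hat_X -> mu_X, the theorem says this distance should also vanish.
    print(n, dist2(f(x), w, ref, wr))
```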
2.2 Finite sample guarantees
Theorem 1 guarantees that the estimator $\hat\mu_{f(X)}$ converges to $\mu_{f(X)}$ when $\hat\mu_X$ converges to $\mu_X$. However, it says nothing about the speed of convergence. In this section we provide a convergence rate when working with Matérn kernels, which are of the form

$$k^s_x(x, x') = \frac{2^{1-s}}{\Gamma(s)} \left\| x - x' \right\|_2^{s - d/2} B_{d/2 - s}\!\left( \left\| x - x' \right\|_2 \right), \tag{2}$$

where $B_\alpha$ is a modified Bessel function of the third kind (also known as Macdonald function) of order $\alpha$, $\Gamma$ is the Gamma function, and $s > \frac{d}{2}$ is a smoothness parameter. The RKHS induced by $k^s_x$ is the Sobolev space $W^s_2(\mathbb{R}^d)$ (Wendland, 2004, Theorem 6.13 & Chap. 10) containing s-times differentiable functions. The finite-sample bound of Theorem 2 is based on the analysis of Kanagawa et al. (2016), which requires the following assumptions:
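A sketch of Eq. (2) in Python (our own implementation, not the authors' code): scipy's `kv` is the Macdonald function $B_\alpha$, and since $B_\alpha = B_{-\alpha}$ we evaluate it at $\nu = s - d/2 > 0$; the diagonal value uses the analytic limit $\lim_{r \to 0} r^\nu B_\nu(r) = 2^{\nu - 1} \Gamma(\nu)$.

```python
import numpy as np
from scipy.special import gamma, kv  # kv(alpha, r): Macdonald function B_alpha

def matern_kernel(x, y, s, d):
    """Matern kernel of Eq. (2) on R^d with smoothness s > d/2.
    x: (n, d) array, y: (m, d) array; returns the (n, m) kernel matrix."""
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = s - d / 2.0                      # B_{d/2 - s} = B_{s - d/2} by symmetry
    r_safe = np.where(r > 0, r, 1.0)      # placeholder to avoid evaluating kv at 0
    vals = 2 ** (1 - s) / gamma(s) * r_safe ** nu * kv(nu, r_safe)
    diag = 2 ** (1 - s) / gamma(s) * 2 ** (nu - 1) * gamma(nu)  # r -> 0 limit
    return np.where(r > 0, vals, diag)

x = np.random.default_rng(0).normal(size=(5, 2))
K = matern_kernel(x, x, s=2.0, d=2)       # s = 2 > d/2 = 1, as Eq. (2) requires
```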
Assumptions 1. Let X be a random variable over $\mathcal{X} = \mathbb{R}^d$ with distribution P and let $\hat{X} = \{(x_i, w_i)\}_{i=1}^n$ be random variables over $\mathcal{X}^n \times \mathbb{R}^n$ with joint distribution S. There exists a probability distribution Q with full support on $\mathbb{R}^d$ and a bounded density, satisfying the following properties:

(i) P has a bounded density function w.r.t. Q;
(ii) there is a constant $D > 0$ independent of n, such that

$$\mathbb{E}_S\!\left[ \frac{1}{n} \sum_{i=1}^n g^2(x_i) \right] \leq D \left\| g \right\|^2_{L_2(Q)}, \qquad \forall g \in L_2(Q)\,.$$
These assumptions were shown to be fairly general and we refer to Kanagawa et al. (2016, Section 4.1) for various examples where they are met. Next we state the main result of this section.
Theorem 2. Let $\mathcal{X} = \mathbb{R}^d$, $\mathcal{Z} = \mathbb{R}^{d'}$, and $f : \mathcal{X} \to \mathcal{Z}$ be an $\alpha$-times differentiable function ($\alpha \in \mathbb{N}^+$). Take $s_1 > d/2$ and $s_2 > d'$ such that $s_1, s_2/2 \in \mathbb{N}^+$. Let $k^{s_1}_x$ and $k^{s_2}_z$ be Matérn kernels over $\mathcal{X}$ and $\mathcal{Z}$ respectively, as defined in (2). Assume $X \sim P$ and $\hat{X} = \{(x_i, w_i)\}_{i=1}^n \sim S$ satisfy Assumptions 1. Moreover, assume that P and the marginals of $x_1, \ldots, x_n$ have a common compact support. Suppose that, for some constants $b > 0$ and $0 < c \leq 1/2$:

(i) $\mathbb{E}_S\!\left[ \| \hat\mu_X - \mu_X \|^2_{k^{s_1}_x} \right] = O(n^{-2b})$;
(ii) $\sum_{i=1}^n w_i^2 = O(n^{-2c})$ (with probability 1).

Let $\theta = \min\!\left( \frac{s_2}{2 s_1}, \frac{\alpha}{s_1}, 1 \right)$ and assume $\theta b - (1/2 - c)(1 - \theta) > 0$. Then

$$\mathbb{E}_S \left\| \hat\mu_{f(X)} - \mu_{f(X)} \right\|^2_{k^{s_2}_z} = O\!\left( (\log n)^{d'}\, n^{-2(\theta b - (1/2 - c)(1 - \theta))} \right). \tag{3}$$
Before we provide a short sketch of the proof, let us briefly comment on this result. As a benchmark, remember that when $x_1, \ldots, x_n$ are i.i.d. observations from X and $\hat{X} = \{(x_i, 1/n)\}_{i=1}^n$, we get $\|\hat\mu_{f(X)} - \mu_{f(X)}\|^2 = O_P(n^{-1})$, which was recently shown to be a minimax optimal rate (Tolstikhin et al., 2016). How do we compare to this benchmark? In this case we have $b = c = 1/2$ and our rate is defined by $\theta$. If f is smooth enough, say $\alpha > d/2 + 1$, and by setting $s_2 > 2 s_1 = 2\alpha$, we recover the $O(n^{-1})$ rate up to an extra $(\log n)^{d'}$ factor.

However, Theorem 2 applies to much more general settings. Importantly, it makes no i.i.d. assumptions on the data points and weights, allowing for complex interdependences. Instead, it asks the convergence of the estimator $\hat\mu_X$ to the embedding $\mu_X$ to be sufficiently fast. On the downside, the upper bound is affected by the smoothness of f, even in the i.i.d. setting: if $\alpha \ll d/2$, the rate will become slower, as $\theta = \alpha / s_1$. Also, the rate depends both on d and $d'$. Whether these are artefacts of our proof remains an open question.
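As a worked instance of the rate formula (our own arithmetic, spelling out the i.i.d. discussion above): with $b = c = 1/2$, a smooth f ($\alpha \geq s_1$), and $s_2 \geq 2 s_1$, we get $\theta = 1$, so

```latex
% Worked instance: i.i.d. case b = c = 1/2, smooth f, s_2 >= 2 s_1, hence theta = 1:
\nu \;=\; \theta b - \left(\tfrac{1}{2} - c\right)(1 - \theta)
      \;=\; 1 \cdot \tfrac{1}{2} \;-\; 0 \cdot 0 \;=\; \tfrac{1}{2},
\qquad\text{so (3) reads}\qquad
\mathbb{E}_S \bigl\| \hat\mu_{f(X)} - \mu_{f(X)} \bigr\|^2_{k^{s_2}_z}
      \;=\; O\!\bigl( (\log n)^{d'}\, n^{-1} \bigr),
```

matching the minimax $O(n^{-1})$ benchmark up to the $(\log n)^{d'}$ factor.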
Proof. Here we sketch the main ideas of the proof and develop the details in Appendix C. Throughout the proof, C will designate a constant that depends neither on the sample size n nor on the variable R (to be introduced). C may however change from line to line. We start by showing that:

$$\mathbb{E}_S \left\| \hat\mu^{k_z}_{f(X)} - \mu^{k_z}_{f(X)} \right\|^2_{k_z} = (2\pi)^{-d'/2} \int_{\mathcal{Z}} \mathbb{E}_S \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz\,, \tag{4}$$

where h is a Matérn kernel over $\mathcal{Z}$ with smoothness parameter $s_2/2$. Second, we upper bound the integrand by roughly imitating the proof idea of Theorem 1 from Kanagawa et al. (2016). This eventually yields:

$$\mathbb{E}_S \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 \leq C n^{-2\nu}\,, \tag{5}$$

where $\nu := \theta b - (1/2 - c)(1 - \theta)$. Unfortunately, this upper bound does not depend on z and cannot be integrated over the whole $\mathcal{Z}$ in (4). Denoting $B_R$ the ball of radius R, centred on the origin of $\mathcal{Z}$, we thus decompose the integral in (4) as:

$$\int_{\mathcal{Z}} \mathbb{E} \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz = \int_{B_R} \mathbb{E} \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz + \int_{\mathcal{Z} \setminus B_R} \mathbb{E} \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz\,.$$

On $B_R$ we upper bound the integral by (5) times the ball's volume (which grows like $R^{d'}$):

$$\int_{B_R} \mathbb{E} \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz \leq C R^{d'} n^{-2\nu}\,. \tag{6}$$

On $\mathcal{Z} \setminus B_R$, we upper bound the integral by a value that decreases with R, which is of the form:

$$\int_{\mathcal{Z} \setminus B_R} \mathbb{E} \left( \left[ \hat\mu^h_{f(X)} - \mu^h_{f(X)} \right](z) \right)^2 dz \leq C n^{1 - 2c} (R - C')^{s_2 - 2} e^{-2(R - C')} \tag{7}$$

with $C' > 0$ being a constant smaller than R. In essence, this upper bound decreases with R because $[\hat\mu^h_{f(X)} - \mu^h_{f(X)}](z)$ decays with the same speed as h when $\|z\|$ grows indefinitely. We are now left with two rates, (6) and (7), which respectively increase and decrease with growing R. We complete the proof by balancing these two terms, which results in setting $R \approx (\log n)^{1/2}$.
3 Functions of Multiple Arguments
The previous section applies to functions f of one single variable X. However, we can apply its results to functions of multiple variables if we take the argument X to be a tuple containing multiple values. In this section we discuss how to do it using two input variables from spaces $\mathcal{X}$ and $\mathcal{Y}$, but the results also apply to more inputs. To be precise, our input space changes from $\mathcal{X}$ to $\mathcal{X} \times \mathcal{Y}$, the input random variable from X to (X, Y), and the kernel on the input space from $k_x$ to $k_{xy}$.
To apply our results from Section 2, all we need is a consistent estimator $\hat\mu_{(X,Y)}$ of the joint embedding $\mu_{(X,Y)}$. There are different ways to get such an estimator. One way is to sample $(x'_i, y'_i)$ i.i.d. from the joint distribution of (X, Y) and construct the usual empirical estimator, or approximate it using reduced set methods. Alternatively, we may want to construct $\hat\mu_{(X,Y)}$ based only on consistent estimators of $\mu_X$ and $\mu_Y$. For example, this is how $\hat\mu_3$ was defined in Section 1.3. Below we show that this can indeed be done if X and Y are independent.
3.1 Application to Section 1.3
Following Schölkopf et al. (2015), we consider two independent random variables $X \sim P_x$ and $Y \sim P_y$. Their joint distribution is $P_x \otimes P_y$. Consistent estimators of their embeddings are given by $\hat\mu_X = \sum_{i=1}^n w_i\, k_x(x_i, \cdot)$ and $\hat\mu_Y = \sum_{j=1}^n u_j\, k_y(y_j, \cdot)$. In this section we show that $\hat\mu_{f(X,Y)} = \sum_{i,j=1}^n w_i u_j\, k_z(f(x_i, y_j), \cdot)$ is a consistent estimator of $\mu_{f(X,Y)}$.
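Before turning to the formal argument, a minimal numerical sketch of this estimator may help (illustrative choices throughout: Gaussian kernels, uniform weights, $f(x, y) = x \cdot y$, and a large i.i.d. sample standing in for $\mu_{f(X,Y)}$):

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma2))

def dist2(p, wp, q, wq):
    return (wp @ gauss_kernel(p, p) @ wp - 2 * wp @ gauss_kernel(p, q) @ wq
            + wq @ gauss_kernel(q, q) @ wq)

rng = np.random.default_rng(0)
n = 50
x, y = rng.normal(3, 0.5, n), rng.normal(4, 0.5, n)
w, u = np.full(n, 1 / n), np.full(n, 1 / n)    # marginal expansion weights

# mu_hat_{f(X,Y)}: n^2 expansion points f(x_i, y_j) with weights w_i * u_j.
pts = (x[:, None] * y[None, :]).ravel()
wts = np.outer(w, u).ravel()

ref = rng.normal(3, 0.5, 2000) * rng.normal(4, 0.5, 2000)  # i.i.d. stand-in
print(dist2(pts, wts, ref, np.full(ref.size, 1 / ref.size)))  # small
```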
We choose a product kernel $k_{xy}\big((x_1, y_1), (x_2, y_2)\big) = k_x(x_1, x_2)\, k_y(y_1, y_2)$, so the corresponding RKHS is a tensor product $\mathcal{H}_{k_{xy}} = \mathcal{H}_{k_x} \otimes \mathcal{H}_{k_y}$ (Steinwart and Christmann, 2008, Lemma 4.6), and the mean embedding of the product random variable (X, Y) is a tensor product of their marginal mean embeddings, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$. With consistent estimators for the marginal embeddings we can estimate the joint embedding using their tensor product

$$\hat\mu_{(X,Y)} = \hat\mu_X \otimes \hat\mu_Y = \sum_{i,j=1}^n w_i u_j\, k_x(x_i, \cdot) \otimes k_y(y_j, \cdot) = \sum_{i,j=1}^n w_i u_j\, k_{xy}\big((x_i, y_j), (\cdot, \cdot)\big)\,.$$
If points are i.i.d. and $w_i = u_i = 1/n$, this reduces to the U-statistic estimator $\hat\mu_2$ from Section 1.3.

Lemma 3. Let $(s_n)_n$ be any positive real sequence converging to zero. Suppose $k_{xy} = k_x k_y$ is a product kernel, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$, and $\hat\mu_{(X,Y)} = \hat\mu_X \otimes \hat\mu_Y$. Then:

$$\begin{cases} \|\hat\mu_X - \mu_X\|_{k_x} = O(s_n) \\ \|\hat\mu_Y - \mu_Y\|_{k_y} = O(s_n) \end{cases} \quad \text{implies} \quad \left\| \hat\mu_{(X,Y)} - \mu_{(X,Y)} \right\|_{k_{xy}} = O(s_n)\,.$$

Proof. For a detailed expansion of the first inequality see Appendix B.

$$\left\| \hat\mu_{(X,Y)} - \mu_{(X,Y)} \right\|_{k_{xy}} \leq \|\mu_X\|_{k_x} \|\hat\mu_Y - \mu_Y\|_{k_y} + \|\mu_Y\|_{k_y} \|\hat\mu_X - \mu_X\|_{k_x} + \|\hat\mu_X - \mu_X\|_{k_x} \|\hat\mu_Y - \mu_Y\|_{k_y} = O(s_n) + O(s_n) + O(s_n^2) = O(s_n)\,.$$
Corollary 4. If $\hat\mu_X \to \mu_X$ and $\hat\mu_Y \to \mu_Y$ as $n \to \infty$, then $\hat\mu_{(X,Y)} \to \mu_{(X,Y)}$.
Together with the results from Section 2 this lets us reason about estimators resulting from applying functions to multiple independent random variables. Write

$$\hat\mu^{k_{xy}}_{XY} = \sum_{i,j=1}^n w_i u_j\, k_{xy}\big((x_i, y_j), \cdot\big) = \sum_{\ell=1}^{n^2} \omega_\ell\, k_{xy}(\xi_\ell, \cdot)\,,$$

where $\ell$ enumerates the (i, j) pairs and $\xi_\ell = (x_i, y_j)$, $\omega_\ell = w_i u_j$. Now if $\hat\mu^{k_x}_X \to \mu^{k_x}_X$ and $\hat\mu^{k_y}_Y \to \mu^{k_y}_Y$, then $\hat\mu^{k_{xy}}_{XY} \to \mu^{k_{xy}}_{(X,Y)}$ (according to Corollary 4), and Theorem 1 shows that $\sum_{i,j=1}^n w_i u_j\, k_z(f(x_i, y_j), \cdot)$ is consistent as well. Unfortunately, we cannot apply Theorem 2 to get the speed of convergence, because a product of Matérn kernels is not a Matérn kernel any more.
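The reindexing above is a bookkeeping step; a short sketch (ours) verifies numerically that the flattened expansion with $\omega_\ell = w_i u_j$ represents $\hat\mu_X \otimes \hat\mu_Y$, using the product-kernel identity $\|\hat\mu_X \otimes \hat\mu_Y\|^2 = \|\hat\mu_X\|^2\, \|\hat\mu_Y\|^2$:

```python
import numpy as np

def gauss_kernel(a, b, sigma2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma2))

rng = np.random.default_rng(0)
n = 40
x, y = rng.normal(0, 1, n), rng.normal(0, 1, n)
w, u = np.full(n, 1 / n), np.full(n, 1 / n)

# Flattened expansion: xi_l = (x_i, y_j), omega_l = w_i * u_j, with l = i*n + j.
xi_x = np.repeat(x, n)          # x-coordinate of each of the n^2 pairs
xi_y = np.tile(y, n)            # y-coordinate of each pair
omega = np.outer(w, u).ravel()

# Product kernel on pairs: k_xy(xi_l, xi_m) = k_x(...) * k_y(...), so the squared
# RKHS norm of the flattened expansion is omega^T K_pairs omega.
K_pairs = gauss_kernel(xi_x, xi_x) * gauss_kernel(xi_y, xi_y)
lhs = omega @ K_pairs @ omega

# Tensor-product side: ||mu_hat_X||^2 * ||mu_hat_Y||^2.
rhs = (w @ gauss_kernel(x, x) @ w) * (u @ gauss_kernel(y, y) @ u)
print(lhs, rhs)  # agree up to floating point
```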
One downside of this overall approach is that the number of expansion points used for the estimation of the joint increases exponentially with the number of arguments of f . This can lead to prohibitively large computational costs, especially if the result of such an operation is used as an input to another function of multiple arguments. To alleviate this problem, we may use reduced expansion set methods before or after applying f , as we did for example in Section 1.2.
To conclude this section, let us summarize the implications of our results for two practical scenarios that should be distinguished.
- If we have separate samples from two random variables X and Y, then our results justify how to provide an estimate of the mean embedding of f(X, Y), provided that X and Y are independent. The samples themselves need not be i.i.d.—we can also work with weighted samples computed, for instance, by a reduced set method.
- How about dependent random variables? For instance, imagine that $Y = -X$ and $f(X, Y) = X + Y$. Clearly, in this case the distribution of f(X, Y) is a delta measure on 0, and there is no way to predict this from separate samples of X and Y. However, it should be stressed that our results (consistency and the finite sample bound) apply even to the case where X and Y are dependent. In that case, however, they require a consistent estimator of the joint embedding $\mu_{(X,Y)}$.
- It is also sufficient to have a reduced set expansion of the embedding of the joint distribution. This setting may sound strange, but it potentially has significant applications. Imagine that one has a large database of user data, sampled from a joint distribution. If we expand the joint's embedding in terms of synthetic expansion points using a reduced set construction method, then we can pass on these (weighted) synthetic expansion points to a third party without revealing the original data. Using our results, the third party can nevertheless perform arbitrary continuous functional operations on the joint distribution in a consistent manner.
4 Conclusion and future work
This paper provides a theoretical foundation for using kernel mean embeddings as approximate representations of random variables in scenarios where we need to apply functions to those random variables. We show that for continuous functions f (including all functions on discrete domains), consistency of the mean embedding estimator of a random variable X implies consistency of the mean embedding estimator of f(X). Furthermore, if the kernels are Matérn and the function f is sufficiently smooth, we provide bounds on the convergence rate. Importantly, our results apply beyond i.i.d. samples and cover estimators based on expansions with interdependent points and weights. One interesting future direction is to improve the finite-sample bounds and extend them to general radial and/or translation-invariant kernels.
Our work is motivated by the field of probabilistic programming. Using our theoretical results, kernel mean embeddings can be used to generalize functional operations (which lie at the core of all programming languages) to distributions over data types in a principled manner, by applying the operations to the points or approximate kernel expansions. This is in principle feasible for any data type provided a suitable kernel function can be defined on it. We believe that the approach holds significant potential for future probabilistic programming systems.
Acknowledgements
We thank Krikamol Muandet for providing the code used to generate Figure 1, Paul Rubenstein, Motonobu Kanagawa and Bharath Sriperumbudur for very useful discussions, and our anonymous reviewers for their valuable feedback. Carl-Johann Simon-Gabriel is supported by a Google European Fellowship in Causal Inference.
1. What is the focus of the paper in terms of contributions and novel aspects?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundations?
3. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the methodology or results presented in the paper?
Review
The paper provides a consistent estimator for functions of a random variable as well as finite sample bounds, based on kernel embedding. The paper is well written, original, and contains some new results in the context of kernel embedding of probability measures using reproducing kernel Hilbert spaces. However, the novelty is not enough to rank the paper in the top 3% of submissions.
1. What is the main contribution of the paper regarding kernel mean embedding schemes?
2. What are the strengths and weaknesses of the paper's theoretical analysis, particularly in terms of consistency and finite sample guarantees?
3. How does the reviewer assess the clarity and quality of the paper's presentation, including the formal definitions and proof?
4. Does the reviewer think that the paper is suitable for publication in NIPS, considering its relevance to machine learning? | Review | Review
Assume that we have a map f : X -> Y and two kernels k_X and k_Y on X and Y, respectively. The first main result of this paper shows: if we have consistency of an empirical kernel mean embedding scheme for k_X and all distributions P on X, then this scheme is also consistent for k_Y and all image distributions P_f. In addition, some finite sample guarantees for particular kernels and maps f are provided, and the case in which X is a product space is investigated in further detail. The clarity of the presentation can certainly be improved, in particular when it comes to formally correct definitions. For example, when reading Theorem 1, it is completely unclear from its formulation what form the estimator actually takes. One needs to refer to the proof to (only partially!!!) understand the theorem. Also, I could not really follow the proof in the case of data-dependent weights, which are presumably allowed in the theorem?!? In addition, the footnote in this theorem is a bad idea. On a completely different note, I am really unsure whether the paper fits into NIPS. While kernel embeddings have been considered at NIPS for quite some time, there was always a clear link to machine learning in those papers. This paper lacks this connection, or at least I could not get it.
NIPS | Title
Consistent Kernel Mean Estimation for Functions of Random Variables
| 1. What is the main contribution of the paper regarding the price of estimating the distribution of $f(X)$ using dependent Kernel mean embedding?
2. What are the strengths of the paper, particularly in its theoretical analysis and motivation?
3. Do you have any concerns or suggestions regarding the paper's focus on theoretical aspects without providing algorithmic innovations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any typos or errors in the paper that need to be addressed? | Review | Review
This paper quantifies the price of estimating the distribution of $f(X)$ when {\em dependent} kernel mean embedding is used. As shown in Eq. (2), the parametric rate O(1/n) is achievable with KME based on i.i.d. samples. However, when "expansion points" are dependent, the rate is not known and possibly worse. The authors quantify this gap precisely in Theorem 2. The take-home message is that when f is smooth, there is no loss, but for non-smooth f, you pay the price in sample complexity. Hence, for practitioners, this gives a guideline for deciding when to use techniques such as reduced set expansions. The paper is very well written, and I really appreciate the fact that the authors took the effort in Sections 1.1, 1.2, 1.3, and 1.4 to motivate when dependency naturally arises in KME and how memory trades off with accuracy. I learned something non-trivial that I did not know before I read this paper. Quantifying the price of dependent expansion points is an interesting mathematical question. However, this paper could do more to motivate general readers from the machine learning community to get interested in the broader topic of kernel mean embedding. The authors attempt to do this in the last paragraph of Section 4, relating it to probabilistic programming systems, which seems to be a weak connection. As a reader, I was curious as to where such techniques of KME and reduced set expansions can potentially be used, or are currently used, in solving application-specific problems. Another disappointing aspect of the paper is that the authors did not delve deeper into the question of multiple arguments in Section 3. What is currently provided is a direct corollary of Theorem 2, and the paper devotes too much space to something that adds little information over what is already said. However, it is an interesting question to ask: given i.i.d. samples from the joint distribution $(X,Y)$, what is the sample-efficient way to construct the KME? (Although this is outside the scope of this paper.) Overall, the questions addressed in this paper are theoretically very interesting, but mainly theoretical, since there is no algorithmic innovation. Further, the techniques necessary to prove the main results seem to be largely available from the existing literature, e.g. [Kanagawa and Fukumizu 2014], as the authors point out, which I appreciate since I could not have made the connection if the authors had not stated it so clearly. Hence, this paper addresses an interesting question and gives a clean answer, but the solution ended up being so simple that it loses some novelty/originality. Typos:
- On page 2, $\{(x'_j,w'_j)\}_{i=1}^n$ should be $\{(x'_j,w'_j)\}_{j=1}^N$.
- On page 2, $\{(f(x'_j),w'_j)\}_{i=1}^n$ should be $\{(f(x'_j),w'_j)\}_{j=1}^N$.
NIPS | Title
Consistent Kernel Mean Estimation for Functions of Random Variables
Abstract
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings. We show that for any continuous function f , consistent estimators of the mean embedding of a random variable X lead to consistent estimators of the mean embedding of f(X). For Matérn kernels and sufficiently smooth functions we also provide rates of convergence. Our results extend to functions of multiple random variables. If the variables are dependent, we require an estimator of the mean embedding of their joint distribution as a starting point; if they are independent, it is sufficient to have separate estimators of the mean embeddings of their marginal distributions. In either case, our results cover both mean embeddings based on i.i.d. samples as well as “reduced set” expansions in terms of dependent expansion points. The latter serves as a justification for using such expansions to limit memory resources when applying the approach as a basis for probabilistic programming.
1 Introduction
A common task in probabilistic modelling is to compute the distribution of f(X), given a measurable function f and a random variable X . In fact, the earliest instances of this problem date back at least to Poisson (1837). Sometimes this can be done analytically. For example, if f is linear and X is Gaussian, that is f(x) = ax+ b and X ⇠ N (µ; ), we have f(X) ⇠ N (aµ+ b; a ). There exist various methods for obtaining such analytical expressions (Mathai, 1973), but outside a small subset of distributions and functions the formulae are either not available or too complicated to be practical.
An alternative to the analytical approach is numerical approximation, ideally implemented as a flexible software library. The need for such tools is recognised in the general programming languages community (McKinley, 2016), but no standards were established so far. The main challenge is in finding a good approximate representation for random variables.
Distributions on integers, for example, are usually represented as lists of (x i , p(x i )) pairs. For real valued distributions, integral transforms (Springer, 1979), mixtures of Gaussians (Milios, 2009), Laguerre polynomials (Williamson, 1989), and Chebyshev polynomials (Korzeń and Jaroszewicz, 2014) were proposed as convenient representations for numerical computation. For strings, probabilistic finite automata are often used. All those approaches have their merits, but they only work with a specific input type.
There is an alternative, based on Monte Carlo sampling (Kalos and Whitlock, 2008), which is to represent X by a (possibly weighted) sample {(x
i , w i )}n i=1 (with wi 0). This representation has
several advantages: (i) it works for any input type, (ii) the sample size controls the time-accuracy trade-off, and (iii) applying functions to random variables reduces to applying the functions pointwise
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
to the sample, i.e., {(f(x i ), w i )} represents f(X). Furthermore, expectations of functions of random variables can be estimated as E [f(X)] ⇡ P
i
w i f(x i
)/ P
i
w i , sometimes with guarantees for the convergence rate.
The flexibility of this Monte Carlo approach comes at a cost: without further assumptions on the underlying input space X , it is hard to quantify the accuracy of this representation. For instance, given two samples of the same size, {(x
i , w i )}n i=1 and {(x0i, w0i)}ni=1, how can we tell which one is a
better representation of X? More generally, how could we optimize a representation with predefined sample size?
There exists an alternative to the Monte Carlo approach, called Kernel Mean Embeddings (KME) (Berlinet and Thomas-Agnan, 2004; Smola et al., 2007). It also represents random variables as samples, but additionally defines a notion of similarity between sample points. As a result, (i) it keeps all the advantages of the Monte Carlo scheme, (ii) it includes the Monte Carlo method as a special case, (iii) it overcomes its pitfalls described above, and (iv) it can be tailored to focus on different properties of X , depending on the user’s needs and prior assumptions. The KME approach identifies both sample points and distributions with functions in an abstract Hilbert space. Internally the latter are still represented as weighted samples, but the weights can be negative and the straightforward Monte Carlo interpretation is no longer valid. Schölkopf et al. (2015) propose using KMEs as approximate representation of random variables for the purpose of computing their functions. However, they only provide theoretical justification for it in rather idealised settings, which do not meet practical implementation requirements.
In this paper, we build on this work and provide general theoretical guarantees for the proposed estimators. Specifically, we prove statements of the form “if {(x
i , w i )}n i=1 provides a good estimate for
the KME of X , then {(f(x i ), w i )}n i=1 provides a good estimate for the KME of f(X)”. Importantly, our results do not assume joint independence of the observations x i (and weights w i
). This makes them a powerful tool. For instance, imagine we are given data {(x
i , w i )}n i=1 from a random variable
X that we need to compress. Then our theorems guarantee that, whatever compression algorithm we use, as long as the compressed representation {(x0
j , w0 j )}n j=1 still provides a good estimate for the
KME of X , the pointwise images {(f(x0 j ), w0 j )}n j=1 provide good estimates of the KME of f(X).
In the remainder of this section we first introduce KMEs and discuss their merits. Then we explain why and how we extend the results of Schölkopf et al. (2015). Section 2 contains our main results. In Section 2.1 we show consistency of the relevant estimator in a general setting, and in Section 2.2 we provide finite sample guarantees when Matérn kernels are used. In Section 3 we show how our results apply to functions of multiple variables, both interdependent and independent. Section 4 concludes with a discussion.
1.1 Background on kernel mean embeddings
Let X be a measurable input space. We use a positive definite bounded and measurable kernel k : X ⇥ X ! R to represent random variables X ⇠ P and weighted samples ˆX := {(x
i , w i )}n i=1
as two functions µk X and µ̂k X in the corresponding Reproducing Kernel Hilbert Space (RKHS) H k
by defining
µk X :=
Z k(x, .) dP (x) and µ̂k
X
:=
X
i
w i k(x i , .) .
These are guaranteed to exist, since we assume the kernel is bounded (Smola et al., 2007). When clear from the context, we omit the kernel k in the superscript. µ
X is called the KME of P , but we also refer to it as the KME of X . In this paper we focus on computing functions of random variables. For f : X ! Z , where Z is a measurable space, and for a positive definite bounded k
z : Z ⇥Z ! R we also write
µkz f(X) :=
Z k z (f(x), .) dP (x) and µ̂kz f(X) := X
i
w i k z (f(x i ), .) . (1)
The advantage of mapping random variables X and samples ˆX to functions in the RKHS is that we may now say that ˆX is a good approximation for X if the RKHS distance kµ̂
X µ X k is small. This distance depends on the choice of the kernel and different kernels emphasise different information about X . For example if on X := [a, b] ⇢ R we choose k(x, x0) := x · x0 + 1, then
µ X (x) = E X⇠P [X]x+ 1. Thus any two distributions and/or samples with equal means are mapped to the same function in H k
so the distance between them is zero. Therefore using this particular k, we keep track only of the mean of the distributions. If instead we prefer to keep track of all first p moments, we may use the kernel k(x, x0) := (x · x0 + 1)p. And if we do not want to loose any information at all, we should choose k such that µk is injective over all probability measures on X . Such kernels are called characteristic. For standard spaces, such as X = Rd, many widely used kernels were proven characteristic, such as Gaussian, Laplacian, and Matérn kernels (Sriperumbudur et al., 2010, 2011).
The Gaussian kernel k(x, x0) := e kx x0k2
2 2 may serve as another good illustration of the flexibility of this representation. Whatever positive bandwidth 2 > 0, we do not lose any information about distributions, because k is characteristic. Nevertheless, if 2 grows, all distributions start looking the same, because their embeddings converge to a constant function 1. If, on the other hand, 2 becomes small, distributions look increasingly different and µ̂
X
becomes a function with bumps of height w i
at every x i . In the limit when 2 goes to zero, each point is only similar to itself, so µ̂ X reduces to the Monte Carlo method. Choosing 2 can be interpreted as controlling the degree of smoothing in the approximation.
1.2 Reduced set methods
An attractive feature when using KME estimators is the ability to reduce the number of expansion points (i.e., the size of the weighted sample) in a principled way. Specifically, if ˆX 0 := {(x0
j , 1/N)}N j=1 then the objective is to construct ˆX := {(xi, wi)}ni=1 that minimises
kµ̂ X 0 µ̂ X k with n < N . Often the resulting x i are mutually dependent and the w i
certainly depend on them. The algorithms for constructing such expansions are known as reduced set methods and have been studied by the machine learning community (Schölkopf and Smola, 2002, Chapter 18).
Although reduced set methods provide significant efficiency gains, their application raises certain concerns when it comes to computing functions of random variables. Let P,Q be distributions of X and f(X) respectively. If x0
j
⇠ i.i.d. P , then f(x0 j ) ⇠ i.i.d. Q and so µ̂ f(X0) = 1 N P j k(f(x0 j ), .)
reduces to the commonly used p N -consistent empirical estimator of µ
f(X) (Smola et al., 2007). Unfortunately, this is not the case after applying reduced set methods, and it is not known under which conditions µ̂
f(X) is a consistent estimator for µf(X).
Schölkopf et al. (2015) advocate the use of reduced expansion set methods to save computational resources. They also provide some reasoning why this should be the right thing to do for characteristic kernels, but as they state themselves, their rigorous analysis does not cover practical reduced set methods. Motivated by this and other concerns listed in Section 1.4, we provide a generalised analysis of the estimator µ̂
f(X), where we do not make assumptions on how xi and wi were generated.
Before doing that, however, we first illustrate how the need for reduced set methods naturally emerges on a concrete problem.
1.3 Illustration with functions of two random variables
Suppose that we want to estimate µ f(X,Y ) given i.i.d. samples ˆX
0 = {x0
i , 1/N}N i=1 and ˆY 0 =
{y0 j , 1/N}N j=1 from two independent random variables X 2 X and Y 2 Y respectively. Let Q be the distribution of Z = f(X,Y ).
The first option is to consider what we will call the diagonal estimator µ̂1 := 1 N
P n
i=1 kz f(x0 i , y0 i ), . .
Since f(x0 i , y0 i ) ⇠ i.i.d. Q, µ̂1 is p N -consistent (Smola et al., 2007). Another option is to consider the U-statistic estimator µ̂2 := 1
N
2
P N
i,j=1 kz f(x0 i , y0 j ), . , which is also known to be
p N -
consistent. Experiments show that µ̂2 is more accurate and has lower variance than µ̂1 (see Figure 1). However, the U-statistic estimator µ̂2 needs O(n2) memory rather than O(n). For this reason Schölkopf et al. (2015) propose to use a reduced set method both on ˆX 0 and ˆY 0 to get new samples ˆX = {x
i , w i }n i=1 and ˆY = {yj , uj}nj=1 of size n ⌧ N , and then estimate µ
f(X,Y ) using µ̂3 := P n i,j=1 wiujkx(f(xi, yj), .).
We ran experiments on synthetic data to show how accurately µ̂1, µ̂2 and µ̂3 approximate µ f(X,Y ) with growing sample size N . We considered three basic arithmetic operations: multiplication X · Y , division X/Y , and exponentiation XY , with X ⇠ N (3; 0.5) and Y ⇠ N (4; 0.5). As the true embedding µ
f(X,Y ) is unknown, we approximated it by a U-statistic estimator based on a large sample (125 points). For µ̂3, we used the simplest possible reduced set method: we randomly sampled subsets of size n = 0.01 ·N of the x
i , and optimized the weights w i and u i to best approximate µ̂ X
and µ̂ Y . The results are summarised in Figure 1 and corroborate our expectations: (i) all estimators converge, (ii) µ̂2 converges fastest and has the lowest variance, and (iii) µ̂3 is worse than µ̂2, but much better than the diagonal estimator µ̂1. Note, moreover, that unlike the U-statistic estimator µ̂2, the reduced set based estimator µ̂3 can be used with a fixed storage budget even if we perform a sequence of function applications—a situation naturally appearing in the context of probabilistic programming.
Schölkopf et al. (2015) prove the consistency of µ̂3 only for a rather limited case, when the points of the reduced expansions {x
i }n i=1 and {yi}ni=1 are i.i.d. copies of X and Y , respectively, and
the weights {(w i , u i )}n i=1 are constants. Using our new results we will prove in Section 3.1 the consistency of µ̂3 under fairly general conditions, even in the case when both expansion points and weights are interdependent random variables.
1.4 Other sources of non-i.i.d. samples
Although our discussion above focuses on reduced expansion set methods, there are other popular algorithms that produce KME expansions where the samples are not i.i.d. Here we briefly discuss several examples, emphasising that our selection is not comprehensive. They provide additional motivation for stating convergence guarantees in the most general setting possible.
An important notion in probability theory is that of a conditional distribution, which can also be represented using KME (Song et al., 2009). With this representation the standard laws of probability, such as sum, product, and Bayes’ rules, can be stated using KME (Fukumizu et al., 2013). Applying those rules results in KME estimators with strong dependencies between samples and their weights.
Another possibility is that even though i.i.d. samples are available, they may not produce the best estimator. Various approaches, such as kernel herding (Chen et al., 2010; Lacoste-Julien et al., 2015), attempt to produce a better KME estimator by actively generating pseudo-samples that are not i.i.d. from the underlying distribution.
2 Main results
This section contains our main results regarding consistency and finite sample guarantees for the estimator µ̂
f(X) defined in (1). They are based on the convergence of µ̂X and avoid simplifying assumptions about its structure.
2.1 Consistency
If k x is c0-universal (see Sriperumbudur et al. (2011)), consistency of µ̂ f(X) can be shown in a rather general setting. Theorem 1. Let X and Z be compact Hausdorff spaces equipped with their Borel -algebras, f : X ! Z a continuous function, k
x , k z continuous kernels on X ,Z respectively. Assume k x
is c0-universal and that there exists C such that P i |w i
| C independently of n. The following holds: If µ̂kx
X ! µkx X then µ̂kz f(X) ! µkzf(X) as n ! 1.
Proof. Let P be the distribution of X and ˆP n =
P n
i=1 wi xi . Define a new kernel on X by ek x (x1, x2) := kz f(x1), f(x2) . X is compact and { ˆP n
|n 2 N} [ {P} is a bounded set (in total variation norm) of finite measures, because k ˆP
n k TV =
P n
i=1 |wi| C. Furthermore, kx is continuous and c0-universal. Using Corollary 52 of Simon-Gabriel and Schölkopf (2016) we conclude that: µ̂kx
X ! µkx X implies that ˆP converges weakly to P . Now, k z
and f being continuous, so is ek
x . Thus, if ˆP converges weakly to P , then µ̂ekx X ! µekx X
(Simon-Gabriel and Schölkopf, 2016, Theorem 44, Points (1) and (iii)). Overall, µ̂kx
X ! µkx X implies µ̂ekx X ! µekx X
. We conclude the proof by showing that convergence in He
k
x
leads to convergence in H k
z : µ̂kz
f(X) µkzf(X) 2
k
z
= µ̂ekx X µekx X
2
e k
x ! 0. For a detailed version of the above, see Appendix A.
The continuity assumption is rather unrestrictive. All kernels and functions defined on a discrete space are continuous with respect to the discrete topology, so the theorem applies in this case. For X = Rd, many kernels used in practice are continuous, including Gaussian, Laplacian, Matérn and other radial kernels. The slightly limiting factor of this theorem is that k
x must be c0-universal, which often can be tricky to verify. However, most standard kernels—including all radial, non-constant kernels—are c0-universal (see Sriperumbudur et al., 2011). The assumption that the input domain is compact is satisfied in most applications, since any measurements coming from physical sensors are contained in a bounded range. Finally, the assumption that P i |w i
| C can be enforced, for instance, by applying a suitable regularization in reduced set methods.
2.2 Finite sample guarantees
Theorem 1 guarantees that the estimator µ̂ f(X) converges to µf(X) when µ̂X converges to µX . However, it says nothing about the speed of convergence. In this section we provide a convergence rate when working with Matérn kernels, which are of the form
ks x
(x, x0) = 2
1 s (s) kx x0ks d/22 Bd/2 s (kx x0k2) , (2)
where B ↵
is a modified Bessel function of the third kind (also known as Macdonald function) of order ↵, is the Gamma function and s > d2 is a smoothness parameter. The RKHS induced by ks x
is the Sobolev space W s2 (Rd) (Wendland, 2004, Theorem 6.13 & Chap.10) containing s-times differentiable functions. The finite-sample bound of Theorem 2 is based on the analysis of Kanagawa et al. (2016), which requires the following assumptions:
Assumptions 1. Let X be a random variable over X = Rd with distribution P and let ˆX = {(x
i , w i )}n i=1 be random variables over Xn⇥Rn with joint distribution S. There exists a probability
distribution Q with full support on Rd and a bounded density, satisfying the following properties:
(i) P has a bounded density function w.r.t. Q; (ii) there is a constant D > 0 independent of n, such that
E S
" 1
n
nX
i=1
g2(x i ) # D kgk2L2(Q) , 8g 2 L2(Q) .
These assumptions were shown to be fairly general and we refer to Kanagawa et al. (2016, Section 4.1) for various examples where they are met. Next we state the main result of this section.
Theorem 2. Let X = Rd, Z = Rd0 , and f : X ! Z be an ↵-times differentiable function (↵ 2 N+). Take s1 > d/2 and s2 > d0 such that s1, s2/2 2 N+. Let ks1
x and ks2 z be Matérn kernels over X and Z respectively as defined in (2). Assume X ⇠ P and ˆX = {(x
i , w i )}n i=1 ⇠ S satisfy 1. Moreover,
assume that P and the marginals of x1, . . . xn have a common compact support. Suppose that, for some constants b > 0 and 0 < c 1/2:
(i) E S h kµ̂
X µ X k2 k s1 x
i = O(n 2b) ;
(ii) P n
i=1 w 2 i = O(n 2c) (with probability 1) .
Let ✓ = min( s22s1 , ↵ s1 , 1) and assume ✓b (1/2 c)(1 ✓) > 0. Then
E S
µ̂ f(X) µf(X) 2
k s2 z
= O ⇣ (log n)d 0 n 2 (✓b (1/2 c)(1 ✓)) ⌘ . (3)
Before we provide a short sketch of the proof, let us briefly comment on this result. As a benchmark, remember that when x1, . . . xn are i.i.d. observations from X and ˆX = {(xi, 1/n)}n
i=1, we getkµ̂ f(X) µf(X)k2 = OP (n 1), which was recently shown to be a minimax optimal rate (Tolstikhin et al., 2016). How do we compare to this benchmark? In this case we have b = c = 1/2 and our rate is defined by ✓. If f is smooth enough, say ↵ > d/2 + 1, and by setting s2 > 2s1 = 2↵, we recover the O(n 1) rate up to an extra (log n)d 0 factor.
However, Theorem 2 applies to much more general settings. Importantly, it makes no i.i.d. assumptions on the data points and weights, allowing for complex interdependences. Instead, it asks the convergence of the estimator µ̂
X to the embedding µ X to be sufficiently fast. On the downside, the upper bound is affected by the smoothness of f , even in the i.i.d. setting: if ↵ ⌧ d/2 the rate will become slower, as ✓ = ↵/s1. Also, the rate depends both on d and d0. Whether these are artefacts of our proof remains an open question.
Proof. Here we sketch the main ideas of the proof and develop the details in Appendix C. Throughout the proof, C will designate a constant that depends neither on the sample size n nor on the variable R (to be introduced). C may however change from line to line. We start by showing that:
E S
µ̂kz f(X) µkzf(X) 2
k
z
= (2⇡)
d 0 2
Z
Z E S
⇣ [µ̂h f(X) µhf(X)](z) ⌘2 dz, (4)
where h is Matérn kernel over Z with smoothness parameter s2/2. Second, we upper bound the integrand by roughly imitating the proof idea of Theorem 1 from Kanagawa et al. (2016). This eventually yields:
E S
⇣ [µ̂h f(X) µhf(X)](z) ⌘2
Cn 2⌫ , (5) where ⌫ := ✓b (1/2 c)(1 ✓). Unfortunately, this upper bound does not depend on z and can not be integrated over the whole Z in (4). Denoting B
R the ball of radius R, centred on the origin of Z , we thus decompose the integral in (4) as:
Z Z E ⇣ [µ̂h f(X) µhf(X)](z) ⌘2 dz
=
Z
B
R
E ⇣
[µ̂h f(X) µhf(X)](z)
⌘2 dz +
Z
Z\B R
E ⇣
[µ̂h f(X) µhf(X)](z)
⌘2 dz.
On B R we upper bound the integral by (5) times the ball’s volume (which grows like Rd): Z
B
R
E ⇣
[µ̂h f(X) µhf(X)](z)
⌘2 dz CRdn 2⌫ . (6)
On X\B R
, we upper bound the integral by a value that decreases with R, which is of the form: Z
Z\B R
E ⇣
[µ̂h f(X) µhf(X)](z)
⌘2 dz Cn1 2c(R C 0)s2 2e 2(R C0) (7)
with C 0 > 0 being a constant smaller than R. In essence, this upper bound decreases with R because [µ̂h
f(X) µhf(X)](z) decays with the same speed as h when kzk grows indefinitely. We are now left with two rates, (6) and (7), which respectively increase and decrease with growing R. We complete the proof by balancing these two terms, which results in setting R ⇡ (log n)1/2.
3 Functions of Multiple Arguments
The previous section applies to functions f of one single variable X. However, we can apply its results to functions of multiple variables if we take the argument X to be a tuple containing multiple values. In this section we discuss how to do this using two input variables from spaces \mathcal{X} and \mathcal{Y}, but the results also apply to more inputs. To be precise, our input space changes from \mathcal{X} to \mathcal{X} \times \mathcal{Y}, the input random variable from X to (X, Y), and the kernel on the input space from k_x to k_{xy}.
To apply our results from Section 2, all we need is a consistent estimator \hat{\mu}_{(X,Y)} of the joint embedding \mu_{(X,Y)}. There are different ways to get such an estimator. One way is to sample (x'_i, y'_i) i.i.d. from the joint distribution of (X, Y) and construct the usual empirical estimator, or approximate it using reduced set methods. Alternatively, we may want to construct \hat{\mu}_{(X,Y)} based only on consistent estimators of \mu_X and \mu_Y. For example, this is how \hat{\mu}_3 was defined in Section 1.3. Below we show that this can indeed be done if X and Y are independent.
3.1 Application to Section 1.3
Following Schölkopf et al. (2015), we consider two independent random variables X \sim P_x and Y \sim P_y. Their joint distribution is P_x \otimes P_y. Consistent estimators of their embeddings are given by \hat{\mu}_X = \sum_{i=1}^{n} w_i k_x(x_i, \cdot) and \hat{\mu}_Y = \sum_{j=1}^{n} u_j k_y(y_j, \cdot). In this section we show that \hat{\mu}_{f(X,Y)} = \sum_{i,j=1}^{n} w_i u_j k_z\big(f(x_i, y_j), \cdot\big) is a consistent estimator of \mu_{f(X,Y)}.
We choose a product kernel k_{xy}\big((x_1, y_1), (x_2, y_2)\big) = k_x(x_1, x_2)\, k_y(y_1, y_2), so the corresponding RKHS is a tensor product \mathcal{H}_{k_{xy}} = \mathcal{H}_{k_x} \otimes \mathcal{H}_{k_y} (Steinwart and Christmann, 2008, Lemma 4.6) and the mean embedding of the product random variable (X, Y) is a tensor product of their marginal mean embeddings, \mu_{(X,Y)} = \mu_X \otimes \mu_Y. With consistent estimators for the marginal embeddings we can estimate the joint embedding using their tensor product:

\hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y = \sum_{i,j=1}^{n} w_i u_j\, k_x(x_i, \cdot) \otimes k_y(y_j, \cdot) = \sum_{i,j=1}^{n} w_i u_j\, k_{xy}\big((x_i, y_j), (\cdot, \cdot)\big).
If points are i.i.d. and w_i = u_i = 1/n, this reduces to the U-statistic estimator \hat{\mu}_2 from Section 1.3.

Lemma 3. Let (s_n)_n be any positive real sequence converging to zero. Suppose k_{xy} = k_x k_y is a product kernel, \mu_{(X,Y)} = \mu_X \otimes \mu_Y, and \hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y. Then

\|\hat{\mu}_X - \mu_X\|_{k_x} = O(s_n) and \|\hat{\mu}_Y - \mu_Y\|_{k_y} = O(s_n) together imply \|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\|_{k_{xy}} = O(s_n).
Proof. For a detailed expansion of the first inequality see Appendix B.

\|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\|_{k_{xy}} \le \|\mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y} + \|\mu_Y\|_{k_y} \|\hat{\mu}_X - \mu_X\|_{k_x} + \|\hat{\mu}_X - \mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y} = O(s_n) + O(s_n) + O(s_n^2) = O(s_n).
Corollary 4. If \hat{\mu}_X \to \mu_X and \hat{\mu}_Y \to \mu_Y as n \to \infty, then \hat{\mu}_{(X,Y)} \to \mu_{(X,Y)}.
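As a concrete check of this product structure (a minimal sketch under our own choices: a Gaussian kernel stands in for both k_x and k_y, and all samples and weights are random), note that inner products between tensor-product embeddings factorize, \langle \hat{\mu}_X \otimes \hat{\mu}_Y, \hat{\mu}'_X \otimes \hat{\mu}'_Y \rangle_{k_{xy}} = (w^\top K_x w')(u^\top K_y u'), which the snippet verifies against the direct double sum over the product kernel k_{xy}.

```python
import numpy as np

rng = np.random.default_rng(0)

def gram(a, b, s=1.0):  # Gaussian kernel Gram matrix (our stand-in kernel)
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * s**2))

n = 5
x, xp = rng.normal(size=n), rng.normal(size=n)
y, yp = rng.normal(size=n), rng.normal(size=n)
w, wp = rng.normal(size=n), rng.normal(size=n)  # weights may be negative
u, up = rng.normal(size=n), rng.normal(size=n)

# factorized inner product: (w^T K_x w') * (u^T K_y u')
Kx, Ky = gram(x, xp), gram(y, yp)
factorized = (w @ Kx @ wp) * (u @ Ky @ up)

# direct double sum over the product kernel k_xy = k_x * k_y
direct = sum(w[i] * u[j] * wp[a] * up[b] * Kx[i, a] * Ky[j, b]
             for i in range(n) for j in range(n)
             for a in range(n) for b in range(n))

print(np.isclose(factorized, direct))  # True
```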
Together with the results from Section 2, this lets us reason about estimators resulting from applying functions to multiple independent random variables. Write

\hat{\mu}^{k_{xy}}_{XY} = \sum_{i,j=1}^{n} w_i u_j\, k_{xy}\big((x_i, y_j), \cdot\big) = \sum_{\ell=1}^{n^2} \omega_\ell\, k_{xy}(\xi_\ell, \cdot),

where \ell enumerates the (i, j) pairs, \xi_\ell = (x_i, y_j), and \omega_\ell = w_i u_j. Now if \hat{\mu}^{k_x}_X \to \mu^{k_x}_X and \hat{\mu}^{k_y}_Y \to \mu^{k_y}_Y, then \hat{\mu}^{k_{xy}}_{XY} \to \mu^{k_{xy}}_{(X,Y)} (according to Corollary 4) and Theorem 1 shows that \sum_{i,j=1}^{n} w_i u_j k_z\big(f(x_i, y_j), \cdot\big) is consistent as well. Unfortunately, we cannot apply Theorem 2 to get the speed of convergence, because a product of Matérn kernels is not a Matérn kernel any more.
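To make this construction tangible, here is a small end-to-end sketch (our own illustration, reusing the setup of Section 1.3: f(x, y) = x \cdot y with X \sim N(3; 0.5) and Y \sim N(4; 0.5); the Gaussian kernel and all sizes are our choices). It builds \hat{\mu}_{f(X,Y)} = \sum_{i,j} w_i u_j k_z(f(x_i, y_j), \cdot) from uniform marginal expansions and measures its squared RKHS distance to a large diagonal-estimator reference, using \|\hat{\mu} - \hat{\mu}_{\mathrm{ref}}\|^2 = \omega^\top K \omega - 2\,\omega^\top K_{\times}\, \rho + \rho^\top K_{\mathrm{ref}}\, \rho.

```python
import numpy as np

rng = np.random.default_rng(1)
gram = lambda a, b, s=1.0: np.exp(-(a[:, None] - b[None, :])**2 / (2 * s**2))

# reference: diagonal estimator of mu_{f(X,Y)} from a large i.i.d. sample
M = 2000
Z_ref = rng.normal(3, 0.5, M) * rng.normal(4, 0.5, M)  # f(x, y) = x * y
rho = np.full(M, 1.0 / M)
ref_term = rho @ gram(Z_ref, Z_ref) @ rho

for n in [5, 15, 40]:
    x, y = rng.normal(3, 0.5, n), rng.normal(4, 0.5, n)
    w, u = np.full(n, 1 / n), np.full(n, 1 / n)   # uniform weights here
    Z = (x[:, None] * y[None, :]).ravel()         # all pairs f(x_i, y_j)
    omega = np.outer(w, u).ravel()                # weights w_i * u_j
    sq = omega @ gram(Z, Z) @ omega - 2 * omega @ gram(Z, Z_ref) @ rho + ref_term
    print(n, sq)  # squared RKHS distance typically shrinks as n grows
```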
One downside of this overall approach is that the number of expansion points used for the estimation of the joint increases exponentially with the number of arguments of f . This can lead to prohibitively large computational costs, especially if the result of such an operation is used as an input to another function of multiple arguments. To alleviate this problem, we may use reduced expansion set methods before or after applying f , as we did for example in Section 1.2.
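For concreteness, the simplest such reduced set method (the one used for \hat{\mu}_3 in Section 1.3: keep a random subset of the expansion points and re-fit the weights) can be sketched as follows; the ridge term and all sizes are our own choices. Minimizing \|\sum_j w_j k(x_j, \cdot) - \hat{\mu}_{X'}\|_k^2 over w leads to the normal equations K_{ss}\, w = K_{sN} \mathbf{1}/N.

```python
import numpy as np

rng = np.random.default_rng(2)
gram = lambda a, b, s=1.0: np.exp(-(a[:, None] - b[None, :])**2 / (2 * s**2))

N, n = 1000, 50
X = rng.normal(size=N)                        # full expansion, uniform weights 1/N
xs = X[rng.choice(N, size=n, replace=False)]  # random subset of expansion points

# normal equations K_ss w = K_sN 1/N (small ridge added for numerical stability)
K_ss = gram(xs, xs)
rhs = gram(xs, X).mean(axis=1)
w = np.linalg.solve(K_ss + 1e-8 * np.eye(n), rhs)

# squared RKHS distance between the reduced and the full expansion
sq = w @ K_ss @ w - 2 * w @ rhs + gram(X, X).mean()
print(sq)
```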
To conclude this section, let us summarize the implications of our results for several practical scenarios that should be distinguished.
- If we have separate samples from two random variables X and Y, then our results justify how to provide an estimate of the mean embedding of f(X, Y) provided that X and Y are independent. The samples themselves need not be i.i.d.; we can also work with weighted samples computed, for instance, by a reduced set method.
- How about dependent random variables? For instance, imagine that Y = -X and f(X, Y) = X + Y. Clearly, in this case the distribution of f(X, Y) is a delta measure on 0, and there is no way to predict this from separate samples of X and Y. However, it should be stressed that our results (consistency and the finite sample bound) apply even to the case where X and Y are dependent. In that case, however, they require a consistent estimator of the joint embedding \mu_{(X,Y)}.
- It is also sufficient to have a reduced set expansion of the embedding of the joint distribution. This setting may sound strange, but it potentially has significant applications. Imagine that one has a large database of user data, sampled from a joint distribution. If we expand the joint's embedding in terms of synthetic expansion points using a reduced set construction method, then we can pass on these (weighted) synthetic expansion points to a third party without revealing the original data. Using our results, the third party can nevertheless perform arbitrary continuous functional operations on the joint distribution in a consistent manner.
4 Conclusion and future work
This paper provides a theoretical foundation for using kernel mean embeddings as approximate representations of random variables in scenarios where we need to apply functions to those random variables. We show that for continuous functions f (including all functions on discrete domains), consistency of the mean embedding estimator of a random variable X implies consistency of the mean embedding estimator of f(X). Furthermore, if the kernels are Matérn and the function f is sufficiently smooth, we provide bounds on the convergence rate. Importantly, our results apply beyond i.i.d. samples and cover estimators based on expansions with interdependent points and weights. One interesting future direction is to improve the finite-sample bounds and extend them to general radial and/or translation-invariant kernels.
Our work is motivated by the field of probabilistic programming. Using our theoretical results, kernel mean embeddings can be used to generalize functional operations (which lie at the core of all programming languages) to distributions over data types in a principled manner, by applying the operations to the points or approximate kernel expansions. This is in principle feasible for any data type provided a suitable kernel function can be defined on it. We believe that the approach holds significant potential for future probabilistic programming systems.
Acknowledgements
We thank Krikamol Muandet for providing the code used to generate Figure 1, Paul Rubenstein, Motonobu Kanagawa and Bharath Sriperumbudur for very useful discussions, and our anonymous reviewers for their valuable feedback. Carl-Johann Simon-Gabriel is supported by a Google European Fellowship in Causal Inference.

Review questions
1. What are the strengths and weaknesses of the paper regarding its contributions and technical quality?
2. How does the reviewer assess the novelty and potential impact of the paper's content?
3. Are there any questions or concerns regarding the clarity and presentation of the paper?
4. Is there a need for additional information or modifications to enhance the paper's value?

Review
The authors conduct asymptotic analysis of the approach by Schölkopf et al. (2015) for computing functions of random variables as represented by kernel means. Unfortunately, the proofs of the main results have some flaws.

Technical quality: As mentioned above, the proofs of the main results may have some flaws.

== Comments after the author feedback ==
Thank you for the corrections of the proofs. The new proof of Theorem 1 looks correct. I think the new proof of Theorem 2 (Solution 2) works. You may need the following slight modifications:
- The parameter of the Sobolev RKHS (and the Matérn kernel) should not be $b$; this constant has already been used in Line 191. Instead of $b$, let's use $s > d/2$ to denote the degree of the Sobolev RKHS, $W_2^s$.
- Then the resulting rate (in the author feedback) becomes $n^{-2 (\theta b - (1-\theta) (1/2-c))}$ with $\theta = \alpha/s$.

== Comments before the author feedback ==
Novelty/originality: The topic of the paper is interesting.
Potential impact or usefulness: The impact of this paper depends on that of the approach by Schölkopf et al. (2015). Therefore, if this approach is proven to be useful in practice (e.g., in probabilistic programming), the aim of the current paper would be reasonable and corrected results would have significant impact. I like Lemma 8 in the Appendix and the entire proof idea of Theorem 2, as they are potentially useful for the analysis of kernel mean estimators in general.
Clarity and presentation: Overall this paper is nicely written and easy to follow. In my opinion it would be better to reproduce the results of Simon-Gabriel and Schölkopf in the Appendix for the convenience of the reader.
Other comments: While the proof of Theorem 2 is not correct due to the flaw of Theorem 1 of Kanagawa and Fukumizu (2014), I believe it would be possible to replace it with some other result (e.g., Theorem 1 of http://arxiv.org/abs/1605.07254).
Review questions
1. What is the focus of the paper regarding kernel mean embedding?
2. What are the main contributions and strengths of the paper's theoretical analysis?
3. Do you have any minor concerns or questions regarding the paper's content?

Review
This paper presents theoretical analysis for kernel mean embedding (KME) of functions of a random variable. The analysis clarifies:
(1) for any continuous function f, if there is a consistent estimator of the mean embedding of a random variable X, one can obtain a consistent estimator of the mean embedding of f(X);
(2) the convergence rate of the estimator of the mean embedding of f(X) for the Gaussian kernel and sufficiently smooth f;
(3) in the case of multiple random variables (X, Y), the reduced set method can produce a good estimator of the mean embedding of f(X, Y).

This paper is clearly written and easy to follow. The analysis is sufficiently general, and its results would apply to various applications. As far as I understand, there are no major concerns about the analysis and the results shown in this paper. Some minor concerns are listed below.
- In line 199, the authors state "we then almost recover the i.i.d. rate of (2)"; however, I couldn't get it. Does it mean that Eq. (2) can be derived as a special case of Eq. (3) under the i.i.d. assumption? Or does it just mean that Eq. (2) is a tighter bound than Eq. (3)?
- The result shown in ln. 198, which is "breaking the curse of dimensionality up to the growing logarithmic factor", is the most curious one in this paper for me. I'm interested in its experimental validation.
NIPS | Title
Consistent Kernel Mean Estimation for Functions of Random Variables
Abstract
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings. We show that for any continuous function f , consistent estimators of the mean embedding of a random variable X lead to consistent estimators of the mean embedding of f(X). For Matérn kernels and sufficiently smooth functions we also provide rates of convergence. Our results extend to functions of multiple random variables. If the variables are dependent, we require an estimator of the mean embedding of their joint distribution as a starting point; if they are independent, it is sufficient to have separate estimators of the mean embeddings of their marginal distributions. In either case, our results cover both mean embeddings based on i.i.d. samples as well as “reduced set” expansions in terms of dependent expansion points. The latter serves as a justification for using such expansions to limit memory resources when applying the approach as a basis for probabilistic programming.
1 Introduction
A common task in probabilistic modelling is to compute the distribution of f(X), given a measurable function f and a random variable X . In fact, the earliest instances of this problem date back at least to Poisson (1837). Sometimes this can be done analytically. For example, if f is linear and X is Gaussian, that is f(x) = ax+ b and X ⇠ N (µ; ), we have f(X) ⇠ N (aµ+ b; a ). There exist various methods for obtaining such analytical expressions (Mathai, 1973), but outside a small subset of distributions and functions the formulae are either not available or too complicated to be practical.
An alternative to the analytical approach is numerical approximation, ideally implemented as a flexible software library. The need for such tools is recognised in the general programming languages community (McKinley, 2016), but no standards were established so far. The main challenge is in finding a good approximate representation for random variables.
Distributions on integers, for example, are usually represented as lists of (x i , p(x i )) pairs. For real valued distributions, integral transforms (Springer, 1979), mixtures of Gaussians (Milios, 2009), Laguerre polynomials (Williamson, 1989), and Chebyshev polynomials (Korzeń and Jaroszewicz, 2014) were proposed as convenient representations for numerical computation. For strings, probabilistic finite automata are often used. All those approaches have their merits, but they only work with a specific input type.
There is an alternative, based on Monte Carlo sampling (Kalos and Whitlock, 2008), which is to represent X by a (possibly weighted) sample {(x
i , w i )}n i=1 (with wi 0). This representation has
several advantages: (i) it works for any input type, (ii) the sample size controls the time-accuracy trade-off, and (iii) applying functions to random variables reduces to applying the functions pointwise
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
to the sample, i.e., {(f(x i ), w i )} represents f(X). Furthermore, expectations of functions of random variables can be estimated as E [f(X)] ⇡ P
i
w i f(x i
)/ P
i
w i , sometimes with guarantees for the convergence rate.
The flexibility of this Monte Carlo approach comes at a cost: without further assumptions on the underlying input space X , it is hard to quantify the accuracy of this representation. For instance, given two samples of the same size, {(x
i , w i )}n i=1 and {(x0i, w0i)}ni=1, how can we tell which one is a
better representation of X? More generally, how could we optimize a representation with predefined sample size?
There exists an alternative to the Monte Carlo approach, called Kernel Mean Embeddings (KME) (Berlinet and Thomas-Agnan, 2004; Smola et al., 2007). It also represents random variables as samples, but additionally defines a notion of similarity between sample points. As a result, (i) it keeps all the advantages of the Monte Carlo scheme, (ii) it includes the Monte Carlo method as a special case, (iii) it overcomes its pitfalls described above, and (iv) it can be tailored to focus on different properties of X , depending on the user’s needs and prior assumptions. The KME approach identifies both sample points and distributions with functions in an abstract Hilbert space. Internally the latter are still represented as weighted samples, but the weights can be negative and the straightforward Monte Carlo interpretation is no longer valid. Schölkopf et al. (2015) propose using KMEs as approximate representation of random variables for the purpose of computing their functions. However, they only provide theoretical justification for it in rather idealised settings, which do not meet practical implementation requirements.
In this paper, we build on this work and provide general theoretical guarantees for the proposed estimators. Specifically, we prove statements of the form “if {(x
i , w i )}n i=1 provides a good estimate for
the KME of X , then {(f(x i ), w i )}n i=1 provides a good estimate for the KME of f(X)”. Importantly, our results do not assume joint independence of the observations x i (and weights w i
). This makes them a powerful tool. For instance, imagine we are given data {(x
i , w i )}n i=1 from a random variable
X that we need to compress. Then our theorems guarantee that, whatever compression algorithm we use, as long as the compressed representation {(x0
j , w0 j )}n j=1 still provides a good estimate for the
KME of X , the pointwise images {(f(x0 j ), w0 j )}n j=1 provide good estimates of the KME of f(X).
In the remainder of this section we first introduce KMEs and discuss their merits. Then we explain why and how we extend the results of Schölkopf et al. (2015). Section 2 contains our main results. In Section 2.1 we show consistency of the relevant estimator in a general setting, and in Section 2.2 we provide finite sample guarantees when Matérn kernels are used. In Section 3 we show how our results apply to functions of multiple variables, both interdependent and independent. Section 4 concludes with a discussion.
1.1 Background on kernel mean embeddings
Let X be a measurable input space. We use a positive definite bounded and measurable kernel k : X ⇥ X ! R to represent random variables X ⇠ P and weighted samples ˆX := {(x
i , w i )}n i=1
as two functions µk X and µ̂k X in the corresponding Reproducing Kernel Hilbert Space (RKHS) H k
by defining
µk X :=
Z k(x, .) dP (x) and µ̂k
X
:=
X
i
w i k(x i , .) .
These are guaranteed to exist, since we assume the kernel is bounded (Smola et al., 2007). When clear from the context, we omit the kernel k in the superscript. µ
X is called the KME of P , but we also refer to it as the KME of X . In this paper we focus on computing functions of random variables. For f : X ! Z , where Z is a measurable space, and for a positive definite bounded k
z : Z ⇥Z ! R we also write
µkz f(X) :=
Z k z (f(x), .) dP (x) and µ̂kz f(X) := X
i
w i k z (f(x i ), .) . (1)
The advantage of mapping random variables X and samples ˆX to functions in the RKHS is that we may now say that ˆX is a good approximation for X if the RKHS distance kµ̂
X µ X k is small. This distance depends on the choice of the kernel and different kernels emphasise different information about X . For example if on X := [a, b] ⇢ R we choose k(x, x0) := x · x0 + 1, then
µ X (x) = E X⇠P [X]x+ 1. Thus any two distributions and/or samples with equal means are mapped to the same function in H k
so the distance between them is zero. Therefore using this particular k, we keep track only of the mean of the distributions. If instead we prefer to keep track of all first p moments, we may use the kernel k(x, x0) := (x · x0 + 1)p. And if we do not want to loose any information at all, we should choose k such that µk is injective over all probability measures on X . Such kernels are called characteristic. For standard spaces, such as X = Rd, many widely used kernels were proven characteristic, such as Gaussian, Laplacian, and Matérn kernels (Sriperumbudur et al., 2010, 2011).
The Gaussian kernel k(x, x0) := e kx x0k2
2 2 may serve as another good illustration of the flexibility of this representation. Whatever positive bandwidth 2 > 0, we do not lose any information about distributions, because k is characteristic. Nevertheless, if 2 grows, all distributions start looking the same, because their embeddings converge to a constant function 1. If, on the other hand, 2 becomes small, distributions look increasingly different and µ̂
X
becomes a function with bumps of height w i
at every x i . In the limit when 2 goes to zero, each point is only similar to itself, so µ̂ X reduces to the Monte Carlo method. Choosing 2 can be interpreted as controlling the degree of smoothing in the approximation.
1.2 Reduced set methods
An attractive feature when using KME estimators is the ability to reduce the number of expansion points (i.e., the size of the weighted sample) in a principled way. Specifically, if ˆX 0 := {(x0
j , 1/N)}N j=1 then the objective is to construct ˆX := {(xi, wi)}ni=1 that minimises
kµ̂ X 0 µ̂ X k with n < N . Often the resulting x i are mutually dependent and the w i
certainly depend on them. The algorithms for constructing such expansions are known as reduced set methods and have been studied by the machine learning community (Schölkopf and Smola, 2002, Chapter 18).
Although reduced set methods provide significant efficiency gains, their application raises certain concerns when it comes to computing functions of random variables. Let P,Q be distributions of X and f(X) respectively. If x0
j
⇠ i.i.d. P , then f(x0 j ) ⇠ i.i.d. Q and so µ̂ f(X0) = 1 N P j k(f(x0 j ), .)
reduces to the commonly used p N -consistent empirical estimator of µ
f(X) (Smola et al., 2007). Unfortunately, this is not the case after applying reduced set methods, and it is not known under which conditions µ̂
f(X) is a consistent estimator for µf(X).
Schölkopf et al. (2015) advocate the use of reduced expansion set methods to save computational resources. They also provide some reasoning why this should be the right thing to do for characteristic kernels, but as they state themselves, their rigorous analysis does not cover practical reduced set methods. Motivated by this and other concerns listed in Section 1.4, we provide a generalised analysis of the estimator µ̂
f(X), where we do not make assumptions on how xi and wi were generated.
Before doing that, however, we first illustrate how the need for reduced set methods naturally emerges on a concrete problem.
1.3 Illustration with functions of two random variables
Suppose that we want to estimate µ f(X,Y ) given i.i.d. samples ˆX
0 = {x0
i , 1/N}N i=1 and ˆY 0 =
{y0 j , 1/N}N j=1 from two independent random variables X 2 X and Y 2 Y respectively. Let Q be the distribution of Z = f(X,Y ).
The first option is to consider what we will call the diagonal estimator µ̂1 := 1 N
P n
i=1 kz f(x0 i , y0 i ), . .
Since f(x0 i , y0 i ) ⇠ i.i.d. Q, µ̂1 is p N -consistent (Smola et al., 2007). Another option is to consider the U-statistic estimator µ̂2 := 1
N
2
P N
i,j=1 kz f(x0 i , y0 j ), . , which is also known to be
p N -
consistent. Experiments show that µ̂2 is more accurate and has lower variance than µ̂1 (see Figure 1). However, the U-statistic estimator µ̂2 needs O(n2) memory rather than O(n). For this reason Schölkopf et al. (2015) propose to use a reduced set method both on ˆX 0 and ˆY 0 to get new samples ˆX = {x
i , w i }n i=1 and ˆY = {yj , uj}nj=1 of size n ⌧ N , and then estimate µ
f(X,Y ) using µ̂3 := P n i,j=1 wiujkx(f(xi, yj), .).
We ran experiments on synthetic data to show how accurately µ̂1, µ̂2 and µ̂3 approximate µ f(X,Y ) with growing sample size N . We considered three basic arithmetic operations: multiplication X · Y , division X/Y , and exponentiation XY , with X ⇠ N (3; 0.5) and Y ⇠ N (4; 0.5). As the true embedding µ
f(X,Y ) is unknown, we approximated it by a U-statistic estimator based on a large sample (125 points). For µ̂3, we used the simplest possible reduced set method: we randomly sampled subsets of size n = 0.01 ·N of the x
i , and optimized the weights w i and u i to best approximate µ̂ X
and µ̂ Y . The results are summarised in Figure 1 and corroborate our expectations: (i) all estimators converge, (ii) µ̂2 converges fastest and has the lowest variance, and (iii) µ̂3 is worse than µ̂2, but much better than the diagonal estimator µ̂1. Note, moreover, that unlike the U-statistic estimator µ̂2, the reduced set based estimator µ̂3 can be used with a fixed storage budget even if we perform a sequence of function applications—a situation naturally appearing in the context of probabilistic programming.
Schölkopf et al. (2015) prove the consistency of µ̂3 only for a rather limited case, when the points of the reduced expansions {x
i }n i=1 and {yi}ni=1 are i.i.d. copies of X and Y , respectively, and
the weights {(w i , u i )}n i=1 are constants. Using our new results we will prove in Section 3.1 the consistency of µ̂3 under fairly general conditions, even in the case when both expansion points and weights are interdependent random variables.
1.4 Other sources of non-i.i.d. samples
Although our discussion above focuses on reduced expansion set methods, there are other popular algorithms that produce KME expansions where the samples are not i.i.d. Here we briefly discuss several examples, emphasising that our selection is not comprehensive. They provide additional motivation for stating convergence guarantees in the most general setting possible.
An important notion in probability theory is that of a conditional distribution, which can also be represented using KME (Song et al., 2009). With this representation the standard laws of probability, such as sum, product, and Bayes’ rules, can be stated using KME (Fukumizu et al., 2013). Applying those rules results in KME estimators with strong dependencies between samples and their weights.
Another possibility is that even though i.i.d. samples are available, they may not produce the best estimator. Various approaches, such as kernel herding (Chen et al., 2010; Lacoste-Julien et al., 2015), attempt to produce a better KME estimator by actively generating pseudo-samples that are not i.i.d. from the underlying distribution.
2 Main results
This section contains our main results regarding consistency and finite sample guarantees for the estimator µ̂
f(X) defined in (1). They are based on the convergence of µ̂X and avoid simplifying assumptions about its structure.
2.1 Consistency
If k x is c0-universal (see Sriperumbudur et al. (2011)), consistency of µ̂ f(X) can be shown in a rather general setting. Theorem 1. Let X and Z be compact Hausdorff spaces equipped with their Borel -algebras, f : X ! Z a continuous function, k
x , k z continuous kernels on X ,Z respectively. Assume k x
is c0-universal and that there exists C such that P i |w i
| C independently of n. The following holds: If µ̂kx
X ! µkx X then µ̂kz f(X) ! µkzf(X) as n ! 1.
Proof. Let P be the distribution of X and ˆP n =
P n
i=1 wi xi . Define a new kernel on X by ek x (x1, x2) := kz f(x1), f(x2) . X is compact and { ˆP n
|n 2 N} [ {P} is a bounded set (in total variation norm) of finite measures, because k ˆP
n k TV =
P n
i=1 |wi| C. Furthermore, kx is continuous and c0-universal. Using Corollary 52 of Simon-Gabriel and Schölkopf (2016) we conclude that: µ̂kx
X ! µkx X implies that ˆP converges weakly to P . Now, k z
and f being continuous, so is ek
x . Thus, if ˆP converges weakly to P , then µ̂ekx X ! µekx X
(Simon-Gabriel and Schölkopf, 2016, Theorem 44, Points (1) and (iii)). Overall, µ̂kx
X ! µkx X implies µ̂ekx X ! µekx X
. We conclude the proof by showing that convergence in He
k
x
leads to convergence in H k
z : µ̂kz
f(X) µkzf(X) 2
k
z
= µ̂ekx X µekx X
2
e k
x ! 0. For a detailed version of the above, see Appendix A.
The continuity assumption is rather unrestrictive. All kernels and functions defined on a discrete space are continuous with respect to the discrete topology, so the theorem applies in this case. For X = Rd, many kernels used in practice are continuous, including Gaussian, Laplacian, Matérn and other radial kernels. The slightly limiting factor of this theorem is that k
x must be c0-universal, which often can be tricky to verify. However, most standard kernels—including all radial, non-constant kernels—are c0-universal (see Sriperumbudur et al., 2011). The assumption that the input domain is compact is satisfied in most applications, since any measurements coming from physical sensors are contained in a bounded range. Finally, the assumption that P i |w i
| C can be enforced, for instance, by applying a suitable regularization in reduced set methods.
2.2 Finite sample guarantees
Theorem 1 guarantees that the estimator µ̂ f(X) converges to µf(X) when µ̂X converges to µX . However, it says nothing about the speed of convergence. In this section we provide a convergence rate when working with Matérn kernels, which are of the form
ks x
(x, x0) = 2
1 s (s) kx x0ks d/22 Bd/2 s (kx x0k2) , (2)
where B ↵
is a modified Bessel function of the third kind (also known as Macdonald function) of order ↵, is the Gamma function and s > d2 is a smoothness parameter. The RKHS induced by ks x
is the Sobolev space W s2 (Rd) (Wendland, 2004, Theorem 6.13 & Chap.10) containing s-times differentiable functions. The finite-sample bound of Theorem 2 is based on the analysis of Kanagawa et al. (2016), which requires the following assumptions:
Assumptions 1. Let $X$ be a random variable over $\mathcal{X} = \mathbb{R}^d$ with distribution $P$ and let $\hat X = \{(x_i, w_i)\}_{i=1}^n$ be random variables over $\mathcal{X}^n \times \mathbb{R}^n$ with joint distribution $S$. There exists a probability distribution $Q$ with full support on $\mathbb{R}^d$ and a bounded density, satisfying the following properties:
(i) $P$ has a bounded density function w.r.t. $Q$;
(ii) there is a constant $D > 0$ independent of $n$, such that
$$\mathbb{E}_S\left[\frac{1}{n}\sum_{i=1}^n g^2(x_i)\right] \le D \, \|g\|^2_{L_2(Q)}\,, \qquad \forall g \in L_2(Q)\,.$$
These assumptions were shown to be fairly general and we refer to Kanagawa et al. (2016, Section 4.1) for various examples where they are met. Next we state the main result of this section.
Theorem 2. Let $\mathcal{X} = \mathbb{R}^d$, $\mathcal{Z} = \mathbb{R}^{d'}$, and $f : \mathcal{X} \to \mathcal{Z}$ be an $\alpha$-times differentiable function ($\alpha \in \mathbb{N}^+$). Take $s_1 > d/2$ and $s_2 > d'$ such that $s_1, s_2/2 \in \mathbb{N}^+$. Let $k^{s_1}_x$ and $k^{s_2}_z$ be Matérn kernels over $\mathcal{X}$ and $\mathcal{Z}$ respectively, as defined in (2). Assume $X \sim P$ and $\hat X = \{(x_i, w_i)\}_{i=1}^n \sim S$ satisfy Assumptions 1. Moreover, assume that $P$ and the marginals of $x_1, \ldots, x_n$ have a common compact support. Suppose that, for some constants $b > 0$ and $0 < c \le 1/2$:
(i) $\mathbb{E}_S\big[\|\hat\mu_X - \mu_X\|^2_{k^{s_1}_x}\big] = O(n^{-2b})$;
(ii) $\sum_{i=1}^n w_i^2 = O(n^{-2c})$ (with probability 1).
Let $\theta = \min\big(\tfrac{s_2}{2 s_1}, \tfrac{\alpha}{s_1}, 1\big)$ and assume $\theta b - (1/2 - c)(1 - \theta) > 0$. Then
$$\mathbb{E}_S\Big[\big\|\hat\mu_{f(X)} - \mu_{f(X)}\big\|^2_{k^{s_2}_z}\Big] = O\Big( (\log n)^{d'}\, n^{-2(\theta b - (1/2 - c)(1 - \theta))} \Big)\,. \qquad (3)$$
Before we provide a short sketch of the proof, let us briefly comment on this result. As a benchmark, remember that when $x_1, \ldots, x_n$ are i.i.d. observations from $X$ and $\hat X = \{(x_i, 1/n)\}_{i=1}^n$, we get $\|\hat\mu_{f(X)} - \mu_{f(X)}\|^2 = O_P(n^{-1})$, which was recently shown to be a minimax optimal rate (Tolstikhin et al., 2016). How do we compare to this benchmark? In this case we have $b = c = 1/2$ and our rate is determined by $\theta$. If $f$ is smooth enough, say $\alpha > d/2 + 1$, then by setting $s_2 > 2 s_1 = 2\alpha$ we recover the $O(n^{-1})$ rate up to an extra $(\log n)^{d'}$ factor.
However, Theorem 2 applies in much more general settings. Importantly, it makes no i.i.d. assumptions on the data points and weights, allowing for complex interdependences. Instead, it asks the convergence of the estimator $\hat\mu_X$ to the embedding $\mu_X$ to be sufficiently fast. On the downside, the upper bound is affected by the smoothness of $f$, even in the i.i.d. setting: if $\alpha \ll d/2$ the rate becomes slower, as $\theta = \alpha/s_1$. Also, the rate depends both on $d$ and $d'$. Whether these are artefacts of our proof remains an open question.
Proof. Here we sketch the main ideas of the proof and develop the details in Appendix C. Throughout the proof, $C$ designates a constant that depends neither on the sample size $n$ nor on the variable $R$ (to be introduced); $C$ may however change from line to line. We start by showing that
$$\mathbb{E}_S\Big[ \big\|\hat\mu^{k_z}_{f(X)} - \mu^{k_z}_{f(X)}\big\|^2_{k_z} \Big] = (2\pi)^{-\frac{d'}{2}} \int_{\mathcal{Z}} \mathbb{E}_S\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] \, dz, \qquad (4)$$
where $h$ is a Matérn kernel over $\mathcal{Z}$ with smoothness parameter $s_2/2$. Second, we upper bound the integrand by roughly imitating the proof idea of Theorem 1 from Kanagawa et al. (2016). This eventually yields
$$\mathbb{E}_S\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] \le C n^{-2\nu}, \qquad (5)$$
where $\nu := \theta b - (1/2 - c)(1 - \theta)$. Unfortunately, this upper bound does not depend on $z$ and cannot be integrated over the whole $\mathcal{Z}$ in (4). Denoting by $B_R$ the ball of radius $R$ centred on the origin of $\mathcal{Z}$, we thus decompose the integral in (4) as
$$\int_{\mathcal{Z}} \mathbb{E}\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] dz = \int_{B_R} \mathbb{E}\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] dz + \int_{\mathcal{Z} \setminus B_R} \mathbb{E}\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] dz.$$
On $B_R$ we upper bound the integral by (5) times the ball's volume (which grows like $R^{d'}$):
$$\int_{B_R} \mathbb{E}\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] dz \le C R^{d'} n^{-2\nu}. \qquad (6)$$
On $\mathcal{Z} \setminus B_R$, we upper bound the integral by a value that decreases with $R$, of the form
$$\int_{\mathcal{Z} \setminus B_R} \mathbb{E}\Big[ \big( [\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z) \big)^2 \Big] dz \le C n^{1 - 2c} (R - C')^{s_2 - 2} e^{-2(R - C')}, \qquad (7)$$
with $C' > 0$ being a constant smaller than $R$. In essence, this upper bound decreases with $R$ because $[\hat\mu^{h}_{f(X)} - \mu^{h}_{f(X)}](z)$ decays with the same speed as $h$ when $\|z\|$ grows indefinitely. We are now left with two rates, (6) and (7), which respectively increase and decrease with growing $R$. We complete the proof by balancing these two terms, which results in setting $R \approx (\log n)^{1/2}$.
3 Functions of Multiple Arguments
The previous section applies to functions $f$ of one single variable $X$. However, we can apply its results to functions of multiple variables if we take the argument $X$ to be a tuple containing multiple values. In this section we discuss how to do it using two input variables from spaces $\mathcal{X}$ and $\mathcal{Y}$, but the results also apply to more inputs. To be precise, our input space changes from $\mathcal{X}$ to $\mathcal{X} \times \mathcal{Y}$, the input random variable from $X$ to $(X, Y)$, and the kernel on the input space from $k_x$ to $k_{xy}$.
To apply our results from Section 2, all we need is a consistent estimator $\hat\mu_{(X,Y)}$ of the joint embedding $\mu_{(X,Y)}$. There are different ways to get such an estimator. One way is to sample $(x'_i, y'_i)$ i.i.d. from the joint distribution of $(X, Y)$ and construct the usual empirical estimator, or approximate it using reduced set methods. Alternatively, we may want to construct $\hat\mu_{(X,Y)}$ based only on consistent estimators of $\mu_X$ and $\mu_Y$. For example, this is how $\hat\mu_3$ was defined in Section 1.3. Below we show that this can indeed be done if $X$ and $Y$ are independent.
3.1 Application to Section 1.3
Following Schölkopf et al. (2015), we consider two independent random variables $X \sim P_x$ and $Y \sim P_y$. Their joint distribution is $P_x \otimes P_y$. Consistent estimators of their embeddings are given by $\hat\mu_X = \sum_{i=1}^n w_i \, k_x(x_i, \cdot)$ and $\hat\mu_Y = \sum_{j=1}^n u_j \, k_y(y_j, \cdot)$. In this section we show that $\hat\mu_{f(X,Y)} = \sum_{i,j=1}^n w_i u_j \, k_z\big(f(x_i, y_j), \cdot\big)$ is a consistent estimator of $\mu_{f(X,Y)}$.
We choose a product kernel $k_{xy}\big((x_1, y_1), (x_2, y_2)\big) = k_x(x_1, x_2)\, k_y(y_1, y_2)$, so the corresponding RKHS is a tensor product $\mathcal{H}_{k_{xy}} = \mathcal{H}_{k_x} \otimes \mathcal{H}_{k_y}$ (Steinwart and Christmann, 2008, Lemma 4.6) and the mean embedding of the product random variable $(X, Y)$ is a tensor product of the marginal mean embeddings, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$. With consistent estimators for the marginal embeddings we can estimate the joint embedding using their tensor product:
$$\hat\mu_{(X,Y)} = \hat\mu_X \otimes \hat\mu_Y = \sum_{i,j=1}^n w_i u_j \, k_x(x_i, \cdot) \otimes k_y(y_j, \cdot) = \sum_{i,j=1}^n w_i u_j \, k_{xy}\big((x_i, y_j), (\cdot, \cdot)\big).$$
If points are i.i.d. and $w_i = u_i = 1/n$, this reduces to the U-statistic estimator $\hat\mu_2$ from Section 1.3.

Lemma 3. Let $(s_n)_n$ be any positive real sequence converging to zero. Suppose $k_{xy} = k_x k_y$ is a product kernel, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$, and $\hat\mu_{(X,Y)} = \hat\mu_X \otimes \hat\mu_Y$. Then
$$\begin{cases} \|\hat\mu_X - \mu_X\|_{k_x} = O(s_n) \\ \|\hat\mu_Y - \mu_Y\|_{k_y} = O(s_n) \end{cases} \quad \text{implies} \quad \big\|\hat\mu_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} = O(s_n)\,.$$
Proof. For a detailed expansion of the first inequality see Appendix B:
$$\big\|\hat\mu_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} \le \|\mu_X\|_{k_x} \|\hat\mu_Y - \mu_Y\|_{k_y} + \|\mu_Y\|_{k_y} \|\hat\mu_X - \mu_X\|_{k_x} + \|\hat\mu_X - \mu_X\|_{k_x} \|\hat\mu_Y - \mu_Y\|_{k_y} = O(s_n) + O(s_n) + O(s_n^2) = O(s_n).$$
Corollary 4. If $\hat\mu_X \to \mu_X$ and $\hat\mu_Y \to \mu_Y$ as $n \to \infty$, then $\hat\mu_{(X,Y)} \to \mu_{(X,Y)}$.
Together with the results from Section 2 this lets us reason about estimators resulting from applying functions to multiple independent random variables. Write
$$\hat\mu^{k_{xy}}_{XY} = \sum_{i,j=1}^n w_i u_j \, k_{xy}\big((x_i, y_j), \cdot\big) = \sum_{\ell=1}^{n^2} \omega_\ell \, k_{xy}(\xi_\ell, \cdot),$$
where $\ell$ enumerates the $(i, j)$ pairs, $\xi_\ell = (x_i, y_j)$ and $\omega_\ell = w_i u_j$. Now if $\hat\mu^{k_x}_X \to \mu^{k_x}_X$ and $\hat\mu^{k_y}_Y \to \mu^{k_y}_Y$, then $\hat\mu^{k_{xy}}_{XY} \to \mu^{k_{xy}}_{(X,Y)}$ (according to Corollary 4) and Theorem 1 shows that $\sum_{i,j=1}^n w_i u_j \, k_z\big(f(x_i, y_j), \cdot\big)$ is consistent as well. Unfortunately, we cannot apply Theorem 2 to get the speed of convergence, because a product of Matérn kernels is no longer a Matérn kernel.
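In code, the construction is a direct outer product over the two expansions; a minimal sketch (with our own helper name `kme_of_fxy`), which also makes the $n^2$ growth in expansion size explicit:

```python
import numpy as np

def kme_of_fxy(xs, ws, ys, us, f, kz):
    """Estimator mu_hat_{f(X,Y)} = sum_{i,j} w_i u_j k_z(f(x_i, y_j), .),
    valid when X and Y are independent; note the expansion has n^2 terms."""
    pts, wts = [], []
    for xi, wi in zip(xs, ws):
        for yj, uj in zip(ys, us):
            pts.append(f(xi, yj))
            wts.append(wi * uj)
    wts = np.array(wts)
    return lambda z: float(np.dot(wts, [kz(p, z) for p in pts]))
```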
One downside of this overall approach is that the number of expansion points used for the estimation of the joint increases exponentially with the number of arguments of f . This can lead to prohibitively large computational costs, especially if the result of such an operation is used as an input to another function of multiple arguments. To alleviate this problem, we may use reduced expansion set methods before or after applying f , as we did for example in Section 1.2.
To conclude this section, let us summarize the implications of our results for two practical scenarios that should be distinguished.
• If we have separate samples from two random variables $X$ and $Y$, then our results justify how to provide an estimate of the mean embedding of $f(X, Y)$, provided that $X$ and $Y$ are independent. The samples themselves need not be i.i.d. — we can also work with weighted samples computed, for instance, by a reduced set method.
• How about dependent random variables? For instance, imagine that $Y = X$ and $f(X, Y) = X - Y$. Clearly, in this case the distribution of $f(X, Y)$ is a delta measure on $0$, and there is no way to predict this from separate samples of $X$ and $Y$. However, it should be stressed that our results (consistency and finite sample bound) apply even to the case where $X$ and $Y$ are dependent. In that case, however, they require a consistent estimator of the joint embedding $\mu_{(X,Y)}$.
• It is also sufficient to have a reduced set expansion of the embedding of the joint distribution. This setting may sound strange, but it potentially has significant applications. Imagine that one has a large database of user data, sampled from a joint distribution. If we expand the joint's embedding in terms of synthetic expansion points using a reduced set construction method, then we can pass on these (weighted) synthetic expansion points to a third party without revealing the original data. Using our results, the third party can nevertheless perform arbitrary continuous functional operations on the joint distribution in a consistent manner.
4 Conclusion and future work
This paper provides a theoretical foundation for using kernel mean embeddings as approximate representations of random variables in scenarios where we need to apply functions to those random variables. We show that for continuous functions f (including all functions on discrete domains), consistency of the mean embedding estimator of a random variable X implies consistency of the mean embedding estimator of f(X). Furthermore, if the kernels are Matérn and the function f is sufficiently smooth, we provide bounds on the convergence rate. Importantly, our results apply beyond i.i.d. samples and cover estimators based on expansions with interdependent points and weights. One interesting future direction is to improve the finite-sample bounds and extend them to general radial and/or translation-invariant kernels.
Our work is motivated by the field of probabilistic programming. Using our theoretical results, kernel mean embeddings can be used to generalize functional operations (which lie at the core of all programming languages) to distributions over data types in a principled manner, by applying the operations to the points or approximate kernel expansions. This is in principle feasible for any data type provided a suitable kernel function can be defined on it. We believe that the approach holds significant potential for future probabilistic programming systems.
Acknowledgements
We thank Krikamol Muandet for providing the code used to generate Figure 1, Paul Rubenstein, Motonobu Kanagawa and Bharath Sriperumbudur for very useful discussions, and our anonymous reviewers for their valuable feedback. Carl-Johann Simon-Gabriel is supported by a Google European Fellowship in Causal Inference. | 1. What is the focus of the paper regarding kernel embeddings and sample representation?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and contributions to the field?
3. Do you have any concerns or questions about the paper's methodology, assumptions, or experimental results?
4. How does the reviewer assess the novelty and impact of the work compared to prior research in the area?
5. Are there any limitations or areas for improvement in the paper's content or approach that could be addressed in future works? | Review | Review
This paper proves that if a weighted sample can represent the kernel embedding of a random variable well, we can get a good estimation for the kernel embedding of a function of that random variable by passing the weighted sample to the function. The authors also provide finite sample bounds under stricter assumptions and analyze the case of functions of multiple variables.The paper is well written. Contents are well motivated and the proofs in the supplementary materials are fully explained in detail. The contribution is incremental. The paper provides better estimators for estimating \mu[f(X)] and \mu[f(X,Y)] under mild assumptions, complementing the framework in Schölkopf et al's previous work. However, I am not sure what impact this work will have. There are no experiments illustrating practical applications of this work. The only experimental result of Figure 1 seems to be a replicate of the synthetic data experiment in Schölkopf et al's work. Also there are no comparisons of other estimators, although I understand that the results of Schölkopf et al's estimators may be similar to those of the estimators in this paper. Reference: B. Schölkopf, K. Muandet, K. Fukumizu, S. Harmeling, and J. Peters. Computing functions of random variables via reproducing kernel Hilbert space representations. Statistics and Computing. |
NIPS | Title
Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Abstract
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential [46] and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a (δ, ε)-Goldstein stationary point of a Lipschitz function f at an expected convergence rate of O(d^{3/2}δ^{-1}ε^{-4}) where d is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the MNIST dataset.
1 Introduction
Many of the recent real-world success stories of machine learning have involved nonconvex optimization formulations, with the design of models and algorithms often being heuristic and intuitive. Thus a gap has arisen between theory and practice. Attempts have been made to fill this gap for different learning methodologies, including the training of multi-layer neural networks [25], orthogonal tensor decomposition [41], M-estimators [63, 64], synchronization and MaxCut [6, 66], smooth semidefinite programming [15], matrix sensing and completion [10, 42], robust principal component analysis (RPCA) [43] and phase retrieval [82, 79, 64]. For an overview of nonconvex optimization formulations and the relevant ML applications, we refer to a recent survey [51].
It is generally intractable to compute an approximate global minimum [69] or to verify whether a point is a local minimum or a high-order saddle point [67]. Fortunately, the notion of approximate stationary point gives a reasonable optimality criterion when the objective function f is smooth; the goal here is to find a point x ∈ R^d such that ‖∇f(x)‖ ≤ ε. Recent years have seen rapid algorithmic development through the lens of nonasymptotic convergence rates to ε-stationary points [70, 44, 45, 20, 21, 53]. Another line of work establishes algorithm-independent lower bounds [22, 23, 3, 4].
Relative to its smooth counterpart, the investigation of nonsmooth optimization is relatively scarce, particularly in the nonconvex setting, both in terms of efficient algorithms and finite-time convergence guarantees. Yet, over several decades, nonsmooth nonconvex optimization formulations have found applications in many fields. A typical example is the training of multi-layer neural networks with ReLU neurons, for which the piecewise linear activation functions induce nonsmoothness. Another example arises in controlling financial risk for asset portfolios or optimizing customer satisfaction in service systems or supply chain systems. Here, the nonsmoothness arises from the payoffs of financial
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
derivatives and supply chain costs, e.g., options payoffs [38] and supply chain overage/underage costs [78]. These applications make significant demands with respect to computational feasibility, and the design of efficient algorithms for solving nonsmooth nonconvex optimization problems has moved to the fore [65, 30, 28, 85, 12, 31, 80].
The key challenges lie in two aspects: (i) the lack of a computationally tractable optimality criterion, and (ii) the lack of computationally powerful oracles. More specifically, in the classical setting where the function f is Lipschitz, we can define ε-stationary points based on the celebrated notion of Clarke stationarity [26]. However, the value of such a criterion has been called into question by Zhang et al. [85], who show that no finite-time algorithm guarantees ε-stationarity when ε is less than a constant. Further, the computation of the gradient is impossible for many application problems and we only have access to a noisy function value at each point. This is a common issue in the context of simulation optimization [68, 48]; indeed, the objective function value is often achieved as the output of a black-box or complex simulator, for which the simulator does not have the infrastructure needed to effectively evaluate gradients; see also Ghadimi and Lan [44] and Nesterov and Spokoiny [72] for comments on the lack of gradient evaluation in practice.
Contribution. In this paper, we propose and analyze a class of deterministic and stochastic gradient-free methods for nonsmooth nonconvex optimization problems in which we only assume that the function f is Lipschitz. Our contributions can be summarized as follows.
1. We establish a relationship between the Goldstein subdifferential and uniform smoothing via appeal to the hyperplane separation theorem. This result provides the basis for algorithmic design and finite-time convergence analysis of gradient-free methods to (δ, ε)-Goldstein stationary points.

2. We propose and analyze a gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems. Both of these methods are guaranteed to return a (δ, ε)-Goldstein stationary point of a Lipschitz function f : R^d → R with an expected convergence rate of O(d^{3/2}δ^{-1}ε^{-4}) where d ≥ 1 is the problem dimension. Further, we propose two-phase versions of GFM and SGFM. As our goal is to return a (δ, ε)-Goldstein stationary point with user-specified high probability 1 − Λ, we prove that the two-phase versions of GFM and SGFM can improve the dependence from (1/Λ)^4 to log(1/Λ) in the large-deviation regime.
Related works. Our work is related to a line of literature on gradient-based methods for nonsmooth and nonconvex optimization and gradient-free methods for smooth and nonconvex optimization. Due to space limitations, we defer our comments on the former topic to Appendix A. In the context of gradient-free methods, the basic idea is to approximate a full gradient using either a one-point estimator [39] or a two-point estimator [1, 44, 37, 75, 72], where the latter approach achieves a better finite-time convergence guarantee. Despite the meteoric rise of two-point-based gradient-free methods, most of the work is restricted to convex optimization [37, 75, 83] and smooth and nonconvex optimization [72, 44, 61, 62, 24, 52, 49]. For nonsmooth and convex optimization, the best upper bound on the global rate of convergence is O(dε^{-2}) [75] and this matches the lower bound [37]. For smooth and nonconvex optimization, the best global rate of convergence is O(dε^{-2}) [72] and O(dε^{-4}) if we only have access to noisy function value oracles [44]. Additional regularity conditions, e.g., a finite-sum structure, allow us to leverage variance-reduction techniques [62, 24, 52] and the best known result is O(d^{3/4}ε^{-3}) [49]. However, none of these gradient-free methods have been developed for nonsmooth nonconvex optimization, and the only gradient-free method we are aware of for the nonsmooth setting is summarized in Nesterov and Spokoiny [72, Section 7].
2 Preliminaries and Technical Background
We provide the formal definitions for the class of Lipschitz functions considered in this paper, and the definitions for generalized gradients and the Goldstein subdifferential that lead to optimality conditions in nonsmooth nonconvex optimization.
2.1 Function classes
Imposing regularity on functions to be optimized is necessary for obtaining optimization algorithms with finite-time convergence guarantees [71]. In the context of nonsmooth optimization there are two types of regularity conditions: Lipschitz properties of function values and bounds on function values.
We first list several equivalent definitions of Lipschitz continuity. A function f : Rd 7→ R is said to be L-Lipschitz if for every x ∈ Rd and the direction v ∈ Rd with ‖v‖ ≤ 1, the directional projection fx,v(t) := f(x + tv) defined for t ∈ R satisfies |fx,v(t)− fx,v(t′)| ≤ L|t− t′|, for all t, t′ ∈ R. Equivalently, f is L-Lipschitz if for every x,x′ ∈ Rd, we have
|f(x)− f(x′)| ≤ L‖x− x′‖. Further, the function value bound f(x0)− infx∈Rd f(x) appears in complexity guarantees for smooth and nonconvex optimization problems [71] and is often assumed to be bounded by a positive constant ∆ > 0. Note that x0 is a prespecified point (i.e., an initial point for an algorithm) and we simply fix it for the remainder of this paper. We define the function class which will be considered in this paper.
Definition 2.1 Suppose that ∆ > 0 and L > 0 are both independent of the problem dimension d ≥ 1. Then, we denote Fd(∆, L) as the set of L-Lipschitz functions f : Rd 7→ R with the bounded function value f(x0)− infx∈Rd f(x) ≤ ∆.
The function class Fd(∆, L) includes Lipschitz functions on Rd and is thus different from the nonconvex function class considered in the literature [44, 72]. First, we do not impose a smoothness condition on the function f ∈ Fd(∆, L), in contrast to the nonconvex functions studied in Ghadimi and Lan [44] which are assumed to have Lipschitz gradients. Second, Nesterov and Spokoiny [72, Section 7] presented a complexity bound for a randomized optimization method for minimizing a nonsmooth nonconvex function. However, they did not clarify why the norm of the gradient of the approximate function fµ̄ of the order δ (we use their notation) serves as a reasonable optimality criterion in nonsmooth nonconvex optimization. They also assume an exact function value oracle, ruling out many interesting application problems in simulation optimization and machine learning.
In contrast, our goal is to propose fast gradient-free methods for nonsmooth nonconvex optimization in the absence of an exact function value oracle. In general, the complexity bound of gradient-free methods will depend on the problem dimension d ≥ 1 even when we assume that the function to be optimized is convex and smooth [37, 75]. As such, we should consider a function class with a given dimension d ≥ 1. In particular, we consider an optimality criterion based on the celebrated Goldstein subdifferential [46] and prove that the number of function value oracles required by our deterministic and stochastic gradient-free methods to find a (δ, ε)-Goldstein stationary point of f ∈ F_d(∆, L) is O(poly(d, L, ∆, 1/ε, 1/δ)) when δ, ε ∈ (0, 1) are constants (see the definition of Goldstein stationarity in the next subsection).
It is worth mentioning that Fd(∆, L) contains a rather broad class of functions used in real-world application problems. Typical examples with additional regularity properties include Hadamard semidifferentiable functions [76, 32, 85], Whitney-stratifiable functions [13, 30], o-minimally definable functions [27] and a class of semi-algebraic functions [5, 30]. Thus, our gradient-free methods can be applied for solving these problems with finite-time convergence guarantees.
2.2 Generalized gradients and Goldstein subdifferential
We start with the definition of generalized gradients [26] for nondifferentiable functions. This is perhaps the most standard extension of gradients to nonsmooth and nonconvex functions.
Definition 2.2 Given a point x ∈ R^d and a direction v ∈ R^d, the generalized directional derivative of a nondifferentiable function f is given by $Df(x; v) := \limsup_{y \to x,\, t \downarrow 0} \frac{f(y + tv) - f(y)}{t}$. Then, the generalized gradient of f is defined as the set $\partial f(x) := \{g \in \mathbb{R}^d : g^\top v \le Df(x; v),\ \forall v \in \mathbb{R}^d\}$.
Rademacher’s theorem guarantees that any Lipschitz function is almost everywhere differentiable. This implies that the generalized gradients of Lipschitz functions have additional properties and we can define them in a relatively simple way. The following proposition summarizes these results; we refer to Clarke [26] for the proof details.
Proposition 2.1 Suppose that f is L-Lipschitz for some L > 0. Then ∂f(x) is a nonempty, convex and compact set and ‖g‖ ≤ L for all g ∈ ∂f(x). Further, ∂f(·) is an upper-semicontinuous set-valued map. Moreover, a generalization of the mean-value theorem holds: for any x1, x2 ∈ R^d, there exist λ ∈ (0, 1) and g ∈ ∂f(λx1 + (1 − λ)x2) such that f(x1) − f(x2) = g⊤(x1 − x2). Finally, there is a simple way to represent the generalized gradient ∂f(x):
$$\partial f(x) := \mathrm{conv}\Big\{ g \in \mathbb{R}^d : g = \lim_{x_k \to x} \nabla f(x_k) \Big\},$$
which is the convex hull of all limit points of ∇f(xk) over all sequences x1,x2, . . . of differentiable points of f(·) which converge to x.
Given this definition of generalized gradients, a Clarke stationary point of f is a point x satisfying 0 ∈ ∂f(x). Then, it is natural to ask if an optimization algorithm can reach an ε-stationary point with a finite-time convergence guarantee. Here a point x ∈ R^d is an ε-Clarke stationary point if

min{‖g‖ : g ∈ ∂f(x)} ≤ ε.

This question has been addressed by [85, Theorem 1], who showed that finding an ε-Clarke stationary point in nonsmooth nonconvex optimization cannot be achieved by any finite-time algorithm given a fixed tolerance ε ∈ [0, 1). One possible response is to consider a relaxation called a near ε-Clarke stationary point. Consider a point which is δ-close to an ε-stationary point for some δ > 0. A point x ∈ R^d is near ε-stationary if the following statement holds true:

min{‖g‖ : g ∈ ∪_{y∈Bδ(x)} ∂f(y)} ≤ ε.

Unfortunately, however, [58, Theorem 1] demonstrated that it is impossible to obtain worst-case guarantees for finding a near ε-Clarke stationary point of f ∈ F_d(∆, L) when ε, δ > 0 are smaller than certain constants, unless the number of oracle calls has an exponential dependence on the problem dimension d ≥ 1. These negative results suggest a need for rethinking the definition of targeted stationary points. We propose to consider the refined notion of Goldstein subdifferential.
Definition 2.3 Given a point x ∈ Rd and δ > 0, the δ-Goldstein subdifferential of a Lipschitz function f at x is given by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)).
The Goldstein subdifferential of f at x is the convex hull of the union of all generalized gradients at points in a δ-ball around x. Accordingly, we can define (δ, ε)-Goldstein stationary points; that is, a point x ∈ R^d is a (δ, ε)-Goldstein stationary point if the following statement holds:

min{‖g‖ : g ∈ ∂δf(x)} ≤ ε.

It is worth mentioning that (δ, ε)-Goldstein stationarity is a weaker notion than (near) ε-Clarke stationarity since any (near) ε-stationary point is a (δ, ε)-Goldstein stationary point but not vice versa. However, the converse holds true under a smoothness condition [85, Proposition 6] and lim_{δ↓0} ∂δf(x) = ∂f(x) holds as shown in Zhang et al. [85, Lemma 7]. The latter result also enables an intuitive framework for transforming nonasymptotic analysis of convergence to (δ, ε)-Goldstein stationary points to classical asymptotic results for finding ε-Clarke stationary points. Thus, we conclude that finding a (δ, ε)-Goldstein stationary point is a reasonable optimality condition for general nonsmooth nonconvex optimization.
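As a concrete one-dimensional illustration (our own example): for $f(x) = |x|$ and any $|x| \le \delta$, the ball $B_\delta(x)$ contains points of both signs (and the origin), so
$$\partial_\delta f(x) = \mathrm{conv}\{-1, +1\} = [-1, 1] \ni 0,$$
i.e., every point within δ of the origin is already (δ, 0)-Goldstein stationary, whereas x = 0 is the only Clarke stationary point. This makes the relaxation explicit: shrinking δ recovers Clarke stationarity in the limit.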
Remark 2.2 Finding a (δ, ε)-Goldstein stationary point in nonsmooth nonconvex optimization has been formally shown to be computationally tractable in an oracle model [85, 31, 80]. Goldstein [46] discovered that one can decrease the function value of a Lipschitz f by using the minimal-norm element of ∂δf(x), and this leads to a deterministic normalized subgradient method which finds a (δ, ε)-Goldstein stationary point within O(∆/(δε)) iterations. However, Goldstein's algorithm is only conceptual since it is computationally intractable to return an exact minimal-norm element of ∂δf(x). Recently, randomized variants of Goldstein's algorithm have been proposed with a convergence guarantee of O(∆L²/(δε³)) [85, 31, 80]. However, it remains unknown whether gradient-free methods can find a (δ, ε)-Goldstein stationary point of a Lipschitz function f within O(poly(d, L, ∆, 1/ε, 1/δ)) iterations in the absence of an exact function value oracle. Note that the dependence on the problem dimension d ≥ 1 is necessary for gradient-free methods, as mentioned before.
2.3 Randomized smoothing
The randomized smoothing approaches are simple and work equally well for convex and nonconvex functions. Formally, given the L-Lipschitz function f (possibly nonsmooth nonconvex) and a distribution P, we define fδ(x) = E_{u∼P}[f(x + δu)]. In particular, letting P be a standard Gaussian distribution, the function fδ is a δL√d-approximation of f(·) and the gradient ∇fδ is (L√d/δ)-Lipschitz, where d ≥ 1 is the problem dimension; see Nesterov and Spokoiny [72, Theorem 1 and Lemma 2]. Letting P be a uniform distribution on a unit ball in ℓ₂-norm, the resulting function fδ is a δL-approximation of f(·) and ∇fδ is also (cL√d/δ)-Lipschitz, where d ≥ 1 is the problem dimension; see Yousefian et al. [84, Lemma 8] and Duchi et al. [36, Lemma E.2], rephrased as follows.

Proposition 2.3 Let fδ(x) = E_{u∼P}[f(x + δu)] where P is a uniform distribution on a unit ball in ℓ₂-norm. Assuming that f is L-Lipschitz, we have (i) |fδ(x) − f(x)| ≤ δL, and (ii) fδ is differentiable and L-Lipschitz with a (cL√d/δ)-Lipschitz gradient, where c > 0 is a constant. In addition, there exists a function f for which each of the above bounds is tight simultaneously.
The randomized smoothing approaches form the basis for developing gradient-free methods [39, 1, 2, 44, 72]. Given access to function values of f, we can compute an unbiased estimate of the gradient of fδ and plug it into stochastic gradient-based methods. Note that the Lipschitz constant of ∇fδ depends on the problem dimension d ≥ 1 with at least a factor of √d for many randomized smoothing approaches [58, Theorem 2]. This is consistent with the lower bounds for all gradient-free methods in convex and strongly convex optimization [37, 75].
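For intuition, fδ itself can be approximated by simple Monte Carlo; a minimal sketch (our own helper, with the standard radius scaling r = U^{1/d} to draw uniformly from the unit ball):

```python
import numpy as np

def smoothed_value(f, x, delta, m=1000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of f_delta(x) = E_{u ~ Unif(B_1)}[f(x + delta * u)]."""
    d = x.shape[0]
    total = 0.0
    for _ in range(m):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)                 # uniform direction on the sphere
        u = (rng.random() ** (1.0 / d)) * v    # uniform point in the unit ball
        total += f(x + delta * u)
    return total / m
```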
3 Main Results
We establish a relationship between the Goldstein subdifferential and the uniform smoothing approach. We propose a gradient-free method (GFM), its stochastic variant (SGFM), and two-phase versions of GFM and SGFM. We analyze these algorithms using the Goldstein subdifferential; we provide global rate and large-deviation estimates in terms of (δ, ε)-Goldstein stationarity.
3.1 Linking Goldstein subdifferential to uniform smoothing
Recall that ∂δf and fδ are defined by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)) and fδ(x) = Eu∼P[f(x + δu)]. It is clear that f is almost everywhere differentiable since f is L-Lipschitz. This implies that ∇fδ(x) = Eu∼P[∇f(x + δu)] and demonstrates that ∇fδ(x) can be viewed intuitively as a convex combination of ∇f(z) over an infinite number of points z ∈ Bδ(x). As such, it is reasonable to conjecture that ∇fδ(x) ∈ ∂δf(x) for any x ∈ Rd. However, the above argument is not a rigorous proof; indeed, we need to justify why ∇fδ(x) = Eu∼P[∇f(x + δu)] if f is almost everywhere differentiable and generalize the idea of a convex combination to include infinite sums. To resolve these issues, we exploit a toolbox due to Rockafellar and Wets [74].
In the following theorem, we summarize our result and refer to Appendix C for the proof details.
Theorem 3.1 Suppose that f is L-Lipschitz and let fδ(x) = E_{u∼P}[f(x + δu)], where P is a uniform distribution on a unit ball in ℓ₂-norm, and let ∂δf be the δ-Goldstein subdifferential of f (cf. Definition 2.3). Then, we have ∇fδ(x) ∈ ∂δf(x) for any x ∈ R^d.
Theorem 3.1 resolves an important question and forms the basis for analyzing our gradient-free methods. Notably, our analysis can be extended to justify other randomized smoothing approaches in nonsmooth nonconvex optimization. For example, Nesterov and Spokoiny [72] used Gaussian smoothing and estimated the number of iterations required by their methods to output x̂ ∈ Rd satisfying ‖∇fδ(x̂)‖ ≤ . By modifying the proof of Theorem 3.1 and Zhang et al. [85, Lemma 7], we can prove that∇fδ belongs to Goldstein subdifferential with Gaussian weights and this subdifferential converges to the Clarke subdifferential as δ → 0. Compared to uniform smoothing and the original Goldstein subdifferential, the proof for Gaussian smoothing is quite long and technical [72, Page 554], and adding Gaussian weights seems unnatural in general.
Algorithm 1 Gradient-Free Method (GFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and iteration number T ≥ 1.
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Sample wt ∈ R^d uniformly from a unit sphere in R^d.
4:   Compute gt = (d/(2δ)) (f(xt + δwt) − f(xt − δwt)) wt.
5:   Compute xt+1 = xt − ηgt.
6: Output: xR where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.

Algorithm 2 Two-Phase Gradient-Free Method (2-GFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 1 with x0, η, d, δ and T and let x̄s be the output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Sample wk ∈ R^d uniformly from a unit sphere in R^d.
7:     Compute g^k_s = (d/(2δ)) (f(x̄s + δwk) − f(x̄s − δwk)) wk.
8:   Compute gs = (1/B) ∑_{k=0}^{B−1} g^k_s.
9: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖gs‖.
10: Output: x̄_{s⋆}.
3.2 Gradient-free methods
We analyze a gradient-free method (GFM) and its two-phase version (2-GFM) for optimizing a Lipschitz function f . Due to space limitations, we defer the proof details to Appendix D.
Global rate estimation. Let f : R^d → R be an L-Lipschitz function; the smoothed version of f is then the function fδ = E_{u∼P}[f(x + δu)], where P is a uniform distribution on a unit ball in ℓ₂-norm. Equipped with Lemma 10 from Shamir [75], we can compute an unbiased estimator for the gradient ∇fδ(xt) using function values. This leads to the gradient-free method (GFM) in Algorithm 1, which simply performs a one-step gradient descent to obtain xt. It is worth mentioning that we use a random iteration count R to terminate the execution of Algorithm 1, and this will guarantee that GFM is valid. Indeed, we only derive that min_{t=1,2,...,T} ‖∇fδ(xt)‖ ≤ ε in the theoretical analysis (see also Nesterov and Spokoiny [72, Section 7]), and finding the best solution from {x1, x2, . . . , xT} is difficult since the quantity ‖∇fδ(xt)‖ is unknown. Estimating it using Monte Carlo simulation would incur additional approximation errors and raise reliability issues. The idea of a random output is not new; it has been used by Ghadimi and Lan [44] for smooth and nonconvex stochastic optimization. Such a scheme also gives us a computational gain by a factor of two in expectation.
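A minimal sketch of Algorithm 1, assuming a deterministic black-box oracle `f` (the helper name is ours; the random output x^R is drawn at the end):

```python
import numpy as np

def gfm(f, x0, eta, delta, T, rng=np.random.default_rng(0)):
    """Gradient-Free Method: two-point estimate of grad f_delta plus a gradient step."""
    d = x0.shape[0]
    xs = [x0.copy()]
    x = x0.copy()
    for _ in range(T):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)  # uniform on the unit sphere
        g = (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
        x = x - eta * g
        xs.append(x.copy())
    return xs[rng.integers(T)]  # x^R with R uniform in {0, ..., T-1}
```

For instance, `gfm(lambda x: np.abs(x).sum(), np.ones(10), 1e-3, 0.1, 5000)` drives the iterates toward the nonsmooth minimizer of the ℓ₁-norm at the origin.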
Theorem 3.2 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 1 with η = (1/10)·√(δ(∆ + δL)/(c d^{3/2} L³ T)) satisfies E[min{‖g‖ : g ∈ ∂δf(xR)}] ≤ ε, and the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L⁴/ε⁴ + ∆L³/(δε⁴) ) ),

where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈R^d} f(x).
Remark 3.3 Theorem 3.2 illustrates the difference between gradient-based and gradient-free methods in nonsmooth nonconvex optimization. Indeed, Davis et al. [31] have recently proved a rate of Õ(δ⁻¹ε⁻³) for a randomized gradient-based method in terms of (δ, ε)-Goldstein stationarity. Further, Theorem 3.2 demonstrates that nonsmooth nonconvex optimization is likely to be intrinsically harder than all other standard settings. More specifically, the state-of-the-art rate for gradient-free methods is O(dε⁻²) for nonsmooth convex optimization in terms of the objective function value gap [37] and for smooth nonconvex optimization in terms of the gradient norm [72]. Thus, the dependence on d ≥ 1 is linear in their bounds yet d^{3/2} in our bound. We believe it is promising to either improve the rate of gradient-free methods or show the impossibility by establishing a lower bound.
Large-deviation estimation. While Theorem 3.2 establishes the expected convergence rate over many runs of Algorithm 1, we are also interested in the large-deviation properties of a single run. Indeed, we hope to establish a complexity bound for computing a (δ, ε, Λ)-solution; that is, a point x ∈ R^d satisfying Prob(min{‖g‖ : g ∈ ∂δf(x)} ≤ ε) ≥ 1 − Λ for some δ > 0 and 0 < ε, Λ < 1. By Theorem 3.2 and Markov's inequality,

Prob( min{‖g‖ : g ∈ ∂δf(xR)} ≥ λ E[min{‖g‖ : g ∈ ∂δf(xR)}] ) ≤ 1/λ, for all λ > 0,

we conclude that the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L⁴/(Λ⁴ε⁴) + ∆L³/(δΛ⁴ε⁴) ) ). (3.1)
This complexity bound is rather pessimistic in terms of its dependence on Λ, which is often set to be small in practice. To improve the bound, we combine Algorithm 1 with a post-optimization procedure [44], leading to a two-phase gradient-free method (2-GFM), shown in Algorithm 2; a minimal sketch of its post-optimization phase is given below.
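The sketch below corresponds to lines 4–9 of Algorithm 2 (the helper name is ours): average B fresh two-point estimates at each candidate and keep the candidate with the smallest estimated gradient norm.

```python
import numpy as np

def post_optimization(f, candidates, delta, B, rng=np.random.default_rng(0)):
    """Select x_bar_{s*} minimizing the norm of the averaged two-point estimator."""
    best_x, best_norm = None, np.inf
    for xbar in candidates:
        d = xbar.shape[0]
        g = np.zeros(d)
        for _ in range(B):
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            g += (d / (2.0 * delta)) * (f(xbar + delta * w) - f(xbar - delta * w)) * w
        g /= B
        if np.linalg.norm(g) < best_norm:
            best_x, best_norm = xbar, np.linalg.norm(g)
    return best_x
```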
Theorem 3.4 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 2 with η = (1/10)·√(δ(∆ + δL)/(c d^{3/2} L³ T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄_{s⋆})} ≥ ε) ≤ Λ, and the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L⁴/ε⁴ + ∆L³/(δε⁴) ) log²(1/Λ) + (dL²/(Λε²)) log²(1/Λ) ),

where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈R^d} f(x).
Clearly, the bound in Theorem 3.4 is significantly smaller than the corresponding one in Eq. (3.1) in terms of the dependence on 1/Λ, demonstrating the power of the post-optimization phase.
3.3 Stochastic gradient-free methods
We turn to the analysis of a stochastic gradient-free method (SGFM) and its two-phase version (2-SGFM) for optimizing a Lipschitz function f(·) = Eξ∈Pµ [F (·, ξ)]. Global rate estimation. In contrast to minimizing a deterministic function f , we only have access to the noisy function value F (x, ξ) at any point x ∈ Rd where a data sample ξ is drawn from a distribution Pµ. Intuitively, this is a more challenging setup. It has been studied before in the setting of optimizing a nonsmooth convex function [37, 72] or a smooth nonconvex function [44]. As in these papers, we assume that (i) F (·, ξ) is L(ξ)-Lipschitz with Eξ∈Pµ [L2(ξ)] ≤ G2 for some G > 0 and (ii) E[F (x, ξt)] = f(x) for all x ∈ Rd where ξt is simulated from Pµ at the tth iteration. Despite the noisy function value, we can compute an unbiased estimator of the gradient ∇fδ(xt), where fδ = Eu∼P[f(x + δu)] = Eu∼P,ξ∈Pµ [F (x + δu, ξ)]. In particular, we have ĝt = d2δ (F (x
t + δwt, ξt)− F (xt − δwt, ξt))wt. Clearly, under our assumption, we have
Eu∼P,ξ∈Pµ [ĝt] = Eu∼P[Eξ∈Pµ [ĝt | u]] = Eu∼P[gt] = ∇fδ(xt),
where gt is defined in Algorithm 1. However, the variance of the estimator ĝt can be undesirably large since F (·, ξ) is L(ξ)-Lipschitz for a (possibly unbounded) random variable L(ξ) > 0. To resolve this issue, we revisit Shamir [75, Lemma 10] and show that in deriving an upper bound for Eu∼P,ξ∈Pµ [‖ĝt‖2] it suffices to assume that Eξ∈Pµ [L2(ξ)] ≤ G2 for some constant G > 0. The resulting bound achieves a linear dependence in the problem dimension d > 0 which is the same as in Shamir [75, Lemma 10]. Note that the setup with convex and L(ξ)-Lipschitz functions F (·, ξ) has been considered in Duchi et al. [37]. However, our estimator is different from their estimator of ĝt = dδ (F (x
t + δwt, ξt)−F (xt, ξt))wt which essentially suffers from the quadratic dependence in d > 0. It is also necessary to employ a random iteration count R to terminate Algorithm 3.
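A minimal sketch of this estimator with a user-supplied noisy oracle F(x, xi); note that the same sample ξt is queried at both perturbed points, which is what keeps the variance controlled (the seed-based placeholder for ξ is our own simplification):

```python
import numpy as np

def sgfm_gradient(F, x, delta, rng=np.random.default_rng(0)):
    """Two-point stochastic estimator g_hat_t of grad f_delta(x); F(x, xi) is a
    noisy oracle and xi stands in for a sample from P_mu (here a raw seed)."""
    d = x.shape[0]
    xi = rng.integers(2**31)  # placeholder draw standing in for xi ~ P_mu
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    return (d / (2.0 * delta)) * (F(x + delta * w, xi) - F(x - delta * w, xi)) * w
```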
Algorithm 3 Stochastic Gradient-Free Method (SGFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and iteration number T ≥ 1.
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Simulate ξt from the distribution Pµ.
4:   Sample wt ∈ R^d uniformly from a unit sphere in R^d.
5:   Compute ĝt = (d/(2δ)) (F(xt + δwt, ξt) − F(xt − δwt, ξt)) wt.
6:   Compute xt+1 = xt − ηĝt.
7: Output: xR where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.

Algorithm 4 Two-Phase Stochastic Gradient-Free Method (2-SGFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 3 with x0, η, d, δ and T and let x̄s be the output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Simulate ξk from the distribution Pµ.
7:     Sample wk ∈ R^d uniformly from a unit sphere in R^d.
8:     Compute ĝ^k_s = (d/(2δ)) (F(x̄s + δwk, ξk) − F(x̄s − δwk, ξk)) wk.
9:   Compute ĝs = (1/B) ∑_{k=0}^{B−1} ĝ^k_s.
10: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖ĝs‖.
11: Output: x̄_{s⋆}.
Theorem 3.5 Suppose that F(·, ξ) is L(ξ)-Lipschitz with E_{ξ∈Pµ}[L²(ξ)] ≤ G² for some G > 0, and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 3 with η = (1/10)·√(δ(∆ + δG)/(c d^{3/2} G³ T)) satisfies E[min{‖g‖ : g ∈ ∂δf(xR)}] ≤ ε, and the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G⁴/ε⁴ + ∆G³/(δε⁴) ) ),

where d ≥ 1 is the problem dimension, G > 0 is the second-moment bound on the Lipschitz parameter of F(·, ξ) and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈R^d} f(x).
In the stochastic setting, a randomized gradient-based method achieves the rate of O(δ⁻¹ε⁻⁴) in terms of (δ, ε)-Goldstein stationarity [31]. As such, our bound in Theorem 3.5 is tight up to the problem dimension d ≥ 1. Further, the state-of-the-art rate for stochastic gradient-free methods is O(dε⁻²) for nonsmooth convex optimization in terms of the objective function value gap [37] and O(dε⁻⁴) for smooth nonconvex optimization in terms of the gradient norm [44]. Thus, Theorem 3.5 demonstrates that nonsmooth nonconvex stochastic optimization is essentially the most difficult among all these standard settings.
Large-deviation estimation. As in the case of GFM, we hope to establish a complexity bound for SGFM for computing a (δ, ε, Λ)-solution. By Theorem 3.5 and Markov's inequality, we obtain that the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G⁴/(Λ⁴ε⁴) + ∆G³/(δΛ⁴ε⁴) ) ). (3.2)
We also propose a two-phase stochastic gradient-free method (2-SGFM) in Algorithm 4 by combining Algorithm 3 with a post-optimization procedure.
Theorem 3.6 Suppose that F(·, ξ) is L(ξ)-Lipschitz with E_{ξ∈Pµ}[L²(ξ)] ≤ G² for some G > 0, and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 4 with η = (1/10)·√(δ(∆ + δG)/(c d^{3/2} G³ T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄_{s⋆})} ≥ ε) ≤ Λ, and the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G⁴/ε⁴ + ∆G³/(δε⁴) ) log²(1/Λ) + (dG²/(Λε²)) log²(1/Λ) ),

where d ≥ 1 is the problem dimension, G > 0 is the second-moment bound on the Lipschitz parameter of F(·, ξ) and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈R^d} f(x).
Further discussions. We remark that the choice of stepsize η in all of our zeroth-order methods depends on ∆, whereas such dependence is not necessary in the first-order setting; see, e.g., Zhang et al. [85]. Setting the stepsize without any prior knowledge of ∆, our methods can still achieve finite-time convergence guarantees, but the order would become worse. This is possibly because first-order information characterizes the objective function better than zeroth-order information, so that for first-order methods the stepsize can be independent of more problem parameters without sacrificing the bound. On the positive side, it suffices for our zeroth-order methods to know an estimate of an upper bound of Θ(∆), which can be done in certain application problems.

Moreover, we highlight that δ > 0 is a desired tolerance in our setting. In fact, (δ, ε)-Goldstein stationarity (see Definition 2.3) relaxes ε-Clarke stationarity, and our methods pursue a (δ, ε)-stationary point since finding an ε-Clarke point is intractable. This is different from smooth optimization, where ε-Clarke stationarity reduces to ‖∇f(x)‖ ≤ ε and becomes tractable. In this context, the existing zeroth-order methods are designed to pursue an ε-stationary point. Notably, a (δ, ε)-Goldstein stationary point is provably an ε-stationary point in smooth optimization if we choose δ depending on d and ε.
4 Experiment
We conduct numerical experiments to validate the effectiveness of our proposed methods. In particular, we evaluate the performance of the two-phase version of SGFM (Algorithm 4) on the task of image classification using convolutional neural networks (CNNs) with ReLU activations. The dataset we use is the MNIST dataset¹ [60] and the CNN framework we use is as follows: (i) we set two convolution layers and two fully connected layers, where dropout layers [77] are used before each fully connected layer, and (ii) the two convolution layers and the first fully connected layer are associated with ReLU activation. It is worth mentioning that our setup follows the default one², and a similar setup was also considered in Zhang et al. [85] for evaluating gradient-based methods (see the setups and results for the CIFAR10 dataset in Appendix F).

The baseline approaches include three gradient-based methods: stochastic gradient descent (SGD), ADAGRAD [34] and ADAM [55]. We compare these methods with 2-SGFM (cf. Algorithm 4) and set the learning rate η to 0.001. All the experiments are implemented using PyTorch [73] on a workstation with a 2.6 GHz Intel Core i7 and 16GB memory.
1http://yann.lecun.com/exdb/mnist 2https://github.com/pytorch/examples/tree/main/mnist
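For reference, one SGFM-style update on a network's flattened parameter vector can be written in a few lines of PyTorch. This is a hedged sketch of how the two-point estimator applies to network training (the helper name `sgfm_step` and the default hyperparameter values are ours), not a verbatim excerpt of our experiment code.

```python
import torch

def sgfm_step(model, loss_fn, batch, eta=1e-3, delta=0.1):
    """One zeroth-order update: perturb the flattened parameters along a random
    unit direction, form the two-point estimate on a fixed minibatch, and step."""
    x, y = batch
    theta = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
    d = theta.numel()
    w = torch.randn(d)
    w /= w.norm()

    def loss_at(vec):
        torch.nn.utils.vector_to_parameters(vec, model.parameters())
        with torch.no_grad():
            return loss_fn(model(x), y).item()

    g = d / (2 * delta) * (loss_at(theta + delta * w) - loss_at(theta - delta * w)) * w
    torch.nn.utils.vector_to_parameters(theta - eta * g, model.parameters())
```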
Figure 1 summarizes the numerical results on the performance of SGD, ADAGRAD, ADAM, INDG [85], and our method 2-SGFM with δ = 0.1 and B = 200. Notably, 2-SGFM is comparable to the gradient-based methods in terms of training/test accuracy/loss even though it only uses function values. This demonstrates the potential value of our methods, since gradient-based methods are not applicable in many real-world application problems, as mentioned before. Figures 2a and 2b present the effect of the batch size B ≥ 1 in 2-SGFM; indeed, a larger value of B leads to better performance, which accords with Theorem 3.6. We also compare the performance of SGD and 2-SGFM with different choices of η. From Figures 2c and 2d, we see that SGD and 2-SGFM achieve similar performance in the early stage and converge to solutions of similar quality.
Figure 3 summarizes the experimental results on the effect of the batch size B for 2-SGFM, with train loss and test loss as the evaluation metrics. It is clear that a larger value of B leads to better performance, consistent with the results presented in the main text. Figure 4 summarizes the experimental results on the effect of learning rates for 2-SGFM. It is interesting to see that 2-SGFM can indeed benefit from a more aggressive choice of stepsize η > 0 in practice; the choice of η = 0.0001 seems to be too conservative.
5 Conclusion
We proposed and analyzed a class of deterministic and stochastic gradient-free methods for optimizing a Lipschitz function. Based on the relationship between the Goldstein subdifferential and uniform smoothing that we have established, the proposed GFM and SGFM are proved to return a (δ, ε)-Goldstein stationary point at an expected rate of O(d^{3/2}δ⁻¹ε⁻⁴). We also obtain a large-deviation guarantee and improve it by combining GFM and SGFM with a two-phase scheme. Experiments on training neural networks with the MNIST and CIFAR10 datasets demonstrate the effectiveness of our methods. Future directions include the theory for non-Lipschitz and nonconvex optimization [11] and applications of our methods to deep residual networks (ResNet) [47] and deep dense convolutional networks (DenseNet) [50].
Acknowledgements
We would like to thank the area chair and three anonymous referees for constructive suggestions that improve the paper. This work is supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764 and by the Vannevar Bush Faculty Fellowship program under grant number N00014-21-1-2941. | 1. What is the focus and contribution of the paper on nonconvex nonsmooth problems?
2. What are the strengths of the proposed algorithms, particularly in terms of their ability to compute Goldstein stationary points?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper introduced two zero-order algorithms to compute Goldstein stationary points for nonsmooth nonconvex problems. In contrast to the first-order case, the dependence on dimension is unavoidable for algorithms that only use function values. The authors show that the gradient of a function smoothed by uniform averaging over a delta-ball belongs to the Goldstein delta-subdifferential, which forms the basis for the new gradient-free algorithms. They show the new algorithms compute a Goldstein stationary point in expectation within polynomial oracle complexity, and the dimension dependence is only sqrt(d) worse than the convex/smooth case. They also proved a high-probability bound with a two-phase scheme.
Strengths And Weaknesses
As nonconvex nonsmooth problems are everywhere especially in the DL setting, new practical algorithm with finite time oracle complexity is important and desirable nowadays. This paper studies the computation of Goldstein approximate stationary point, which has exhibited attractive algorithmic consequence in recent years. The main contributions are two zero-order finite-time methods which are further built upon an interesting observation that the randomized smoothed function with a delta-ball is belong to the Goldstein delta-subdifferential. The paper is well-written and easy to follow. My comments are as follows:
Proposition 2.3 basically repeats [78, Lemma 8]. In the proof, it might miss a norm in L629 and L634.
The step size of Algorithm 1,2,3 seems dependent on the \Delta, which is usually unknown in practice and is not necessary in the first-order setting for Lipschitz functions, e.g., in [79].
The following reference computing Goldstein stationary points concurrent partly to [30] might be relevant:
[R] Lai Tian, Kaiwen Zhou, and Anthony Man-Cho So. On the finite-time complexity and practical computation of approximate stationarity concepts of Lipschitz functions. ICML, 2022.
L45: "Clark" -> "Clarke"
Questions
See main comments above.
Limitations
Yes. |
NIPS | Title
Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Abstract
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential [46] and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a (δ, ε)-Goldstein stationary point of a Lipschitz function f at an expected convergence rate of O(d^{3/2}δ^{-1}ε^{-4}) where d is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the MNIST dataset.
1 Introduction
Many of the recent real-world success stories of machine learning have involved nonconvex optimization formulations, with the design of models and algorithms often being heuristic and intuitive. Thus a gap has arisen between theory and practice. Attempts have been made to fill this gap for different learning methodologies, including the training of multi-layer neural networks [25], orthogonal tensor decomposition [41], M-estimators [63, 64], synchronization and MaxCut [6, 66], smooth semidefinite programming [15], matrix sensing and completion [10, 42], robust principal component analysis (RPCA) [43] and phase retrieval [82, 79, 64]. For an overview of nonconvex optimization formulations and the relevant ML applications, we refer to a recent survey [51].
It is generally intractable to compute an approximate global minimum [69] or to verify whether a point is a local minimum or a high-order saddle point [67]. Fortunately, the notion of approximate stationary point gives a reasonable optimality criterion when the objective function f is smooth; the goal here is to find a point x ∈ R^d such that ‖∇f(x)‖ ≤ ε. Recent years have seen rapid algorithmic development through the lens of nonasymptotic convergence rates to ε-stationary points [70, 44, 45, 20, 21, 53]. Another line of work establishes algorithm-independent lower bounds [22, 23, 3, 4].
Relative to its smooth counterpart, the investigation of nonsmooth optimization is relatively scarce, particularly in the nonconvex setting, both in terms of efficient algorithms and finite-time convergence guarantees. Yet, over several decades, nonsmooth nonconvex optimization formulations have found applications in many fields. A typical example is the training of multi-layer neural networks with ReLU neurons, for which the piecewise linear activation functions induce nonsmoothness. Another example arises in controlling financial risk for asset portfolios or optimizing customer satisfaction in service systems or supply chain systems. Here, the nonsmoothness arises from the payoffs of financial
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
derivatives and supply chain costs, e.g., options payoffs [38] and supply chain overage/underage costs [78]. These applications make significant demands with respect to computational feasibility, and the design of efficient algorithms for solving nonsmooth nonconvex optimization problems has moved to the fore [65, 30, 28, 85, 12, 31, 80].
The key challenges lie in two aspects: (i) the lack of a computationally tractable optimality criterion, and (ii) the lack of computationally powerful oracles. More specifically, in the classical setting where the function f is Lipschitz, we can define ε-stationary points based on the celebrated notion of Clarke stationarity [26]. However, the value of such a criterion has been called into question by Zhang et al. [85], who show that no finite-time algorithm guarantees ε-stationarity when ε is less than a constant. Further, the computation of the gradient is impossible for many application problems and we only have access to a noisy function value at each point. This is a common issue in the context of simulation optimization [68, 48]; indeed, the objective function value is often achieved as the output of a black-box or complex simulator, for which the simulator does not have the infrastructure needed to effectively evaluate gradients; see also Ghadimi and Lan [44] and Nesterov and Spokoiny [72] for comments on the lack of gradient evaluation in practice.
Contribution. In this paper, we propose and analyze a class of deterministic and stochastic gradient-free methods for nonsmooth nonconvex optimization problems in which we only assume that the function f is Lipschitz. Our contributions can be summarized as follows.
1. We establish a relationship between the Goldstein subdifferential and uniform smoothing via appeal to the hyperplane separation theorem. This result provides the basis for algorithmic design and finite-time convergence analysis of gradient-free methods to (δ, ε)-Goldstein stationary points.
2. We propose and analyze a gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems. Both of these methods are guaranteed to return a (δ, ε)-Goldstein stationary point of a Lipschitz function f : Rd → R with an expected convergence rate of O(d^{3/2}δ^{-1}ε^{-4}), where d ≥ 1 is the problem dimension. Further, we propose two-phase versions of GFM and SGFM. As our goal is to return a (δ, ε)-Goldstein stationary point with user-specified high probability 1 − Λ, we prove that the two-phase versions of GFM and SGFM improve the dependence on 1/Λ from (1/Λ)^4 to log(1/Λ) in the large-deviation regime.
Related works. Our work is related to a line of literature on gradient-based methods for nonsmooth and nonconvex optimization and gradient-free methods for smooth and nonconvex optimization. Due to space limitations, we defer our comments on the former topic to Appendix A. In the context of gradient-free methods, the basic idea is to approximate a full gradient using either a one-point estimator [39] or a two-point estimator [1, 44, 37, 75, 72], where the latter approach achieves a better finite-time convergence guarantee. Despite the meteoric rise of two-point-based gradient-free methods, most of the work is restricted to convex optimization [37, 75, 83] and smooth and nonconvex optimization [72, 44, 61, 62, 24, 52, 49]. For nonsmooth and convex optimization, the best upper bound on the global rate of convergence is O(dε^{-2}) [75] and this matches the lower bound [37]. For smooth and nonconvex optimization, the best global rate of convergence is O(dε^{-2}) [72], and O(dε^{-4}) if we only have access to noisy function value oracles [44]. Additional regularity conditions, e.g., a finite-sum structure, allow us to leverage variance-reduction techniques [62, 24, 52] and the best known result is O(d^{3/4}ε^{-3}) [49]. However, none of these gradient-free methods have been developed for nonsmooth nonconvex optimization, and the only gradient-free method we are aware of for the nonsmooth setting is summarized in Nesterov and Spokoiny [72, Section 7].
2 Preliminaries and Technical Background
We provide the formal definitions for the class of Lipschitz functions considered in this paper, and the definitions for generalized gradients and the Goldstein subdifferential that lead to optimality conditions in nonsmooth nonconvex optimization.
2.1 Function classes
Imposing regularity on functions to be optimized is necessary for obtaining optimization algorithms with finite-time convergence guarantees [71]. In the context of nonsmooth optimization there are two types of regularity conditions: Lipschitz properties of function values and bounds on function values.
We first list several equivalent definitions of Lipschitz continuity. A function f : Rd → R is said to be L-Lipschitz if for every x ∈ Rd and every direction v ∈ Rd with ‖v‖ ≤ 1, the directional projection fx,v(t) := f(x + tv) defined for t ∈ R satisfies |fx,v(t) − fx,v(t′)| ≤ L|t − t′| for all t, t′ ∈ R. Equivalently, f is L-Lipschitz if for every x, x′ ∈ Rd, we have
|f(x) − f(x′)| ≤ L‖x − x′‖.
Further, the function value bound f(x0) − inf_{x∈Rd} f(x) appears in complexity guarantees for smooth and nonconvex optimization problems [71] and is often assumed to be bounded by a positive constant ∆ > 0. Note that x0 is a prespecified point (i.e., an initial point for an algorithm) and we simply fix it for the remainder of this paper. We now define the function class which will be considered in this paper.
Definition 2.1 Suppose that ∆ > 0 and L > 0 are both independent of the problem dimension d ≥ 1. Then, we let Fd(∆, L) denote the set of L-Lipschitz functions f : Rd → R with bounded function value gap f(x0) − inf_{x∈Rd} f(x) ≤ ∆.
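For concreteness, here is a small numerical sketch of a member of Fd(∆, L); the function is our illustrative choice, not an example from the paper, and we reuse it in the code sketches below.

```python
import numpy as np

# A simple member of F_d(Delta, L): f(x) = | ||x||_1 - 1 | is nonsmooth,
# nonconvex, and bounded below by 0. It is sqrt(d)-Lipschitz in the l2-norm
# because | ||x||_1 - ||y||_1 | <= ||x - y||_1 <= sqrt(d) * ||x - y||_2.
def f(x):
    return abs(np.abs(x).sum() - 1.0)

d = 10
x0 = np.zeros(d)
Delta = f(x0)          # f >= 0, so f(x0) - inf f <= f(x0) = 1
L = np.sqrt(d)
```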
The function class Fd(∆, L) includes Lipschitz functions on Rd and is thus different from the nonconvex function class considered in the literature [44, 72]. First, we do not impose a smoothness condition on the function f ∈ Fd(∆, L), in contrast to the nonconvex functions studied in Ghadimi and Lan [44] which are assumed to have Lipschitz gradients. Second, Nesterov and Spokoiny [72, Section 7] presented a complexity bound for a randomized optimization method for minimizing a nonsmooth nonconvex function. However, they did not clarify why the norm of the gradient of the approximate function fµ̄ of the order δ (we use their notation) serves as a reasonable optimality criterion in nonsmooth nonconvex optimization. They also assume an exact function value oracle, ruling out many interesting application problems in simulation optimization and machine learning.
In contrast, our goal is to propose fast gradient-free methods for nonsmooth nonconvex optimization in the absence of an exact function value oracle. In general, the complexity bound of gradient-free methods will depend on the problem dimension d ≥ 1 even when we assume that the function to be optimized is convex and smooth [37, 75]. As such, we should consider a function class with a given dimension d ≥ 1. In particular, we consider an optimality criterion based on the celebrated Goldstein subdifferential [46] and prove that the number of function value oracles required by our deterministic and stochastic gradient-free methods to find a (δ, ε)-Goldstein stationary point of f ∈ Fd(∆, L) is O(poly(d, L, ∆, 1/ε, 1/δ)) when δ, ε ∈ (0, 1) are constants (see the definition of Goldstein stationarity in the next subsection).
It is worth mentioning that Fd(∆, L) contains a rather broad class of functions used in real-world application problems. Typical examples with additional regularity properties include Hadamard semidifferentiable functions [76, 32, 85], Whitney-stratifiable functions [13, 30], o-minimally definable functions [27] and a class of semi-algebraic functions [5, 30]. Thus, our gradient-free methods can be applied for solving these problems with finite-time convergence guarantees.
2.2 Generalized gradients and Goldstein subdifferential
We start with the definition of generalized gradients [26] for nondifferentiable functions. This is perhaps the most standard extension of gradients to nonsmooth and nonconvex functions.
Definition 2.2 Given a point x ∈ Rd and a direction v ∈ Rd, the generalized directional derivative of a nondifferentiable function f is given by Df(x; v) := lim sup_{y→x, t↓0} (f(y + tv) − f(y))/t. Then, the generalized gradient of f is defined as the set ∂f(x) := {g ∈ Rd : g⊤v ≤ Df(x; v), ∀v ∈ Rd}.
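For intuition, a standard one-dimensional example (our computation, not taken from the paper): for f(x) = |x| on R, Definition 2.2 gives

```latex
\partial f(x) =
\begin{cases}
\{+1\}, & x > 0, \\
[-1,\, 1], & x = 0, \\
\{-1\}, & x < 0,
\end{cases}
```

so 0 ∈ ∂f(0), and the kink at the minimizer is captured by a set rather than a single gradient.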
Rademacher’s theorem guarantees that any Lipschitz function is almost everywhere differentiable. This implies that the generalized gradients of Lipschitz functions have additional properties and we can define them in a relatively simple way. The following proposition summarizes these results; we refer to Clarke [26] for the proof details.
Proposition 2.1 Suppose that f is L-Lipschitz for some L > 0. Then ∂f(x) is a nonempty, convex and compact set and ‖g‖ ≤ L for all g ∈ ∂f(x). Further, ∂f(·) is an upper-semicontinuous set-valued map. Moreover, a generalization of the mean-value theorem holds: for any x1, x2 ∈ Rd, there exist λ ∈ (0, 1) and g ∈ ∂f(λx1 + (1 − λ)x2) such that f(x1) − f(x2) = g⊤(x1 − x2). Finally, there is a simple way to represent the generalized gradient ∂f(x):
∂f(x) := conv{g ∈ Rd : g = lim_{xk→x} ∇f(xk)},
which is the convex hull of all limit points of ∇f(xk) over all sequences x1, x2, . . . of differentiable points of f(·) which converge to x.
Given this definition of generalized gradients, a Clarke stationary point of f is a point x satisfying 0 ∈ ∂f(x). Then, it is natural to ask if an optimization algorithm can reach an ε-stationary point with a finite-time convergence guarantee. Here a point x ∈ Rd is an ε-Clarke stationary point if
min{‖g‖ : g ∈ ∂f(x)} ≤ ε.
This question has been addressed by [85, Theorem 1], who showed that finding an ε-Clarke stationary point in nonsmooth nonconvex optimization cannot be achieved by any finite-time algorithm given a fixed tolerance ε ∈ [0, 1). One possible response is to consider a relaxation called a near ε-Clarke stationary point, i.e., a point which is δ-close to an ε-stationary point for some δ > 0. A point x ∈ Rd is near ε-stationary if the following statement holds true:
min{‖g‖ : g ∈ ∪_{y∈Bδ(x)} ∂f(y)} ≤ ε.
Unfortunately, [58, Theorem 1] demonstrated that it is impossible to obtain worst-case guarantees for finding a near ε-Clarke stationary point of f ∈ Fd(∆, L) when ε, δ > 0 are smaller than certain constants, unless the number of oracle calls has an exponential dependence on the problem dimension d ≥ 1. These negative results suggest a need for rethinking the definition of targeted stationary points. We propose to consider the refined notion of the Goldstein subdifferential.
Definition 2.3 Given a point x ∈ Rd and δ > 0, the δ-Goldstein subdifferential of a Lipschitz function f at x is given by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)).
The Goldstein subdifferential of f at x is the convex hull of the union of all generalized gradients at points in a δ-ball around x. Accordingly, we can define (δ, ε)-Goldstein stationary points; that is, a point x ∈ Rd is a (δ, ε)-Goldstein stationary point if the following statement holds:
min{‖g‖ : g ∈ ∂δf(x)} ≤ ε.
It is worth mentioning that (δ, ε)-Goldstein stationarity is a weaker notion than (near) ε-Clarke stationarity, since any (near) ε-stationary point is a (δ, ε)-Goldstein stationary point but not vice versa. However, the converse holds true under a smoothness condition [85, Proposition 6] and limδ↓0 ∂δf(x) = ∂f(x) holds as shown in Zhang et al. [85, Lemma 7]. The latter result also enables an intuitive framework for transforming nonasymptotic analysis of convergence to (δ, ε)-Goldstein stationary points to classical asymptotic results for finding ε-Clarke stationary points. Thus, we conclude that finding a (δ, ε)-Goldstein stationary point is a reasonable optimality condition for general nonsmooth nonconvex optimization.
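Continuing the one-dimensional example f(x) = |x| (our computation), Definition 2.3 gives

```latex
\partial_{\delta} f(x) =
\begin{cases}
\{+1\}, & x > \delta, \\
[-1,\, 1], & |x| \le \delta, \\
\{-1\}, & x < -\delta,
\end{cases}
```

so every x with |x| ≤ δ is a (δ, 0)-Goldstein stationary point, while the only ε-Clarke stationary point for ε < 1 is x = 0; this illustrates how the relaxation enlarges the target set in a controlled, δ-dependent way.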
Remark 2.2 Finding a (δ, ε)-Goldstein stationary point in nonsmooth nonconvex optimization has been formally shown to be computationally tractable in an oracle model [85, 31, 80]. Goldstein [46] discovered that one can decrease the function value of a Lipschitz f by using the minimal-norm element of ∂δf(x), and this leads to a deterministic normalized subgradient method which finds a (δ, ε)-Goldstein stationary point within O(∆/(δε)) iterations. However, Goldstein's algorithm is only conceptual, since it is computationally intractable to return an exact minimal-norm element of ∂δf(x). Recently, randomized variants of Goldstein's algorithm have been proposed with a convergence guarantee of O(∆L^2/(δε^3)) [85, 31, 80]. However, it remains unknown whether gradient-free methods can find a (δ, ε)-Goldstein stationary point of a Lipschitz function f within O(poly(d, L, ∆, 1/ε, 1/δ)) iterations in the absence of an exact function value oracle. Note that the dependence on the problem dimension d ≥ 1 is necessary for gradient-free methods, as mentioned before.
2.3 Randomized smoothing
The randomized smoothing approaches are simple and work equally well for convex and nonconvex functions. Formally, given the L-Lipschitz function f (possibly nonsmooth nonconvex) and a distribution P, we define fδ(x) = Eu∼P[f(x + δu)]. In particular, letting P be a standard Gaussian distribution, the function fδ is a δL√d-approximation of f(·) and the gradient ∇fδ is (L√d/δ)-Lipschitz, where d ≥ 1 is the problem dimension; see Nesterov and Spokoiny [72, Theorem 1 and Lemma 2]. Letting P be a uniform distribution on a unit ball in ℓ2-norm, the resulting function fδ is a δL-approximation of f(·) and ∇fδ is (cL√d/δ)-Lipschitz, where d ≥ 1 is the problem dimension; see Yousefian et al. [84, Lemma 8] and Duchi et al. [36, Lemma E.2], rephrased as follows.
Proposition 2.3 Let fδ(x) = Eu∼P[f(x + δu)], where P is a uniform distribution on a unit ball in ℓ2-norm. Assuming that f is L-Lipschitz, we have (i) |fδ(x) − f(x)| ≤ δL, and (ii) fδ is differentiable and L-Lipschitz with a (cL√d/δ)-Lipschitz gradient, where c > 0 is a constant. In addition, there exists a function f for which each of the above bounds is tight simultaneously.
The randomized smoothing approaches form the basis for developing gradient-free methods [39, 1, 2, 44, 72]. Given access to function values of f, we can compute an unbiased estimate of the gradient of fδ and plug it into stochastic gradient-based methods. Note that the Lipschitz constant of ∇fδ depends on the problem dimension d ≥ 1 with at least a factor of √d for many randomized smoothing approaches [58, Theorem 2]. This is consistent with the lower bounds for all gradient-free methods in convex and strongly convex optimization [37, 75].
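The following sketch (ours; the helper names are hypothetical) approximates fδ by Monte Carlo and implements the two-point estimator of ∇fδ used throughout Section 3, for the test function f defined earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(d):
    """Uniform sample from the unit l2-ball in R^d."""
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return u * rng.uniform() ** (1.0 / d)

def sample_sphere(d):
    """Uniform sample from the unit sphere in R^d."""
    w = rng.standard_normal(d)
    return w / np.linalg.norm(w)

def f_delta(f, x, delta, n=10000):
    """Monte Carlo estimate of f_delta(x) = E_{u ~ ball}[f(x + delta * u)]."""
    return np.mean([f(x + delta * sample_ball(x.size)) for _ in range(n)])

def two_point_grad(f, x, delta):
    """Unbiased estimate of grad f_delta(x) from two function values
    (cf. Shamir [75, Lemma 10])."""
    d = x.size
    w = sample_sphere(d)
    return (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
```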
3 Main Results
We establish a relationship between the Goldstein subdifferential and the uniform smoothing approach. We propose a gradient-free method (GFM), its stochastic variant (SGFM), and two-phase versions of GFM and SGFM. We analyze these algorithms using the Goldstein subdifferential; we provide global rate and large-deviation estimates in terms of (δ, ε)-Goldstein stationarity.
3.1 Linking Goldstein subdifferential to uniform smoothing
Recall that ∂δf and fδ are defined by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)) and fδ(x) = Eu∼P[f(x + δu)]. It is clear that f is almost everywhere differentiable since f is L-Lipschitz. This implies that ∇fδ(x) = Eu∼P[∇f(x + δu)] and demonstrates that ∇fδ(x) can be viewed intuitively as a convex combination of ∇f(z) over an infinite number of points z ∈ Bδ(x). As such, it is reasonable to conjecture that ∇fδ(x) ∈ ∂δf(x) for any x ∈ Rd. However, the above argument is not a rigorous proof; indeed, we need to justify why ∇fδ(x) = Eu∼P[∇f(x + δu)] if f is almost everywhere differentiable and generalize the idea of a convex combination to include infinite sums. To resolve these issues, we exploit a toolbox due to Rockafellar and Wets [74].
In the following theorem, we summarize our result and refer to Appendix C for the proof details.
Theorem 3.1 Suppose that f is L-Lipschitz and let fδ(x) = Eu∼P[f(x + δu)], where P is a uniform distribution on a unit ball in ℓ2-norm, and let ∂δf be the δ-Goldstein subdifferential of f (cf. Definition 2.3). Then, we have ∇fδ(x) ∈ ∂δf(x) for any x ∈ Rd.
Theorem 3.1 resolves an important question and forms the basis for analyzing our gradient-free methods. Notably, our analysis can be extended to justify other randomized smoothing approaches in nonsmooth nonconvex optimization. For example, Nesterov and Spokoiny [72] used Gaussian smoothing and estimated the number of iterations required by their methods to output x̂ ∈ Rd satisfying ‖∇fδ(x̂)‖ ≤ ε. By modifying the proof of Theorem 3.1 and Zhang et al. [85, Lemma 7], we can prove that ∇fδ belongs to a Goldstein subdifferential with Gaussian weights and that this subdifferential converges to the Clarke subdifferential as δ → 0. Compared to uniform smoothing and the original Goldstein subdifferential, the proof for Gaussian smoothing is quite long and technical [72, Page 554], and adding Gaussian weights seems unnatural in general.
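As a sanity check of Theorem 3.1 in one dimension (our computation), take f(x) = |x| and P uniform on [−δ, δ]:

```latex
f_{\delta}(x) = \frac{1}{2\delta}\int_{-\delta}^{\delta} |x + u| \, du =
\begin{cases}
\dfrac{x^2 + \delta^2}{2\delta}, & |x| \le \delta, \\[4pt]
|x|, & |x| > \delta,
\end{cases}
\qquad
\nabla f_{\delta}(x) =
\begin{cases}
x/\delta, & |x| \le \delta, \\
\mathrm{sign}(x), & |x| > \delta,
\end{cases}
```

and in both cases ∇fδ(x) indeed lies in the set ∂δf(x) computed in the earlier example.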
Algorithm 1 Gradient-Free Method (GFM)
1: Input: initial point x0 ∈ Rd, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and iteration number T ≥ 1.
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Sample wt ∈ Rd uniformly from a unit sphere in Rd.
4:   Compute gt = (d/2δ)(f(xt + δwt) − f(xt − δwt))wt.
5:   Compute xt+1 = xt − ηgt.
6: Output: xR, where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.
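A minimal runnable sketch of Algorithm 1 (our translation of the pseudocode; f is the test function from the example in Section 2.1, and the parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def gfm(f, x0, eta, delta, T):
    """Gradient-Free Method (Algorithm 1): two-point estimates of
    grad f_delta drive plain gradient descent; a uniformly random
    iterate x^R, R in {0, ..., T-1}, is returned."""
    d = x0.size
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(T):
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)            # uniform on the unit sphere
        g = (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
        x = x - eta * g
        iterates.append(x.copy())
    return iterates[rng.integers(T)]      # x^R with R ~ Unif{0, ..., T-1}

# Example run (hypothetical parameter choices):
# x_out = gfm(f, x0=np.ones(10), eta=1e-3, delta=0.1, T=50_000)
```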
Algorithm 2 Two-Phase Gradient-Free Method (2-GFM)
1: Input: initial point x0 ∈ Rd, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 1 with x0, η, d, δ and T and let x̄s be an output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Sample wk ∈ Rd uniformly from a unit sphere in Rd.
7:     Compute gks = (d/2δ)(f(x̄s + δwk) − f(x̄s − δwk))wk.
8:   Compute gs = (1/B) ∑_{k=0}^{B−1} gks.
9: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖gs‖.
10: Output: x̄s⋆.
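And a sketch of Algorithm 2 on top of the gfm routine above (again ours; the post-optimization phase picks the run with the smallest averaged gradient estimate):

```python
def two_phase_gfm(f, x0, eta, delta, T, S, B):
    """Two-Phase GFM (Algorithm 2); reuses gfm and rng from the sketch above."""
    d = x0.size
    candidates = [gfm(f, x0, eta, delta, T) for _ in range(S)]
    best_x, best_norm = None, np.inf
    for x_bar in candidates:
        g_sum = np.zeros(d)
        for _ in range(B):                # average B fresh two-point estimates
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            g_sum += (d / (2.0 * delta)) * (f(x_bar + delta * w) - f(x_bar - delta * w)) * w
        g_norm = np.linalg.norm(g_sum / B)
        if g_norm < best_norm:
            best_x, best_norm = x_bar, g_norm
    return best_x
```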
3.2 Gradient-free methods
We analyze a gradient-free method (GFM) and its two-phase version (2-GFM) for optimizing a Lipschitz function f . Due to space limitations, we defer the proof details to Appendix D.
Global rate estimation. Let f : Rd → R be an L-Lipschitz function; the smooth version of f is then the function fδ = Eu∼P[f(x + δu)], where P is a uniform distribution on a unit ball in ℓ2-norm. Equipped with Lemma 10 from Shamir [75], we can compute an unbiased estimator for the gradient ∇fδ(xt) using function values. This leads to the gradient-free method (GFM) in Algorithm 1, which simply performs a one-step gradient descent update to obtain xt+1. It is worth mentioning that we use a random iteration count R to terminate the execution of Algorithm 1, and this guarantees that GFM is valid. Indeed, we only derive that min_{t=1,2,...,T} ‖∇fδ(xt)‖ ≤ ε in the theoretical analysis (see also Nesterov and Spokoiny [72, Section 7]), and finding the best solution from {x1, x2, . . . , xT} is difficult since the quantities ‖∇fδ(xt)‖ are unknown. Estimating them using Monte Carlo simulation would incur additional approximation errors and raise reliability issues. The idea of a random output is not new but has been used by Ghadimi and Lan [44] for smooth and nonconvex stochastic optimization. Such a scheme also gives us a computational gain of a factor of two in expectation.
Theorem 3.2 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 1 with η = (1/10)√(δ(∆ + δL)/(c d^{3/2} L^3 T)) satisfies E[min{‖g‖ : g ∈ ∂δf(xR)}] ≤ ε, and the total number of calls of the function value oracle is bounded by
O(d^{3/2}(L^4/ε^4 + ∆L^3/(δε^4))),
where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f, and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈Rd} f(x) > 0.
Remark 3.3 Theorem 3.2 illustrates the difference between gradient-based and gradient-free methods in nonsmooth nonconvex optimization. Indeed, Davis et al. [31] have recently proved a rate of Õ(δ^{-1}ε^{-3}) for a randomized gradient-based method in terms of (δ, ε)-Goldstein stationarity. Further, Theorem 3.2 demonstrates that nonsmooth nonconvex optimization is likely to be intrinsically harder than all other standard settings. More specifically, the state-of-the-art rate for gradient-free methods is O(dε^{-2}) for nonsmooth convex optimization in terms of the objective function value gap [37] and for smooth nonconvex optimization in terms of the gradient norm [72]. Thus, the dependence on d ≥ 1 is linear in their bounds, yet d^{3/2} in our bound. We believe it is promising to either improve the rate of gradient-free methods or show the impossibility of doing so by establishing a lower bound.
Large-deviation estimation. While Theorem 3.2 establishes the expected convergence rate over many runs of Algorithm 1, we are also interested in the large-deviation properties for a single run. Indeed, we hope to establish a complexity bound for computing a (δ, ε, Λ)-solution; that is, a point x ∈ Rd satisfying Prob(min{‖g‖ : g ∈ ∂δf(x)} ≤ ε) ≥ 1 − Λ for some δ > 0 and 0 < ε, Λ < 1. By Theorem 3.2 and Markov's inequality,
Prob(min{‖g‖ : g ∈ ∂δf(xR)} ≥ λ·E[min{‖g‖ : g ∈ ∂δf(xR)}]) ≤ 1/λ, for all λ > 0,
so running Algorithm 1 with target accuracy Λε makes the failure probability at level ε at most Λ; we conclude that the total number of calls of the function value oracle is bounded by
O(d^{3/2}(L^4/(Λ^4 ε^4) + ∆L^3/(δΛ^4 ε^4))).    (3.1)
This complexity bound is rather pessimistic in terms of its dependence on Λ, which is often set to be small in practice. To improve the bound, we combine Algorithm 1 with a post-optimization procedure [44], leading to a two-phase gradient-free method (2-GFM), shown in Algorithm 2.
Theorem 3.4 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 2 with η = (1/10)√(δ(∆ + δL)/(c d^{3/2} L^3 T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄s⋆)} ≥ ε) ≤ Λ, and the total number of calls of the function value oracle is bounded by
O(d^{3/2}(L^4/ε^4 + ∆L^3/(δε^4)) log_2(1/Λ) + (dL^2/(Λε^2)) log_2(1/Λ)),
where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f, and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈Rd} f(x) > 0.
Clearly, the bound in Theorem 3.4 is significantly smaller than the corresponding one in Eq. (3.1) in terms of the dependence on 1/Λ, demonstrating the power of the post-optimization phase.
3.3 Stochastic gradient-free methods
We turn to the analysis of a stochastic gradient-free method (SGFM) and its two-phase version (2-SGFM) for optimizing a Lipschitz function f(·) = Eξ∼Pµ[F(·, ξ)].
Global rate estimation. In contrast to minimizing a deterministic function f, we only have access to the noisy function value F(x, ξ) at any point x ∈ Rd, where a data sample ξ is drawn from a distribution Pµ. Intuitively, this is a more challenging setup. It has been studied before in the setting of optimizing a nonsmooth convex function [37, 72] or a smooth nonconvex function [44]. As in these papers, we assume that (i) F(·, ξ) is L(ξ)-Lipschitz with Eξ∼Pµ[L^2(ξ)] ≤ G^2 for some G > 0, and (ii) E[F(x, ξt)] = f(x) for all x ∈ Rd, where ξt is simulated from Pµ at the t-th iteration. Despite the noisy function value, we can compute an unbiased estimator of the gradient ∇fδ(xt), where fδ = Eu∼P[f(x + δu)] = Eu∼P,ξ∼Pµ[F(x + δu, ξ)]. In particular, we have ĝt = (d/2δ)(F(xt + δwt, ξt) − F(xt − δwt, ξt))wt. Clearly, under our assumption, we have
Eu∼P,ξ∼Pµ[ĝt] = Eu∼P[Eξ∼Pµ[ĝt | u]] = Eu∼P[gt] = ∇fδ(xt),
where gt is defined in Algorithm 1. However, the variance of the estimator ĝt can be undesirably large, since F(·, ξ) is L(ξ)-Lipschitz for a (possibly unbounded) random variable L(ξ) > 0. To resolve this issue, we revisit Shamir [75, Lemma 10] and show that, in deriving an upper bound for Eu∼P,ξ∼Pµ[‖ĝt‖^2], it suffices to assume that Eξ∼Pµ[L^2(ξ)] ≤ G^2 for some constant G > 0. The resulting bound achieves a linear dependence on the problem dimension d > 0, which is the same as in Shamir [75, Lemma 10]. Note that the setup with convex and L(ξ)-Lipschitz functions F(·, ξ) has been considered in Duchi et al. [37]. However, our estimator is different from their estimator ĝt = (d/δ)(F(xt + δwt, ξt) − F(xt, ξt))wt, which essentially suffers from a quadratic dependence on d > 0. It is also necessary to employ a random iteration count R to terminate Algorithm 3.
Algorithm 3 Stochastic Gradient-Free Method (SGFM)
1: Input: initial point x0 ∈ Rd, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and iteration number T ≥ 1.
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Simulate ξt from the distribution Pµ.
4:   Sample wt ∈ Rd uniformly from a unit sphere in Rd.
5:   Compute ĝt = (d/2δ)(F(xt + δwt, ξt) − F(xt − δwt, ξt))wt.
6:   Compute xt+1 = xt − ηĝt.
7: Output: xR, where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.
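A sketch of Algorithm 3 (ours; it reuses rng from the GFM sketch): the only changes relative to gfm are the per-iteration sample ξt and the noisy oracle F, and both function evaluations in the two-point estimator reuse the same ξt.

```python
def sgfm(F, sample_xi, x0, eta, delta, T):
    """Stochastic GFM (Algorithm 3). F(x, xi) is a noisy oracle with
    E[F(x, xi)] = f(x); sample_xi() draws xi ~ P_mu."""
    d = x0.size
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(T):
        xi = sample_xi()
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        g_hat = (d / (2.0 * delta)) * (F(x + delta * w, xi) - F(x - delta * w, xi)) * w
        x = x - eta * g_hat
        iterates.append(x.copy())
    return iterates[rng.integers(T)]

# Example noisy oracle (hypothetical): additive zero-mean noise on f.
# F = lambda x, xi: f(x) + xi
# sample_xi = lambda: rng.normal(0.0, 0.1)
```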
Algorithm 4 Two-Phase Stochastic Gradient-Free Method (2-SGFM)
1: Input: initial point x0 ∈ Rd, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 3 with x0, η, d, δ and T and let x̄s be an output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Simulate ξk from the distribution Pµ.
7:     Sample wk ∈ Rd uniformly from a unit sphere in Rd.
8:     Compute ĝks = (d/2δ)(F(x̄s + δwk, ξk) − F(x̄s − δwk, ξk))wk.
9:   Compute ĝs = (1/B) ∑_{k=0}^{B−1} ĝks.
10: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖ĝs‖.
11: Output: x̄s⋆.
Theorem 3.5 Suppose that F(·, ξ) is L(ξ)-Lipschitz with Eξ∼Pµ[L^2(ξ)] ≤ G^2 for some G > 0, and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 3 with η = (1/10)√(δ(∆ + δG)/(c d^{3/2} G^3 T)) satisfies E[min{‖g‖ : g ∈ ∂δf(xR)}] ≤ ε, and the total number of calls of the noisy function value oracle is bounded by
O(d^{3/2}(G^4/ε^4 + ∆G^3/(δε^4))),
where d ≥ 1 is the problem dimension, G > 0 bounds the Lipschitz parameter of F(·, ξ) in expectation, and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈Rd} f(x) > 0.
In the stochastic setting, a randomized gradient-based method achieves the rate of O(δ^{-1}ε^{-4}) in terms of (δ, ε)-Goldstein stationarity [31]. As such, our bound in Theorem 3.5 is tight up to the problem dimension d ≥ 1. Further, the state-of-the-art rate for stochastic gradient-free methods is O(dε^{-2}) for nonsmooth convex optimization in terms of the objective function value gap [37] and O(dε^{-4}) for smooth nonconvex optimization in terms of the gradient norm [44]. Thus, Theorem 3.5 demonstrates that nonsmooth nonconvex stochastic optimization is essentially the most difficult among all these standard settings.
Large-deviation estimation. As in the case of GFM, we hope to establish a complexity bound of SGFM for computing a (δ, ε, Λ)-solution. By Theorem 3.5 and Markov's inequality, we obtain that the total number of calls of the noisy function value oracle is bounded by
O(d^{3/2}(G^4/(Λ^4 ε^4) + ∆G^3/(δΛ^4 ε^4))).    (3.2)
We also propose a two-phase stochastic gradient-free method (2-SGFM) in Algorithm 4 by combining Algorithm 3 with a post-optimization procedure.
Theorem 3.6 Suppose that F(·, ξ) is L(ξ)-Lipschitz with Eξ∼Pµ[L^2(ξ)] ≤ G^2 for some G > 0, and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 4 with η = (1/10)√(δ(∆ + δG)/(c d^{3/2} G^3 T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄s⋆)} ≥ ε) ≤ Λ, and the total number of calls of the noisy function value oracle is bounded by
O(d^{3/2}(G^4/ε^4 + ∆G^3/(δε^4)) log_2(1/Λ) + (dG^2/(Λε^2)) log_2(1/Λ)),
where d ≥ 1 is the problem dimension, G > 0 bounds the Lipschitz parameter of F(·, ξ) in expectation, and ∆ > 0 is an upper bound for the initial objective function gap f(x0) − inf_{x∈Rd} f(x) > 0.
Further discussions. We remark that the choice of stepsize η in all of our zeroth-order methods depends on ∆, whereas such dependence is not necessary in the first-order setting; see, e.g., Zhang et al. [85]. Setting the stepsize without any prior knowledge of ∆, our methods can still achieve finite-time convergence guarantees, but the order of the bound would become worse. This is possibly because first-order information gives more characterization of the objective function than zeroth-order information, so that for first-order methods the stepsize can be independent of more problem parameters without sacrificing the bound. On the positive side, it suffices for our zeroth-order methods to know an estimate of an upper bound of Θ(∆), which is available in certain application problems.
Moreover, we highlight that δ > 0 is a desired tolerance in our setting. In fact, (δ, ε)-Goldstein stationarity (see Definition 2.3) relaxes ε-Clarke stationarity, and our methods pursue a (δ, ε)-stationary point since finding an ε-Clarke stationary point is intractable. This is different from smooth optimization, where ε-Clarke stationarity reduces to ‖∇f(x)‖ ≤ ε and becomes tractable. In this context, the existing zeroth-order methods are designed to pursue an ε-stationary point. Notably, a (δ, ε)-Goldstein stationary point is provably an ε-stationary point in smooth optimization if we choose δ depending on d and ε.
4 Experiment
We conduct numerical experiments to validate the effectiveness of our proposed methods. In particular, we evaluate the performance of the two-phase version of SGFM (Algorithm 4) on the task of image classification using convolutional neural networks (CNNs) with ReLU activations. The dataset we use is the MNIST dataset¹ [60] and the CNN architecture we use is as follows: (i) we set two convolution layers and two fully connected layers, where dropout layers [77] are used before each fully connected layer, and (ii) the two convolution layers and the first fully connected layer are associated with ReLU activations. It is worth mentioning that our setup follows the default one², and a similar setup was also considered in Zhang et al. [85] for evaluating gradient-based methods (see the setups and results for the CIFAR10 dataset in Appendix F).
The baseline approaches include three gradient-based methods: stochastic gradient descent (SGD), ADAGRAD [34] and ADAM [55]. We compare these methods with 2-SGFM (cf. Algorithm 4) and set the learning rate η as 0.001. All the experiments are implemented using PyTorch [73] on a workstation with a 2.6 GHz Intel Core i7 and 16GB memory.
¹ http://yann.lecun.com/exdb/mnist
² https://github.com/pytorch/examples/tree/main/mnist
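To make the experimental pipeline concrete, here is a minimal sketch (ours, not the authors' released code) of how one SGFM update of Algorithm 3 can drive CNN training in PyTorch; the minibatch plays the role of ξt, and the model and loss names are placeholders.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

@torch.no_grad()
def sgfm_step(model, loss_fn, inputs, targets, eta=1e-3, delta=0.1):
    """One zeroth-order update on the flattened parameters of `model`."""
    theta = parameters_to_vector(model.parameters()).clone()
    d = theta.numel()
    w = torch.randn(d, device=theta.device)
    w /= w.norm()                          # uniform on the unit sphere

    def F(vec):                            # noisy oracle: minibatch loss at vec
        vector_to_parameters(vec, model.parameters())
        return loss_fn(model(inputs), targets).item()

    g_hat = (d / (2.0 * delta)) * (F(theta + delta * w) - F(theta - delta * w)) * w
    vector_to_parameters(theta - eta * g_hat, model.parameters())
```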
Figure 1 summarizes the numerical results on the performance of SGD, ADAGRAD, ADAM, INDG [85], and our method 2-SGFM with δ = 0.1 and B = 200. Notably, 2-SGFM is comparable to the gradient-based methods in terms of training/test accuracy/loss even though it only uses function values. This demonstrates the potential value of our methods, since gradient-based methods are not applicable in many real-world application problems, as mentioned before. Figures 2a and 2b present the effect of the batch size B ≥ 1 in 2-SGFM; indeed, a larger value of B leads to better performance, which accords with Theorem 3.6. We also compare the performance of SGD and 2-SGFM with different choices of η. From Figures 2c and 2d, we see that SGD and 2-SGFM achieve similar performance in the early stage and converge to solutions of similar quality.
Figure 3 summarizes the experimental results on the effect of the batch size B for 2-SGFM; the evaluation metrics here are training loss and test loss. It is clear that a larger value of B leads to better performance, consistent with the results presented in the main text. Figure 4 summarizes the experimental results on the effect of learning rates for 2-SGFM. It is interesting to see that 2-SGFM can indeed benefit from a more aggressive choice of stepsize η > 0 in practice, and the choice of η = 0.0001 seems to be too conservative.
5 Conclusion
We proposed and analyzed a class of deterministic and stochastic gradient-free methods for optimizing a Lipschitz function. Based on the relationship between the Goldstein subdifferential and uniform smoothing that we have established, the proposed GFM and SGFM are proved to return a (δ, ε)-Goldstein stationary point at an expected rate of O(d^{3/2}δ^{-1}ε^{-4}). We also obtain a large-deviation guarantee and improve it by combining GFM and SGFM with a two-phase scheme. Experiments on training neural networks with the MNIST and CIFAR10 datasets demonstrate the effectiveness of our methods. Future directions include the theory for non-Lipschitz and nonconvex optimization [11] and applications of our methods to deep residual networks (ResNet) [47] and densely connected convolutional networks (DenseNet) [50].
Acknowledgements
We would like to thank the area chair and three anonymous referees for constructive suggestions that improve the paper. This work is supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764 and by the Vannevar Bush Faculty Fellowship program under grant number N00014-21-1-2941.
Summary Of The Paper
This paper establishes a novel optimality criterion for nonsmooth nonconvex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and proposes a gradient-free algorithm together with its stochastic version by relating the Goldstein subdifferential to uniform smoothing. A convergence analysis of the proposed methods is given, which guarantees convergence to a (δ,ε)-Goldstein stationary point with high probability for both the deterministic and stochastic versions. State-of-the-art upper bounds on the total number of oracle calls are given in terms of δ, ε, and Λ for the proposed methods. Numerical experiments show the effectiveness of the proposed methods.
Strengths And Weaknesses
The theoretical result in this paper is interesting. The authors establish a novel optimality criterion for nonsmooth nonconvex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and propose a gradient-free algorithm with its stochastic version by relating the Goldstein subdifferential to uniform smoothing.
Questions
Though the paper is theoretically sound, there are still some questions that need to be discussed:
The authors propose a class of subdifferential-based gradient-free algorithms. What is the advantage of the proposed algorithms compared to gradient-based methods and existing zeroth-order methods?
The authors compare the performance of the proposed algorithms under different choices only on MNIST, which is a small-scale and simple dataset. Why not use larger datasets to verify the performance of the proposed algorithm? In addition, the comparison baselines are too few; the method could be compared with more advanced zeroth-order and gradient-free optimization algorithms such as INGD [R1]. [R1] J. Zhang, H. Lin, S. Jegelka, S. Sra, and A. Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. In ICML, pages 11173-11182. PMLR, 2020.
It seems that the authors train a quite simple convolutional neural network model on the image classification task rather than more modern and efficient models such as [R2] and [R3], which is insufficient for validating the effectiveness of the proposed methods. [R2] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. [R3] G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Grammar mistakes: on line 342, in "We have proposed and analyzed a class of…", "have proposed and analyzed" should be replaced by "propose and analyze".
Limitations
The authors propose a class of subdifferential-based gradient-free algorithms. What is the advantage of the proposed algorithms compared to gradient-based methods and existing zeroth-order methods? The authors compare the performance of the proposed algorithms under different choices only on MNIST, which is a small-scale and simple dataset. Why not use larger datasets to verify the performance of the proposed algorithm?
----------after feedback---------------
The authors address my comments well, so I increased my score. |
1 Introduction
Many of the recent real-world success stories of machine learning have involved nonconvex optimization formulations, with the design of models and algorithms often being heuristic and intuitive. Thus a gap has arisen between theory and practice. Attempts have been made to fill this gap for different learning methodologies, including the training of multi-layer neural networks [25], orthogonal tensor decomposition [41], M-estimators [63, 64], synchronization and MaxCut [6, 66], smooth semidefinite programming [15], matrix sensing and completion [10, 42], robust principal component analysis (RPCA) [43] and phase retrieval [82, 79, 64]. For an overview of nonconvex optimization formulations and the relevant ML applications, we refer to a recent survey [51].
It is generally intractable to compute an approximate global minimum [69] or to verify whether a point is a local minimum or a high-order saddle point [67]. Fortunately, the notion of approximate stationary point gives a reasonable optimality criterion when the objective function f is smooth; the goal here is to find a point x ∈ Rd such that ‖∇f(x)‖ ≤ . Recent years have seen rapid algorithmic development through the lens of nonasymptotic convergence rates to -stationary points [70, 44, 45, 20, 21, 53]. Another line of work establishes algorithm-independent lower bounds [22, 23, 3, 4].
Relative to its smooth counterpart, the investigation of nonsmooth optimization is relatively scarce, particularly in the nonconvex setting, both in terms of efficient algorithms and finite-time convergence guarantees. Yet, over several decades, nonsmooth nonconvex optimization formulations have found applications in many fields. A typical example is the training multi-layer neural networks with ReLU neurons, for which the piecewise linear activation functions induce nonsmoothness. Another example arises in controlling financial risk for asset portfolios or optimizing customer satisfaction in service systems or supply chain systems. Here, the nonsmoothness arises from the payoffs of financial
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
derivatives and supply chain costs, e.g., options payoffs [38] and supply chain overage/underage costs [78]. These applications make significant demands with respect to computational feasibility, and the design of efficient algorithms for solving nonsmooth nonconvex optimization problems has moved to the fore [65, 30, 28, 85, 12, 31, 80].
The key challenges lie in two aspects: (i) the lack of a computationally tractable optimality criterion, and (ii) the lack of computationally powerful oracles. More specifically, in the classical setting where the function f is Lipschitz, we can define -stationary points based on the celebrated notion of Clarke stationarity [26]. However, the value of such a criterion has been called into question by Zhang et al. [85], who show that no finite-time algorithm guarantees -stationarity when is less than a constant. Further, the computation of the gradient is impossible for many application problems and we only have access to a noisy function value at each point. This is a common issue in the context of simulation optimization [68, 48]; indeed, the objective function value is often achieved as the output of a black-box or complex simulator, for which the simulator does not have the infrastructure needed to effectively evaluate gradients; see also Ghadimi and Lan [44] and Nesterov and Spokoiny [72] for comments on the lack of gradient evaluation in practice.
Contribution. In this paper, we propose and analyze a class of deterministic and stochastic gradientfree methods for nonsmooth nonconvex optimization problems in which we only assume that the function f is Lipschitz. Our contributions can be summarized as follows.
1. We establish a relationship between the Goldstein subdifferential and uniform smoothing via appeal to the hyperplane separation theorem. This result provides the basis for algorithmic design and finite-time convergence analysis of gradient-free methods to (δ, )-Goldstein stationary points.
2. We propose and analyze a gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems. Both of these methods are guaranteed to return a (δ, )-Goldstein stationary point of a Lipschitz function f : Rd 7→ R with an expected convergence rate of O(d3/2δ−1 −4) where d ≥ 1 is the problem dimension. Further, we propose the two-phase versions of GFM and SGFM. As our goal is to return a (δ, )-Goldstein stationary point with user-specified high probability 1− Λ, we prove that the two-phase version of GFM and SGFM can improve the dependence from (1/Λ)4 to log(1/Λ) in the large-deviation regime.
Related works. Our work is related to a line of literature on gradient-based methods for nonsmooth and nonconvex optimization and gradient-free methods for smooth and nonconvex optimization. Due to space limitations, we defer our comments on the former topic to Appendix A. In the context of gradient-free methods, the basic idea is to approximate a full gradient using either a one-point estimator [39] or a two-point estimator [1, 44, 37, 75, 72], where the latter approach achieves a better finite-time convergence guarantee. Despite the meteoric rise of two-point-based gradient-free methods, most of the work is restricted to convex optimization [37, 75, 83] and smooth and nonconvex optimization [72, 44, 61, 62, 24, 52, 49]. For nonsmooth and convex optimization, the best upper bound on the global rate of convergence is O(d −2) [75] and this matches the lower bound [37]. For smooth and nonconvex optimization, the best global rate of convergence is O(d −2) [72] and O(d −4) if we only have access to noisy function value oracles [44]. Additional regularity conditions, e.g., a finite-sum structure, allow us to leverage variance-reduction techniques [62, 24, 52] and the best known result is O(d3/4 −3) [49]. However, none of these gradient-free methods have been developed for nonsmooth nonconvex optimization and the only gradient-free method we are aware of for the nonsmooth is summarized in Nesterov and Spokoiny [72, Section 7].
2 Preliminaries and Technical Background
We provide the formal definitions for the class of Lipschitz functions considered in this paper, and the definitions for generalized gradients and the Goldstein subdifferential that lead to optimality conditions in nonsmooth nonconvex optimization.
2.1 Function classes
Imposing regularity on functions to be optimized is necessary for obtaining optimization algorithms with finite-time convergence guarantees [71]. In the context of nonsmooth optimization there are two types of regularity conditions: Lipschitz properties of function values and bounds on function values.
We first list several equivalent definitions of Lipschitz continuity. A function f : Rd 7→ R is said to be L-Lipschitz if for every x ∈ Rd and the direction v ∈ Rd with ‖v‖ ≤ 1, the directional projection fx,v(t) := f(x + tv) defined for t ∈ R satisfies |fx,v(t)− fx,v(t′)| ≤ L|t− t′|, for all t, t′ ∈ R. Equivalently, f is L-Lipschitz if for every x,x′ ∈ Rd, we have
|f(x)− f(x′)| ≤ L‖x− x′‖. Further, the function value bound f(x0)− infx∈Rd f(x) appears in complexity guarantees for smooth and nonconvex optimization problems [71] and is often assumed to be bounded by a positive constant ∆ > 0. Note that x0 is a prespecified point (i.e., an initial point for an algorithm) and we simply fix it for the remainder of this paper. We define the function class which will be considered in this paper.
Definition 2.1 Suppose that ∆ > 0 and L > 0 are both independent of the problem dimension d ≥ 1. Then, we denote Fd(∆, L) as the set of L-Lipschitz functions f : Rd 7→ R with the bounded function value f(x0)− infx∈Rd f(x) ≤ ∆.
The function class Fd(∆, L) includes Lipschitz functions on Rd and is thus different from the nonconvex function class considered in the literature [44, 72]. First, we do not impose a smoothness condition on the function f ∈ Fd(∆, L), in contrast to the nonconvex functions studied in Ghadimi and Lan [44] which are assumed to have Lipschitz gradients. Second, Nesterov and Spokoiny [72, Section 7] presented a complexity bound for a randomized optimization method for minimizing a nonsmooth nonconvex function. However, they did not clarify why the norm of the gradient of the approximate function fµ̄ of the order δ (we use their notation) serves as a reasonable optimality criterion in nonsmooth nonconvex optimization. They also assume an exact function value oracle, ruling out many interesting application problems in simulation optimization and machine learning.
In contrast, our goal is to propose fast gradient-free methods for nonsmooth nonconvex optimization in the absence of an exact function value oracle. In general, the complexity bound of gradient-free methods will depend on the problem dimension d ≥ 1 even when we assume that the function to be optimized is convex and smooth [37, 75]. As such, we should consider a function class with a given dimension d ≥ 1. In particular, we consider a optimality criterion based on the celebrated Goldstein subdifferential [46] and prove that the number of function value oracles required by our deterministic and stochastic gradient-free methods to find a (δ, )-Goldstein stationary point of f ∈ Fd(∆, L) is O(poly(d, L,∆, 1/ , 1/δ)) when δ, ∈ (0, 1) are constants (see the definition of Goldstein stationarity in the next subsection).
It is worth mentioning that Fd(∆, L) contains a rather broad class of functions used in real-world application problems. Typical examples with additional regularity properties include Hadamard semidifferentiable functions [76, 32, 85], Whitney-stratifiable functions [13, 30], o-minimally definable functions [27] and a class of semi-algebraic functions [5, 30]. Thus, our gradient-free methods can be applied for solving these problems with finite-time convergence guarantees.
2.2 Generalized gradients and Goldstein subdifferential
We start with the definition of generalized gradients [26] for nondifferentiable functions. This is perhaps the most standard extension of gradients to nonsmooth and nonconvex functions.
Definition 2.2 Given a point x ∈ Rd and a direction v ∈ Rd, the generalized directional derivative of a nondifferentiable function f is given by Df(x;v) := lim supy→x,t↓0 f(y+tv)−f(y) t . Then, the generalized gradient of f is defined as a set ∂f(x) := {g ∈ Rd : g>v ≤ Df(x;v),∀v ∈ Rd}.
Rademacher’s theorem guarantees that any Lipschitz function is almost everywhere differentiable. This implies that the generalized gradients of Lipschitz functions have additional properties and we can define them in a relatively simple way. The following proposition summarizes these results; we refer to Clarke [26] for the proof details.
Proposition 2.1 Suppose that f is L-Lipschitz for some L > 0, we have that ∂f(x) is a nonempty, convex and compact set and ‖g‖ ≤ L for all g ∈ ∂f(x). Further, ∂f(·) is an upper-semicontinuous set-valued map. Moreover, a generalization of mean-value theorem holds: for any x1,x2 ∈ Rd, there exist λ ∈ (0, 1) and g ∈ ∂f(λx1 + (1− λ)x2) such that f(x1)− f(x2) = g>(x1 − x2). Finally, there is a simple way to represent the generalized gradient ∂f(x):
∂f(x) := conv { g ∈ Rd : g = lim
xk→x ∇f(xk)
} ,
which is the convex hull of all limit points of ∇f(xk) over all sequences x1,x2, . . . of differentiable points of f(·) which converge to x.
Given this definition of generalized gradients, a Clarke stationary point of f is a point x satisfying 0 ∈ ∂f(x). Then, it is natural to ask if an optimization algorithm can reach an -stationary point with a finite-time convergence guarantee. Here a point x ∈ Rd is an -Clarke stationary point if
min {‖g‖ : g ∈ ∂f(x)} ≤ .
This question has been addressed by [85, Theorem 1], who showed that finding an -Clarke stationary points in nonsmooth nonconvex optimization can not be achieved by any finite-time algorithm given a fixed tolerance ∈ [0, 1). One possible response is to consider a relaxation called a near -Clarke stationary point. Consider a point which is δ-close to an -stationary point for some δ > 0. A point x ∈ Rd is near -stationary if the following statement holds true:
min { ‖g‖ : g ∈ ∪y∈Bδ(x)∂f(y) } ≤ .
Unfortunately, however, [58, Theorem 1] demonstrated that it is impossible to obtain worst-case guarantees for finding a near -Clarke stationary point of f ∈ Fd(∆, L) when , δ > 0 are smaller than some certain constants unless the number of oracle calls has an exponential dependence on the problem dimension d ≥ 1. These negative results suggest a need for rethinking the definition of targeted stationary points. We propose to consider the refined notion of Goldstein subdifferential.
Definition 2.3 Given a point x ∈ Rd and δ > 0, the δ-Goldstein subdifferential of a Lipschitz function f at x is given by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)).
The Goldstein subdifferential of f at x is the convex hull of the union of all generalized gradients at points in a δ-ball around x. Accordingly, we can define the (δ, )-Goldstein stationary points; that is, a point x ∈ Rd is a (δ, )-Goldstein stationary point if the following statement holds:
min{‖g‖ : g ∈ ∂δf(x)} ≤ .
It is worth mentioning that (δ, )-Goldstein stationarity is a weaker notion than (near) -Clarke stationarity since any (near) -stationary point is a (δ, )-Goldstein stationary point but not vice versa. However, the converse holds true under a smoothness condition [85, Proposition 6] and limδ↓0 ∂δf(x) = ∂f(x) holds as shown in Zhang et al. [85, Lemma 7]. The latter result also enables an intuitive framework for transforming nonasymptotic analysis of convergence to (δ, )-Goldstein stationary points to classical asymptotic results for finding -Clarke stationary points. Thus, we conclude that finding a (δ, )-Goldstein stationary point is a reasonable optimality condition for general nonsmooth nonconvex optimization.
Remark 2.2 Finding a (δ, )-Goldstein stationary point in nonsmooth nonconvex optimization has been formally shown to be computationally tractable in an oracle model [85, 31, 80]. Goldstein [46] discovered that one can decrease the function value of a Lipschitz f by using the minimal-norm element of ∂δf(x) and this leads to a deterministic normalized subgradient method which finds a (δ, )-Goldstein stationary point within O( ∆δ ) iterations. However, Goldstein’s algorithm is only conceptual since it is computationally intractable to return an exact minimal-norm element of ∂δf(x). Recently, the randomized variants of Goldstein’s algorithm have been proposed with a convergence guarantee of O(∆L 2
δ 3 ) [85, 31, 80]. However, it remains unknown if gradient-free methods find a (δ, )-Goldstein stationary point of a Lipschitz function f withinO(poly(d, L,∆, 1/ , 1/δ)) iterations in the absence of an exact function value oracle. Note that the dependence on the problem dimension d ≥ 1 is necessary for gradient-free methods as mentioned before.
2.3 Randomized smoothing
The randomized smoothing approaches are simple and work equally well for convex and nonconvex functions. Formally, given the L-Lipschitz function f (possibly nonsmooth nonconvex) and a distribution P, we define fδ(x) = Eu∼P[f(x + δu)]. In particular, letting P be a standard Gaussian distribution, the function fδ is a δL √ d-approximation of f(·) and the gradient∇fδ is L √ d
δ -Lipschitz where d ≥ 1 is the problem dimension; see Nesterov and Spokoiny [72, Theorem 1 and Lemma 2]. Letting P be an uniform distribution on an unit ball in `2-norm, the resulting function fδ is a δLapproximation of f(·) and ∇fδ is also cL √ d
δ -Lipschitz where d ≥ 1 is the problem dimension; see Yousefian et al. [84, Lemma 8] and Duchi et al. [36, Lemma E.2], rephrased as follows.
Proposition 2.3 Let fδ(x) = Eu∼P[f(x+ δu)] where P is an uniform distribution on an unit ball in `2-norm. Assuming that f is L-Lipschitz, we have (i) |fδ(x)−f(x)| ≤ δL, and (ii) fδ is differentiable and L-Lipschitz with the cL √ d
δ -Lipschitz gradient where c > 0 is a constant. In addition, there exists a function f for which each of the above bounds are tight simultaneously.
The randomized smoothing approaches form the basis for developing gradient-free methods [39, 1, 2, 44, 72]. Given an access to function values of f , we can compute an unbiased estimate of the gradient of fδ and plug them into stochastic gradient-based methods. Note that the Lipschitz constant of fδ depends on the problem dimension d ≥ 1 with at least a factor of √ d for many randomized smoothing approaches [58, Theorem 2]. This is consistent with the lower bounds for all gradient-free methods in convex and strongly convex optimization [37, 75].
3 Main Results
We establish a relationship between the Goldstein subdifferential and the uniform smoothing approach. We propose a gradient-free method (GFM), its stochastic variant (SGFM), and a two-phase version of GFM and SGFM. We analyze these algorithms using the Goldstein subdifferential; we provide the global rate and large-deviation estimates in terms of (δ, )-Goldstein stationarity.
3.1 Linking Goldstein subdifferential to uniform smoothing
Recall that ∂δf and fδ are defined by ∂δf(x) := conv(∪y∈Bδ(x)∂f(y)) and fδ(x) = Eu∼P[f(x + δu)]. It is clear that f is almost everywhere differentiable since f is L-Lipschitz. This implies that ∇fδ(x) = Eu∼P[∇f(x + δu)] and demonstrates that ∇fδ(x) can be viewed intuitively as a convex combination of ∇f(z) over an infinite number of points z ∈ Bδ(x). As such, it is reasonable to conjecture that ∇fδ(x) ∈ ∂δf(x) for any x ∈ Rd. However, the above argument is not a rigorous proof; indeed, we need to justify why ∇fδ(x) = Eu∼P[∇f(x + δu)] if f is almost everywhere differentiable and generalize the idea of a convex combination to include infinite sums. To resolve these issues, we exploit a toolbox due to Rockafellar and Wets [74].
In the following theorem, we summarize our result and refer to Appendix C for the proof details.
Theorem 3.1 Suppose that f is L-Lipschitz and let fδ(x) = Eu∼P[f(x + δu)], where P is an uniform distribution on a unit ball in `2-norm and let ∂δf be a δ-Goldstein subdifferential of f (cf. Definition 2.3). Then, we have ∇fδ(x) ∈ ∂δf(x) for any x ∈ Rd.
Theorem 3.1 resolves an important question and forms the basis for analyzing our gradient-free methods. Notably, our analysis can be extended to justify other randomized smoothing approaches in nonsmooth nonconvex optimization. For example, Nesterov and Spokoiny [72] used Gaussian smoothing and estimated the number of iterations required by their methods to output x̂ ∈ Rd satisfying ‖∇fδ(x̂)‖ ≤ . By modifying the proof of Theorem 3.1 and Zhang et al. [85, Lemma 7], we can prove that∇fδ belongs to Goldstein subdifferential with Gaussian weights and this subdifferential converges to the Clarke subdifferential as δ → 0. Compared to uniform smoothing and the original Goldstein subdifferential, the proof for Gaussian smoothing is quite long and technical [72, Page 554], and adding Gaussian weights seems unnatural in general.
Algorithm 1 Gradient-Free Method (GFM) 1: Input: initial point x0 ∈ Rd, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and
iteration number T ≥ 1. 2: for t = 0, 1, 2, . . . , T − 1 do 3: Sample wt ∈ Rd uniformly from a unit sphere in Rd. 4: Compute gt = d
2δ (f(xt + δwt)− f(xt − δwt))wt.
5: Compute xt+1 = xt − ηgt. 6: Output: xR where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.
Algorithm 2 Two-Phase Gradient-Free Method (2-GFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 1 with x0, η, d, δ and T and let x̄s be an output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Sample w^k ∈ R^d uniformly from the unit sphere in R^d.
7:     Compute g^k_s = (d/(2δ))(f(x̄s + δw^k) − f(x̄s − δw^k))w^k.
8:   Compute g_s = (1/B) Σ_{k=0}^{B−1} g^k_s.
9: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖g_s‖.
10: Output: x̄_{s⋆}.
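A sketch of the two-phase selection, reusing the gfm function above; again, every parameter value is an illustrative assumption.

def two_phase_gfm(f, x0, eta, delta, T, S, B):
    # 2-GFM (Algorithm 2): run GFM S times and keep the candidate whose
    # B-sample averaged gradient estimate has the smallest norm.
    d = x0.size
    candidates = [gfm(f, x0, eta, delta, T)[0] for _ in range(S)]
    def avg_grad_norm(x):
        g = np.zeros(d)
        for _ in range(B):
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            g += (d / (2 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w
        return np.linalg.norm(g / B)
    return min(candidates, key=avg_grad_norm)

print(f(two_phase_gfm(f, np.ones(10), eta=0.01, delta=0.1, T=2000, S=5, B=200)))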
3.2 Gradient-free methods
We analyze a gradient-free method (GFM) and its two-phase version (2-GFM) for optimizing a Lipschitz function f . Due to space limitations, we defer the proof details to Appendix D.
Global rate estimation. Let f : R^d → R be an L-Lipschitz function and let its smoothed version be fδ(x) = E_{u∼P}[f(x + δu)], where P is the uniform distribution on the unit ball in the ℓ2-norm. Equipped with Lemma 10 from Shamir [75], we can compute an unbiased estimator of the gradient ∇fδ(xt) using function values. This leads to the gradient-free method (GFM) in Algorithm 1, which simply performs a one-step gradient descent to obtain xt+1. It is worth mentioning that we use a random iteration count R to terminate the execution of Algorithm 1, which makes the output of GFM well defined. Indeed, the theoretical analysis only yields min_{t=1,2,...,T} ‖∇fδ(xt)‖ ≤ ε (see also Nesterov and Spokoiny [72, Section 7]), and finding the best solution from {x1, x2, . . . , xT} is difficult since the quantities ‖∇fδ(xt)‖ are unknown. Estimating them using Monte Carlo simulation would incur additional approximation errors and raise some reliability issues. The idea of a random output is not new; it has been used by Ghadimi and Lan [44] for smooth and nonconvex stochastic optimization. Such a scheme also gives us a computational gain by a factor of two in expectation.
Theorem 3.2 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 1 with η = (1/10)√(δ(∆ + δL)/(c d^{3/2}L^3 T)) satisfies E[min{‖g‖ : g ∈ ∂δf(x^R)}] ≤ ε and the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L^4/ε^4 + ∆L^3/(δε^4) ) ),

where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f and ∆ > 0 is an upper bound for the initial objective function gap, f(x0) − inf_{x∈R^d} f(x) > 0.
Remark 3.3 Theorem 3.2 illustrates the difference between gradient-based and gradient-free methods in nonsmooth nonconvex optimization. Indeed, Davis et al. [31] recently proved the rate of Õ(δ^{-1}ε^{-3}) for a randomized gradient-based method in terms of (δ, ε)-Goldstein stationarity. Further, Theorem 3.2 demonstrates that nonsmooth nonconvex optimization is likely to be intrinsically harder than all other standard settings. More specifically, the state-of-the-art rate for gradient-free methods is O(dε^{-2}) for nonsmooth convex optimization in terms of the objective function value gap [37] and for smooth nonconvex optimization in terms of the gradient norm [72]. Thus, the dependence on d ≥ 1 is linear in their bounds yet d^{3/2} in our bound. We believe it is promising to either improve the rate of gradient-free methods or show the impossibility by establishing a lower bound.
Large-deviation estimation. While Theorem 3.2 establishes the expected convergence rate over many runs of Algorithm 1, we are also interested in the large-deviation properties of a single run. Indeed, we hope to establish a complexity bound for computing a (δ, ε, Λ)-solution; that is, a point x ∈ R^d satisfying Prob(min{‖g‖ : g ∈ ∂δf(x)} ≤ ε) ≥ 1 − Λ for some δ > 0 and 0 < ε, Λ < 1. By Theorem 3.2 and Markov's inequality,

Prob( min{‖g‖ : g ∈ ∂δf(x^R)} ≥ λ E[min{‖g‖ : g ∈ ∂δf(x^R)}] ) ≤ 1/λ for all λ > 0,
we conclude that the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L^4/(Λ^4ε^4) + ∆L^3/(δΛ^4ε^4) ) ). (3.1)
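To unpack this step (our own spelling-out, with no new assumptions): run Algorithm 1 with target accuracy Λε in place of ε, so that E[min{‖g‖ : g ∈ ∂δf(x^R)}] ≤ Λε. Markov's inequality with λ = 1/Λ then gives

Prob( min{‖g‖ : g ∈ ∂δf(x^R)} ≥ ε ) ≤ Λε/ε = Λ,

and substituting ε ← Λε into the oracle bound of Theorem 3.2 replaces ε^4 by Λ^4ε^4, which is exactly (3.1).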
This complexity bound is rather pessimistic in terms of its dependence on Λ which is often set to be small in practice. To improve the bound, we combine Algorithm 1 with a post-optimization procedure [44], leading to a two-phase gradient-free method (2-GFM), shown in Algorithm 2.
Theorem 3.4 Suppose that f is L-Lipschitz and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 2 with η = (1/10)√(δ(∆ + δL)/(c d^{3/2}L^3 T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄_{s⋆})} ≥ ε) ≤ Λ and the total number of calls of the function value oracle is bounded by

O( d^{3/2} ( L^4/ε^4 + ∆L^3/(δε^4) ) log_2(1/Λ) + (dL^2/(Λε^2)) log_2(1/Λ) ),

where d ≥ 1 is the problem dimension, L > 0 is the Lipschitz parameter of f and ∆ > 0 is an upper bound for the initial objective function gap, f(x0) − inf_{x∈R^d} f(x) > 0.
Clearly, the bound in Theorem 3.4 is significantly smaller than the corresponding one in Eq. (3.1) in terms of the dependence on 1/Λ, demonstrating the power of the post-optimization phase.
3.3 Stochastic gradient-free methods
We turn to the analysis of a stochastic gradient-free method (SGFM) and its two-phase version (2-SGFM) for optimizing a Lipschitz function f(·) = E_{ξ∼Pµ}[F(·, ξ)].

Global rate estimation. In contrast to minimizing a deterministic function f, we only have access to the noisy function value F(x, ξ) at any point x ∈ R^d, where a data sample ξ is drawn from a distribution Pµ. Intuitively, this is a more challenging setup. It has been studied before in the setting of optimizing a nonsmooth convex function [37, 72] or a smooth nonconvex function [44]. As in these papers, we assume that (i) F(·, ξ) is L(ξ)-Lipschitz with E_{ξ∼Pµ}[L^2(ξ)] ≤ G^2 for some G > 0, and (ii) E[F(x, ξt)] = f(x) for all x ∈ R^d, where ξt is simulated from Pµ at the t-th iteration. Despite the noisy function values, we can compute an unbiased estimator of the gradient ∇fδ(xt), where fδ(x) = E_{u∼P}[f(x + δu)] = E_{u∼P, ξ∼Pµ}[F(x + δu, ξ)]. In particular, we take ĝt = (d/(2δ))(F(xt + δwt, ξt) − F(xt − δwt, ξt))wt. Clearly, under our assumptions, we have

E_{u∼P, ξ∼Pµ}[ĝt] = E_{u∼P}[E_{ξ∼Pµ}[ĝt | u]] = E_{u∼P}[gt] = ∇fδ(xt),

where gt is defined in Algorithm 1. However, the variance of the estimator ĝt can be undesirably large since F(·, ξ) is L(ξ)-Lipschitz for a (possibly unbounded) random variable L(ξ) > 0. To resolve this issue, we revisit Shamir [75, Lemma 10] and show that, in deriving an upper bound for E_{u∼P, ξ∼Pµ}[‖ĝt‖^2], it suffices to assume that E_{ξ∼Pµ}[L^2(ξ)] ≤ G^2 for some constant G > 0. The resulting bound achieves a linear dependence on the problem dimension d ≥ 1, the same as in Shamir [75, Lemma 10]. Note that the setup with convex and L(ξ)-Lipschitz functions F(·, ξ) has been considered in Duchi et al. [37]. However, our estimator differs from their estimator ĝt = (d/δ)(F(xt + δwt, ξt) − F(xt, ξt))wt, which essentially suffers from a quadratic dependence on d. It is also necessary to employ a random iteration count R to terminate Algorithm 3.
Algorithm 3 Stochastic Gradient-Free Method (SGFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ and iteration number T ≥ 1.
2: for t = 0, 1, 2, . . . , T − 1 do
3:   Simulate ξt from the distribution Pµ.
4:   Sample wt ∈ R^d uniformly from the unit sphere in R^d.
5:   Compute ĝt = (d/(2δ))(F(xt + δwt, ξt) − F(xt − δwt, ξt))wt.
6:   Compute xt+1 = xt − ηĝt.
7: Output: x^R where R ∈ {0, 1, 2, . . . , T − 1} is uniformly sampled.
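A sketch of SGFM, mirroring the gfm sketch above but with a noisy oracle; the noise model and all constants are illustrative assumptions (np, rng and f are as in the earlier sketches).

def sgfm(F, x0, eta, delta, T):
    # Stochastic Gradient-Free Method (Algorithm 3).
    # F(x, xi) returns the noisy evaluation F(x, xi) at the point x.
    d = x0.size
    iterates = [x0.copy()]
    x = x0.copy()
    for _ in range(T):
        xi = rng.standard_normal()                   # one sample xi_t per iteration
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        # The same xi_t is used in both function evaluations, as in Algorithm 3.
        g = (d / (2 * delta)) * (F(x + delta * w, xi) - F(x - delta * w, xi)) * w
        x = x - eta * g
        iterates.append(x.copy())
    return iterates[rng.integers(T)]

F = lambda x, xi: np.abs(x).sum() + 0.01 * xi        # illustrative noisy oracle
print(f(sgfm(F, np.ones(10), eta=0.01, delta=0.1, T=2000)))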
Algorithm 4 Two-Phase Stochastic Gradient-Free Method (2-SGFM)
1: Input: initial point x0 ∈ R^d, stepsize η > 0, problem dimension d ≥ 1, smoothing parameter δ, iteration number T ≥ 1, number of rounds S ≥ 1 and sample size B.
2: for s = 0, 1, 2, . . . , S − 1 do
3:   Call Algorithm 3 with x0, η, d, δ and T and let x̄s be an output.
4: for s = 0, 1, 2, . . . , S − 1 do
5:   for k = 0, 1, 2, . . . , B − 1 do
6:     Simulate ξ^k from the distribution Pµ.
7:     Sample w^k ∈ R^d uniformly from the unit sphere in R^d.
8:     Compute ĝ^k_s = (d/(2δ))(F(x̄s + δw^k, ξ^k) − F(x̄s − δw^k, ξ^k))w^k.
9:   Compute ĝ_s = (1/B) Σ_{k=0}^{B−1} ĝ^k_s.
10: Choose an index s⋆ ∈ {0, 1, 2, . . . , S − 1} such that s⋆ = argmin_{s=0,1,2,...,S−1} ‖ĝ_s‖.
11: Output: x̄_{s⋆}.
Theorem 3.5 Suppose that F(·, ξ) is L(ξ)-Lipschitz with E_{ξ∼Pµ}[L^2(ξ)] ≤ G^2 for some G > 0 and let δ > 0 and 0 < ε < 1. Then, there exists some T > 0 such that the output of Algorithm 3 with η = (1/10)√(δ(∆ + δG)/(c d^{3/2}G^3 T)) satisfies E[min{‖g‖ : g ∈ ∂δf(x^R)}] ≤ ε and the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G^4/ε^4 + ∆G^3/(δε^4) ) ),

where d ≥ 1 is the problem dimension, G > 0 bounds the second moment of the Lipschitz parameter of F(·, ξ), and ∆ > 0 is an upper bound for the initial objective function gap, f(x0) − inf_{x∈R^d} f(x) > 0.
In the stochastic setting, a randomized gradient-based method achieves the rate of O(δ^{-1}ε^{-4}) in terms of (δ, ε)-Goldstein stationarity [31]. As such, our bound in Theorem 3.5 is tight up to the dependence on the problem dimension d ≥ 1. Further, the state-of-the-art rate for stochastic gradient-free methods is O(dε^{-2}) for nonsmooth convex optimization in terms of the objective function value gap [37] and O(dε^{-4}) for smooth nonconvex optimization in terms of the gradient norm [44]. Thus, Theorem 3.5 demonstrates that nonsmooth nonconvex stochastic optimization is essentially the most difficult among all these standard settings.
Large-deviation estimation. As in the case of GFM, we hope to establish a complexity bound of SGFM for computing a (δ, ε, Λ)-solution. By Theorem 3.5 and Markov's inequality, we obtain that the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G^4/(Λ^4ε^4) + ∆G^3/(δΛ^4ε^4) ) ). (3.2)
We also propose a two-phase stochastic gradient-free method (2-SGFM) in Algorithm 4 by combining Algorithm 3 with a post-optimization procedure.
Theorem 3.6 Suppose that F(·, ξ) is L(ξ)-Lipschitz with E_{ξ∼Pµ}[L^2(ξ)] ≤ G^2 for some G > 0 and let δ > 0 and 0 < ε, Λ < 1. Then, there exist some T, S, B > 0 such that the output of Algorithm 4 with η = (1/10)√(δ(∆ + δG)/(c d^{3/2}G^3 T)) satisfies Prob(min{‖g‖ : g ∈ ∂δf(x̄_{s⋆})} ≥ ε) ≤ Λ and the total number of calls of the noisy function value oracle is bounded by

O( d^{3/2} ( G^4/ε^4 + ∆G^3/(δε^4) ) log_2(1/Λ) + (dG^2/(Λε^2)) log_2(1/Λ) ),

where d ≥ 1 is the problem dimension, G > 0 bounds the second moment of the Lipschitz parameter of F(·, ξ), and ∆ > 0 is an upper bound for the initial objective function gap, f(x0) − inf_{x∈R^d} f(x) > 0.
Further discussions. We remark that the choice of the stepsize η in all of our zeroth-order methods depends on ∆, whereas such dependence is not necessary in the first-order setting; see, e.g., Zhang et al. [85]. If the stepsize is set without any prior knowledge of ∆, our methods still achieve finite-time convergence guarantees, but with a worse rate. This is possibly because first-order information characterizes the objective function better than zeroth-order information, so that for first-order methods the stepsize can be independent of more problem parameters without sacrificing the bound. On the positive side, it suffices for our zeroth-order methods to know an estimate of an upper bound of order Θ(∆), which is available in certain applications.
Moreover, we highlight that δ > 0 is a desired tolerance in our setting. In fact, (δ, ε)-Goldstein stationarity (see Definition 2.3) relaxes ε-Clarke stationarity, and our methods pursue a (δ, ε)-stationary point since finding an ε-Clarke stationary point is intractable. This is different from smooth optimization, where ε-Clarke stationarity reduces to ‖∇f(x)‖ ≤ ε and becomes tractable. In this context, the existing zeroth-order methods are designed to pursue an ε-stationary point. Notably, a (δ, ε)-Goldstein stationary point is provably an ε-stationary point in smooth optimization if we choose δ depending on d and ε.
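One simple way to see the last claim (our own derivation, under the extra assumption that ∇f is H-Lipschitz): any g ∈ ∂δf(x) is a convex combination g = Σ_i λ_i ∇f(y_i) with ‖y_i − x‖ ≤ δ, so

‖g − ∇f(x)‖ ≤ Σ_i λ_i ‖∇f(y_i) − ∇f(x)‖ ≤ Hδ.

Hence a (δ, ε)-Goldstein stationary point satisfies ‖∇f(x)‖ ≤ ε + Hδ, and choosing δ = ε/H makes it a 2ε-stationary point, consistent with the statement that δ should be chosen depending on ε (and, through H, possibly on d).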
4 Experiments
We conduct numerical experiments to validate the effectiveness of our proposed methods. In particular, we evaluate the performance of the two-phase version of SGFM (Algorithm 4) on the task of image classification using convolutional neural networks (CNNs) with ReLU activations. We use the MNIST dataset1 [60] with the following CNN architecture: (i) two convolution layers and two fully connected layers, where dropout layers [77] are used before each fully connected layer, and (ii) the two convolution layers and the first fully connected layer are followed by ReLU activations. Our setup follows the default one2, and a similar setup was also considered in Zhang et al. [85] for evaluating gradient-based methods (see the setups and results for the CIFAR10 dataset in Appendix F).
The baseline approaches include three gradient-based methods: stochastic gradient descent (SGD), ADAGRAD [34] and ADAM [55]. We compare these methods with 2-SGFM (cf. Algorithm 4) and set the learning rate η to 0.001. All experiments are implemented using PyTorch [73] on a workstation with a 2.6 GHz Intel Core i7 and 16 GB memory.
1http://yann.lecun.com/exdb/mnist 2https://github.com/pytorch/examples/tree/main/mnist
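For concreteness, here is a minimal sketch of how a single SGFM-style update could be wired to a PyTorch model; the loss, batch handling, and all hyperparameter values are illustrative assumptions of ours, not the paper's exact training script.

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

@torch.no_grad()
def zo_step(model, loss_fn, batch, eta=0.001, delta=0.1):
    # One zeroth-order update on the flattened parameter vector of `model`.
    x, y = batch
    theta = parameters_to_vector(model.parameters())
    d = theta.numel()
    w = torch.randn(d)
    w /= w.norm()                                    # direction uniform on the sphere
    # The same mini-batch plays the role of xi_t in both evaluations.
    vector_to_parameters(theta + delta * w, model.parameters())
    loss_plus = loss_fn(model(x), y)
    vector_to_parameters(theta - delta * w, model.parameters())
    loss_minus = loss_fn(model(x), y)
    g = (d / (2 * delta)) * (loss_plus - loss_minus) * w
    vector_to_parameters(theta - eta * g, model.parameters())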
Figure 1 summarizes the numerical results on the performance of SGD, ADAGRAD, ADAM, INDG [85], and our method 2-SGFM with δ = 0.1 and B = 200. Notably, 2-SGFM is comparable to the gradient-based methods in terms of training/test accuracy and loss even though it only uses function values. This demonstrates the potential value of our methods, since gradient-based methods are not applicable in many real-world applications, as mentioned before. Figures 2a and 2b present the effect of the batch size B ≥ 1 in 2-SGFM; indeed, a larger value of B leads to better performance, which accords with Theorem 3.6. We also compare the performance of SGD and 2-SGFM for different choices of η. From Figures 2c and 2d, we see that SGD and 2-SGFM achieve similar performance in the early stage and converge to solutions of similar quality.
Figure 3 summarizes the experimental results on the effect of the batch size B for 2-SGFM, with training and test loss as the evaluation metrics. It is clear that a larger value of B leads to better performance, consistent with the results in the main text. Figure 4 summarizes the experimental results on the effect of the learning rate for 2-SGFM. Interestingly, 2-SGFM can indeed benefit from a more aggressive choice of stepsize η > 0 in practice, and the choice η = 0.0001 appears to be too conservative.
5 Conclusion
We proposed and analyzed a class of deterministic and stochastic gradient-free methods for optimizing a Lipschitz function. Based on the relationship we established between the Goldstein subdifferential and uniform smoothing, the proposed GFM and SGFM are proved to return a (δ, ε)-Goldstein stationary point at an expected rate of O(d^{3/2}δ^{-1}ε^{-4}). We also obtain a large-deviation guarantee and improve it by combining GFM and SGFM with a two-phase scheme. Experiments on training neural networks with the MNIST and CIFAR10 datasets demonstrate the effectiveness of our methods. Future directions include the theory for non-Lipschitz and nonconvex optimization [11] and applications of our methods to deep residual networks (ResNet) [47] and deep dense convolutional networks (DenseNet) [50].
Acknowledgements
We would like to thank the area chair and three anonymous referees for constructive suggestions that improve the paper. This work is supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764 and by the Vannevar Bush Faculty Fellowship program under grant number N00014-21-1-2941. | 1. What is the focus of the paper regarding nonsmooth nonconvex optimization problems?
2. What are the strengths of the proposed gradient-free methods, particularly in terms of theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the Lipschitz continuous condition?
4. Do you have any concerns or suggestions regarding the relaxation of the Lipschitz continuous condition?
5. Are there any limitations in the convergence analysis of the proposed methods? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper studied gradient-free (zeroth-order) methods for nonsmooth nonconvex optimization problems and provided a solid theoretical analysis for the proposed gradient-free methods. Notably, it established a useful relationship between the Goldstein subdifferential and uniform smoothing via an appeal to the hyperplane separation theorem. Experimental results demonstrate the effectiveness of the proposed methods.
Strengths And Weaknesses
Novelty of this paper: Overall, it provides some solid theoretical results on gradient-free methods for the nonsmooth nonconvex optimization.
Weakness of this paper: The Lipschitz continuity condition used in the paper may not be mild.
Questions
Some comments:
Could we further relax the Lipschitz continuity condition in gradient-free methods for nonsmooth nonconvex optimization?
In the convergence analysis (Theorems 3.2 and 3.4–3.6), do we need to choose a small parameter δ that relies on d and ϵ, as in the existing zeroth-order methods for smooth optimization?
Limitations
Yes |
NIPS | Title
Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
Abstract
We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements.
1 Introduction
We study the problem of zero-order stochastic optimization, in which we aim to minimize an unknown strongly convex function via a sequential exploration of its function values, under measurement error, and a closely related problem of continuous (or continuum-armed) stochastic bandits. These problems have received significant attention in the literature, see [1, 2, 3, 4, 7, 9, 10, 14, 17, 18, 34, 16, 20, 21, 30, 25, 31, 13, 27, 28, 19, 29], and are fundamental for many applications in which the derivatives of the function are either too expensive or impossible to compute. A principal goal of this paper is to exploit higher order smoothness properties of the underlying function in order to improve the performance of search algorithms. We derive upper bounds on the estimation error for a class of projected gradient-like algorithms, as well as close matching lower bounds, that characterize the role played by the number of iterations, the strong convexity parameter, the smoothness parameter, the number of variables, and the noise level.
Let f : Rd → R be the function that we wish to minimize over a closed convex subset Θ of Rd. Our approach, outlined in Algorithm 1, builds upon previous work in which a sequential algorithm queries at each iteration a pair of function values, under a general noise model. Specifically, at iteration t the current guess xt for the minimizer of f is used to build two perturbations xt + δt and xt − δt, where the function values are queried subject to additive measurement errors ξt and ξ′t, respectively. The
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Algorithm 1 Zero-Order Stochastic Projected Gradient
Requires: Kernel K : [−1, 1] → R, step size ηt > 0 and parameter ht, for t = 1, . . . , T
Initialization: Generate scalars r1, . . . , rT uniformly on the interval [−1, 1], vectors ζ1, . . . , ζT uniformly distributed on the unit sphere Sd = {ζ ∈ R^d : ‖ζ‖ = 1}, and choose x1 ∈ Θ
For t = 1, . . . , T:
1. Let yt = f(xt + htrtζt) + ξt and y′t = f(xt − htrtζt) + ξ′t,
2. Define ĝt = (d/(2ht))(yt − y′t) ζt K(rt),
3. Update xt+1 = ProjΘ(xt − ηtĝt).
Return (xt)_{t=1}^T
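A minimal NumPy sketch of this procedure for a Euclidean-ball constraint set; the objective, the noise, and the default choice K ≡ 1 below are illustrative assumptions (a higher-order kernel satisfying the moment conditions of Section 2 can be plugged in unchanged).

import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, radius=1.0):
    # Euclidean projection onto Theta = {x : ||x|| <= radius}.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def zo_projected_gradient(f_noisy, x1, eta, h, T, K=lambda r: 1.0):
    # Algorithm 1: kernel-weighted two-point zero-order projected gradient.
    d = x1.size
    x = x1.copy()
    xs = [x.copy()]
    for t in range(1, T + 1):
        r = rng.uniform(-1.0, 1.0)                   # r_t uniform on [-1, 1]
        z = rng.standard_normal(d)
        z /= np.linalg.norm(z)                       # zeta_t uniform on the sphere
        y_plus = f_noisy(x + h(t) * r * z)
        y_minus = f_noisy(x - h(t) * r * z)
        g = (d / (2 * h(t))) * (y_plus - y_minus) * K(r) * z
        x = proj_ball(x - eta(t) * g)
        xs.append(x.copy())
    return xs

f_noisy = lambda x: 0.5 * x @ x + 0.01 * rng.standard_normal()  # illustrative
xs = zo_projected_gradient(f_noisy, x1=np.ones(5) / 5,
                           eta=lambda t: 2.0 / t, h=lambda t: t ** -0.25, T=5000)
print(np.linalg.norm(np.mean(xs, axis=0)))           # averaged iterate near x* = 0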
values δt can be chosen in different ways. In this paper, we set δt = htrtζt (Line 1), where ht > 0 is a suitably chosen small parameter, rt is random and uniformly distributed on [−1, 1], and ζt is uniformly distributed on the unit sphere. The estimate for the gradient is then computed at Line 2 and used inside a projected gradient method scheme to compute the next exploration point. We introduce a suitably chosen kernel K that allows us to take advantage of higher order smoothness of f.
The idea of using randomized procedures for derivative-free stochastic optimization can be traced back to Nemirovski and Yudin [23, Sec. 9.3] who suggested an algorithm with one query per step at point xt +htζt, with ζt uniform on the unit sphere. Its versions with one, two or more queries were studied in several papers including [1, 3, 16, 31]. Using two queries per step leads to better performance bounds as emphasized in [26, 1, 3, 16, 31, 13]. Randomizing sequences other than uniform on the sphere were also explored: ζt uniformly distributed on a cube [26], Gaussian ζt [24, 25], ζt uniformly distributed on the vertices of a cube [30] or satisfying some general assumptions [12, 13]. Except for [26, 12, 3], these works study settings with low smoothness of f (2-smooth or less) and do not invoke kernels K (i.e. K(·) ≡ 1 and rt ≡ 1 in Algorithm 1). The use of randomization with smoothing kernels was proposed by Polyak and Tsybakov [26] and further developed by Dippon [12], and Bach and Perchet [3] to whom the current form of Algorithm 1 is due.
In this paper we consider higher order smooth functions f satisfying the generalized Hölder condition with parameter β ≥ 2, cf. inequality (1) below. For integer β, this parameter can be roughly interpreted as the number of bounded derivatives. Furthermore, we assume that f is α-strongly convex. For such functions, we address the following two main questions:
(a) What is the performance of Algorithm 1 in terms of the cumulative regret and optimization error, namely what is the explicit dependency of the rate on the main parameters d, T, α, β?
(b) What are the fundamental limits of any sequential search procedure expressed in terms of minimax optimization error?
To handle task (a), we prove upper bounds for Algorithm 1, and to handle (b), we prove minimax lower bounds for any sequential search method.
Contributions. Our main contributions can be summarized as follows: i) Under an adversarial noise assumption (cf. Assumption 2.1 below), we establish for all β ≥ 2 upper bounds of the order (d^2/α) T^{−(β−1)/β} for the optimization risk and (d^2/α) T^{1/β} for the cumulative regret of Algorithm 1, both for its constrained and unconstrained versions; ii) In the case of independent noise satisfying some natural assumptions (including the Gaussian noise), we prove a minimax lower bound of the order (d/α) T^{−(β−1)/β} for the optimization risk when α is not very small. This shows that to within the factor of d the bound for Algorithm 1 cannot be improved for all β ≥ 2; iii) We show that, when α is too small, below some specified threshold, higher order smoothness does not help to improve the convergence rate. We prove that in this regime the rate cannot be faster than d/√T, which is not better (to within the dependency on d) than for derivative-free minimization of simply convex functions [2, 18]; iv) For β = 2, we obtain a bracketing of the optimal rate between O(d/√(αT)) and Ω(d/(max(1, α)√T)). In a special case when α is a fixed numerical constant, this validates a conjecture in [30] (claimed there as a proved fact) that the optimal rate for β = 2 scales as d/√T; v) We propose a simple algorithm for estimating the value min_x f(x), requiring three queries per step and attaining the optimal rate 1/√T for all
β ≥ 2. The best previous work on this problem [6] suggested a method with exponential complexity and proved a bound of the order c(d, α)/ √ T for β > 2 where c(d, α) is an unspecified constant.
Notation. Throughout the paper we use the following notation. We let 〈·, ·〉 and ‖·‖ be the standard inner product and Euclidean norm on R^d, respectively. For every closed convex set Θ ⊂ R^d and x ∈ R^d we denote by ProjΘ(x) = argmin{‖z − x‖ : z ∈ Θ} the Euclidean projection of x onto Θ. We assume everywhere that T ≥ 2. We denote by Fβ(L) the class of functions with Hölder smoothness β (inequality (1) below). Recall that f is α-strongly convex for some α > 0 if, for any x, y ∈ R^d, it holds that f(y) ≥ f(x) + 〈∇f(x), y − x〉 + (α/2)‖x − y‖^2. We further denote by Fα,β(L) the class of all α-strongly convex functions belonging to Fβ(L).

Organization. We start in Section 2 with some preliminary results on the gradient estimator. Section 3 presents our upper bounds for Algorithm 1, both in the constrained and unconstrained case. In Section 4 we observe that a slight modification of Algorithm 1 can be used to estimate the minimum value (rather than the minimizer) of f. Section 5 presents improved upper bounds in the case β = 2. In Section 6 we establish minimax lower bounds. Finally, Section 7 contrasts our results with previous work in the literature and discusses future directions of research.
2 Preliminaries
In this section, we give the definitions, assumptions and basic facts that will be used throughout the paper. For β > 0, let ℓ be the greatest integer strictly less than β. We denote by Fβ(L) the set of all functions f : R^d → R that are ℓ times differentiable and satisfy, for all x, z ∈ Θ, the Hölder-type condition

| f(z) − Σ_{0≤|m|≤ℓ} (1/m!) D^m f(x)(z − x)^m | ≤ L‖z − x‖^β, (1)

where L > 0, the sum is over the multi-index m = (m1, . . . , md) ∈ N^d, we used the notation m! = m1! · · · md!, |m| = m1 + · · · + md, and we defined

D^m f(x)ν^m = (∂^{|m|} f(x)/(∂^{m1}x1 · · · ∂^{md}xd)) ν1^{m1} · · · νd^{md}, for all ν = (ν1, . . . , νd) ∈ R^d.
In this paper, we assume that the gradient estimator defined by Algorithm 1 uses a kernel function K : [−1, 1] → R satisfying

∫ K(u)du = 0, ∫ uK(u)du = 1, ∫ u^j K(u)du = 0 for j = 2, . . . , ℓ, and ∫ |u|^β |K(u)|du < ∞. (2)
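One classical construction, discussed right below, takes K to be a weighted sum of Legendre polynomials; the following short numerical sketch (our code, with an arbitrary degree choice) builds K(u) = Σ_{m=0}^{ℓ} p′_m(0) p_m(u) for orthonormal Legendre polynomials p_m and checks the moment conditions (2).

import numpy as np
from numpy.polynomial import legendre

def make_kernel(ell):
    # K reproduces q'(0) for polynomials q of degree <= ell, hence satisfies (2).
    polys, weights = [], []
    for m in range(ell + 1):
        c = np.zeros(m + 1); c[m] = 1.0
        p = legendre.Legendre(c) * np.sqrt((2 * m + 1) / 2.0)  # orthonormal on [-1,1]
        polys.append(p)
        weights.append(p.deriv()(0.0))
    return lambda u: sum(w * p(u) for w, p in zip(weights, polys))

K = make_kernel(ell=3)
u, q = legendre.leggauss(20)                  # quadrature, exact for polynomials
for j in range(4):
    print(j, np.sum(q * u ** j * K(u)))       # ~0, ~1, ~0, ~0, matching (2)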
Examples of such kernels obtained as weighted sums of Legendre polynomials are given in [26] and further discussed in [3]. Assumption 2.1. It holds, for all t ∈ {1, . . . , T}, that: (i) the random variables ξt and ξ′t are independent from ζt and from rt, and the random variables ζt and rt are independent; (ii) E[ξ2t ] ≤ σ2, and E[(ξ′t)2] ≤ σ2, where σ ≥ 0.
Note that we do not assume ξt and ξ′t to have zero mean. Moreover, they can be non-random, and no independence between noises at different steps is required, so that the setting can be considered adversarial. Having such a relaxed set of assumptions is possible because of the randomization, which, for example, allows the proofs to go through without assuming zero-mean noise.
We will also use the following assumption. Assumption 2.2. Function f : Rd → R is 2-smooth, that is, differentiable on Rd and such that ‖∇f(x)−∇f(x′)‖ ≤ L̄‖x− x′‖ for all x, x′ ∈ Rd, where L̄ > 0.
It is easy to see that this assumption implies that f ∈ F2(L̄/2). The following lemma gives a bound on the bias of the gradient estimator. Lemma 2.3. Let f ∈ Fβ(L), with β ≥ 1 and let Assumption 2.1 (i) hold. Let ĝt and xt be defined by Algorithm 1 and let κβ = ∫ |u|β |K(u)|du. Then
‖E[ĝt | xt] − ∇f(xt)‖ ≤ κβ L d ht^{β−1}. (3)
If K is a weighted sum of Legendre polynomials, then κβ ≤ 2√2β for β ≥ 1 (see, e.g., [3, Appendix A.3]).
The next lemma provides a bound on the stochastic variability of the estimated gradient by controlling its second moment.

Lemma 2.4. Let Assumption 2.1(i) hold, let ĝt and xt be defined by Algorithm 1 and set κ = ∫ K^2(u)du. Then

(i) If Θ ⊆ R^d, ∇f(x∗) = 0 and Assumption 2.2 holds,

E[‖ĝt‖^2 | xt] ≤ 9κL̄^2 ( d‖xt − x∗‖^2 + d^2 ht^2/8 ) + 3κ d^2σ^2/(2ht^2),

(ii) If f ∈ F2(L) and Θ is a closed convex subset of R^d such that max_{x∈Θ} ‖∇f(x)‖ ≤ G, then

E[‖ĝt‖^2 | xt] ≤ 9κ ( G^2 d + L^2 d^2 ht^2/2 ) + 3κ d^2σ^2/(2ht^2).
3 Upper bounds
In this section, we provide upper bounds on the cumulative regret and on the optimization error of Algorithm 1, which are defined as

Σ_{t=1}^T E[f(xt) − f(x)] and E[f(x̂T) − f(x∗)],

respectively, where x ∈ Θ and x̂T is an estimator after T queries. Note that the provided upper bound on the cumulative regret is valid for any x ∈ Θ. First we consider Algorithm 1 when the convex set Θ is bounded (constrained case).

Theorem 3.1. (Upper Bound, Constrained Case.) Let f ∈ Fα,β(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0 then the cumulative regret of Algorithm 1 with
ht = ( 3κσ^2/(2(β − 1)(κβL)^2) )^{1/(2β)} t^{−1/(2β)}, ηt = 2/(αt), t = 1, . . . , T,

satisfies

∀x ∈ Θ : Σ_{t=1}^T E[f(xt) − f(x)] ≤ (1/α)( d^2 ( A1 T^{1/β} + A2 ) + A3 d log T ), (4)

where A1 = 3β(κσ^2)^{(β−1)/β}(κβL)^{2/β}, A2 = c̄L̄^2(σ/L)^{2/β} + 9κG^2/d with a constant c̄ > 0 depending only on β, and A3 = 9κG^2. The optimization error of the averaged estimator x̄T = (1/T) Σ_{t=1}^T xt satisfies

E[f(x̄T) − f(x∗)] ≤ (1/α)( d^2 ( A1 T^{−(β−1)/β} + A2/T ) + A3 d (log T)/T ), (5)
where x∗ = arg minx∈Θ f(x). If σ = 0, then the cumulative regret and the optimization error of Algorithm 1 with any ht chosen small enough and ηt = 2αt satisfy the bounds (4) and (5), respectively, with A1 = 0, A2 = 9κG2/d and A3 = 10κG2.
Proof sketch. We use the definition of Algorithm 1 and the strong convexity of f to obtain an upper bound for Σ_{t=1}^T E[f(xt) − f(x) | xt], which depends on the bias term Σ_{t=1}^T ‖E[ĝt | xt] − ∇f(xt)‖ and on the stochastic error term Σ_{t=1}^T E[‖ĝt‖^2]. By substituting ht (derived from balancing the two terms) and ηt in Lemmas 2.3 and 2.4, we obtain upper bounds for Σ_{t=1}^T ‖E[ĝt | xt] − ∇f(xt)‖ and Σ_{t=1}^T E[‖ĝt‖^2] that imply the desired upper bound for Σ_{t=1}^T E[f(xt) − f(x) | xt] due to a recursive argument in the spirit of [5].
In the non-noisy case (σ = 0) we get the rate (d/α) log T for the cumulative regret, and (d/α)(log T)/T for the optimization error. In what concerns the optimization error, this rate is not optimal since one can achieve a much faster rate under strong convexity [25]. However, for the cumulative regret in our derivative-free setting it remains an open question whether the result of Theorem 3.1 can be improved. Previous papers on derivative-free online methods with no noise [1, 13, 16] provide slower rates than (d/α) log T. The best known so far is (d^2/α) log T, cf. [1, Corollary 5]. We may also notice that the cumulative regret bounds of Theorem 3.1 trivially extend to the case when we query functions ft depending on t rather than a single f. Another immediate fact is that on the r.h.s. of inequalities (4) and (5) we can take the minimum with GBT and GB, respectively, where B is the Euclidean diameter of Θ. Finally, the factor log T in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts.
We now study the performance of Algorithm 1 when Θ = Rd. In this case we make the following choice for the parameters ht and ηt in Algorithm 1:
ht = T^{−1/(2β)}, ηt = 1/(αT), for t = 1, . . . , T0,
ht = t^{−1/(2β)}, ηt = 2/(αt), for t = T0 + 1, . . . , T, (6)

where T0 = max{k ≥ 0 : C1L̄^2 d > α^2 k/2} and C1 is a positive constant1 depending only on the kernel K(·) (this is defined in the proof of Theorem 3.2 in Appendix B) and recall L̄ is the Lipschitz constant of the gradient ∇f. Finally, define the estimator

x̄_{T0,T} = (1/(T − T0)) Σ_{t=T0+1}^T xt. (7)
Theorem 3.2. (Upper Bounds, Unconstrained Case.) Let f ∈ Fα,β(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold. Assume also that α > √(C∗d/T), where C∗ > 72κL̄^2. Let the xt's be the updates of Algorithm 1 with Θ = R^d, ht and ηt as in (6) and a non-random x1 ∈ R^d. Then the estimator defined by (7) satisfies

E[f(x̄_{T0,T}) − f(x∗)] ≤ CκL̄^2 (d/(αT)) ‖x1 − x∗‖^2 + C (d^2/α)( (κβL)^2 + κ(L̄^2 + σ^2) ) T^{−(β−1)/β}, (8)

where C > 0 is a constant depending only on β and x∗ = argmin_{x∈R^d} f(x).
Proof sketch. As in the proof of Theorem 3.1, we apply Lemmas 2.3 and 2.4. But we can only use Lemma 2.4(i) and not Lemma 2.4(ii), and thus the bound on the stochastic error now involves ‖xt − x∗‖^2. So, after taking expectations, we need to control an additional term containing rt = E[‖xt − x∗‖^2]. However, the issue concerns only small t (t ≤ T0 ∼ d^2/α) since for bigger t this term is compensated due to the strong convexity with parameter α > √(C∗d/T). This motivates the method where we use the first T0 iterations to get a suitably good (but not rate optimal) bound on r_{T0+1} and then proceed analogously to Theorem 3.1 for iterations t ≥ T0 + 1.
4 Estimation of f(x∗)
In this section, we apply the above results to the estimation of the minimum value f(x∗) = min_{x∈Θ} f(x) for functions f in the class Fα,β(L). The literature related to this problem assumes that the xt are either i.i.d. with density bounded away from zero on its support [32] or chosen sequentially [22, 6]. In the first case, from the results in [32] one can deduce that f(x∗) cannot be estimated better than at the slow rate T^{−β/(2β+d)}. For the second case, which is our setting, the best result so far is obtained in [6]. The estimator of f(x∗) in [6] is defined via a multi-stage procedure whose complexity increases exponentially with the dimension d, and it is shown to achieve (asymptotically, for T greater than an exponential of d) the c(d, α)/√T rate for functions in Fα,β(L) with β > 2. Here, c(d, α) is some constant depending on d and α in an unspecified way.

1If T0 = 0 the algorithm does not use (6). The assumptions of Theorem 3.2 are such that the condition T > T0 holds.
Observe that f(x̄T) is not an estimator since it depends on the unknown f, so Theorem 3.1 does not provide a result about estimation of f(x∗). In this section, we show that using the computationally simple Algorithm 1 and making one more query per step (that is, having three queries per step in total) allows us to achieve the 1/√T rate for all β ≥ 2 with no dependency on the dimension in the main term. Note that the 1/√T rate cannot be improved. Indeed, one cannot estimate f(x∗) with a better rate even using the ideal but non-realizable oracle that makes all queries at point x∗. That is, even if x∗ is known and we sample T times f(x∗) + ξt with independent centered variables ξt, the error is still of the order 1/√T.
In order to construct our estimator, at any step t of Algorithm 1 we make, along with yt and y′t, the third query y′′t = f(xt) + ξ′′t, where ξ′′t is some noise and xt are the updates of Algorithm 1. We estimate f(x∗) by M̂ = (1/T) Σ_{t=1}^T y′′t. The properties of the estimator M̂ are summarized in the next theorem, which is an immediate corollary of Theorem 3.1.

Theorem 4.1. Let the assumptions of Theorem 3.1 be satisfied. Let σ > 0 and assume that (ξ′′t)_{t=1}^T are independent random variables with E[ξ′′t] = 0 and E[(ξ′′t)^2] ≤ σ^2 for t = 1, . . . , T. If f attains its minimum at a point x∗ ∈ Θ, then
E|M̂ − f(x∗)| ≤ σ/T^{1/2} + (1/α)( d^2 ( A1 T^{−(β−1)/β} + A2/T ) + A3 d (log T)/T ). (9)
Remark 4.2. With three queries per step, the risk (error) of the oracle that makes all queries at point x∗ does not exceed σ/√(3T). Thus, for β > 2 the estimator M̂ achieves asymptotically, as T → ∞, the oracle risk up to a numerical constant factor. We do not obtain such a sharp property for β = 2, in which case the remainder term in Theorem 4.1, accounting for the accuracy of Algorithm 1, is of the same order as the main term σ/√T.
Note that in Theorem 4.1 the noises (ξ′′t)_{t=1}^T are assumed to be independent zero-mean random variables, which is essential to obtain the 1/√T rate. Nevertheless, we do not require independence between the noises (ξ′′t)_{t=1}^T and the noises in the other two queries, (ξt)_{t=1}^T and (ξ′t)_{t=1}^T. Another interesting point is that for β = 2 the third query is not needed and f(x∗) is estimated with the 1/√T rate either by M̂ = (1/T) Σ_{t=1}^T yt or by M̂ = (1/T) Σ_{t=1}^T y′t. This is an easy consequence of the above argument, the property (19) (see Lemma A.3 in the appendix), which is specific to the case β = 2, and the fact that the optimal choice of ht is of order t^{−1/4} for β = 2.
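A sketch of this three-query estimator on top of the zo_projected_gradient sketch given earlier; the extra query and the noise model are illustrative assumptions.

def estimate_min_value(f_noisy, x1, eta, h, T, K=lambda r: 1.0):
    # Run Algorithm 1 and average a third noisy query y''_t = f(x_t) + xi''_t.
    xs = zo_projected_gradient(f_noisy, x1, eta, h, T, K)
    y2 = [f_noisy(x) for x in xs[:T]]         # one extra query per iterate
    return float(np.mean(y2))                 # M-hat, targeting f(x*) at rate 1/sqrt(T)

print(estimate_min_value(f_noisy, np.ones(5) / 5,
                         lambda t: 2.0 / t, lambda t: t ** -0.25, T=5000))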
5 Improved bounds for β = 2
In this section, we consider the case β = 2 and obtain improved bounds that scale as d rather than d2 with the dimension in the constrained optimization setting analogous to Theorem 3.1. First note that for β = 2 we can simplify the algorithm. The use of kernel K is redundant when β = 2, and therefore in this section we define the approximate gradient as
ĝt = (d/(2ht))(yt − y′t)ζt, (10)

where yt = f(xt + htζt) + ξt and y′t = f(xt − htζt) + ξ′t. A well-known observation that goes back to [23] is that ĝt defined in (10) is an unbiased estimator of the gradient of the surrogate function f̂t defined by
f̂t(x) = Ef(x+ htζ̃), ∀x ∈ Rd,
where the expectation E is taken with respect to the random vector ζ̃ uniformly distributed on the unit ball Bd = {u ∈ Rd : ‖u‖ ≤ 1}. The properties of the surrogate f̂t are described in Lemmas A.2 and A.3 presented in the appendix.
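The unbiasedness claim rests on the classical sphere-to-ball identity (a divergence-theorem computation going back to [23], recalled here for convenience): for ζ uniform on the unit sphere and ζ̃ uniform on the unit ball,

∇f̂t(x) = ∇E[f(x + htζ̃)] = (d/ht) E[f(x + htζ)ζ],

so that, conditionally on xt, the estimator (10) has expectation ∇f̂t(xt) (the noise terms vanish in expectation by the independence in Assumption 2.1(i) and the symmetry of ζ).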
The improvement in the rate that we get for β = 2 is due to the fact that we can consider Algorithm 1 with ĝt defined in (10) as the SGD for the surrogate function. Then the bias of approximating f by f̂t scales as ht^2, which is smaller than the squared bias of approximating the gradient arising in the proof of Theorem 3.1, which scales as d^2 ht^{2(β−1)} = d^2 ht^2 when β = 2. On the other hand, the stochastic variability terms are the same for both methods of proof. This explains the gain in the dependency on d. However, this technique does not work for β > 2 since then the error of approximating f by f̂t, which is of the order ht^β (with ht small), becomes too large compared to the bias d^2 ht^{2(β−1)} of Theorem 3.1.
Theorem 5.1. Let f ∈ Fα,2(L) with α, L > 0. Let Assumption 2.1 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0 then for Algorithm 1 with ĝt defined in (10) and parameters ht = ( 3d^2σ^2/(4Lαt + 9L^2d^2) )^{1/4} and ηt = 1/(αt) we have

∀x ∈ Θ : E Σ_{t=1}^T ( f(xt) − f(x) ) ≤ min( GBT, 2√(3Lσ) (d/√α) √T + A4 (d^2/α) log T ), (11)

where B is the Euclidean diameter of Θ and A4 = 6.5Lσ + 22G^2/d. Moreover, if x∗ = argmin_{x∈Θ} f(x), the optimization error of the averaged estimator x̄T = (1/T) Σ_{t=1}^T xt is bounded as

E[f(x̄T) − f(x∗)] ≤ min( GB, 2√(3Lσ) d/√(αT) + A4 (d^2/α) (log T)/T ). (12)

Finally, if σ = 0, then the cumulative regret of Algorithm 1 with any ht chosen small enough and ηt = 1/(αt), and the optimization error of its averaged version, are of the order (d^2/α) log T and (d^2/α)(log T)/T, respectively.
Note that the terms (d^2/α) log T and (d^2/α)(log T)/T appearing in these bounds can be improved to (d/α) log T and (d/α)(log T)/T at the expense of assuming that the norm ‖∇f‖ is uniformly bounded by G not only on Θ but also on a large enough Euclidean neighborhood of Θ. Moreover, the log T factor in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts. A major conclusion is that, when σ > 0 and we consider the optimization error, those terms are negligible with respect to d/√(αT) and thus an attainable rate is min(1, d/√(αT)).
We close this section by noting, in connection with the bandit setting, that the bound (11) extends straightforwardly (up to a change in numerical constants) to the cumulative regret of the form E Σ_{t=1}^T ( ft(xt ± htζt) − ft(x) ), where the losses are measured at the query points and f depends on t. This fact follows immediately from the proof of Theorem 5.1 presented in the appendix and the property (19) (see Lemma A.3 in the appendix).
6 Lower bound
In this section we prove a minimax lower bound on the optimization error over all sequential strategies that allow the query points to depend on the past. For t = 1, . . . , T, we assume that yt = f(zt) + ξt and we consider strategies of choosing the query points as zt = Φt(z_1^{t−1}, y_1^{t−1}), where Φt are Borel functions and z1 ∈ R^d is any random variable. We denote by ΠT the set of all such strategies. The noises ξ1, . . . , ξT are assumed in this section to be independent with cumulative distribution function F satisfying the condition

∫ log( dF(u)/dF(u + v) ) dF(u) ≤ I0 v^2, |v| < v0, (13)
for some 0 < I0 <∞, 0 < v0 ≤ ∞. Using the second order expansion of the logarithm w.r.t. v, one can verify that this assumption is satisfied when F has a smooth enough density with finite Fisher information. For example, for Gaussian distribution F this condition holds with v0 =∞. Note that the class ΠT includes the sequential strategy of Algorithm 1 that corresponds to taking T as an even number, and choosing zt = xt + ζtrt and zt = xt − ζtrt for even t and odd t, respectively. The presence of the randomizing sequences ζt, rt is not crucial for the lower bound. Indeed, Theorem 6.1 below is valid conditionally on any randomization, and thus the lower bound remains valid when taking expectation over the randomizing distribution.
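For instance (a standard computation, included as an illustration), if F is the Gaussian N(0, s^2) distribution with density φ, then

∫ log( φ(u)/φ(u + v) ) φ(u)du = E_{u∼N(0,s^2)}[ ((u + v)^2 − u^2)/(2s^2) ] = v^2/(2s^2),

so (13) holds with I0 = 1/(2s^2) and v0 = ∞, in line with the remark above.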
Theorem 6.1. Let Θ = {x ∈ R^d : ‖x‖ ≤ 1}. For α, L > 0, β ≥ 2, let F′α,β denote the set of functions f that attain their minimum over R^d in Θ and belong to Fα,β(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G}, where G > 2α. Then for any strategy in the class ΠT we have

sup_{f∈F′α,β} E[ f(zT) − min_x f(x) ] ≥ C min( max(α, T^{−1/2+1/β}), d/√T, (d/α) T^{−(β−1)/β} ), (14)

and

sup_{f∈F′α,β} E[ ‖zT − x∗(f)‖^2 ] ≥ C min( 1, d T^{−1/β}, (d/α^2) T^{−(β−1)/β} ), (15)

where C > 0 is a constant that does not depend on T, d, and α, and x∗(f) is the minimizer of f on Θ.
The proof is given in Appendix B. It extends the proof technique of Polyak and Tsybakov [28], by applying it to more than two probe functions. The proof takes into account dependency on the dimension d, and on α. The final result is obtained by applying Assouad’s Lemma, see e.g. [33].
We stress that the condition G > 2α in this theorem is necessary. It should always hold if the intersection Fα,β(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G} is not empty. Notice also that the threshold T^{−1/2+1/β} on the strong convexity parameter α plays an important role in the bounds (14) and (15). Indeed, for α below this threshold, the bounds become independent of α. Moreover, in this regime, the rate of (14) becomes min(T^{1/β}, d)/√T, which is asymptotically d/√T and thus not better as a function of T than the rate attained for zero-order minimization of simply convex functions [2, 7]. Intuitively, it seems reasonable that α-strong convexity should be of no added value for very small α. Theorem 6.1 allows us to quantify exactly how small such α should be. Also, quite naturally, the threshold becomes smaller when the smoothness β increases. Finally note that for β = 2 the lower bounds (14) and (15) are, in the interesting regime of large enough T, of order d/(max(α, 1)√T) and d/(max(α^2, 1)√T), respectively. This highlights the near minimax optimal properties of Algorithm 1 in the setting of Theorem 5.1.
7 Discussion and related work
There is a great deal of attention to zero-order feedback stochastic optimization and convex bandit problems in the recent literature. Several settings are studied: (i) deterministic, in the sense that the queries contain no random noise and we query functions ft depending on t rather than f, where ft are Lipschitz or 2-smooth [16, 1, 24, 25, 28, 31]; (ii) stochastic with two-point feedback, where the two noisy evaluations are obtained with the same noise and the noisy functions are Lipschitz or 2-smooth [24, 25, 13] (this setting does not differ much from (i) in terms of the analysis and the results); (iii) stochastic, where the noises ξi are independent zero-mean random variables [15, 26, 12, 2, 30, 3, 19, 4, 20]. In this paper, we considered a setting more general than (iii), allowing for adversarial noise (no independence or zero-mean assumption, in contrast to (iii), and no Lipschitz assumption, in contrast to settings (i) and (ii), which are both covered by our results when the noise is set to zero).
One part of our results consists of bounds on the cumulative regret, cf. (4) and (11). We emphasize that they remain trivially valid if the queries are from ft depending on t instead of f, and thus cover the setting (i). To the best of our knowledge, there were no such results in this setting previously, except for [3], which gives bounds with suboptimal dependency on T in the case of classical (non-adversarial) noise. In the non-noisy case, we get bounds on the cumulative regret with faster rates than previously known for the setting (i). It remains an open question whether these bounds can be improved.
The second part of our results, dealing with the optimization error E[f(x̄T) − f(x∗)], is closely related to the work on derivative-free stochastic optimization under strong convexity and smoothness assumptions initiated in [15, 26] and more recently developed in [12, 19, 30, 3]. It was shown in [26] that the minimax optimal rate for f ∈ Fα,β(L) scales as c(α, d)T^{−(β−1)/β}, where c(α, d) is an unspecified function of α and d (for d = 1 an upper bound of the same order was earlier established in [15]). The issue of establishing non-asymptotic fundamental limits as a function of the main parameters of the problem (α, d and T) was first addressed in [19], giving a lower bound Ω(√(d/T)) for β = 2. This was improved to Ω(d/√T) when α ≍ 1 by Shamir [30], who conjectured that the rate d/√T is optimal for β = 2, which indeed follows from our Theorem 5.1 (although [30] claims the optimality as a proved fact by referring to results in [1], such results cannot be applied in setting (iii) because the noise cannot be considered as Lipschitz). A result similar to Theorem 5.1 is stated without proof in Bach and Perchet [3, Proposition 7], but not for the cumulative regret and with a suboptimal rate in the non-noisy case. For integer β ≥ 3, Bach and Perchet [3] present explicit upper bounds as functions of α, d and T with, however, suboptimal dependency on T, except for their Proposition 8, which is problematic (see Appendix C for the details). Finally, by slightly modifying the proof of Theorem 3.1 we get that the estimation risk E[‖x̄T − x∗‖^2] is O((d^2/α^2)T^{−(β−1)/β}), which is to within a factor of d of the main term in the lower bound (15) (see Appendix D for details).
The lower bound in Theorem 6.1 is, to the best of our knowledge, the first result providing non-asymptotic fundamental limits under a general configuration of α, d and T. The known lower bounds [26, 19, 30] either give no explicit dependency on α and d, or treat the special case β = 2 and α ≍ 1. Moreover, as an interesting consequence of our lower bound we find that, for a small strong convexity parameter α (namely, below the T^{−1/2+1/β} threshold), the best achievable rate cannot be substantially faster than for simply convex functions, at least for moderate dimensions. Indeed, for such small α, our lower bound is asymptotically Ω(d/√T), independently of the smoothness index β and of α, while the achievable rate for convex functions is shown to be d^{16}/√T in [2] and improved to d^{3.75}/√T in [7] (both up to log-factors). The gap here is only in the dependency on the dimension. Our results imply that for α above the T^{−1/2+1/β} threshold, the gap between upper and lower bounds is much smaller. Thus, our upper bounds in this regime scale as (d^2/α)T^{−(β−1)/β}, while the lower bound of Theorem 6.1 is of the order Ω((d/α)T^{−(β−1)/β}); moreover, for β = 2, upper and lower bounds match in the dependency on d.
We hope that our work will stimulate further study at the intersection of zero-order optimization and convex bandits in machine learning. An important open problem is to design novel algorithms which match our lower bound simultaneously in all main parameters. For example, a class of algorithms worth exploring is those using memory of the gradient in the spirit of Nesterov's accelerated method. Yet another important open problem is to study lower bounds for the regret in our setting. Finally, it would be valuable to study extensions of our work to locally strongly convex functions.
Broader impact
The present work improves our understanding of zero-order optimization methods in specific scenarios in which the underlying function we wish to optimize has certain regularity properties. We believe that a solid theoretical foundation is beneficial to the development of practical machine learning and statistical methods. We expect no direct or indirect ethical risks from our research.
Acknowledgments and Disclosure of Funding
We would like to thank Francis Bach, Vianney Perchet, Saverio Salzo, and Ohad Shamir for helpful discussions. The first and second authors were partially supported by SAP SE. The research of A.B. Tsybakov is supported by a grant of the French National Research Agency (ANR), “Investissements d’Avenir” (LabEx Ecodec/ANR-11-LABX-0047). | 1. What are the main contributions and strengths of the paper regarding zero-th order stochastic gradient descent?
2. What are the weaknesses of the paper, particularly in terms of its broader impact and lack of discussion on related work?
3. How does the paper's approach differ from existing literature on derivative-free stochastic optimization?
4. Can you elaborate on the upper bounds provided by the paper on optimization complexity, regret, and estimation error of the minimal value?
5. How do the paper's results compare to prior work in terms of explicit dependence on relevant parameters d, \alpha, and \beta?
6. What are some potential applications of the paper's findings in machine learning tasks, specifically in stochastic bandit problems? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies the closely related problems of continuous stochastic bandits and zero-order stochastic optimization. The overall goal is to minimize an unknown strongly convex function by exploring its function values sequentially, where querying a point returns a noisy estimate of the true value of the function at that specific location. These problems are fundamental to many problems in bandits where the derivatives of the functions are too expensive to compute (or don't have a closed form expression), and so algorithms need to be developed that exploit higher order smoothness conditions without resorting to using gradient information. The goal then is to develop algorithms which utilize only zeroth-order information on the function and have low optimization complexity (additive deviation of the point versus the true minimizer) and regret (cumulative additive error). This paper develops upper bounds on the optimization complexity and regret for gradient-like algorithms, along with nearly-matching lower bounds, that characterize the role played by the number of iterations, the convexity parameter, the smoothness parameter, etc., unlike prior bounds which only deal with the dependence on the number of iterations. They also utilize a kernel to help exploit the higher order smoothness properties when estimating the gradient. The overall set-up is as follows: there is an unknown \alpha-strongly convex d-dimensional function f which satisfies the \beta-Holder smoothness condition. The authors restrict to the case \beta >= 2 so that they can impose the additional assumption that the function has Lipschitz smooth gradients. At every iteration the agent can query the function at several locations (taken to be at most three in the case of this algorithm), where they observe a noisy observation of the function value, with the second moment of the noise bounded. (Note that this doesn't mean that the noise is zero-mean, which is somewhat interesting and is handled thanks to the convexity assumption and the randomized algorithm.) The goal then is threefold. First, minimize the optimization complexity (namely the expected deviation f(x_hat) - f(x^\star), where x_hat is the estimate of the minimizer). Second, the cumulative regret (namely the additive deviation of f(x_t) - f(x^\star), where x_t is the point chosen by the algorithm). And lastly, an estimate for f(x^\star), where the goal is to minimize the expected deviation between M_hat and f(x^\star), where M_hat is the estimate. At a high level, the authors provide upper bounds on each of these quantities. They consider both the constrained and unconstrained settings, and the cases of no noise or adversarial noise. Lastly, they show a lower bound on the optimization error which matches their bound up to a linear factor in the dimension d. The overall algorithm is a simple (projected) gradient-like procedure with the addition of a kernel which is used to exploit the smoothness. At every iteration, the current estimate x_t for the minimizer of the function is used to build two perturbations taken (roughly) uniformly within the h_t-ball, where h_t dictates how far away from the current iterate x_t the perturbation is chosen. Afterwards the agent queries the function value at these two points, uses the observed function values to compute an estimate of the gradient (which is not necessarily unbiased due to the noise model), and then takes a projected gradient step in that direction (where the kernel is used to exploit the smoothness).
The authors then provide bounds for the three goals discussed above. The first is on the cumulative regret, which scales linearly in the dimension of the space, with explicit dependence on the relevant parameters \beta and \alpha. In order to estimate the minimizer of the function, the authors suggest using the average of the iterates (or a delayed average of the iterates) and provide a similar bound. Lastly, in order to estimate the function value at the minimizer (as the function is unknown, you can't just take f(estimate of the minimizer)), they propose averaging the observed values of the function and provide an upper bound for this setting. Moreover, for the case \beta = 2 they offer improved bounds which scale linearly in the dimension instead of quadratically.
Strengths
This paper provides improved bounds for zeroth-order stochastic optimization in the case where the function is \alpha-strongly convex and \beta-Holder smooth. While the algorithm is based on the existing literature (with derivative-free stochastic optimization dating back to Nemirovski), the authors provide novel bounds with explicit dependence on the relevant parameters d, \alpha, and \beta, unlike prior work which only shows explicit dependence on the number of iterations. In particular, the bounds that they show include:
- an upper bound on the optimization complexity
- an upper bound on the regret
- an upper bound on the estimation error of the minimal value
- a minimax lower bound on the optimization complexity for the case of independent noise (which matches the provided upper bound up to a factor of d)
Moreover, the ability to estimate the minimum value of the function at a rate of 1 / \sqrt{T}, which is the same statistical complexity one would obtain by querying the minimizer directly under independent noise, is both surprising and an interesting result. I believe that these contributions are of interest to the NeurIPS community, namely due to the importance of understanding zeroth-order optimization in various machine learning tasks. This work serves as a starting point for understanding the impact of higher-order smoothness conditions on zeroth-order optimization.
Weaknesses
The authors provide no discussion of the broader impacts. Clearly zeroth-order optimization is very important for various stochastic bandit tasks, as many problems (e.g. optimizing complex chemical systems) have function values which are more readily computable than their gradient information. However, the authors provide no motivation, background, or interesting problems to help put their work in perspective. In particular, the additional assumption of strong convexity (which allows them to get guarantees which scale linearly with respect to the dimension instead of exponentially) is a BIG additional assumption in comparison to the bandit literature, and there was no discussion of motivating problems as to when situations like this might arise. In addition, the related work section mostly concerns the optimization perspective, and ignores a large amount of related work on exploiting smoothness in the stochastic contextual bandit literature. More discussion on this point is in the related work section of the review.
NIPS | Title
Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
Abstract
We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements.
1 Introduction
We study the problem of zero-order stochastic optimization, in which we aim to minimize an unknown strongly convex function via a sequential exploration of its function values, under measurement error, and a closely related problem of continuous (or continuum-armed) stochastic bandits. These problems have received significant attention in the literature, see [1, 2, 3, 4, 7, 9, 10, 14, 17, 18, 34, 16, 20, 21, 30, 25, 31, 13, 27, 28, 19, 29], and are fundamental for many applications in which the derivatives of the function are either too expensive or impossible to compute. A principal goal of this paper is to exploit higher order smoothness properties of the underlying function in order to improve the performance of search algorithms. We derive upper bounds on the estimation error for a class of projected gradient-like algorithms, as well as close matching lower bounds, that characterize the role played by the number of iterations, the strong convexity parameter, the smoothness parameter, the number of variables, and the noise level.
Let f : Rd → R be the function that we wish to minimize over a closed convex subset Θ of Rd. Our approach, outlined in Algorithm 1, builds upon previous work in which a sequential algorithm queries at each iteration a pair of function values, under a general noise model. Specifically, at iteration t the current guess xt for the minimizer of f is used to build two perturbations xt + δt and xt − δt, where the function values are queried subject to additive measurement errors ξt and ξ′t, respectively. The
Algorithm 1 Zero-Order Stochastic Projected Gradient

Requires: kernel K : [−1, 1] → R, step sizes η_t > 0 and parameters h_t, for t = 1, . . . , T
Initialization: generate scalars r_1, . . . , r_T uniformly on the interval [−1, 1], vectors ζ_1, . . . , ζ_T uniformly distributed on the unit sphere S_d = {ζ ∈ R^d : ‖ζ‖ = 1}, and choose x_1 ∈ Θ
For t = 1, . . . , T:
  1. Let y_t = f(x_t + h_t r_t ζ_t) + ξ_t and y′_t = f(x_t − h_t r_t ζ_t) + ξ′_t
  2. Define ĝ_t = (d / (2h_t)) (y_t − y′_t) ζ_t K(r_t)
  3. Update x_{t+1} = Proj_Θ(x_t − η_t ĝ_t)
Return (x_t)_{t=1}^T
values δ_t can be chosen in different ways. In this paper, we set δ_t = h_t r_t ζ_t (Line 1), where h_t > 0 is a suitably chosen small parameter, r_t is random and uniformly distributed on [−1, 1], and ζ_t is uniformly distributed on the unit sphere. The estimate for the gradient is then computed at Line 2 and used inside a projected gradient method scheme to compute the next exploration point. We introduce a suitably chosen kernel K that allows us to take advantage of higher order smoothness of f .
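For concreteness, here is a minimal Python sketch of Algorithm 1; the code and all names in it are ours, not the paper's, and it is meant as an illustration rather than a definitive implementation.

```python
import numpy as np

def zo_spg(f_noisy, x1, T, h, eta, K, proj=lambda x: x, rng=None):
    """Sketch of Algorithm 1 (zero-order stochastic projected gradient).
    f_noisy(x) returns one noisy evaluation f(x) + noise; h(t) and eta(t)
    are the parameter and step-size schedules; K is the smoothing kernel
    on [-1, 1]; proj is the Euclidean projection onto Theta (identity
    when Theta = R^d). Returns the iterates x_1, ..., x_T."""
    rng = np.random.default_rng() if rng is None else rng
    d = x1.size
    x = np.array(x1, dtype=float)
    xs = []
    for t in range(1, T + 1):
        xs.append(x.copy())                              # record x_t
        r = rng.uniform(-1.0, 1.0)                       # r_t ~ U[-1, 1]
        z = rng.standard_normal(d)
        zeta = z / np.linalg.norm(z)                     # zeta_t uniform on the sphere
        ht, et = h(t), eta(t)
        y_plus = f_noisy(x + ht * r * zeta)              # y_t  (Line 1)
        y_minus = f_noisy(x - ht * r * zeta)             # y'_t (Line 1)
        g = (d / (2.0 * ht)) * (y_plus - y_minus) * K(r) * zeta   # g_hat_t (Line 2)
        x = proj(x - et * g)                             # Line 3
    return xs
```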
The idea of using randomized procedures for derivative-free stochastic optimization can be traced back to Nemirovski and Yudin [23, Sec. 9.3] who suggested an algorithm with one query per step at point xt +htζt, with ζt uniform on the unit sphere. Its versions with one, two or more queries were studied in several papers including [1, 3, 16, 31]. Using two queries per step leads to better performance bounds as emphasized in [26, 1, 3, 16, 31, 13]. Randomizing sequences other than uniform on the sphere were also explored: ζt uniformly distributed on a cube [26], Gaussian ζt [24, 25], ζt uniformly distributed on the vertices of a cube [30] or satisfying some general assumptions [12, 13]. Except for [26, 12, 3], these works study settings with low smoothness of f (2-smooth or less) and do not invoke kernels K (i.e. K(·) ≡ 1 and rt ≡ 1 in Algorithm 1). The use of randomization with smoothing kernels was proposed by Polyak and Tsybakov [26] and further developed by Dippon [12], and Bach and Perchet [3] to whom the current form of Algorithm 1 is due.
In this paper we consider higher order smooth functions f satisfying the generalized Hölder condition with parameter β ≥ 2, cf. inequality (1) below. For integer β, this parameter can be roughly interpreted as the number of bounded derivatives. Furthermore, we assume that f is α-strongly convex. For such functions, we address the following two main questions:
(a) What is the performance of Algorithm 1 in terms of the cumulative regret and optimization error, namely what is the explicit dependency of the rate on the main parameters d, T, α, β?
(b) What are the fundamental limits of any sequential search procedure expressed in terms of minimax optimization error?
To handle task (a), we prove upper bounds for Algorithm 1, and to handle (b), we prove minimax lower bounds for any sequential search method.
Contributions. Our main contributions can be summarized as follows:
i) Under an adversarial noise assumption (cf. Assumption 2.1 below), we establish for all β ≥ 2 upper bounds of the order (d²/α) T^{−(β−1)/β} for the optimization risk and (d²/α) T^{1/β} for the cumulative regret of Algorithm 1, both for its constrained and unconstrained versions;
ii) In the case of independent noise satisfying some natural assumptions (including Gaussian noise), we prove a minimax lower bound of the order (d/α) T^{−(β−1)/β} for the optimization risk when α is not very small. This shows that, to within a factor of d, the bound for Algorithm 1 cannot be improved for any β ≥ 2;
iii) We show that, when α is too small, below some specified threshold, higher order smoothness does not help to improve the convergence rate. We prove that in this regime the rate cannot be faster than d/√T, which is not better (to within the dependency on d) than for derivative-free minimization of simply convex functions [2, 18];
iv) For β = 2, we obtain a bracketing of the optimal rate between O(d/√(αT)) and Ω(d/(max(1, α)√T)). In the special case when α is a fixed numerical constant, this validates a conjecture in [30] (claimed there as a proved fact) that the optimal rate for β = 2 scales as d/√T;
v) We propose a simple algorithm for estimating the minimum value min_x f(x) that requires three queries per step and attains the optimal rate 1/√T for all β ≥ 2. The best previous work on this problem [6] suggested a method with exponential complexity and proved a bound of the order c(d, α)/√T for β > 2, where c(d, α) is an unspecified constant.
Notation. Throughout the paper we use the following notation. We let ⟨·, ·⟩ and ‖·‖ be the standard inner product and Euclidean norm on R^d, respectively. For every closed convex set Θ ⊂ R^d and x ∈ R^d we denote by Proj_Θ(x) = argmin{‖z − x‖ : z ∈ Θ} the Euclidean projection of x onto Θ. We assume everywhere that T ≥ 2. We denote by F_β(L) the class of functions with Hölder smoothness β (inequality (1) below). Recall that f is α-strongly convex for some α > 0 if, for any x, y ∈ R^d, it holds that f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (α/2)‖x − y‖². We further denote by F_{α,β}(L) the class of all α-strongly convex functions belonging to F_β(L).

Organization. We start in Section 2 with some preliminary results on the gradient estimator. Section 3 presents our upper bounds for Algorithm 1, both in the constrained and unconstrained case. In Section 4 we observe that a slight modification of Algorithm 1 can be used to estimate the minimum value (rather than the minimizer) of f. Section 5 presents improved upper bounds in the case β = 2. In Section 6 we establish minimax lower bounds. Finally, Section 7 contrasts our results with previous work in the literature and discusses future directions of research.
2 Preliminaries
In this section, we give the definitions, assumptions and basic facts that will be used throughout the paper. For β > 0, let ℓ be the greatest integer strictly less than β. We denote by F_β(L) the set of all functions f : R^d → R that are ℓ times differentiable and satisfy, for all x, z ∈ Θ, the Hölder-type condition
\[
\left| f(z) - \sum_{0 \le |m| \le \ell} \frac{1}{m!}\, D^m f(x)\,(z - x)^m \right| \le L \|z - x\|^{\beta}, \tag{1}
\]
where L > 0, the sum is over the multi-index m = (m_1, . . . , m_d) ∈ N^d, we use the notation m! = m_1! · · · m_d!, |m| = m_1 + · · · + m_d, and we define
\[
D^m f(x)\,\nu^m = \frac{\partial^{|m|} f(x)}{\partial^{m_1} x_1 \cdots \partial^{m_d} x_d}\, \nu_1^{m_1} \cdots \nu_d^{m_d}, \qquad \forall\, \nu = (\nu_1, \dots, \nu_d) \in \mathbb{R}^d.
\]
In this paper, we assume that the gradient estimator defined by Algorithm 1 uses a kernel function K : [−1, 1] → R satisfying
\[
\int K(u)\,du = 0, \quad \int u\,K(u)\,du = 1, \quad \int u^j K(u)\,du = 0, \ \ j = 2, \dots, \ell, \quad \int |u|^{\beta}\,|K(u)|\,du < \infty. \tag{2}
\]
Examples of such kernels, obtained as weighted sums of Legendre polynomials, are given in [26] and further discussed in [3].

Assumption 2.1. It holds, for all t ∈ {1, . . . , T}, that: (i) the random variables ξ_t and ξ′_t are independent from ζ_t and from r_t, and the random variables ζ_t and r_t are independent; (ii) E[ξ_t²] ≤ σ² and E[(ξ′_t)²] ≤ σ², where σ ≥ 0.
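One standard way to obtain kernels satisfying (2) is as weighted sums of Legendre polynomials, as in [26, 3]. The sketch below (ours, not from the paper) builds K(u) = Σ_{m=0}^{ℓ} φ′_m(0) φ_m(u), where the φ_m are Legendre polynomials orthonormalized on [−1, 1]; by orthonormality, ∫ p(u) K(u) du = p′(0) for every polynomial p of degree at most ℓ, which gives exactly the moment conditions in (2).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel(beta):
    """Return K(u) = sum_{m=0}^{l} phi_m'(0) phi_m(u), with phi_m the Legendre
    polynomials orthonormal w.r.t. Lebesgue measure on [-1, 1], and l the
    greatest integer strictly smaller than beta."""
    ell = int(np.ceil(beta)) - 1
    coef = np.zeros(ell + 1)                    # coefficients in the Legendre basis
    for m in range(ell + 1):
        e_m = np.zeros(m + 1)
        e_m[m] = 1.0                            # P_m in the Legendre basis
        norm = np.sqrt((2 * m + 1) / 2.0)       # makes phi_m = norm * P_m orthonormal
        dphi0 = norm * legendre.legval(0.0, legendre.legder(e_m))   # phi_m'(0)
        coef[m] += dphi0 * norm                 # add phi_m'(0) * phi_m
    return lambda u: legendre.legval(u, coef)

# Numerical check of the moment conditions (2) for beta = 4 (so l = 3):
K = legendre_kernel(4.0)
u = np.linspace(-1.0, 1.0, 200001)
for j in range(4):
    print(j, np.trapz(u**j * K(u), u))          # ~0, ~1, ~0, ~0
print(np.trapz(np.abs(u)**4 * np.abs(K(u)), u)) # finite: this is kappa_beta
```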
Note that we do not assume ξ_t and ξ′_t to have zero mean. Moreover, they can be non-random, and no independence between noises at different steps is required, so the setting can be considered adversarial. Having such a relaxed set of assumptions is possible because of the randomization, which, for example, allows the proofs to go through without assuming zero-mean noise.
We will also use the following assumption. Assumption 2.2. Function f : Rd → R is 2-smooth, that is, differentiable on Rd and such that ‖∇f(x)−∇f(x′)‖ ≤ L̄‖x− x′‖ for all x, x′ ∈ Rd, where L̄ > 0.
It is easy to see that this assumption implies that f ∈ F_2(L̄/2). The following lemma gives a bound on the bias of the gradient estimator.

Lemma 2.3. Let f ∈ F_β(L) with β ≥ 1, and let Assumption 2.1(i) hold. Let ĝ_t and x_t be defined by Algorithm 1 and let κ_β = ∫ |u|^β |K(u)| du. Then
\[
\left\| \mathbb{E}[\hat g_t \mid x_t] - \nabla f(x_t) \right\| \le \kappa_\beta L\, d\, h_t^{\beta - 1}. \tag{3}
\]
If K is a weighted sum of Legendre polynomials, then κ_β ≤ 2√(2β) for β ≥ 1 (see, e.g., [3, Appendix A.3]).
The next lemma provides a bound on the stochastic variability of the estimated gradient by controlling its second moment.

Lemma 2.4. Let Assumption 2.1(i) hold, let ĝ_t and x_t be defined by Algorithm 1, and set κ = ∫ K²(u) du. Then

(i) If Θ ⊆ R^d, ∇f(x^*) = 0 and Assumption 2.2 holds,
\[
\mathbb{E}\big[\|\hat g_t\|^2 \mid x_t\big] \le 9\kappa \bar L^2 \left( d\,\|x_t - x^*\|^2 + \frac{d^2 h_t^2}{8} \right) + \frac{3\kappa d^2 \sigma^2}{2 h_t^2},
\]

(ii) If f ∈ F_2(L) and Θ is a closed convex subset of R^d such that max_{x∈Θ} ‖∇f(x)‖ ≤ G, then
\[
\mathbb{E}\big[\|\hat g_t\|^2 \mid x_t\big] \le 9\kappa \left( G^2 d + \frac{L^2 d^2 h_t^2}{2} \right) + \frac{3\kappa d^2 \sigma^2}{2 h_t^2}.
\]
3 Upper bounds
In this section, we provide upper bounds on the cumulative regret and on the optimization error of Algorithm 1, which are defined as
\[
\sum_{t=1}^{T} \mathbb{E}[f(x_t) - f(x)] \qquad \text{and} \qquad \mathbb{E}[f(\hat{x}_T) - f(x^*)],
\]
respectively, where x ∈ Θ and x̂_T is an estimator after T queries. Note that the provided upper bound for the cumulative regret is valid for any x ∈ Θ. First we consider Algorithm 1 when the convex set Θ is bounded (constrained case).

Theorem 3.1 (Upper Bound, Constrained Case). Let f ∈ F_{α,β}(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0, then the cumulative regret of Algorithm 1 with
\[
h_t = \left( \frac{3\kappa\sigma^2}{2(\beta-1)(\kappa_\beta L)^2} \right)^{\frac{1}{2\beta}} t^{-\frac{1}{2\beta}},
\qquad
\eta_t = \frac{2}{\alpha t}, \qquad t = 1, \dots, T,
\]
satisfies
\[
\forall x \in \Theta: \quad \sum_{t=1}^{T} \mathbb{E}[f(x_t) - f(x)] \le \frac{1}{\alpha}\left( d^2\left( A_1 T^{1/\beta} + A_2 \right) + A_3\, d \log T \right), \tag{4}
\]
where A_1 = 3β(κσ²)^{(β−1)/β}(κ_β L)^{2/β}, A_2 = c̄L̄²(σ/L)^{2/β} + 9κG²/d with a constant c̄ > 0 depending only on β, and A_3 = 9κG². The optimization error of the averaged estimator x̄_T = (1/T)∑_{t=1}^{T} x_t satisfies
\[
\mathbb{E}[f(\bar{x}_T) - f(x^*)] \le \frac{1}{\alpha}\left( d^2\left( \frac{A_1}{T^{\frac{\beta-1}{\beta}}} + \frac{A_2}{T} \right) + A_3\, \frac{d \log T}{T} \right), \tag{5}
\]
where x^* = arg min_{x∈Θ} f(x). If σ = 0, then the cumulative regret and the optimization error of Algorithm 1 with any h_t chosen small enough and η_t = 2/(αt) satisfy the bounds (4) and (5), respectively, with A_1 = 0, A_2 = 9κG²/d and A_3 = 10κG².
Proof sketch. We use the definition of Algorithm 1 and the strong convexity of f to obtain an upper bound for ∑_{t=1}^{T} E[f(x_t) − f(x) | x_t], which depends on the bias term ∑_{t=1}^{T} ‖E[ĝ_t | x_t] − ∇f(x_t)‖ and on the stochastic error term ∑_{t=1}^{T} E[‖ĝ_t‖²]. By substituting h_t (derived from balancing these two terms) and η_t into Lemmas 2.3 and 2.4, we obtain upper bounds for ∑_{t=1}^{T} ‖E[ĝ_t | x_t] − ∇f(x_t)‖ and ∑_{t=1}^{T} E[‖ĝ_t‖²] that imply the desired upper bound for ∑_{t=1}^{T} E[f(x_t) − f(x) | x_t], due to a recursive argument in the spirit of [5].
In the non-noisy case (σ = 0) we get the rate (d/α) log T for the cumulative regret and (d/α)(log T)/T for the optimization error. As far as the optimization error is concerned, this rate is not optimal, since one can achieve a much faster rate under strong convexity [25]. However, for the cumulative regret in our derivative-free setting, it remains an open question whether the result of Theorem 3.1 can be improved. Previous papers on derivative-free online methods with no noise [1, 13, 16] provide slower rates than (d/α) log T; the best known so far is (d²/α) log T, cf. [1, Corollary 5]. We may also notice that the cumulative regret bounds of Theorem 3.1 extend trivially to the case where we query functions f_t depending on t rather than a single f. Another immediate fact is that on the r.h.s. of inequalities (4) and (5) we can take the minimum with GBT and GB, respectively, where B is the Euclidean diameter of Θ. Finally, the factor log T in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts.
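As an illustration (again ours, not the paper's), the schedules of Theorem 3.1 can be wired into the zo_spg and legendre_kernel sketches above; κ and κ_β are computed numerically from the kernel, and the test function f(x) = (α/2)‖x‖² is α-strongly convex and 2-smooth.

```python
import numpy as np

d, T, beta = 5, 10_000, 2.0
alpha, L, sigma = 1.0, 1.0, 0.1
K = legendre_kernel(beta)                            # from the sketch above
u = np.linspace(-1.0, 1.0, 20001)
kappa = np.trapz(K(u) ** 2, u)                       # kappa = int K^2
kappa_beta = np.trapz(np.abs(u) ** beta * np.abs(K(u)), u)

c_h = (3 * kappa * sigma**2 / (2 * (beta - 1) * (kappa_beta * L) ** 2)) ** (1 / (2 * beta))
h = lambda t: c_h * t ** (-1 / (2 * beta))           # h_t of Theorem 3.1
eta = lambda t: 2.0 / (alpha * t)                    # eta_t = 2 / (alpha t)

rng = np.random.default_rng(0)
f_noisy = lambda x: 0.5 * alpha * x @ x + sigma * rng.standard_normal()
proj = lambda x: x / max(1.0, np.linalg.norm(x))     # projection onto the unit ball
xs = zo_spg(f_noisy, np.ones(d) / np.sqrt(d), T, h, eta, K, proj, rng)
x_bar = np.mean(xs, axis=0)                          # averaged estimator of Theorem 3.1
print(np.linalg.norm(x_bar))                         # should be close to 0 = x*
```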
We now study the performance of Algorithm 1 when Θ = R^d. In this case we make the following choice for the parameters h_t and η_t in Algorithm 1:
\[
h_t = T^{-\frac{1}{2\beta}}, \quad \eta_t = \frac{1}{\alpha T}, \quad t = 1, \dots, T_0,
\qquad
h_t = t^{-\frac{1}{2\beta}}, \quad \eta_t = \frac{2}{\alpha t}, \quad t = T_0 + 1, \dots, T, \tag{6}
\]
where T_0 = max{k ≥ 0 : C_1 L̄² d > α² k/2} and C_1 is a positive constant¹ depending only on the kernel K(·) (it is defined in the proof of Theorem 3.2 in Appendix B); recall that L̄ is the Lipschitz constant of the gradient ∇f. Finally, define the estimator
\[
\bar{x}_{T_0,T} = \frac{1}{T - T_0} \sum_{t = T_0 + 1}^{T} x_t. \tag{7}
\]
¹If T_0 = 0, the algorithm does not use (6). The assumptions of Theorem 3.2 are such that the condition T > T_0 holds.
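For illustration, here is a hedged sketch (ours, with hypothetical helper names) of the two-phase schedule (6) and the tail average (7); we pass T_0 in directly, since the constant C_1 defining it is only specified in Appendix B.

```python
import numpy as np

def schedule_unconstrained(T, T0, alpha, beta):
    """Schedules (6): a constant burn-in phase for t <= T0, then decaying steps."""
    h = lambda t: T ** (-1.0 / (2 * beta)) if t <= T0 else t ** (-1.0 / (2 * beta))
    eta = lambda t: 1.0 / (alpha * T) if t <= T0 else 2.0 / (alpha * t)
    return h, eta

def tail_average(iterates, T0):
    """Estimator (7): average the iterates x_{T0+1}, ..., x_T only."""
    return np.mean(np.asarray(iterates)[T0:], axis=0)
```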
Theorem 3.2 (Upper Bounds, Unconstrained Case). Let f ∈ F_{α,β}(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold. Assume also that α > √(C^* d / T), where C^* > 72κL̄². Let the x_t's be the updates of Algorithm 1 with Θ = R^d, h_t and η_t as in (6), and a non-random x_1 ∈ R^d. Then the estimator defined by (7) satisfies
\[
\mathbb{E}[f(\bar{x}_{T_0,T}) - f(x^*)] \le C\kappa\bar{L}^2\, \frac{d}{\alpha T}\,\|x_1 - x^*\|^2 + C\,\frac{d^2}{\alpha}\left( (\kappa_\beta L)^2 + \kappa\left( \bar{L}^2 + \sigma^2 \right) \right) T^{-\frac{\beta-1}{\beta}}, \tag{8}
\]
where C > 0 is a constant depending only on β and x^* = arg min_{x∈R^d} f(x).
Proof sketch. As in the proof of Theorem 3.1, we apply Lemmas 2.3 and 2.4. However, we can only use Lemma 2.4(i) and not Lemma 2.4(ii), and thus the bound on the stochastic error now involves ‖x_t − x^*‖². So, after taking expectations, we need to control an additional term containing r_t = E[‖x_t − x^*‖²]. However, the issue concerns only small t (t ≤ T_0 ∼ d²/α), since for bigger t this term is compensated due to the strong convexity with parameter α > √(C^* d/T). This motivates the method where we use the first T_0 iterations to get a suitably good (but not rate-optimal) bound on r_{T_0+1} and then proceed analogously to Theorem 3.1 for iterations t ≥ T_0 + 1.
4 Estimation of f(x∗)
In this section, we apply the above results to the estimation of the minimum value f(x^*) = min_{x∈Θ} f(x) for functions f in the class F_{α,β}(L). The literature related to this problem assumes that the x_t's are either i.i.d. with density bounded away from zero on its support [32] or chosen sequentially [22, 6]. In the first case, from the results in [32] one can deduce that f(x^*) cannot be estimated better than at the slow rate T^{−β/(2β+d)}. For the second case, which is our setting, the best result so far is obtained in [6]. The estimator of f(x^*) in [6] is defined via a multi-stage procedure whose complexity increases exponentially with the dimension d, and it is shown to achieve (asymptotically, for T greater than an exponent of d) the c(d, α)/√T rate for functions in F_{α,β}(L) with β > 2. Here, c(d, α) is some constant depending on d and α in an unspecified way.
Observe that f(x̄T ) is not an estimator since it depends on the unknown f , so Theorem 3.1 does not provide a result about estimation of f(x∗). In this section, we show that using the computationally simple Algorithm 1 and making one more query per step (that is, having three queries per step in total) allows us to achieve the 1/ √ T rate for all β ≥ 2 with no dependency on the dimension in the main term. Note that the 1/ √ T rate cannot be improved. Indeed, one cannot estimate f(x∗) with a better rate even using the ideal but non-realizable oracle that makes all queries at point x∗. That is, even if x∗ is known and we sample T times f(x∗) + ξt with independent centered variables ξt, the error is still of the order 1/ √ T .
In order to construct our estimator, at each step t of Algorithm 1 we make, along with y_t and y′_t, a third query y′′_t = f(x_t) + ξ′′_t, where ξ′′_t is some noise and the x_t are the updates of Algorithm 1. We estimate f(x^*) by M̂ = (1/T)∑_{t=1}^{T} y′′_t. The properties of the estimator M̂ are summarized in the next theorem, which is an immediate corollary of Theorem 3.1.

Theorem 4.1. Let the assumptions of Theorem 3.1 be satisfied. Let σ > 0 and assume that (ξ′′_t)_{t=1}^{T} are independent random variables with E[ξ′′_t] = 0 and E[(ξ′′_t)²] ≤ σ² for t = 1, . . . , T. If f attains its minimum at a point x^* ∈ Θ, then
\[
\mathbb{E}\,|\hat{M} - f(x^*)| \le \frac{\sigma}{T^{1/2}} + \frac{1}{\alpha}\left( d^2\left( \frac{A_1}{T^{\frac{\beta-1}{\beta}}} + \frac{A_2}{T} \right) + A_3\, \frac{d \log T}{T} \right). \tag{9}
\]
Remark 4.2. With three queries per step, the risk (error) of the oracle that makes all queries at the point x^* does not exceed σ/√(3T). Thus, for β > 2 the estimator M̂ achieves asymptotically, as T → ∞, the oracle risk up to a numerical constant factor. We do not obtain such a sharp property for β = 2, in which case the remainder term in Theorem 4.1, accounting for the accuracy of Algorithm 1, is of the same order as the main term σ/√T.
Note that in Theorem 4.1 the noises (ξ′′_t)_{t=1}^{T} are assumed to be independent zero-mean random variables, which is essential to obtain the 1/√T rate. Nevertheless, we do not require independence between the noises (ξ′′_t)_{t=1}^{T} and the noises in the other two queries, (ξ_t)_{t=1}^{T} and (ξ′_t)_{t=1}^{T}. Another interesting point is that for β = 2 the third query is not needed, and f(x^*) is estimated at the 1/√T rate either by M̂ = (1/T)∑_{t=1}^{T} y_t or by M̂ = (1/T)∑_{t=1}^{T} y′_t. This is an easy consequence of the above argument, the property (19) (see Lemma A.3 in the appendix), which is specific to the case β = 2, and the fact that the optimal choice of h_t is of order t^{−1/4} for β = 2.
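A minimal sketch (ours) of the resulting three-query procedure: run Algorithm 1, make one extra noisy query at each iterate, and average.

```python
import numpy as np

def estimate_min_value(f_noisy, iterates):
    """M_hat = (1/T) sum_t y''_t with y''_t = f(x_t) + xi''_t; here f_noisy
    plays the role of the third query, whose noise must be independent and
    zero-mean as in Theorem 4.1. `iterates` are the x_t of Algorithm 1
    (e.g., as returned by the zo_spg sketch above)."""
    return float(np.mean([f_noisy(x) for x in iterates]))
```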
5 Improved bounds for β = 2
In this section, we consider the case β = 2 and obtain improved bounds that scale as d rather than d² with the dimension, in the constrained optimization setting analogous to Theorem 3.1. First note that for β = 2 we can simplify the algorithm: the kernel K is redundant when β = 2, and therefore in this section we define the approximate gradient as
\[
\hat g_t = \frac{d}{2h_t}\,(y_t - y'_t)\,\zeta_t, \tag{10}
\]
where y_t = f(x_t + h_t ζ_t) + ξ_t and y′_t = f(x_t − h_t ζ_t) + ξ′_t. A well-known observation, going back to [23], is that ĝ_t defined in (10) is an unbiased estimator of the gradient of the surrogate function f̂_t defined by
\[
\hat f_t(x) = \mathbb{E} f(x + h_t \tilde\zeta), \qquad \forall x \in \mathbb{R}^d,
\]
where the expectation E is taken with respect to the random vector ζ̃ uniformly distributed on the unit ball B_d = {u ∈ R^d : ‖u‖ ≤ 1}. The properties of the surrogate f̂_t are described in Lemmas A.2 and A.3 in the appendix.
The improvement in the rate for β = 2 is due to the fact that we can view Algorithm 1 with ĝ_t defined in (10) as SGD on the surrogate function. Then the bias of approximating f by f̂_t scales as h_t², which is smaller than the squared bias of approximating the gradient arising in the proof of Theorem 3.1, which scales as d² h_t^{2(β−1)} = d² h_t² when β = 2. On the other hand, the stochastic variability terms are the same for both methods of proof. This explains the gain in the dependency on d. However, this technique does not work for β > 2, since then the error of approximating f by f̂_t, which is of the order h_t^β (with h_t small), becomes too large compared to the bias d² h_t^{2(β−1)} of Theorem 3.1.
Theorem 5.1. Let f ∈ F_{α,2}(L) with α, L > 0. Let Assumption 2.1 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0, then for Algorithm 1 with ĝ_t defined in (10) and parameters
\[
h_t = \left( \frac{3d^2\sigma^2}{4L\alpha t + 9L^2 d^2} \right)^{1/4}
\qquad \text{and} \qquad
\eta_t = \frac{1}{\alpha t}
\]
we have
\[
\forall x \in \Theta: \quad \mathbb{E}\sum_{t=1}^{T}\big( f(x_t) - f(x) \big) \le \min\left( GBT,\ 2\sqrt{3L\sigma}\,\frac{d}{\sqrt{\alpha}}\sqrt{T} + A_4\,\frac{d^2}{\alpha}\log T \right), \tag{11}
\]
where B is the Euclidean diameter of Θ and A_4 = 6.5Lσ + 22G²/d. Moreover, if x^* = arg min_{x∈Θ} f(x), the optimization error of the averaged estimator x̄_T = (1/T)∑_{t=1}^{T} x_t is bounded as
\[
\mathbb{E}[f(\bar{x}_T) - f(x^*)] \le \min\left( GB,\ 2\sqrt{3L\sigma}\,\frac{d}{\sqrt{\alpha T}} + A_4\,\frac{d^2}{\alpha}\frac{\log T}{T} \right). \tag{12}
\]
Finally, if σ = 0, then the cumulative regret of Algorithm 1 with any h_t chosen small enough and η_t = 1/(αt), and the optimization error of its averaged version, are of the order (d²/α) log T and (d²/α)(log T)/T, respectively.
Note that the terms (d²/α) log T and (d²/α)(log T)/T appearing in these bounds can be improved to (d/α) log T and (d/α)(log T)/T at the expense of assuming that the norm ‖∇f‖ is uniformly bounded by G not only on Θ but also on a large enough Euclidean neighborhood of Θ. Moreover, the log T factor in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts. A major conclusion is that, when σ > 0 and we consider the optimization error, those terms are negligible with respect to d/√(αT), and thus an attainable rate is min(1, d/√(αT)).
We close this section by noting, in connection with the bandit setting, that the bound (11) extends straightforwardly (up to a change in numerical constants) to the cumulative regret of the form E ∑_{t=1}^{T} ( f_t(x_t ± h_tζ_t) − f_t(x) ), where the losses are measured at the query points and f depends on t. This fact follows immediately from the proof of Theorem 5.1 presented in the appendix and the property (19); see Lemma A.3 in the appendix.
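To illustrate the simplification, here is a hedged sketch (ours) of the β = 2 variant: the kernel-free two-point estimate (10) combined with the h_t and η_t of Theorem 5.1; we assume σ > 0 so that h_t is well defined.

```python
import numpy as np

def zo_spg_beta2(f_noisy, x1, T, alpha, L, sigma, proj=lambda x: x, rng=None):
    """beta = 2 version: no kernel, no r_t; h_t and eta_t as in Theorem 5.1."""
    rng = np.random.default_rng() if rng is None else rng
    d = x1.size
    x = np.array(x1, dtype=float)
    xs = []
    for t in range(1, T + 1):
        xs.append(x.copy())                                 # record x_t
        z = rng.standard_normal(d)
        zeta = z / np.linalg.norm(z)                        # zeta_t on the sphere
        ht = (3 * d**2 * sigma**2 / (4 * L * alpha * t + 9 * L**2 * d**2)) ** 0.25
        g = (d / (2 * ht)) * (f_noisy(x + ht * zeta) - f_noisy(x - ht * zeta)) * zeta  # (10)
        x = proj(x - g / (alpha * t))                       # eta_t = 1 / (alpha t)
    return xs
```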
6 Lower bound
In this section we prove a minimax lower bound on the optimization error over all sequential strategies that allow the query points to depend on the past. For t = 1, . . . , T, we assume that y_t = f(z_t) + ξ_t, and we consider strategies of choosing the query points as z_t = Φ_t(z_1^{t−1}, y_1^{t−1}), where the Φ_t are Borel functions and z_1 ∈ R^d is any random variable. We denote by Π_T the set of all such strategies. The noises ξ_1, . . . , ξ_T are assumed in this section to be independent with cumulative distribution function F satisfying the condition
\[
\int \log\big( dF(u)/dF(u+v) \big)\, dF(u) \le I_0 v^2, \qquad |v| < v_0, \tag{13}
\]
for some 0 < I_0 < ∞, 0 < v_0 ≤ ∞. Using the second-order expansion of the logarithm w.r.t. v, one can verify that this assumption is satisfied when F has a smooth enough density with finite Fisher information. For example, for the Gaussian distribution F this condition holds with v_0 = ∞. Note that the class Π_T includes the sequential strategy of Algorithm 1, which corresponds to taking T even and choosing z_t = x_t + ζ_t r_t and z_t = x_t − ζ_t r_t for even and odd t, respectively. The presence of the randomizing sequences ζ_t, r_t is not crucial for the lower bound. Indeed, Theorem 6.1 below is valid conditionally on any randomization, and thus the lower bound remains valid when taking the expectation over the randomizing distribution.
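As a concrete check of (13) for the Gaussian case (a short computation we add for clarity; it is not spelled out in the text), take F = N(0, σ²):
\[
\log\frac{dF(u)}{dF(u+v)} = \frac{(u+v)^2 - u^2}{2\sigma^2} = \frac{2uv + v^2}{2\sigma^2},
\qquad
\int \log\frac{dF(u)}{dF(u+v)}\, dF(u) = \frac{v^2}{2\sigma^2},
\]
since ∫ u dF(u) = 0; hence (13) holds with I_0 = 1/(2σ²) and v_0 = ∞.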
Theorem 6.1. Let Θ = {x ∈ R^d : ‖x‖ ≤ 1}. For α, L > 0 and β ≥ 2, let F′_{α,β} denote the set of functions f that attain their minimum over R^d in Θ and belong to F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G}, where G > 2α. Then for any strategy in the class Π_T we have
\[
\sup_{f \in \mathcal{F}'_{\alpha,\beta}} \mathbb{E}\left[ f(z_T) - \min_x f(x) \right] \ge C \min\left( \max\left(\alpha,\, T^{-1/2+1/\beta}\right),\ \frac{d}{\sqrt{T}},\ \frac{d}{\alpha}\, T^{-\frac{\beta-1}{\beta}} \right), \tag{14}
\]
and
\[
\sup_{f \in \mathcal{F}'_{\alpha,\beta}} \mathbb{E}\left[ \| z_T - x^*(f) \|^2 \right] \ge C \min\left( 1,\ \frac{d}{T^{1/\beta}},\ \frac{d}{\alpha^2}\, T^{-\frac{\beta-1}{\beta}} \right), \tag{15}
\]
where C > 0 is a constant that does not depend on T, d, and α, and x^*(f) is the minimizer of f on Θ.
The proof is given in Appendix B. It extends the proof technique of Polyak and Tsybakov [28], by applying it to more than two probe functions. The proof takes into account dependency on the dimension d, and on α. The final result is obtained by applying Assouad’s Lemma, see e.g. [33].
We stress that the condition G > 2α in this theorem is necessary: it must hold whenever the intersection F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G} is not empty. Notice also that the threshold T^{−1/2+1/β} on the strong convexity parameter α plays an important role in the bounds (14) and (15). Indeed, for α below this threshold, the bounds become independent of α. Moreover, in this regime, the rate of (14) becomes min(T^{1/β}, d)/√T, which is asymptotically d/√T and thus not better, as a function of T, than the rate attained for zero-order minimization of simply convex functions [2, 7]. Intuitively, it seems reasonable that α-strong convexity should be of no added value for very small α; Theorem 6.1 quantifies exactly how small such α should be. Also, quite naturally, the threshold becomes smaller as the smoothness β increases. Finally, note that for β = 2 the lower bounds (14) and (15) are, in the interesting regime of large enough T, of order d/(max(α, 1)√T) and d/(max(α², 1)√T), respectively. This highlights the near minimax optimal properties of Algorithm 1 in the setting of Theorem 5.1.
7 Discussion and related work
There is a great deal of attention to zero-order feedback stochastic optimization and convex bandit problems in the recent literature. Several settings are studied: (i) deterministic, in the sense that the queries contain no random noise and we query functions f_t depending on t rather than a single f, where the f_t are Lipschitz or 2-smooth [16, 1, 24, 25, 28, 31]; (ii) stochastic with two-point feedback, where the two noisy evaluations are obtained with the same noise and the noisy functions are Lipschitz or 2-smooth [24, 25, 13] (this setting does not differ much from (i) in terms of the analysis and the results); (iii) stochastic, where the noises ξ_i are independent zero-mean random variables [15, 26, 12, 2, 30, 3, 19, 4, 20]. In this paper, we considered a setting more general than (iii), allowing for adversarial noise (no independence or zero-mean assumption, in contrast to (iii), and no Lipschitz assumption, in contrast to settings (i) and (ii)); settings (i) and (ii) are both covered by our results when the noise is set to zero.
One part of our results are bounds on the cumulative regret, cf. (4) and (11). We emphasize that they remain trivially valid if the queries are from ft depending on t instead of f , and thus cover the setting (i). To the best of our knowledge, there were no such results in this setting previously, except for [3] that gives bounds with suboptimal dependency on T in the case of classical (non-adversarial) noise. In the non-noisy case, we get bounds on the cumulative regret with faster rates than previously known for the setting (i). It remains an open question whether these bounds can be improved.
The second part of our results, dealing with the optimization error E[f(x̄_T) − f(x^*)], is closely related to the work on derivative-free stochastic optimization under strong convexity and smoothness assumptions initiated in [15, 26] and more recently developed in [12, 19, 30, 3]. It was shown in [26] that the minimax optimal rate for f ∈ F_{α,β}(L) scales as c(α, d) T^{−(β−1)/β}, where c(α, d) is an unspecified function of α and d (for d = 1, an upper bound of the same order was earlier established in [15]). The issue of establishing non-asymptotic fundamental limits as a function of the main parameters of the problem (α, d and T) was first addressed in [19], giving a lower bound Ω(√(d/T)) for β = 2. This was improved to Ω(d/√T) when α ≍ 1 by Shamir [30], who conjectured that the rate d/√T is optimal for β = 2, which indeed follows from our Theorem 5.1 (although [30] claims the optimality as a proved fact by referring to results in [1], such results cannot be applied in setting (iii) because the noise cannot be considered Lipschitz). A result similar to Theorem 5.1 is stated without proof in Bach and Perchet [3, Proposition 7], but not for the cumulative regret and with a suboptimal rate in the non-noisy case. For integer β ≥ 3, Bach and Perchet [3] present explicit upper bounds as functions of α, d and T with, however, suboptimal dependency on T, except for their Proposition 8, which is problematic (see Appendix C for the details). Finally, by slightly modifying the proof of Theorem 3.1, we get that the estimation risk E[‖x̄_T − x^*‖²] is O((d²/α²) T^{−(β−1)/β}), which is to within a factor of d of the main term in the lower bound (15) (see Appendix D for details).
The lower bound in Theorem 6.1 is, to the best of our knowledge, the first result providing non-asymptotic fundamental limits under a general configuration of α, d and T. The known lower bounds [26, 19, 30] either give no explicit dependency on α and d, or treat the special case β = 2 and α ≍ 1. Moreover, as an interesting consequence of our lower bound we find that, for a small strong convexity parameter α (namely, below the T^{−1/2+1/β} threshold), the best achievable rate cannot be substantially faster than for simply convex functions, at least for moderate dimensions. Indeed, for such small α, our lower bound is asymptotically Ω(d/√T), independently of the smoothness index β and of α, while the achievable rate for convex functions is shown to be d^{16}/√T in [2] and improved to d^{3.75}/√T in [7] (both up to log-factors). The gap here is only in the dependency on the dimension. Our results imply that for α above the T^{−1/2+1/β} threshold, the gap between upper and lower bounds is much smaller: our upper bounds in this regime scale as (d²/α) T^{−(β−1)/β}, while the lower bound of Theorem 6.1 is of the order Ω((d/α) T^{−(β−1)/β}); moreover, for β = 2, the upper and lower bounds match in the dependency on d.
We hope that our work will stimulate further study at the intersection of zero-order optimization and convex bandits in machine learning. An important open problem is to study novel algorithms that match our lower bound simultaneously in all main parameters. For example, a class of algorithms worth exploring is those using memory of the gradient, in the spirit of Nesterov's accelerated method. Yet another important open problem is to study lower bounds for the regret in our setting. Finally, it would be valuable to study extensions of our work to locally strongly convex functions.
Broader impact
The present work improves our understanding of zero-order optimization methods in specific scenarios in which the underlying function we wish to optimize has certain regularity properties. We believe that a solid theoretical foundation is beneficial to the development of practical machine learning and statistical methods. We expect no direct or indirect ethical risks from our research.
Acknowledgments and Disclosure of Funding
We would like to thank Francis Bach, Vianney Perchet, Saverio Salzo, and Ohad Shamir for helpful discussions. The first and second authors were partially supported by SAP SE. The research of A.B. Tsybakov is supported by a grant of the French National Research Agency (ANR), “Investissements d’Avenir” (LabEx Ecodec/ANR-11-LABX-0047). | 1. What is the focus of the paper regarding zeroth-order optimization?
2. What are the strengths of the proposed approach, particularly in terms of technical correctness and explicit constants?
3. What are the weaknesses of the paper, especially regarding its contribution and significance compared to prior works?
4. Do you have any concerns about the setting of the proof and its lack of practical implementation discussion?
5. Are there any questions regarding the minimax lower bounds or the cumulative regret and optimization error bounds? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies the problem of zeroth order optimization of a strongly convex function, a problem already studied by Bach and Perchet. They analyze the algorithm of Bach and Perchet under additional assumptions, namely high-order smoothness (using a Hölder-like condition to quantify the smoothness) and derive bounds on the cumulative regret and optimization error in this setting. They also prove minimax lower bounds on this problem.
Strengths
The claims are all proved and the proofs seem technically correct. I appreciated to see that all constants are explicit in the statements of the theorems, even if this does not improve the readability of the paper. I also appreciated to see sketches of the proofs of the main theorems in the main text.
Weaknesses
I am unsure of the significance of this contribution. This paper builds heavily on the previous algorithm of Bach and Perchet. The contribution of this work is the analysis of this algorithm under the higher-order smoothness assumption, which does not seem especially interesting to me. On top of that, I am under the impression that the proofs are derived in the setting where all the parameters are known (strong convexity constant, smoothness parameters, sigma, etc.), which does not seem realistic. I would have been interested in seeing numerical experiments. How can this algorithm be implemented in practice? And how do you ensure that its implementation achieves the same bounds without knowing all parameters?
NIPS | Title
Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
Abstract
We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements.
1 Introduction
We study the problem of zero-order stochastic optimization, in which we aim to minimize an unknown strongly convex function via a sequential exploration of its function values, under measurement error, and a closely related problem of continuous (or continuum-armed) stochastic bandits. These problems have received significant attention in the literature, see [1, 2, 3, 4, 7, 9, 10, 14, 17, 18, 34, 16, 20, 21, 30, 25, 31, 13, 27, 28, 19, 29], and are fundamental for many applications in which the derivatives of the function are either too expensive or impossible to compute. A principal goal of this paper is to exploit higher order smoothness properties of the underlying function in order to improve the performance of search algorithms. We derive upper bounds on the estimation error for a class of projected gradient-like algorithms, as well as close matching lower bounds, that characterize the role played by the number of iterations, the strong convexity parameter, the smoothness parameter, the number of variables, and the noise level.
Let f : Rd → R be the function that we wish to minimize over a closed convex subset Θ of Rd. Our approach, outlined in Algorithm 1, builds upon previous work in which a sequential algorithm queries at each iteration a pair of function values, under a general noise model. Specifically, at iteration t the current guess xt for the minimizer of f is used to build two perturbations xt + δt and xt − δt, where the function values are queried subject to additive measurement errors ξt and ξ′t, respectively. The
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Algorithm 1 Zero-Order Stochastic Projected Gradient
Requires Kernel K : [−1, 1]→ R, step size ηt > 0 and parameter ht, for t = 1, . . . , T Initialization Generate scalars r1, . . . , rT uniformly on the interval [−1, 1], vectors ζ1, . . . , ζT uniformly distributed on the unit sphere Sd = {ζ ∈ Rd : ‖ζ‖ = 1}, and choose x1 ∈ Θ For t = 1, . . . , T
1. Let yt = f(xt + htrtζt) + ξt and y′t = f(xt − htrtζt) + ξ′t, 2. Define ĝt = d2ht (yt − y ′ t)ζtK(rt)
3. Update xt+1 = ProjΘ(xt − ηtĝt) Return (xt)Tt=1
values δt can be chosen in different ways. In this paper, we set δt = htrrζt (Line 1), where ht > 0 is a suitably chosen small parameter, rt is random and uniformly distributed on [−1, 1], and ζt is uniformly distributed on the unit sphere. The estimate for the gradient is then computed at Line 2 and used inside a projected gradient method scheme to compute the next exploration point. We introduce a suitably chosen kernel K that allows us to take advantage of higher order smoothness of f .
The idea of using randomized procedures for derivative-free stochastic optimization can be traced back to Nemirovski and Yudin [23, Sec. 9.3] who suggested an algorithm with one query per step at point xt +htζt, with ζt uniform on the unit sphere. Its versions with one, two or more queries were studied in several papers including [1, 3, 16, 31]. Using two queries per step leads to better performance bounds as emphasized in [26, 1, 3, 16, 31, 13]. Randomizing sequences other than uniform on the sphere were also explored: ζt uniformly distributed on a cube [26], Gaussian ζt [24, 25], ζt uniformly distributed on the vertices of a cube [30] or satisfying some general assumptions [12, 13]. Except for [26, 12, 3], these works study settings with low smoothness of f (2-smooth or less) and do not invoke kernels K (i.e. K(·) ≡ 1 and rt ≡ 1 in Algorithm 1). The use of randomization with smoothing kernels was proposed by Polyak and Tsybakov [26] and further developed by Dippon [12], and Bach and Perchet [3] to whom the current form of Algorithm 1 is due.
In this paper we consider higher order smooth functions f satisfying the generalized Hölder condition with parameter β ≥ 2, cf. inequality (1) below. For integer β, this parameter can be roughly interpreted as the number of bounded derivatives. Furthermore, we assume that f is α-strongly convex. For such functions, we address the following two main questions:
(a) What is the performance of Algorithm 1 in terms of the cumulative regret and optimization error, namely what is the explicit dependency of the rate on the main parameters d, T, α, β?
(b) What are the fundamental limits of any sequential search procedure expressed in terms of minimax optimization error?
To handle task (a), we prove upper bounds for Algorithm 1, and to handle (b), we prove minimax lower bounds for any sequential search method.
Contributions. Our main contributions can be summarized as follows: i) Under an adversarial noise assumption (cf. Assumption 2.1 below), we establish for all β ≥ 2 upper bounds of the order d2
α T − β−1β for the optimization risk and d 2 α T 1 β for the cumulative regret of Algorithm 1, both for its
constrained and unconstrained versions; ii) In the case of independent noise satisfying some natural assumptions (including the Gaussian noise), we prove a minimax lower bound of the order dαT − β−1β for the optimization risk when α is not very small. This shows that to within the factor of d the bound for Algorithm 1 cannot be improved for all β ≥ 2; iii) We show that, when α is too small, below some specified threshold, higher order smoothness does not help to improve the convergence rate. We prove that in this regime the rate cannot be faster than d/ √ T , which is not better (to within the dependency on d) than for derivative-free minimization of simply convex functions [2, 18]; iv) For β = 2, we obtain a bracketing of the optimal rate between O(d/ √ αT ) and Ω(d/(max(1, α) √ T )). In a special case when α is a fixed numerical constant, this validates a conjecture in [30] (claimed there as proved fact) that the optimal rate for β = 2 scales as d/ √ T ; v) We propose a simple algorithm of estimation of the value minx f(x) requiring three queries per step and attaining the optimal rate 1/ √ T for all
β ≥ 2. The best previous work on this problem [6] suggested a method with exponential complexity and proved a bound of the order c(d, α)/ √ T for β > 2 where c(d, α) is an unspecified constant.
Notation. Throughout the paper we use the following notation. We let 〈·, ·〉 and ‖ · ‖ be the standard inner product and Euclidean norm on Rd, respectively. For every close convex set Θ ⊂ Rd and x ∈ Rd we denote by ProjΘ(x) = argmin{‖z−x‖ : z ∈ Θ} the Euclidean projection of x to Θ. We assume everywhere that T ≥ 2. We denote by Fβ(L) the class of functions with Hölder smoothness β (inequality (1) below). Recall that f is α-strongly convex for some α > 0 if, for any x, y ∈ Rd it holds that f(y) ≥ f(x) + 〈∇f(x), y − x〉+ α2 ‖x− y‖
2. We further denote by Fα,β(L) the class of all α-strongly convex functions belonging to Fβ(L). Organization. We start in Section 2 with some preliminary results on the gradient estimator. Section 3 presents our upper bounds for Algorithm 1, both in the constrained and unconstrained case. In Section 4 we observe that a slight modification of Algorithm 1 can be used to estimated the minimum value (rather than the minimizer) of f . Section 4 presents improved upper bounds in the case β = 2. In Section 6 we establish minimax lower bounds. Finally, Section 7 contrasts our results with previous work in the literature and discusses future directions of research.
2 Preliminaries
In this section, we give the definitions, assumptions and basic facts that will be used throughout the paper. For β > 0, let ` be the greatest integer strictly less than β. We denote by Fβ(L) the set of all functions f : Rd → R that are ` times differentiable and satisfy, for all x, z ∈ Θ the Hölder-type condition ∣∣∣∣f(z)− ∑
0≤|m|≤`
1
m! Dmf(x)(z − x)m ∣∣∣∣ ≤ L‖z − x‖β , (1) where L > 0, the sum is over the multi-index m = (m1, ...,md) ∈ Nd, we used the notation m! = m1! · · ·md!, |m| = m1 + · · ·+md, and we defined
Dmf(x)νm = ∂|m|f(x)
∂m1x1 · · · ∂mdxd νm11 · · · ν md d , ∀ν = (ν1, . . . , νd) ∈ R d.
In this paper, we assume that the gradient estimator defined by Algorithm 1 uses a kernel function K : [−1, 1]→ R satisfying∫
K(u)du = 0, ∫ uK(u)du = 1, ∫ ujK(u)du = 0, j = 2, . . . , `, ∫ |u|β |K(u)|du <∞. (2)
Examples of such kernels obtained as weighted sums of Legendre polynomials are given in [26] and further discussed in [3]. Assumption 2.1. It holds, for all t ∈ {1, . . . , T}, that: (i) the random variables ξt and ξ′t are independent from ζt and from rt, and the random variables ζt and rt are independent; (ii) E[ξ2t ] ≤ σ2, and E[(ξ′t)2] ≤ σ2, where σ ≥ 0.
Note that we do not assume ξt and ξ′t to have zero mean. Moreover, they can be non-random and no independence between noises on different steps is required, so that the setting can be considered as adversarial. Having such a relaxed set of assumptions is possible because of randomization that, for example, allows the proofs go through without assuming the zero mean noise.
We will also use the following assumption. Assumption 2.2. Function f : Rd → R is 2-smooth, that is, differentiable on Rd and such that ‖∇f(x)−∇f(x′)‖ ≤ L̄‖x− x′‖ for all x, x′ ∈ Rd, where L̄ > 0.
It is easy to see that this assumption implies that f ∈ F2(L̄/2). The following lemma gives a bound on the bias of the gradient estimator. Lemma 2.3. Let f ∈ Fβ(L), with β ≥ 1 and let Assumption 2.1 (i) hold. Let ĝt and xt be defined by Algorithm 1 and let κβ = ∫ |u|β |K(u)|du. Then
‖E[ĝt |xt]−∇f(xt)‖ ≤ κβLdhβ−1t . (3)
If K be a weighted sum of Legendre polynomials, κβ ≤ 2 √
2β, with β ≥ 1 (see e.g., [3, Appendix A.3]).
The next lemma provides a bound on the stochastic variability of the estimated gradient by controlling its second moment. Lemma 2.4. Let Assumption 2.1(i) hold, let ĝt and xt be defined by Algorithm 1 and set κ =∫ K2(u)du. Then
(i) If Θ ⊆ Rd,∇f(x∗) = 0 and Assumption 2.2 holds, E[‖ĝt‖2 |xt] ≤ 9κL̄2 ( d‖xt − x∗‖2 +
d2h2t 8
) + 3κd2σ2
2h2t ,
(ii) If f ∈ F2(L) and Θ is a closed convex subset of Rd such that max x∈Θ ‖∇f(x)‖ ≤ G, then
E[‖ĝt‖2 |xt] ≤ 9κ ( G2d+
L2d2h2t 2
) + 3κd2σ2
2h2t .
3 Upper bounds
In this section, we provide upper bounds on the cumulative regret and on the optimization error of Algorithm 1, which are defined as
T∑ t=1 E[f(xt)− f(x)],
and E[f(x̂T )− f(x∗)], respectively, where x ∈ Θ and x̂T is an estimator after T queries. Note that the provided upper bound for cumulative regret is valid for any x ∈ Θ. First we consider Algorithm 1 when the convex set Θ is bounded (constrained case). Theorem 3.1. (Upper Bound, Constrained Case.) Let f ∈ Fα,β(L) with α,L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold and let Θ be a convex compact subset of Rd. Assume that maxx∈Θ ‖∇f(x)‖ ≤ G. If σ > 0 then the cumulative regret of Algorithm 1 with
ht =
( 3κσ2
2(β − 1)(κβL)2
) 1 2β
t− 1 2β , ηt =
2
αt , t = 1, . . . , T
satisfies
∀x ∈ Θ : T∑ t=1 E[f(xt)− f(x)] ≤ 1 α ( d2 ( A1T 1/β +A2 ) +A3d log T ) , (4)
where A1 = 3β(κσ2) β−1 β (κβL) 2 β , A2 = c̄L̄2(σ/L) 2 β + 9κG2/d with constant c̄ > 0 depending only on β, and A3 = 9κG2. The optimization error of averaged estimator x̄T = 1T ∑T t=1 xt satisfies
E[f(x̄T )− f(x∗)] ≤ 1
α
( d2 ( A1
T β−1 β
+ A2 T
) +A3 d log T
T
) , (5)
where x∗ = arg minx∈Θ f(x). If σ = 0, then the cumulative regret and the optimization error of Algorithm 1 with any ht chosen small enough and ηt = 2αt satisfy the bounds (4) and (5), respectively, with A1 = 0, A2 = 9κG2/d and A3 = 10κG2.
Proof sketch. We use the definition of Algorithm 1 and strong convexity of f to obtain an upper bound for ∑T t=1 E[f(xt)− f(x) |xt], which depends on the bias term ∑T t=1 ‖E[ĝt |xt]−∇f(xt)‖
and on the stochastic error term ∑T t=1 E[‖ĝt‖2]. By substituting ht (that is derived from balancing the
two terms) and ηt in Lemmas 2.3 and 2.4 we obtain upper bounds for ∑T t=1 ‖E[ĝt |xt]−∇f(xt)‖ and∑T
t=1 E[‖ĝt‖2] that imply the desired upper bound for ∑T t=1 E[f(xt)− f(x) |xt] due to a recursive
argument in the spirit of [5].
In the non-noisy case (σ = 0) we get the rate dα log T for the cumulative regret, and d α log T T for the optimization error. In what concerns the optimization error, this rate is not optimal since one can achieve much faster rate under strong convexity [25]. However, for the cumulative regret in our derivative-free setting it remains an open question whether the result of Theorem 3.1 can be improved. Previous papers on derivative-free online methods with no noise [1, 13, 16] provide slower rates than (d/α) log T . The best known so far is (d2/α) log T , cf. [1, Corollary 5]. We may also notice that the cumulative regret bounds of Theorem 3.1 trivially extend to the case when we query functions ft depending on t rather than a single f . Another immediate fact is that on the r.h.s. of inequalities (4) and (5) we can take the minimum with GBT and GB, respectively, where B is the Euclidean diameter of Θ. Finally, the factor log T in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T , in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts.
We now study the performance of Algorithm 1 when Θ = Rd. In this case we make the following choice for the parameters ht and ηt in Algorithm 1:
ht = T − 12β , ηt =
1
αT , t = 1, . . . , T0,
ht = t − 12β , ηt =
2
αt , t = T0 + 1, . . . , T,
(6)
where T0 = max { k ≥ 0 : C1L̄2d > α2k/2 } and C1 is a positive constant1 depending only on the kernel K(·) (this is defined in the proof of Theorem 3.2 in Appendix B) and recall L̄ is the Lipschitz constant on the gradient∇f . Finally, define the estimator
x̄T0,T = 1
T − T0 T∑ t=T0+1 xt. (7)
Theorem 3.2. (Upper Bounds, Unconstrained Case.) Let f ∈ Fα,β(L) with α,L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold. Assume also that α > √ C∗d/T , where C∗ > 72κL̄2. Let xt’s be the updates of Algorithm 1 with Θ = Rd, ht and ηt as in (6) and a non-random x1 ∈ Rd. Then the estimator defined by (7) satisfies
E[f(x̄T0,T )− f(x∗)] ≤ CκL̄2 d
αT ‖x1 − x∗‖2 + C
d2
α
( (κβL) 2 + κ ( L̄2 + σ2 )) T− β−1 β (8)
where C > 0 is a constant depending only on β and x∗ = arg minx∈Rd f(x).
Proof sketch. As in the proof of Theorem 3.1, we apply Lemmas 2.3 and 2.4. But we can only use Lemma 2.4(i) and not Lemma 2.4(ii) and thus the bound on the stochastic error now involves ‖xt − x∗‖2. So, after taking expectations, we need to control an additional term containing rt = E[‖xt − x∗‖2]. However, the issue concerns only small t (t ≤ T0 ∼ d2/α) since for bigger t this term is compensated due to the strong convexity with parameter α > √ C∗d/T . This motivates the method where we use the first T0 iterations to get a suitably good (but not rate optimal) bound on rT0+1 and then proceed analogously to Theorem 3.1 for iterations t ≥ T0 + 1.
4 Estimation of f(x∗)
In this section, we apply the above results to estimation of the minimum value f(x∗) = minx∈Θ f(x) for functions f in the class Fα,β(L). The literature related to this problem assumes that xt’s are either i.i.d. with density bounded away from zero on its support [32] or xt’s are chosen sequentially [22, 6]. In the fist case, from the results in [32] one can deduce that f(x∗) cannot be estimated better than at the slow rate T−β/(2β+d). For the second case, which is our setting, the best result so far is obtained in [6]. The estimator of f(x∗) in [6] is defined via a multi-stage procedure whose complexity increases exponentially with the dimension d and it is shown to achieve (asymptotically,
1If T0 = 0 the algorithm does not use (6). Assumptions of Theorem 3.2 are such that condition T > T0 holds.
for T greater than an exponent of d) the c(d, α)/ √ T rate for functions in Fα,β(L) with β > 2. Here, c(d, α) is some constant depending on d and α in an unspecified way.
Observe that f(x̄T ) is not an estimator since it depends on the unknown f , so Theorem 3.1 does not provide a result about estimation of f(x∗). In this section, we show that using the computationally simple Algorithm 1 and making one more query per step (that is, having three queries per step in total) allows us to achieve the 1/ √ T rate for all β ≥ 2 with no dependency on the dimension in the main term. Note that the 1/ √ T rate cannot be improved. Indeed, one cannot estimate f(x∗) with a better rate even using the ideal but non-realizable oracle that makes all queries at point x∗. That is, even if x∗ is known and we sample T times f(x∗) + ξt with independent centered variables ξt, the error is still of the order 1/ √ T .
In order to construct our estimator, at any step t of Algorithm 1 we make along with yt and y′t the third query y′′t = f(xt) + ξ ′′ t , where ξ ′′ t is some noise and xt are the updates of Algorithm 1. We
estimate f(x∗) by M̂ = 1T ∑T t=1 y ′′ t . The properties of estimator M̂ are summarized in the next theorem, which is an immediate corollary of Theorem 3.1. Theorem 4.1. Let the assumptions of Theorem 3.1 be satisfied. Let σ > 0 and assume that (ξ′′t )Tt=1 are independent random variables with E[ξ′′t ] = 0 and E[(ξ′′t )2] ≤ σ2 for t = 1, . . . , T . If f attains its minimum at point x∗ ∈ Θ, then
E|M̂ − f(x∗)| ≤ σ T 1 2 + 1 α
( d2 ( A1
T β−1 β
+ A2 T
) +A3 d log T
T
) . (9)
Remark 4.2. With three queries per step, the risk (error) of the oracle that makes all queries at point x∗ does not exceed σ/ √ 3T . Thus, for β > 2 the estimator M̂ achieves asymptotically as T →∞ the oracle risk up to a numerical constant factor. We do not obtain such a sharp property for β = 2, in which case the remainder term in Theorem 4.1 accounting for the accuracy of Algorithm 1 is of the same order as the main term σ/ √ T .
Note that in Theorem 4.1 the noises (ξ′′t ) T t=1 are assumed to be independent and zero mean random
variables, which is essential to obtain the 1/ √ T rate. Nevertheless, we do not require independence between the noises (ξ′′t ) T t=1 and the noises in the other two queries (ξt) T t=1 and (ξ ′ t) T t=1. Another
interesting point is that for β = 2 the third query is not needed and f(x∗) is estimated with the 1/ √ T rate either by M̂ = 1T ∑T t=1 yt or by M̂ = 1 T ∑T t=1 y ′ t. This is an easy consequence of the above argument, the property (19) – see Lemma A.3 in the appendix – which is specific for the case β = 2, and the fact that the optimal choice of ht is of order t−1/4 for β = 2.
5 Improved bounds for β = 2
In this section, we consider the case β = 2 and obtain improved bounds that scale as d rather than d2 with the dimension in the constrained optimization setting analogous to Theorem 3.1. First note that for β = 2 we can simplify the algorithm. The use of kernel K is redundant when β = 2, and therefore in this section we define the approximate gradient as
ĝt = d
2ht (yt − y′t)ζt, (10)
where yt = f(x + htζ̃) and y′t = f(x − htζ̃). A well-known observation that goes back to [23] consists in the fact that ĝt defined in (10) is an unbiased estimator of the gradient of the surrogate function f̂t defined by
f̂t(x) = Ef(x+ htζ̃), ∀x ∈ Rd,
where the expectation E is taken with respect to the random vector ζ̃ uniformly distributed on the unit ball Bd = {u ∈ Rd : ‖u‖ ≤ 1}. The properties of the surrogate f̂t are described in Lemmas A.2 and A.3 presented in the appendix.
The improvement in the rate that we get for β = 2 is due to the fact that we can consider Algorithm 1 with ĝt defined in (10) as the SGD for the surrogate function. Then the bias of approximating f by f̂t scales as h2t , which is smaller than the squared bias of approximating the gradient arising in the proof
of Theorem 3.1 that scales as d2h2(β−1)t = d 2h2t when β = 2. On the other hand, the stochastic variability terms are the same for both methods of proof. This explains the gain in dependency on d. However, this technique does not work for β > 2 since then the error of approximating f by f̂t, which is of the order h β t (with ht small), becomes too large compared to the bias d 2h 2(β−1) t of Theorem 3.1.
Theorem 5.1. Let f ∈ Fα,2(L) with α,L > 0. Let Assumption 2.1 hold and let Θ be a convex compact subset of Rd. Assume that maxx∈Θ ‖∇f(x)‖ ≤ G. If σ > 0 then for Algorithm 1 with ĝt defined in (10) and parameters ht = ( 3d2σ2
4Lαt+9L2d2 )1/4 and ηt = 1αt we have
∀x ∈ Θ : E T∑ t=1 ( f(xt)− f(x) ) ≤ min ( GBT, 2 √ 3Lσ d√ α √ T +A4 d2 α log T ) , (11)
where B is the Euclidean diameter of Θ and A4 = 6.5Lσ + 22G2/d. Moreover, if x∗ = arg minx∈Θ f(x) the optimization error of averaged estimator x̄T = 1 T ∑T t=1 xt is bounded as
E[f(x̄T )− f(x∗)] ≤ min ( GB, 2 √ 3Lσ
d√ αT +A4 d2 α log T T
) . (12)
Finally, if σ = 0, then the cumulative regret of Algorithm 1 with any ht chosen small enough and ηt = 1 αt and the optimization error of its averaged version are of the order d2 α log T and d2 α log T T , respectively.
Note that the terms d 2 α log T and d2 α log T T appearing in these bounds can be improved to d α log T and d α log T T at the expense of assuming that the norm ‖∇f‖ is uniformly bounded by G not only on Θ but also on a large enough Euclidean neighborhood of Θ. Moreover, the log T factor in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts. A major conclusion is that, when σ > 0 and we consider the optimization error, those terms are negligible with respect to d/ √ αT and thus an attainable rate is min(1, d/ √ αT ).
We close this section by noting, in connection with the bandit setting, that the bound (11) extends straightforwardly (up to a change in numerical constants) to the cumulative regret of the form E ∑T t=1 ( ft(xt ± htζt)− ft(x) ) , where the losses are measured at the query points and f depends on t. This fact follows immediately from the proof of Theorem 5.1 presented in the appendix and the property (19), see Lemma A.3 in the appendix.
6 Lower bound
In this section we prove a minimax lower bound on the optimization error over all sequential strategies that allow the query points depend on the past. For t = 1, . . . , T , we assume that yt = f(zt) + ξt and we consider strategies of choosing the query points as zt = Φt(zt−11 , y t−1 1 ) where Φt are Borel functions and z1 ∈ Rd is any random variable. We denote by ΠT the set of all such strategies. The noises ξ1, . . . , ξT are assumed in this section to be independent with cumulative distribution function F satisfying the condition∫
log ( dF (u)/dF (u+ v) ) dF (u) ≤ I0v2, |v| < v0 (13)
for some 0 < I0 <∞, 0 < v0 ≤ ∞. Using the second order expansion of the logarithm w.r.t. v, one can verify that this assumption is satisfied when F has a smooth enough density with finite Fisher information. For example, for Gaussian distribution F this condition holds with v0 =∞. Note that the class ΠT includes the sequential strategy of Algorithm 1 that corresponds to taking T as an even number, and choosing zt = xt + ζtrt and zt = xt − ζtrt for even t and odd t, respectively. The presence of the randomizing sequences ζt, rt is not crucial for the lower bound. Indeed, Theorem 6.1 below is valid conditionally on any randomization, and thus the lower bound remains valid when taking expectation over the randomizing distribution.
Theorem 6.1. Let Θ = {x ∈ R^d : ‖x‖ ≤ 1}. For α, L > 0, β ≥ 2, let F′_{α,β} denote the set of functions f that attain their minimum over R^d in Θ and belong to F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G}, where G > 2α. Then for any strategy in the class Π_T we have

sup_{f∈F′_{α,β}} E[ f(z_T) − min_x f(x) ] ≥ C min( max(α, T^{−1/2+1/β}), d/√T, (d/α) T^{−(β−1)/β} ),   (14)

and

sup_{f∈F′_{α,β}} E[ ‖z_T − x*(f)‖² ] ≥ C min( 1, d/T^{1/β}, (d/α²) T^{−(β−1)/β} ),   (15)

where C > 0 is a constant that does not depend on T, d, and α, and x*(f) is the minimizer of f on Θ.
The proof is given in Appendix B. It extends the proof technique of Polyak and Tsybakov [28] by applying it to more than two probe functions. The proof takes into account the dependency on the dimension d and on α. The final result is obtained by applying Assouad's Lemma, see, e.g., [33].
We stress that the condition G > 2α in this theorem is necessary. It should always hold if the intersection F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G} is not empty. Notice also that the threshold T^{−1/2+1/β} on the strong convexity parameter α plays an important role in the bounds (14) and (15). Indeed, for α below this threshold, the bounds become independent of α. Moreover, in this regime, the rate of (14) becomes min(T^{1/β}, d)/√T, which is asymptotically d/√T and thus not better, as a function of T, than the rate attained for zero-order minimization of simply convex functions [2, 7]. Intuitively, it seems reasonable that α-strong convexity should be of no added value for very small α. Theorem 6.1 allows us to quantify exactly how small such α should be. Also, quite naturally, the threshold becomes smaller when the smoothness β increases. Finally, note that for β = 2 the lower bounds (14) and (15) are, in the interesting regime of large enough T, of order d/(max(α, 1)√T) and d/(max(α², 1)√T), respectively. This highlights the near minimax optimal properties of Algorithm 1 in the setting of Theorem 5.1.
7 Discussion and related work
Zero-order feedback stochastic optimization and convex bandit problems have received a great deal of attention in the recent literature. Several settings are studied: (i) deterministic, in the sense that the queries contain no random noise and we query functions f_t depending on t rather than a single f, where the f_t are Lipschitz or 2-smooth [16, 1, 24, 25, 28, 31]; (ii) stochastic with two-point feedback, where the two noisy evaluations are obtained with the same noise and the noisy functions are Lipschitz or 2-smooth [24, 25, 13] (this setting does not differ much from (i) in terms of the analysis and the results); (iii) stochastic, where the noises ξ_i are independent zero-mean random variables [15, 26, 12, 2, 30, 3, 19, 4, 20]. In this paper, we considered a setting more general than (iii), allowing for adversarial noise: no independence or zero-mean assumption, in contrast to (iii), and no Lipschitz assumption, in contrast to settings (i) and (ii); the latter two settings are covered by our results when the noise is set to zero.
One part of our results consists of bounds on the cumulative regret, cf. (4) and (11). We emphasize that they remain trivially valid if the queries are from f_t depending on t instead of f, and thus cover setting (i). To the best of our knowledge, there were no such results in this setting previously, except for [3], which gives bounds with suboptimal dependency on T in the case of classical (non-adversarial) noise. In the non-noisy case, we get bounds on the cumulative regret with faster rates than previously known for setting (i). It remains an open question whether these bounds can be improved.
The second part of our results, dealing with the optimization error E[f(x̄_T) − f(x*)], is closely related to the work on derivative-free stochastic optimization under strong convexity and smoothness assumptions initiated in [15, 26] and more recently developed in [12, 19, 30, 3]. It was shown in [26] that the minimax optimal rate for f ∈ F_{α,β}(L) scales as c(α, d)T^{−(β−1)/β}, where c(α, d) is an unspecified function of α and d (for d = 1 an upper bound of the same order was earlier established in [15]). The issue of establishing non-asymptotic fundamental limits as a function of the main parameters of the problem (α, d and T) was first addressed in [19], giving a lower bound Ω(√(d/T)) for β = 2. This was improved to Ω(d/√T) when α ≍ 1 by Shamir [30], who conjectured that the rate d/√T is optimal for β = 2, which indeed follows from our Theorem 5.1 (although [30] claims the optimality as a proved fact by referring to results in [1], such results cannot be applied in setting (iii) because the noise cannot be considered as Lipschitz). A result similar to Theorem 5.1 is stated without proof in Bach and Perchet [3, Proposition 7], but not for the cumulative regret and with a suboptimal rate in the non-noisy case. For integer β ≥ 3, Bach and Perchet [3] present explicit upper bounds as functions of α, d and T with, however, suboptimal dependency on T, except for their Proposition 8, which is problematic (see Appendix C for the details). Finally, by slightly modifying the proof of Theorem 3.1 we get that the estimation risk E[‖x̄_T − x*‖²] is O((d²/α²)T^{−(β−1)/β}), which is within a factor d of the main term in the lower bound (15) (see Appendix D for details).
The lower bound in Theorem 6.1 is, to the best of our knowledge, the first result providing non-asymptotic fundamental limits under a general configuration of α, d and T. The known lower bounds [26, 19, 30] either give no explicit dependency on α and d, or treat the special case β = 2 and α ≍ 1. Moreover, as an interesting consequence of our lower bound we find that, for small strong convexity parameter α (namely, below the T^{−1/2+1/β} threshold), the best achievable rate cannot be substantially faster than for simply convex functions, at least for moderate dimensions. Indeed, for such small α, our lower bound is asymptotically Ω(d/√T), independently of the smoothness index β and of α, while the achievable rate for convex functions is shown to be d^{16}/√T in [2] and improved to d^{3.75}/√T in [7] (both up to log-factors). The gap here is only in the dependency on the dimension. Our results imply that for α above the T^{−1/2+1/β} threshold, the gap between upper and lower bounds is much smaller. Thus, our upper bounds in this regime scale as (d²/α)T^{−(β−1)/β} while the lower bound of Theorem 6.1 is of the order Ω((d/α)T^{−(β−1)/β}); moreover, for β = 2, the upper and lower bounds match in the dependency on d.
We hope that our work will stimulate further study at the intersection of zero-order optimization and convex bandits in machine learning. An important open problem is to design novel algorithms that match our lower bound simultaneously in all main parameters. For example, one class of algorithms worth exploring is those using memory of the gradient, in the spirit of Nesterov's accelerated method. Yet another important open problem is to study lower bounds for the regret in our setting. Finally, it would be valuable to study extensions of our work to locally strongly convex functions.
Broader impact
The present work improves our understanding of zero-order optimization methods in specific scenarios in which the underlying function we wish to optimize has certain regularity properties. We believe that a solid theoretical foundation is beneficial to the development of practical machine learning and statistical methods. We expect no direct or indirect ethical risks from our research.
Acknowledgments and Disclosure of Funding
We would like to thank Francis Bach, Vianney Perchet, Saverio Salzo, and Ohad Shamir for helpful discussions. The first and second authors were partially supported by SAP SE. The research of A.B. Tsybakov is supported by a grant of the French National Research Agency (ANR), “Investissements d’Avenir” (LabEx Ecodec/ANR-11-LABX-0047). | 1. What is the focus and contribution of the paper regarding zeroth-order convex optimization?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and computational efficiency?
3. What are the weaknesses of the paper, especially regarding its presentation and explanations?
4. Do you have any questions or concerns regarding the paper's assumptions and additional considerations?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors study the zeroth-order convex optimization problem and analyze a previous algorithm of Bach and Perchet under the additional assumptions that the target function is beta-Hölder smooth and alpha-strongly convex. The authors prove upper bounds on the regret when the only assumption on the noise is that the variance is bounded (though the algorithm needs to be tuned with knowledge of this variance). The authors also provide a guarantee for estimation of f(x^*) for a modified algorithm that uses a third query to estimate the function value. The authors provide stronger convergence guarantees in the beta=2 case (exploiting the connection to the surrogate function), as well as lower bounds for arbitrary beta>=2.
Strengths
Barring my being ignorant of large parts of this literature, the technical contributions seem strong. In particular, the algorithms analyzed are simple, intuitive, and computationally trivial. Pointers to improvements from the literature (such as the log(T) improvement that taking a tail average would yield) are appreciated. Presenting a lower bound really strengthens the upper bounds in the paper and provides a more holistic picture of the algorithm, and the paper seems to provide a satisfactory addition to our understanding of how smoothness affects rates. Given the fundamental nature of zeroth-order approximation, I believe the results in this paper would be very relevant to the community.
Weaknesses
Sometimes the presentation is dense: a table, for example, would be a more efficient way to compare the derived rates with past results. There are a few discoveries I wish the authors would discuss a bit more, including: - the generalization to "adversarial noise"; e.g., explain why this generalization is plausible. - showing the bias-variance decomposition explicitly, at least once, would be nice. - can you explain why the kernel is redundant when beta=2 (line 204)? - Since the claimed lower bound is novel, can you explain what is new about the construction?
NIPS | Title
Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
Abstract
We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements.
1 Introduction
We study the problem of zero-order stochastic optimization, in which we aim to minimize an unknown strongly convex function via a sequential exploration of its function values, under measurement error, and a closely related problem of continuous (or continuum-armed) stochastic bandits. These problems have received significant attention in the literature, see [1, 2, 3, 4, 7, 9, 10, 14, 17, 18, 34, 16, 20, 21, 30, 25, 31, 13, 27, 28, 19, 29], and are fundamental for many applications in which the derivatives of the function are either too expensive or impossible to compute. A principal goal of this paper is to exploit higher order smoothness properties of the underlying function in order to improve the performance of search algorithms. We derive upper bounds on the estimation error for a class of projected gradient-like algorithms, as well as close matching lower bounds, that characterize the role played by the number of iterations, the strong convexity parameter, the smoothness parameter, the number of variables, and the noise level.
Let f : R^d → R be the function that we wish to minimize over a closed convex subset Θ of R^d. Our approach, outlined in Algorithm 1, builds upon previous work in which a sequential algorithm queries at each iteration a pair of function values, under a general noise model. Specifically, at iteration t the current guess x_t for the minimizer of f is used to build two perturbations x_t + δ_t and x_t − δ_t, where the function values are queried subject to additive measurement errors ξ_t and ξ′_t, respectively.
Algorithm 1 Zero-Order Stochastic Projected Gradient
Requires: Kernel K : [−1, 1] → R, step size η_t > 0 and parameter h_t, for t = 1, …, T
Initialization: Generate scalars r_1, …, r_T uniformly on the interval [−1, 1], vectors ζ_1, …, ζ_T uniformly distributed on the unit sphere S_d = {ζ ∈ R^d : ‖ζ‖ = 1}, and choose x_1 ∈ Θ
For t = 1, …, T:
1. Let y_t = f(x_t + h_t r_t ζ_t) + ξ_t and y′_t = f(x_t − h_t r_t ζ_t) + ξ′_t
2. Define ĝ_t = (d / (2h_t)) (y_t − y′_t) ζ_t K(r_t)
3. Update x_{t+1} = Proj_Θ(x_t − η_t ĝ_t)
Return (x_t)_{t=1}^T
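For concreteness, the following is a minimal numpy sketch of Algorithm 1; the kernel K is passed as a callable (one concrete construction is sketched after condition (2) in Section 2). All arguments are schematic assumptions of ours rather than fixed choices from the paper.

```python
import numpy as np

def algorithm1(f, K, d, T, h, eta, proj, noise, rng):
    """Zero-Order Stochastic Projected Gradient (Algorithm 1).

    f: objective; K: kernel on [-1, 1]; h, eta: schedules t -> h_t, eta_t;
    proj: Euclidean projection onto Theta; noise: callable returning one xi.
    """
    x = proj(np.zeros(d))
    iterates = []
    for t in range(1, T + 1):
        r = rng.uniform(-1.0, 1.0)
        zeta = rng.standard_normal(d)
        zeta /= np.linalg.norm(zeta)               # uniform on the unit sphere
        y_plus = f(x + h(t) * r * zeta) + noise()  # step 1
        y_minus = f(x - h(t) * r * zeta) + noise()
        g = d / (2 * h(t)) * (y_plus - y_minus) * zeta * K(r)  # step 2
        x = proj(x - eta(t) * g)                   # step 3
        iterates.append(x.copy())
    return iterates
```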
The values δ_t can be chosen in different ways. In this paper, we set δ_t = h_t r_t ζ_t (Line 1), where h_t > 0 is a suitably chosen small parameter, r_t is random and uniformly distributed on [−1, 1], and ζ_t is uniformly distributed on the unit sphere. The estimate for the gradient is then computed at Line 2 and used inside a projected gradient scheme to compute the next exploration point. We introduce a suitably chosen kernel K that allows us to take advantage of higher order smoothness of f.
The idea of using randomized procedures for derivative-free stochastic optimization can be traced back to Nemirovski and Yudin [23, Sec. 9.3], who suggested an algorithm with one query per step at point x_t + h_tζ_t, with ζ_t uniform on the unit sphere. Its versions with one, two or more queries were studied in several papers including [1, 3, 16, 31]. Using two queries per step leads to better performance bounds, as emphasized in [26, 1, 3, 16, 31, 13]. Randomizing sequences other than uniform on the sphere were also explored: ζ_t uniformly distributed on a cube [26], Gaussian ζ_t [24, 25], ζ_t uniformly distributed on the vertices of a cube [30] or satisfying some general assumptions [12, 13]. Except for [26, 12, 3], these works study settings with low smoothness of f (2-smooth or less) and do not invoke kernels K (i.e., K(·) ≡ 1 and r_t ≡ 1 in Algorithm 1). The use of randomization with smoothing kernels was proposed by Polyak and Tsybakov [26] and further developed by Dippon [12], and Bach and Perchet [3], to whom the current form of Algorithm 1 is due.
In this paper we consider higher order smooth functions f satisfying the generalized Hölder condition with parameter β ≥ 2, cf. inequality (1) below. For integer β, this parameter can be roughly interpreted as the number of bounded derivatives. Furthermore, we assume that f is α-strongly convex. For such functions, we address the following two main questions:
(a) What is the performance of Algorithm 1 in terms of the cumulative regret and optimization error, namely what is the explicit dependency of the rate on the main parameters d, T, α, β?
(b) What are the fundamental limits of any sequential search procedure expressed in terms of minimax optimization error?
To handle task (a), we prove upper bounds for Algorithm 1, and to handle (b), we prove minimax lower bounds for any sequential search method.
Contributions. Our main contributions can be summarized as follows: i) Under an adversarial noise assumption (cf. Assumption 2.1 below), we establish for all β ≥ 2 upper bounds of the order (d²/α) T^{−(β−1)/β} for the optimization risk and (d²/α) T^{1/β} for the cumulative regret of Algorithm 1, both for its constrained and unconstrained versions; ii) In the case of independent noise satisfying some natural assumptions (including the Gaussian noise), we prove a minimax lower bound of the order (d/α) T^{−(β−1)/β} for the optimization risk when α is not very small. This shows that to within the factor of d the bound for Algorithm 1 cannot be improved for all β ≥ 2; iii) We show that, when α is too small, below some specified threshold, higher order smoothness does not help to improve the convergence rate. We prove that in this regime the rate cannot be faster than d/√T, which is not better (to within the dependency on d) than for derivative-free minimization of simply convex functions [2, 18]; iv) For β = 2, we obtain a bracketing of the optimal rate between O(d/√(αT)) and Ω(d/(max(1, α)√T)). In the special case when α is a fixed numerical constant, this validates a conjecture in [30] (claimed there as a proved fact) that the optimal rate for β = 2 scales as d/√T; v) We propose a simple algorithm for estimation of the value min_x f(x) requiring three queries per step and attaining the optimal rate 1/√T for all β ≥ 2. The best previous work on this problem [6] suggested a method with exponential complexity and proved a bound of the order c(d, α)/√T for β > 2, where c(d, α) is an unspecified constant.
Notation. Throughout the paper we use the following notation. We let ⟨·, ·⟩ and ‖·‖ be the standard inner product and Euclidean norm on R^d, respectively. For every closed convex set Θ ⊂ R^d and x ∈ R^d we denote by Proj_Θ(x) = argmin{‖z − x‖ : z ∈ Θ} the Euclidean projection of x onto Θ. We assume everywhere that T ≥ 2. We denote by F_β(L) the class of functions with Hölder smoothness β (inequality (1) below). Recall that f is α-strongly convex for some α > 0 if, for any x, y ∈ R^d, it holds that f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (α/2)‖x − y‖². We further denote by F_{α,β}(L) the class of all α-strongly convex functions belonging to F_β(L).

Organization. We start in Section 2 with some preliminary results on the gradient estimator. Section 3 presents our upper bounds for Algorithm 1, both in the constrained and unconstrained case. In Section 4 we observe that a slight modification of Algorithm 1 can be used to estimate the minimum value (rather than the minimizer) of f. Section 5 presents improved upper bounds in the case β = 2. In Section 6 we establish minimax lower bounds. Finally, Section 7 contrasts our results with previous work in the literature and discusses future directions of research.
2 Preliminaries
In this section, we give the definitions, assumptions and basic facts that will be used throughout the paper. For β > 0, let ℓ be the greatest integer strictly less than β. We denote by F_β(L) the set of all functions f : R^d → R that are ℓ times differentiable and satisfy, for all x, z ∈ Θ, the Hölder-type condition

| f(z) − ∑_{0≤|m|≤ℓ} (1/m!) D^m f(x) (z − x)^m | ≤ L ‖z − x‖^β,   (1)

where L > 0, the sum is over the multi-index m = (m_1, …, m_d) ∈ N^d, we used the notation m! = m_1! ⋯ m_d!, |m| = m_1 + ⋯ + m_d, and we defined

D^m f(x) ν^m = ( ∂^{|m|} f(x) / (∂^{m_1}x_1 ⋯ ∂^{m_d}x_d) ) ν_1^{m_1} ⋯ ν_d^{m_d},  ∀ν = (ν_1, …, ν_d) ∈ R^d.
In this paper, we assume that the gradient estimator defined by Algorithm 1 uses a kernel function K : [−1, 1] → R satisfying

∫ K(u) du = 0,  ∫ uK(u) du = 1,  ∫ u^j K(u) du = 0 for j = 2, …, ℓ,  ∫ |u|^β |K(u)| du < ∞.   (2)
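One concrete way to build such a kernel — our own illustration; the paper points instead to weighted sums of Legendre polynomials, discussed next — is to take K to be the degree-ℓ polynomial whose Lebesgue moments on [−1, 1] match (2), which reduces to a small linear system with the (invertible) Hankel moment matrix:

```python
import numpy as np

def make_kernel(ell):
    """Degree-ell polynomial K(u) = sum_k c_k u^k with
    int_{-1}^{1} u^j K(u) du = (1 if j == 1 else 0) for j = 0, ..., ell."""
    # Monomial moments on [-1, 1]: int u^n du = 2/(n+1) for even n, 0 for odd n.
    M = np.array([[2.0 / (j + k + 1) if (j + k) % 2 == 0 else 0.0
                   for k in range(ell + 1)] for j in range(ell + 1)])
    rhs = np.zeros(ell + 1)
    rhs[1] = 1.0
    c = np.linalg.solve(M, rhs)
    return lambda u: np.polyval(c[::-1], u)

K = make_kernel(ell=3)                 # matches beta in (3, 4]
u = np.linspace(-1.0, 1.0, 200001)
for j in range(4):                     # numerical check of (2): ~0, ~1, ~0, ~0
    print(j, np.trapz(u**j * K(u), u))
```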
Examples of such kernels, obtained as weighted sums of Legendre polynomials, are given in [26] and further discussed in [3].

Assumption 2.1. It holds, for all t ∈ {1, …, T}, that: (i) the random variables ξ_t and ξ′_t are independent of ζ_t and of r_t, and the random variables ζ_t and r_t are independent; (ii) E[ξ_t²] ≤ σ² and E[(ξ′_t)²] ≤ σ², where σ ≥ 0.
Note that we do not assume ξ_t and ξ′_t to have zero mean. Moreover, they can be non-random, and no independence between noises at different steps is required, so that the setting can be considered adversarial. Having such a relaxed set of assumptions is possible because of the randomization, which, for example, allows the proofs to go through without assuming zero-mean noise.
We will also use the following assumption. Assumption 2.2. Function f : Rd → R is 2-smooth, that is, differentiable on Rd and such that ‖∇f(x)−∇f(x′)‖ ≤ L̄‖x− x′‖ for all x, x′ ∈ Rd, where L̄ > 0.
It is easy to see that this assumption implies that f ∈ F₂(L̄/2). The following lemma gives a bound on the bias of the gradient estimator.

Lemma 2.3. Let f ∈ F_β(L) with β ≥ 1, and let Assumption 2.1(i) hold. Let ĝ_t and x_t be defined by Algorithm 1 and let κ_β = ∫ |u|^β |K(u)| du. Then

‖E[ĝ_t | x_t] − ∇f(x_t)‖ ≤ κ_β L d h_t^{β−1}.   (3)

If K is a weighted sum of Legendre polynomials, then κ_β ≤ 2√2 β for β ≥ 1 (see, e.g., [3, Appendix A.3]).
The next lemma provides a bound on the stochastic variability of the estimated gradient by controlling its second moment.

Lemma 2.4. Let Assumption 2.1(i) hold, let ĝ_t and x_t be defined by Algorithm 1 and set κ = ∫ K²(u) du. Then:

(i) If Θ ⊆ R^d, ∇f(x*) = 0 and Assumption 2.2 holds,

E[‖ĝ_t‖² | x_t] ≤ 9κL̄² ( d‖x_t − x*‖² + d²h_t²/8 ) + 3κd²σ²/(2h_t²);

(ii) If f ∈ F₂(L) and Θ is a closed convex subset of R^d such that max_{x∈Θ} ‖∇f(x)‖ ≤ G, then

E[‖ĝ_t‖² | x_t] ≤ 9κ ( G²d + L²d²h_t²/2 ) + 3κd²σ²/(2h_t²).
3 Upper bounds
In this section, we provide upper bounds on the cumulative regret and on the optimization error of Algorithm 1, which are defined as

∑_{t=1}^T E[f(x_t) − f(x)]  and  E[f(x̂_T) − f(x*)],

respectively, where x ∈ Θ and x̂_T is an estimator after T queries. Note that the provided upper bound for the cumulative regret is valid for any x ∈ Θ. First we consider Algorithm 1 when the convex set Θ is bounded (constrained case).

Theorem 3.1. (Upper Bound, Constrained Case.) Let f ∈ F_{α,β}(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0, then the cumulative regret of Algorithm 1 with

h_t = ( 3κσ² / (2(β − 1)(κ_β L)²) )^{1/(2β)} t^{−1/(2β)},  η_t = 2/(αt),  t = 1, …, T,

satisfies

∀x ∈ Θ : ∑_{t=1}^T E[f(x_t) − f(x)] ≤ (1/α) ( d²(A₁T^{1/β} + A₂) + A₃ d log T ),   (4)
where A₁ = 3β(κσ²)^{(β−1)/β}(κ_β L)^{2/β}, A₂ = c̄L̄²(σ/L)^{2/β} + 9κG²/d with a constant c̄ > 0 depending only on β, and A₃ = 9κG². The optimization error of the averaged estimator x̄_T = (1/T)∑_{t=1}^T x_t satisfies

E[f(x̄_T) − f(x*)] ≤ (1/α) ( d²( A₁/T^{(β−1)/β} + A₂/T ) + A₃ (d log T)/T ),   (5)

where x* = arg min_{x∈Θ} f(x). If σ = 0, then the cumulative regret and the optimization error of Algorithm 1 with any h_t chosen small enough and η_t = 2/(αt) satisfy the bounds (4) and (5), respectively, with A₁ = 0, A₂ = 9κG²/d and A₃ = 10κG².
Proof sketch. We use the definition of Algorithm 1 and the strong convexity of f to obtain an upper bound for ∑_{t=1}^T E[f(x_t) − f(x) | x_t], which depends on the bias term ∑_{t=1}^T ‖E[ĝ_t | x_t] − ∇f(x_t)‖ and on the stochastic error term ∑_{t=1}^T E[‖ĝ_t‖²]. By substituting h_t (which is derived from balancing the two terms) and η_t into Lemmas 2.3 and 2.4, we obtain upper bounds for ∑_{t=1}^T ‖E[ĝ_t | x_t] − ∇f(x_t)‖ and ∑_{t=1}^T E[‖ĝ_t‖²], which imply the desired upper bound for ∑_{t=1}^T E[f(x_t) − f(x) | x_t] due to a recursive argument in the spirit of [5].
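The passage from the regret bound (4) to the optimization-error bound (5) is the standard online-to-batch conversion; spelling out this (implicit) step:

```latex
\mathbb{E}\big[f(\bar{x}_T) - f(x^*)\big]
\;\le\; \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\big[f(x_t) - f(x^*)\big]
\;\le\; \frac{1}{\alpha}\Big(d^2\big(A_1 T^{-(\beta-1)/\beta} + A_2 T^{-1}\big)
        + A_3\,\frac{d\log T}{T}\Big),
```

where the first inequality is Jensen's inequality applied to the convex f at x̄_T = (1/T)∑_{t=1}^T x_t, and the second is (4) with x = x* divided by T (using T^{1/β}/T = T^{−(β−1)/β}).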
In the non-noisy case (σ = 0) we get the rate (d/α) log T for the cumulative regret, and (d/α)(log T)/T for the optimization error. In what concerns the optimization error, this rate is not optimal, since one can achieve a much faster rate under strong convexity [25]. However, for the cumulative regret in our derivative-free setting it remains an open question whether the result of Theorem 3.1 can be improved. Previous papers on derivative-free online methods with no noise [1, 13, 16] provide slower rates than (d/α) log T. The best known so far is (d²/α) log T, cf. [1, Corollary 5]. We may also notice that the cumulative regret bounds of Theorem 3.1 trivially extend to the case when we query functions f_t depending on t rather than a single f. Another immediate fact is that on the r.h.s. of inequalities (4) and (5) we can take the minimum with GBT and GB, respectively, where B is the Euclidean diameter of Θ. Finally, the factor log T in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts.
We now study the performance of Algorithm 1 when Θ = R^d. In this case we make the following choice for the parameters h_t and η_t in Algorithm 1:

h_t = T^{−1/(2β)},  η_t = 1/(αT),  t = 1, …, T₀,
h_t = t^{−1/(2β)},  η_t = 2/(αt),  t = T₀ + 1, …, T,   (6)

where T₀ = max{ k ≥ 0 : C₁L̄²d > α²k/2 } and C₁ is a positive constant¹ depending only on the kernel K(·) (this is defined in the proof of Theorem 3.2 in Appendix B), and recall that L̄ is the Lipschitz constant of the gradient ∇f. Finally, define the estimator

x̄_{T₀,T} = (1/(T − T₀)) ∑_{t=T₀+1}^T x_t.   (7)
Theorem 3.2. (Upper Bounds, Unconstrained Case.) Let f ∈ F_{α,β}(L) with α, L > 0 and β ≥ 2. Let Assumptions 2.1 and 2.2 hold. Assume also that α > √(C*d/T), where C* > 72κL̄². Let x_t be the updates of Algorithm 1 with Θ = R^d, h_t and η_t as in (6), and a non-random x₁ ∈ R^d. Then the estimator defined by (7) satisfies

E[f(x̄_{T₀,T}) − f(x*)] ≤ CκL̄² (d/(αT)) ‖x₁ − x*‖² + C (d²/α) ( (κ_β L)² + κ(L̄² + σ²) ) T^{−(β−1)/β},   (8)

where C > 0 is a constant depending only on β and x* = arg min_{x∈R^d} f(x).
Proof sketch. As in the proof of Theorem 3.1, we apply Lemmas 2.3 and 2.4. But we can only use Lemma 2.4(i) and not Lemma 2.4(ii), and thus the bound on the stochastic error now involves ‖x_t − x*‖². So, after taking expectations, we need to control an additional term containing r_t = E[‖x_t − x*‖²]. However, the issue concerns only small t (t ≤ T₀ ∼ d²/α), since for bigger t this term is compensated due to the strong convexity with parameter α > √(C*d/T). This motivates the method where we use the first T₀ iterations to get a suitably good (but not rate-optimal) bound on r_{T₀+1} and then proceed analogously to Theorem 3.1 for iterations t ≥ T₀ + 1.
4 Estimation of f(x∗)
In this section, we apply the above results to the estimation of the minimum value f(x*) = min_{x∈Θ} f(x) for functions f in the class F_{α,β}(L). The literature related to this problem assumes that the x_t are either i.i.d. with density bounded away from zero on its support [32] or chosen sequentially [22, 6]. In the first case, from the results in [32] one can deduce that f(x*) cannot be estimated better than at the slow rate T^{−β/(2β+d)}. For the second case, which is our setting, the best result so far is obtained in [6]. The estimator of f(x*) in [6] is defined via a multi-stage procedure whose complexity increases exponentially with the dimension d, and it is shown to achieve (asymptotically,
¹If T₀ = 0, the algorithm does not use (6). The assumptions of Theorem 3.2 are such that the condition T > T₀ holds.
for T greater than an exponent of d) the c(d, α)/√T rate for functions in F_{α,β}(L) with β > 2. Here, c(d, α) is some constant depending on d and α in an unspecified way.
Observe that f(x̄_T) is not an estimator, since it depends on the unknown f, so Theorem 3.1 does not provide a result about estimation of f(x*). In this section, we show that using the computationally simple Algorithm 1 and making one more query per step (that is, having three queries per step in total) allows us to achieve the 1/√T rate for all β ≥ 2, with no dependency on the dimension in the main term. Note that the 1/√T rate cannot be improved. Indeed, one cannot estimate f(x*) with a better rate even using the ideal but non-realizable oracle that makes all queries at point x*. That is, even if x* is known and we sample T times f(x*) + ξ_t with independent centered variables ξ_t, the error is still of the order 1/√T.
In order to construct our estimator, at any step t of Algorithm 1 we make, along with y_t and y′_t, the third query y″_t = f(x_t) + ξ″_t, where ξ″_t is some noise and x_t are the updates of Algorithm 1. We estimate f(x*) by M̂ = (1/T)∑_{t=1}^T y″_t. The properties of the estimator M̂ are summarized in the next theorem, which is an immediate corollary of Theorem 3.1.

Theorem 4.1. Let the assumptions of Theorem 3.1 be satisfied. Let σ > 0 and assume that (ξ″_t)_{t=1}^T are independent random variables with E[ξ″_t] = 0 and E[(ξ″_t)²] ≤ σ² for t = 1, …, T. If f attains its minimum at a point x* ∈ Θ, then

E|M̂ − f(x*)| ≤ σ/√T + (1/α) ( d²( A₁/T^{(β−1)/β} + A₂/T ) + A₃ (d log T)/T ).   (9)
Remark 4.2. With three queries per step, the risk (error) of the oracle that makes all queries at point x* does not exceed σ/√(3T). Thus, for β > 2 the estimator M̂ achieves asymptotically, as T → ∞, the oracle risk up to a numerical constant factor. We do not obtain such a sharp property for β = 2, in which case the remainder term in Theorem 4.1, accounting for the accuracy of Algorithm 1, is of the same order as the main term σ/√T.
Note that in Theorem 4.1 the noises (ξ″_t)_{t=1}^T are assumed to be independent and zero-mean random variables, which is essential to obtain the 1/√T rate. Nevertheless, we do not require independence between the noises (ξ″_t)_{t=1}^T and the noises in the other two queries (ξ_t)_{t=1}^T and (ξ′_t)_{t=1}^T. Another interesting point is that for β = 2 the third query is not needed and f(x*) is estimated with the 1/√T rate either by M̂ = (1/T)∑_{t=1}^T y_t or by M̂ = (1/T)∑_{t=1}^T y′_t. This is an easy consequence of the above argument, the property (19) – see Lemma A.3 in the appendix – which is specific to the case β = 2, and the fact that the optimal choice of h_t is of order t^{−1/4} for β = 2.
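A minimal sketch of the three-query estimator M̂, reusing the structure of the Algorithm 1 snippet above (all arguments remain schematic assumptions):

```python
import numpy as np

def estimate_min_value(f, K, d, T, h, eta, proj, noise, rng):
    """Estimate f(x*) by M_hat = (1/T) sum_t y''_t, y''_t = f(x_t) + xi''_t."""
    x = proj(np.zeros(d))
    total = 0.0
    for t in range(1, T + 1):
        total += f(x) + noise()                  # third query y''_t at x_t
        r = rng.uniform(-1.0, 1.0)
        zeta = rng.standard_normal(d)
        zeta /= np.linalg.norm(zeta)
        y_plus = f(x + h(t) * r * zeta) + noise()
        y_minus = f(x - h(t) * r * zeta) + noise()
        g = d / (2 * h(t)) * (y_plus - y_minus) * zeta * K(r)
        x = proj(x - eta(t) * g)
    return total / T
```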
5 Improved bounds for β = 2
In this section, we consider the case β = 2 and obtain improved bounds that scale as d, rather than d², with the dimension in the constrained optimization setting analogous to Theorem 3.1. First note that for β = 2 we can simplify the algorithm. The use of the kernel K is redundant when β = 2, and therefore in this section we define the approximate gradient as

ĝ_t = (d/(2h_t)) (y_t − y′_t) ζ_t,   (10)
where y_t = f(x + h_tζ̃) and y′_t = f(x − h_tζ̃). A well-known observation that goes back to [23] consists in the fact that ĝ_t defined in (10) is an unbiased estimator of the gradient of the surrogate function f̂_t defined by

f̂_t(x) = E f(x + h_tζ̃),  ∀x ∈ R^d,

where the expectation E is taken with respect to the random vector ζ̃ uniformly distributed on the unit ball B_d = {u ∈ R^d : ‖u‖ ≤ 1}. The properties of the surrogate f̂_t are described in Lemmas A.2 and A.3 presented in the appendix.
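The unbiasedness statement E[ĝ_t | x_t] = ∇f̂_t(x_t) rests on a classical sphere–ball identity (a consequence of the divergence theorem; our phrasing of the standard argument, not the paper's Lemma A.2):

```latex
\nabla \hat{f}_t(x)
= \nabla\, \mathbb{E}\, f(x + h_t \tilde{\zeta})
= \frac{d}{h_t}\, \mathbb{E}\big[ f(x + h_t \zeta)\, \zeta \big],
```

where ζ̃ is uniform on the unit ball B_d and ζ is uniform on the unit sphere; applying this to both query points and using the symmetry of ζ yields E[(d/(2h_t))(y_t − y′_t)ζ] = ∇f̂_t(x).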
The improvement in the rate that we get for β = 2 is due to the fact that we can consider Algorithm 1 with ĝ_t defined in (10) as SGD for the surrogate function. Then the bias of approximating f by f̂_t scales as h_t², which is smaller than the squared bias of approximating the gradient arising in the proof of Theorem 3.1, which scales as d²h_t^{2(β−1)} = d²h_t² when β = 2. On the other hand, the stochastic variability terms are the same for both methods of proof. This explains the gain in the dependency on d. However, this technique does not work for β > 2, since then the error of approximating f by f̂_t, which is of the order h_t^β (with h_t small), becomes too large compared to the bias d²h_t^{2(β−1)} of Theorem 3.1.
Theorem 5.1. Let f ∈ F_{α,2}(L) with α, L > 0. Let Assumption 2.1 hold and let Θ be a convex compact subset of R^d. Assume that max_{x∈Θ} ‖∇f(x)‖ ≤ G. If σ > 0, then for Algorithm 1 with ĝ_t defined in (10) and parameters h_t = ( 3d²σ² / (4Lαt + 9L²d²) )^{1/4} and η_t = 1/(αt) we have

∀x ∈ Θ : E ∑_{t=1}^T ( f(x_t) − f(x) ) ≤ min( GBT, 2√(3L) σ (d/√α) √T + A₄ (d²/α) log T ),   (11)

where B is the Euclidean diameter of Θ and A₄ = 6.5Lσ + 22G²/d. Moreover, if x* = arg min_{x∈Θ} f(x), the optimization error of the averaged estimator x̄_T = (1/T) ∑_{t=1}^T x_t is bounded as

E[f(x̄_T) − f(x*)] ≤ min( GB, 2√(3L) σ d/√(αT) + A₄ (d²/α) (log T)/T ).   (12)
Finally, if σ = 0, then the cumulative regret of Algorithm 1 with any h_t chosen small enough and η_t = 1/(αt), and the optimization error of its averaged version, are of the order (d²/α) log T and (d²/α)(log T)/T, respectively.
Note that the terms (d²/α) log T and (d²/α)(log T)/T appearing in these bounds can be improved to (d/α) log T and (d/α)(log T)/T at the expense of assuming that the norm ‖∇f‖ is uniformly bounded by G not only on Θ but also on a large enough Euclidean neighborhood of Θ. Moreover, the log T factor in the bounds for the optimization error can be eliminated by considering averaging from T/2 to T rather than from 1 to T, in the spirit of [27]. We refer to Appendix D for the details and proofs of these facts. A major conclusion is that, when σ > 0 and we consider the optimization error, those terms are negligible with respect to d/√(αT), and thus an attainable rate is min(1, d/√(αT)).
We close this section by noting, in connection with the bandit setting, that the bound (11) extends straightforwardly (up to a change in numerical constants) to the cumulative regret of the form E ∑_{t=1}^T ( f_t(x_t ± h_tζ_t) − f_t(x) ), where the losses are measured at the query points and f depends on t. This fact follows immediately from the proof of Theorem 5.1 presented in the appendix and the property (19); see Lemma A.3 in the appendix.
6 Lower bound
In this section we prove a minimax lower bound on the optimization error over all sequential strategies that allow the query points to depend on the past. For t = 1, …, T, we assume that y_t = f(z_t) + ξ_t and we consider strategies of choosing the query points as z_t = Φ_t(z_1^{t−1}, y_1^{t−1}), where Φ_t are Borel functions and z_1 ∈ R^d is any random variable. We denote by Π_T the set of all such strategies. The noises ξ_1, …, ξ_T are assumed in this section to be independent with cumulative distribution function F satisfying the condition

∫ log( dF(u)/dF(u + v) ) dF(u) ≤ I₀v²,  |v| < v₀,   (13)
for some 0 < I₀ < ∞, 0 < v₀ ≤ ∞. Using the second order expansion of the logarithm w.r.t. v, one can verify that this assumption is satisfied when F has a smooth enough density with finite Fisher information. For example, for the Gaussian distribution F this condition holds with v₀ = ∞. Note that the class Π_T includes the sequential strategy of Algorithm 1 that corresponds to taking T as an even number, and choosing z_t = x_t + ζ_tr_t and z_t = x_t − ζ_tr_t for even t and odd t, respectively. The presence of the randomizing sequences ζ_t, r_t is not crucial for the lower bound. Indeed, Theorem 6.1 below is valid conditionally on any randomization, and thus the lower bound remains valid when taking expectation over the randomizing distribution.
Theorem 6.1. Let Θ = {x ∈ R^d : ‖x‖ ≤ 1}. For α, L > 0, β ≥ 2, let F′_{α,β} denote the set of functions f that attain their minimum over R^d in Θ and belong to F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G}, where G > 2α. Then for any strategy in the class Π_T we have

sup_{f∈F′_{α,β}} E[ f(z_T) − min_x f(x) ] ≥ C min( max(α, T^{−1/2+1/β}), d/√T, (d/α) T^{−(β−1)/β} ),   (14)

and

sup_{f∈F′_{α,β}} E[ ‖z_T − x*(f)‖² ] ≥ C min( 1, d/T^{1/β}, (d/α²) T^{−(β−1)/β} ),   (15)

where C > 0 is a constant that does not depend on T, d, and α, and x*(f) is the minimizer of f on Θ.
The proof is given in Appendix B. It extends the proof technique of Polyak and Tsybakov [28] by applying it to more than two probe functions. The proof takes into account the dependency on the dimension d and on α. The final result is obtained by applying Assouad's Lemma, see, e.g., [33].
We stress that the condition G > 2α in this theorem is necessary. It should always hold if the intersection F_{α,β}(L) ∩ {f : max_{x∈Θ} ‖∇f(x)‖ ≤ G} is not empty. Notice also that the threshold T^{−1/2+1/β} on the strong convexity parameter α plays an important role in the bounds (14) and (15). Indeed, for α below this threshold, the bounds become independent of α. Moreover, in this regime, the rate of (14) becomes min(T^{1/β}, d)/√T, which is asymptotically d/√T and thus not better, as a function of T, than the rate attained for zero-order minimization of simply convex functions [2, 7]. Intuitively, it seems reasonable that α-strong convexity should be of no added value for very small α. Theorem 6.1 allows us to quantify exactly how small such α should be. Also, quite naturally, the threshold becomes smaller when the smoothness β increases. Finally, note that for β = 2 the lower bounds (14) and (15) are, in the interesting regime of large enough T, of order d/(max(α, 1)√T) and d/(max(α², 1)√T), respectively. This highlights the near minimax optimal properties of Algorithm 1 in the setting of Theorem 5.1.
7 Discussion and related work
Zero-order feedback stochastic optimization and convex bandit problems have received a great deal of attention in the recent literature. Several settings are studied: (i) deterministic, in the sense that the queries contain no random noise and we query functions f_t depending on t rather than a single f, where the f_t are Lipschitz or 2-smooth [16, 1, 24, 25, 28, 31]; (ii) stochastic with two-point feedback, where the two noisy evaluations are obtained with the same noise and the noisy functions are Lipschitz or 2-smooth [24, 25, 13] (this setting does not differ much from (i) in terms of the analysis and the results); (iii) stochastic, where the noises ξ_i are independent zero-mean random variables [15, 26, 12, 2, 30, 3, 19, 4, 20]. In this paper, we considered a setting more general than (iii), allowing for adversarial noise: no independence or zero-mean assumption, in contrast to (iii), and no Lipschitz assumption, in contrast to settings (i) and (ii); the latter two settings are covered by our results when the noise is set to zero.
One part of our results consists of bounds on the cumulative regret, cf. (4) and (11). We emphasize that they remain trivially valid if the queries are from f_t depending on t instead of f, and thus cover setting (i). To the best of our knowledge, there were no such results in this setting previously, except for [3], which gives bounds with suboptimal dependency on T in the case of classical (non-adversarial) noise. In the non-noisy case, we get bounds on the cumulative regret with faster rates than previously known for setting (i). It remains an open question whether these bounds can be improved.
The second part of our results, dealing with the optimization error E[f(x̄_T) − f(x*)], is closely related to the work on derivative-free stochastic optimization under strong convexity and smoothness assumptions initiated in [15, 26] and more recently developed in [12, 19, 30, 3]. It was shown in [26] that the minimax optimal rate for f ∈ F_{α,β}(L) scales as c(α, d)T^{−(β−1)/β}, where c(α, d) is an unspecified function of α and d (for d = 1 an upper bound of the same order was earlier established in [15]). The issue of establishing non-asymptotic fundamental limits as a function of the main parameters of the problem (α, d and T) was first addressed in [19], giving a lower bound Ω(√(d/T)) for β = 2. This was improved to Ω(d/√T) when α ≍ 1 by Shamir [30], who conjectured that the rate d/√T is optimal for β = 2, which indeed follows from our Theorem 5.1 (although [30] claims the optimality as a proved fact by referring to results in [1], such results cannot be applied in setting (iii) because the noise cannot be considered as Lipschitz). A result similar to Theorem 5.1 is stated without proof in Bach and Perchet [3, Proposition 7], but not for the cumulative regret and with a suboptimal rate in the non-noisy case. For integer β ≥ 3, Bach and Perchet [3] present explicit upper bounds as functions of α, d and T with, however, suboptimal dependency on T, except for their Proposition 8, which is problematic (see Appendix C for the details). Finally, by slightly modifying the proof of Theorem 3.1 we get that the estimation risk E[‖x̄_T − x*‖²] is O((d²/α²)T^{−(β−1)/β}), which is within a factor d of the main term in the lower bound (15) (see Appendix D for details).
The lower bound in Theorem 6.1 is, to the best of our knowledge, the first result providing non-asymptotic fundamental limits under a general configuration of α, d and T. The known lower bounds [26, 19, 30] either give no explicit dependency on α and d, or treat the special case β = 2 and α ≍ 1. Moreover, as an interesting consequence of our lower bound we find that, for small strong convexity parameter α (namely, below the T^{−1/2+1/β} threshold), the best achievable rate cannot be substantially faster than for simply convex functions, at least for moderate dimensions. Indeed, for such small α, our lower bound is asymptotically Ω(d/√T), independently of the smoothness index β and of α, while the achievable rate for convex functions is shown to be d^{16}/√T in [2] and improved to d^{3.75}/√T in [7] (both up to log-factors). The gap here is only in the dependency on the dimension. Our results imply that for α above the T^{−1/2+1/β} threshold, the gap between upper and lower bounds is much smaller. Thus, our upper bounds in this regime scale as (d²/α)T^{−(β−1)/β} while the lower bound of Theorem 6.1 is of the order Ω((d/α)T^{−(β−1)/β}); moreover, for β = 2, the upper and lower bounds match in the dependency on d.
We hope that our work will stimulate further study at the intersection of zero-order optimization and convex bandits in machine learning. An important open problem is to design novel algorithms that match our lower bound simultaneously in all main parameters. For example, one class of algorithms worth exploring is those using memory of the gradient, in the spirit of Nesterov's accelerated method. Yet another important open problem is to study lower bounds for the regret in our setting. Finally, it would be valuable to study extensions of our work to locally strongly convex functions.
Broader impact
The present work improves our understanding of zero-order optimization methods in specific scenarios in which the underlying function we wish to optimize has certain regularity properties. We believe that a solid theoretical foundation is beneficial to the development of practical machine learning and statistical methods. We expect no direct or indirect ethical risks from our research.
Acknowledgments and Disclosure of Funding
We would like to thank Francis Bach, Vianney Perchet, Saverio Salzo, and Ohad Shamir for helpful discussions. The first and second authors were partially supported by SAP SE. The research of A.B. Tsybakov is supported by a grant of the French National Research Agency (ANR), “Investissements d’Avenir” (LabEx Ecodec/ANR-11-LABX-0047). | 1. What is the focus and contribution of the paper regarding zero-order optimization?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and novelty?
3. Do you have any concerns or suggestions regarding the paper's content or its relevance to the NeurIPS community?
4. How does the reviewer assess the clarity and quality of the paper's writing? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors focus on the problem of zero-order optimization of a strongly convex function, in order to find the minimizer of the function via a sequential approach. They focus on the impact of higher-order smoothness on both the optimization error as well as on the cumulative regret. They consider a randomized approximation of the projected gradient descent algorithm, where the gradient is estimated via two function evaluations and a smoothing kernel. Several theoretical results are derived under different settings. Their results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and problem parameters. They also provide an estimator of the minimum function value which achieves sharp oracle behavior.
Strengths
EDIT: I thank the authors for their response, which has addressed my questions. I leave my evaluation unchanged. ---------------------------------------------------------------------------------------------- The paper is very well written and is a pleasure to read. The main strengths are as follows: 1. Very clear problem statement and clean highlights of the contributions. 2. The theoretical proofs seem sound (unfortunately, I was not able to go through the detailed proofs, but the proof sketch seems correct and clearly highlights the steps needed to prove the claims). 3. The paper is highly novel as it improves on previous literature and considers settings that are much more general than before. Their result on the lower bound is very novel. 4. This is of high relevance to the NeurIPS community
Weaknesses
I didn't find any strong weakness in the paper but it might be good to add some empirical results on some functions to visually see how these bounds behave in practice. |
NIPS | Title
Monotone operator equilibrium networks
Abstract
Implicit-depth models such as Deep Equilibrium Networks have recently been shown to match or exceed the performance of traditional deep networks while being much more memory efficient. However, these models suffer from unstable convergence to a solution and lack guarantees that a solution exists. On the other hand, Neural ODEs, another class of implicit-depth models, do guarantee existence of a unique solution but perform poorly compared with traditional networks. In this paper, we develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ). We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem, which admits efficient solvers with guaranteed, stable convergence. We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point. Finally, we show how to instantiate several versions of these models, and implement the resulting iterative solvers, for structured linear operators such as multi-scale convolutions. The resulting models vastly outperform the Neural ODE-based models while also being more computationally efficient. Code is available at http://github.com/locuslab/monotone_op_net.
1 Introduction
Recent work in deep learning has demonstrated the power of implicit-depth networks, models where features are created not by explicitly iterating some number of nonlinear layers, but by finding a solution to some implicitly defined equation. Instances of such models include the Neural ODE [8], which computes hidden layers as the solution to a continuous-time dynamical system, and the Deep Equilibrium (DEQ) Model [5], which finds a fixed point of a nonlinear dynamical system corresponding to an effectively infinite-depth weight-tied network. These models, which trace back to some of the original work on recurrent backpropagation [2, 23], have recently regained attention since they have been shown to match or even exceed the performance of traditional deep networks in domains such as sequence modeling [5]. At the same time, these models show drastically improved memory efficiency over traditional networks since backpropagation is typically done analytically using the implicit function theorem, without needing to store the intermediate hidden layers.
However, implicit-depth models that perform well require extensive tuning in order to achieve stable convergence to a solution. Obtaining convergence in DEQs requires careful initialization and regularization, which has proven difficult in practice [21]. Moreover, solutions to these models are not guaranteed to exist or be unique, making the output of the models potentially ill-defined. While Neural ODEs [8] do guarantee existence of a unique solution, training remains unstable since the ODE problems can become severely ill-posed [10]. Augmented Neural ODEs [10] improve the stability of Neural ODEs by learning ODEs with simpler flows, but neither model achieves efficient
convergence nor performs well on standard benchmarks. Crucial questions remain about how models can have guaranteed, unique solutions, and what algorithms are most efficient at finding them.
In this paper, we present a new class of implicit-depth equilibrium model, the Monotone Operator Equilibrium Network (monDEQ), which guarantees stable convergence to a unique fixed point.1 The model is based upon the theory of monotone operators [6, 26], and illustrates a close connection between simple fixed-point iteration in weight-tied networks and the solution to a particular form of monotone operator splitting problem. Using this connection, this paper lays the theoretical and practical foundations for such networks. We show how to parameterize networks in a manner that ensures all operators remain monotone, which establishes the existence and uniqueness of the equilibrium point. We show how to backpropagate through such networks using the implicit function theorem; this leads to a corresponding (linear) operator splitting problem for the backward pass, which also is guaranteed to have a unique solution. We then adapt traditional operator splitting methods, such as forward-backward splitting or Peaceman-Rachford splitting, to naturally derive algorithms for efficiently computing these equilibrium points.
Finally, we demonstrate how to practically implement such models and operator splitting methods, in the cases of typical feedforward, fully convolutional, and multi-scale convolutional networks. For convolutional networks, the most efficient fixed-point solution methods require an inversion of the associated linear operator, and we illustrate how to achieve this using the fast Fourier transform. The resulting networks show strong performance on several benchmark tasks, vastly improving upon the accuracy and efficiency of Neural ODEs-based models, the other implicit-depth models where solutions are guaranteed to exist and be unique.
2 Related work
Implicit models in deep learning There has been a growing interest in recent years in implicit layers in deep learning. Instead of specifying the explicit computation to perform, a layer specifies some condition that should hold at the solution to the layer, such as a nonlinear equality, or a differential equation solution. Using the implicit function theorem allows for backpropagating through the layer solutions analytically, making these layers very memory efficient, as they do not need to maintain intermediate iterations of the solution procedure. Recent examples include layers that compute inference in graphical models [15], solve optimization problems [12, 3, 13, 1], execute model-based control policies [4], solve two-player games [20], solve gradient-based optimization for meta-learning [24], and many others.
Stability of fixed-point models The issue of model stability has in fact been at the heart of much work in fixed-point models. The original work on attractor-style recurrent models, trained via recurrent backpropagation [2, 23], precisely attempted to ensure that the forward iteration procedure was stable. And indeed, much of the work in recurrent architectures such as LSTMs has focused on these issues of stability [14]. Recent work has revisited recurrent backpropagation in a similar manner to DEQs, with the similar aim of speeding up the computation of fixed points [19]. And other work has looked at the stability of implicit models [11], with an emphasis on guaranteeing the existence of fixed points, but focused on alternative stability conditions, and considered only relatively small-scale experiments. Other recent work has looked to use control-theoretic methods to ensure the stability of implicit models, [25], though again they consider only small-scale evaluations.
Monotone operators in deep learning Although most work in the field of monotone operators is concerned with general convex analysis, the recent work of [9] does also highlight connections between deep networks and monotone operator problems. Unlike our current work, however, that work focused largely on the fact that many common non-linearities can be expressed via proximal operators, and analyzed traditional networks under the assumptions that certain of the operators were monotone, but did not address conditions for the networks to be monotone or algorithms for solving or backpropagating through the networks.
3 A monotone operator view of fixed-point networks
This section lays out our main methodological and theoretical contribution, a class of equilibrium networks based upon monotone operators. We begin with some preliminaries, then highlight the
1We largely use the terms “fixed point” and “equilibrium point” interchangeably in this work, using fixed point in the context of an iterative procedure, and equilibrium point to refer more broadly to the point itself.
basic connection between the fixed point of an “infinite-depth” network and an associated operator splitting problem; next, we propose a parameterization that guarantees the associated operators to be maximal monotone; finally, we show how to use operator splitting methods to both compute the fixed point and backpropagate through the fixed point efficiently.
3.1 Preliminaries
Monotone operator theory The theory of monotone operators plays a foundational role in convex analysis and optimization. Monotone operators are a natural generalization of monotone functions, which can be used to assess the convergence properties of many forms of iterative fixed-point algorithms. We emphasize that the majority of the work in this paper relies on well-known properties of monotone operators, and we refer to standard references on the topic, including [6] and a less formal survey by [26]; we do include a brief recap of the definitions and results we require in Appendix A. Formally, an operator is a subset of the space F ⊆ R^n × R^n; in our setting this will usually correspond to a set-valued or single-valued function. Operator splitting approaches refer to methods for finding a zero in a sum of operators, i.e., finding x such that 0 ∈ (F + G)(x). There are many such methods, but the two we will use mainly in this work are forward-backward splitting (eqn. A9 in the Appendix) and Peaceman-Rachford splitting (eqn. A10). As we will see, both finding a network equilibrium point and backpropagating through it can be formulated as operator splitting problems, and different operator splitting methods will lead to different approaches in their application to our subsequent implicit networks.
Deep equilibrium models The monDEQ architecture is closely related to the DEQ model, which parameterizes a “weight-tied, input-injected” network of the form z^{i+1} = g(z^i, x), where x denotes the input to the network, injected at each layer; z^i denotes the hidden layer at depth i; and g denotes a nonlinear function which is the same for each layer (hence the network is weight-tied). The key aspect of the DEQ model is that in this weight-tied setting, instead of forward iteration, we can simply use any root-finding approach to find an equilibrium point of such an iteration, z* = g(z*, x). Assuming the model is stable, this equilibrium point corresponds to an “infinite-depth fixed point” of the layer. The monDEQ architecture can be viewed as an instance of a DEQ model, but one that relies on the theory of monotone operators, and a specific parameterization of the network weights, to guarantee the existence of a unique fixed point for the network. Crucially, however, as is the case for DEQs, naive forward iteration of this model is not necessarily stable; we therefore employ operator splitting methods to develop provably (linearly) convergent methods for finding such fixed points.
3.2 Fixed-point networks as operator splitting
As a starting point of our analysis, consider the weight-tied, input-injected network in which x ∈ R^d denotes the input and z^k ∈ R^n denotes the hidden units at layer k, given by the iteration²

z^{k+1} = σ(Wz^k + Ux + b),   (1)

where σ : R → R is a nonlinearity applied elementwise, W ∈ R^{n×n} are the hidden unit weights, U ∈ R^{n×d} are the input-injection weights, and b ∈ R^n is a bias term. An equilibrium point, or fixed point, of this system is some point z* which remains constant after an update:

z* = σ(Wz* + Ux + b).   (2)
We begin by observing that it is possible to characterize this equilibrium point exactly as the solution to a certain operator splitting problem, under certain choices of operators and activation σ. This can be formalized in the following theorem, which we prove in Appendix B:

Theorem 1. Finding a fixed point of the iteration (1) is equivalent to finding a zero of the operator splitting problem 0 ∈ (F + G)(z^⋆) with the operators

F(z) = (I − W)z − (Ux + b),  G = ∂f    (3)

and σ(·) = prox_f^1(·) for some convex closed proper (CCP) function f, where prox_{αf} denotes the proximal operator
prox_{αf}(x) ≡ argmin_z (1/2)‖x − z‖₂² + αf(z).    (4)
²This setting can also be seen as corresponding to a recurrent network with identical inputs at each time (indeed, this is the view of so-called attractor networks [23]). However, because in modern usage recurrent networks typically refer to sequential models with different inputs at each time, we don’t adopt this terminology.
It is also well-established that many common nonlinearities used in deep networks can be represented as proximal operators of CCP functions [7, 9]. For example, the ReLU nonlinearity σ(x) = [x]₊ is the proximal operator of the indicator of the positive orthant f(x) = I{x ≥ 0}, and tanh, sigmoid, and softplus all have close correspondence with proximal operators of simple expressions [7].
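As a quick numerical sanity check (our own illustration), the prox of the indicator of the positive orthant is Euclidean projection onto the orthant, which coincides with ReLU; a brute-force minimization over a grid recovers the same closed form:

```python
import numpy as np

# prox of f(x) = I{x >= 0}: argmin_{z >= 0} 0.5 * (x - z)^2, per coordinate
zgrid = np.linspace(0.0, 3.0, 30001)       # grid over the feasible set z >= 0
for x0 in (-1.3, 0.0, 0.7, 2.2):
    zstar = zgrid[np.argmin(0.5 * (x0 - zgrid) ** 2)]
    print(x0, zstar, max(x0, 0.0))         # the prox matches ReLU at every point
```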
In fact, this connection establishes that some seemingly unstable iterations can actually still lead to convergent algorithms. ReLU activations, for instance, have traditionally been avoided in iterative models such as recurrent networks, due to exploding or vanishing gradient problems and nonsmoothness. Yet this iteration shows that (with input injection and the above constraint on W) ReLU operators are perfectly well-suited to these fixed-point iterations.
3.3 Enforcing existence of a unique solution
The above connection is straightforward, but also carries interesting implications for deep learning. Specifically, we can establish the existence and uniqueness of the equilibrium point z^⋆ via the simple sufficient criterion that I − W is strongly monotone, or in other words³ I − W ⪰ mI for some m > 0 (see Appendix A). This constraint is by no means a trivial condition: although many layers obey it under typical initialization schemes, during training it is normal for W to move outside this regime. Thus, the first step of the monDEQ architecture is to parameterize W in such a way that it always satisfies this strong monotonicity constraint.

Proposition 1. We have I − W ⪰ mI if and only if there exist A, B ∈ R^{n×n} such that
W = (1 − m)I − AᵀA + B − Bᵀ.    (5)
We therefore propose to parameterize W directly in this form, by defining the A and B matrices explicitly. While this is an overparameterized form for a dense matrix, we could avoid this issue by, e.g., constraining A to be lower triangular (making it the Cholesky factor of AᵀA) and making B strictly upper triangular; in practice, however, simply using general A and B matrices has little impact upon the performance of the method. The parameterization does notably raise additional complications when dealing with convolutional layers, but we defer this discussion to Section 4.2.
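A minimal sketch (ours) of the parameterization (5), together with a numerical check of Proposition 1; per footnote 3, only the eigenvalues of the symmetric part of I − W matter, and the skew term B − Bᵀ drops out of it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 0.1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
W = (1 - m) * np.eye(n) - A.T @ A + B - B.T     # eqn (5)

# I - W = m*I + A^T A + (B^T - B); the skew part has no symmetric component
S = np.eye(n) - (W + W.T) / 2
print(np.linalg.eigvalsh(S).min())              # >= m for any choice of A, B
```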
3.4 Computing the network fixed point
Given the monDEQ formulation, the first natural question to ask is: how should we compute the equilibrium point z^⋆ = σ(W z^⋆ + U x + b)? Crucially, it can be the case that the simple forward iteration of the network equation (1) does not converge, i.e., the iteration may be unstable. Fortunately, monotone operator splitting leads to a number of iterative methods for finding these fixed points, which are guaranteed to converge under proper conditions. For example, the forward-backward iteration applied to the monotone operator formulation from Theorem 1 results exactly in a damped version of the forward iteration

z^{k+1} = prox_{αf}(z^k − α((I − W)z^k − (Ux + b))) = prox_{αf}((1 − α)z^k + α(W z^k + U x + b)).    (6)

This iteration is guaranteed to converge linearly to the fixed point z^⋆ provided that α ≤ 2m/L², when the operator I − W is Lipschitz and strongly monotone with parameters L (which is simply the operator norm ‖I − W‖₂) and m [26]. A key advantage of the monDEQ formulation is the flexibility to employ alternative operator splitting methods that converge much more quickly to the equilibrium. One such example is Peaceman-Rachford splitting which, when applied to the formulation from Theorem 1, takes the form
u^{k+1/2} = 2z^k − u^k
z^{k+1/2} = (I + α(I − W))^{-1}(u^{k+1/2} + α(Ux + b))
u^{k+1} = 2z^{k+1/2} − u^{k+1/2}
z^{k+1} = prox_{αf}(u^{k+1})    (7)
where we use the explicit form of the resolvents for the two monotone operators of the model. The advantage of Peaceman-Rachford splitting over forward-backward is two-fold: 1) it typically converges in fewer iterations, which is a key bottleneck for many implicit models; and 2) it converges
³For non-symmetric matrices, which of course is typically the case with W, positive definiteness is defined as positive definiteness of the symmetric component: I − W ⪰ mI ⟺ I − (W + Wᵀ)/2 ⪰ mI.
Algorithm 1 Forward-backward equilibrium solving
  z := 0; err := ∞
  while err > ε do
    z^+ := (1 − α)z + α(Wz + Ux + b)
    z^+ := prox_{αf}(z^+)
    err := ‖z^+ − z‖₂ / ‖z^+‖₂
    z := z^+
  return z

Algorithm 2 Peaceman-Rachford equilibrium solving
  z, u := 0; err := ∞; V := (I + α(I − W))^{-1}
  while err > ε do
    u^{1/2} := 2z − u
    z^{1/2} := V(u^{1/2} + α(Ux + b))
    u^+ := 2z^{1/2} − u^{1/2}
    z^+ := prox_{αf}(u^+)
    err := ‖z^+ − z‖₂ / ‖z^+‖₂
    z, u := z^+, u^+
  return z
for any α > 0 [26], unlike forward-backward splitting, which depends on the Lipschitz constant of I − W. The disadvantage of Peaceman-Rachford splitting, however, is that it requires an inverse involving the weight matrix W. It is not immediately clear how to apply such methods if the W matrix involves convolutions or multi-layer models; we discuss these points in Section 4.2. A summary of these methods for computation of the forward equilibrium point is given in Algorithms 1 and 2.
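For concreteness, here is a NumPy sketch (ours, not the released implementation) of Algorithms 1 and 2 for the dense ReLU case, where prox_{αf} is just an elementwise ReLU; on a W parameterized as in (5), both solvers reach the same equilibrium, with Peaceman-Rachford typically doing so in far fewer iterations:

```python
import numpy as np

def fb_solve(W, U, b, x, alpha, eps=1e-6, max_iter=20000):
    # Algorithm 1: forward-backward splitting, i.e. the damped iteration (6)
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_new = np.maximum((1 - alpha) * z + alpha * (W @ z + U @ x + b), 0)
        if np.linalg.norm(z_new - z) <= eps * np.linalg.norm(z_new):
            break
        z = z_new
    return z_new

def pr_solve(W, U, b, x, alpha, eps=1e-6, max_iter=20000):
    # Algorithm 2: Peaceman-Rachford; V is formed once and reused
    n = W.shape[0]
    V = np.linalg.inv(np.eye(n) + alpha * (np.eye(n) - W))
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        u_half = 2 * z - u
        z_half = V @ (u_half + alpha * (U @ x + b))
        u_new = 2 * z_half - u_half
        z_new = np.maximum(u_new, 0)
        if np.linalg.norm(z_new - z) <= eps * np.linalg.norm(z_new):
            break
        z, u = z_new, u_new
    return z_new

rng = np.random.default_rng(0)
n, d, m = 10, 3, 0.5
A, B = 0.3 * rng.standard_normal((n, n)), 0.3 * rng.standard_normal((n, n))
W = (1 - m) * np.eye(n) - A.T @ A + B - B.T     # eqn (5), so I - W >= m*I
U, b, x = rng.standard_normal((n, d)), rng.standard_normal(n), rng.standard_normal(d)

L = np.linalg.norm(np.eye(n) - W, 2)
z_fb = fb_solve(W, U, b, x, alpha=m / L**2)     # step size within the 2m/L^2 range
z_pr = pr_solve(W, U, b, x, alpha=1.0)          # converges for any alpha > 0
print(np.linalg.norm(z_fb - z_pr))              # small: both find the same fixed point
```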
3.5 Backpropagation through the monotone operator layer
Finally, a key challenge for any implicit model is to determine how to perform backpropagation through the layer. As with most implicit models, a potential benefit of the fixed-point conditions we describe is that, by using the implicit function theorem, it is possible to perform backpropagation without storing the intermediate iterates of the operator splitting algorithm in memory, instead backpropagating directly through the equilibrium point.
To begin, we present a standard approach to differentiating through the fixed point z^⋆ using the implicit function theorem. This formulation has some compelling properties for monDEQ, namely the fact that this (sub)gradient will always exist. When training a network via gradient descent, we need to compute the gradients of the loss function

∂ℓ/∂(·) = (∂ℓ/∂z^⋆)(∂z^⋆/∂(·))    (8)
where (·) denotes some input to the layer or parameters, i.e., W, x, etc. The challenge here is computing (or left-multiplying by) the Jacobian ∂z^⋆/∂(·), since z^⋆ is not an explicit function of the inputs. While it would be possible to simply compute gradients through the “unrolled” updates, e.g., z^{k+1} = σ(W z^k + U x + b) for forward iteration, this would require storing each intermediate state z^k, a potentially memory-intensive operation. Instead, the following theorem gives an explicit formula for the necessary (sub)gradients. We state the theorem more directly in terms of the operators mentioned in Theorem 1; that is, we use prox_f^1(·) in place of σ(·).
Theorem 2. For the equilibrium point z^⋆ = prox_f^1(W z^⋆ + U x + b), we have

∂ℓ/∂(·) = (∂ℓ/∂z^⋆)(I − JW)^{-1} J ∂(W z^⋆ + U x + b)/∂(·)    (9)

where

J = D prox_f^1(W z^⋆ + U x + b)    (10)

denotes the Clarke generalized Jacobian of the nonlinearity evaluated at the point W z^⋆ + U x + b. Furthermore, for the case that (I − W) ⪰ mI, this derivative always exists.
To apply the theorem in practice to perform reverse-mode differentiation, we need to solve the system

(I − JW)^{-T} (∂ℓ/∂z^⋆)ᵀ.    (11)
The above system is a linear equation, and while it is typically computationally infeasible to compute the inverse (I − JW)^{-T} exactly, we could compute a solution to (I − JW)^{-T} v using, e.g., conjugate gradient methods. However, we present an alternative formulation for computing (11) as the solution to a (linear) monotone operator splitting problem:
Algorithm 3 Forward-backward equilibrium backpropagation
  u := 0; err := ∞; v := ∂ℓ/∂z^⋆
  while err > ε do
    u^+ := (1 − α)u + αWᵀu
    u^+_i := (u^+_i + αv_i)/(1 + α(1 + D_ii)) if D_ii < ∞, and u^+_i := 0 if D_ii = ∞
    err := ‖u^+ − u‖₂ / ‖u^+‖₂
    u := u^+
  return u

Algorithm 4 Peaceman-Rachford equilibrium backpropagation
  z, u := 0; err := ∞; v := ∂ℓ/∂z^⋆; V := (I + α(I − W))^{-1}
  while err > ε do
    u^{1/2} := 2z − u
    z^{1/2} := Vᵀu^{1/2}
    u^+ := 2z^{1/2} − u^{1/2}
    z^+_i := (u^+_i + αv_i)/(1 + α(1 + D_ii)) if D_ii < ∞, and z^+_i := 0 if D_ii = ∞
    err := ‖z^+ − z‖₂ / ‖z^+‖₂
    z, u := z^+, u^+
  return z
Theorem 3. Let z^⋆ be a solution to the monotone operator splitting problem defined in Theorem 1, and define J as in (10). Then for v ∈ R^n the solution of the equation

u^⋆ = (I − JW)^{-T} v    (12)

is given by

u^⋆ = v + Wᵀũ^⋆    (13)

where ũ^⋆ is a zero of the operator splitting problem 0 ∈ (F̃ + G̃)(ũ^⋆), with operators defined as

F̃(ũ) = (I − Wᵀ)ũ,  G̃(ũ) = Dũ − v    (14)

where D is a diagonal matrix defined by J = (I + D)^{-1} (where we allow for the possibility of D_ii = ∞ for J_ii = 0).
An advantage of this approach when using Peaceman-Rachford splitting is that it allows us to reuse a fast method for multiplying by (I + α(I − W))^{-1}, which Peaceman-Rachford requires during both the forward pass (equilibrium solving) and the backward pass (backpropagation) of training a monDEQ. Algorithms detailing both the Peaceman-Rachford and forward-backward solvers for the backpropagation problem (14) are given in Algorithms 3 and 4.
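As a numerical check of Theorem 2 (our sketch; at toy scale we solve the linear system (11) directly rather than via Algorithm 3 or 4), the implicit gradient matches finite differences of a loss taken through the equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, alpha = 8, 3, 0.5, 1.0
A, B = 0.3 * rng.standard_normal((n, n)), 0.3 * rng.standard_normal((n, n))
W = (1 - m) * np.eye(n) - A.T @ A + B - B.T
U, b0, x = rng.standard_normal((n, d)), rng.standard_normal(n), rng.standard_normal(d)
V = np.linalg.inv(np.eye(n) + alpha * (np.eye(n) - W))

def solve_z(b):
    # Peaceman-Rachford forward solve (Algorithm 2), with prox = ReLU
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(3000):
        u_half = 2 * z - u
        z_half = V @ (u_half + alpha * (U @ x + b))
        u = 2 * z_half - u_half
        z = np.maximum(u, 0)
    return z

z = solve_z(b0)
J = np.diag((W @ z + U @ x + b0 > 0).astype(float))  # Clarke Jacobian of ReLU

# for the loss l = 0.5*||z*||^2, dl/dz* = z*; with d(Wz*+Ux+b)/db = I, eqn (9) gives:
grad_b = J @ np.linalg.solve((np.eye(n) - J @ W).T, z)

eps = 1e-5
fd = np.array([(0.5 * np.sum(solve_z(b0 + eps * e) ** 2)
                - 0.5 * np.sum(solve_z(b0 - eps * e) ** 2)) / (2 * eps)
               for e in np.eye(n)])
print(np.max(np.abs(grad_b - fd)))   # small, up to ReLU kinks and solver tolerance
```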
4 Example monotone operator networks
With the basic foundations from the previous section, we now highlight several different instantiations of the monDEQ architecture. In each of these settings, as in Theorem 1, we will formulate the objective as one of finding a solution to the operator splitting problem 0 ∈ (F + G)(z^⋆) for

F(z) = (I − W)z − (Ux + b),  G = ∂f    (15)

or equivalently as computing an equilibrium point z^⋆ = prox_f^1(W z^⋆ + U x + b).
In each of these settings we need to define what the input and hidden state x and z correspond to, what the W and U operators consist of, and what function f determines the network nonlinearity. Key to the application of monotone operator methods are that 1) we need to constrain the W matrix such that I − W ⪰ mI, as described in the previous section, and 2) we need a method to compute (or solve) the inverse (I + α(I − W))^{-1}, needed e.g. for Peaceman-Rachford; while this would not be needed if using only forward-backward splitting, we believe that the full power of the monotone operator view is realized precisely when these more involved methods are possible.
4.1 Fully connected networks
The simplest setting, of course, is the case we have largely highlighted above, where x ∈ R^d and z ∈ R^n are unstructured vectors, and W ∈ R^{n×n}, U ∈ R^{n×d}, and b ∈ R^n are dense matrices and vectors respectively. As indicated above, we parameterize W directly by A, B ∈ R^{n×n} as in (5). Since the Ux term simply acts as a bias in the iteration, there is no constraint on the form of U.
We can form the inverse directly by simply forming and inverting the matrix I + α(I − W), which has cost O(n³). Note that this inverse needs to be formed only once, and can be reused over all iterations of the operator splitting method and over an entire batch of examples (but recomputed, of course, when W changes). Any proximal function can be used as the activation: for example the ReLU, though as mentioned there are also close approximations to the sigmoid, tanh, and softplus.
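A sketch (ours) of this reuse: the O(n³) inverse is computed once and then applied across an entire minibatch of Peaceman-Rachford solves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, alpha, batch = 6, 3, 0.2, 1.0, 32
A, B = 0.3 * rng.standard_normal((n, n)), 0.3 * rng.standard_normal((n, n))
W = (1 - m) * np.eye(n) - A.T @ A + B - B.T
U, b = rng.standard_normal((n, d)), rng.standard_normal(n)
X = rng.standard_normal((batch, d))

V = np.linalg.inv(np.eye(n) + alpha * (np.eye(n) - W))   # formed once: O(n^3)
inj = X @ U.T + b                                        # per-example input injection

Z = np.zeros((batch, n)); Uu = np.zeros((batch, n))
for _ in range(500):
    U_half = 2 * Z - Uu
    Z_half = (U_half + alpha * inj) @ V.T                # reuse V across the batch
    Uu = 2 * Z_half - U_half
    Z = np.maximum(Uu, 0)

print(np.abs(Z - np.maximum(Z @ W.T + inj, 0)).max())    # small residual per example
```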
4.2 Convolutional networks
The real power of monDEQs comes with the ability to use more structured linear operators such as convolutions. We let x ∈ R^{ds²} be a d-channel input of size s × s, and z ∈ R^{ns²} be an n-channel hidden layer. We also let W ∈ R^{ns²×ns²} denote the linear form of a 2D convolutional operator, and similarly U ∈ R^{ns²×ds²}. As above, W is parameterized by two additional convolutional operators A, B of the same form as W. Note that this implicitly increases the receptive field size of W: if A and B are 3×3 convolutions, then W = (1 − m)I − AᵀA + B − Bᵀ will have an effective kernel size of 5.
Inversion The benefit of convolutional operators in this setting is the ability to perform efficient inversion via the fast Fourier transform. Specifically, in the case that A and B represent circular convolutions, we can reduce the matrices to block-diagonal form via the discrete Fourier transform (DFT) matrix

A = F_s D_A F_s^*    (16)

where F_s denotes (a permuted form of) the 2D DFT operator and D_A ∈ C^{ns²×ns²} is a (complex) block diagonal matrix where each block D_{A,ii} ∈ C^{n×n} corresponds to the DFT at one particular location in the image. In this form, we can efficiently multiply by the inverse of the convolutional operator, noting that

I + α(I − W) = (1 + αm)I + αAᵀA − αB + αBᵀ
             = F_s((1 + αm)I + αD_A^* D_A − αD_B + αD_B^*)F_s^*.    (17)
The inner term here is itself a block diagonal matrix with complex n × n blocks (each block is also guaranteed to be invertible by the same logic as for the full matrix). Thus, we can multiply a set of hidden units z by the inverse of this matrix by simply inverting each n × n block, taking the fast Fourier transform (FFT) of z, multiplying each corresponding block of F_s z by the corresponding inverse, then taking the inverse FFT. The details are given in Appendix C.
The computational cost of multiplying by this inverse is O(n²s² log s + n³s²) to compute the FFT of each convolutional filter and precompute the inverses, and then O(bns² log s + bn²s²) to multiply by the inverses for a set of hidden units with a minibatch of size b. Note that just computing the convolutions in a normal manner has cost O(bn²s²), so these computations are on the same order as performing typical forward passes through a network, though empirically 2-3 times slower owing to the relative complexity of performing the necessary FFTs.
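For intuition, here is a single-channel (n = 1) NumPy sketch of ours, where each frequency block reduces to a scalar and the inverse is elementwise division in the Fourier domain; the multi-channel case instead inverts an n × n block per frequency, as described above. The kernels here are random, circular, and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
s, m, alpha = 16, 0.5, 1.0
ka = 0.3 * rng.standard_normal((3, 3))     # kernel of A (circular convolution)
kb = 0.3 * rng.standard_normal((3, 3))     # kernel of B

def freq(kernel):
    k = np.zeros((s, s)); k[:3, :3] = kernel
    k = np.roll(k, (-1, -1), axis=(0, 1))  # center the 3x3 kernel at the origin
    return np.fft.fft2(k)                  # per-frequency "diagonal" of the conv

DA, DB = freq(ka), freq(kb)
# per frequency (scalar blocks for n = 1), the parameterization (5) becomes:
DW = (1 - m) - np.conj(DA) * DA + DB - np.conj(DB)
Dinv = 1.0 / (1 + alpha * (1 - DW))        # response of (I + alpha*(I - W))^{-1}

r = rng.standard_normal((s, s))            # a right-hand side to invert against
z = np.real(np.fft.ifft2(Dinv * np.fft.fft2(r)))

Wz = np.real(np.fft.ifft2(DW * np.fft.fft2(z)))
print(np.abs(z + alpha * (z - Wz) - r).max())   # ~0: the FFT inverse is exact
```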
One drawback of using the FFT in this manner is that it requires that all convolutions be circular; however, this restriction to circular convolutions can be avoided using zero-padding, as detailed in Section C.2.
4.3 Forward multi-tier networks
Although a single fully-connected or convolutional operator within a monDEQ can be suitable for small-scale problems, in typical deep learning settings it is common to model hidden units at different hierarchical levels. While monDEQs may seem inherently “single-layer,” we can model this same hierarchical structure by incorporating structure into the W matrix. For example, assuming a convolutional setting with input x ∈ R^{ds²} as in the previous section, we could partition z into L different hierarchical resolutions and let W have a multi-tier structure, e.g.

z = [z_1; z_2; …; z_L],  with z_i ∈ R^{n_i s_i²},

W =
  [ W_11    0      0    ⋯    0
    W_21   W_22    0    ⋯    0
     0     W_32   W_33  ⋯    0
     ⋮      ⋮      ⋮    ⋱    ⋮
     0      0      0    ⋯   W_LL ]
where z_i denotes the hidden units at level i, an s_i × s_i resolution hidden unit with n_i channels; W_ii denotes an n_i-channel to n_i-channel convolution; and W_{i+1,i} denotes an n_i-channel to n_{i+1}-channel strided convolution. This structure of W allows for both inter- and intra-tier influence.
One challenge is to ensure that we can represent W in the form (1 − m)I − AᵀA + B − Bᵀ while still maintaining the above structure, which we achieve by parameterizing each W_ij block appropriately. Another consideration is the inversion of the multi-tier operator, which can be achieved via the FFT similarly to the single-convolution W, but with additional complexity arising from the fact that the A_{i+1,i} convolutions are strided. These details are described in Appendix D.
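Purely to illustrate the tier pattern (the exact structure-preserving construction is in Appendix D; this dense sketch of ours does not reproduce it), note that strong monotonicity holds no matter how A and B are structured:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 4, 3, 0.2                     # two "tiers" of hidden units
n = n1 + n2

def tiered(rng):
    # lower block-bidiagonal pattern: intra-tier blocks plus a tier-1 -> tier-2 block
    M = np.zeros((n, n))
    M[:n1, :n1] = rng.standard_normal((n1, n1))
    M[n1:, :n1] = rng.standard_normal((n2, n1))
    M[n1:, n1:] = rng.standard_normal((n2, n2))
    return M

A, B = tiered(rng), tiered(rng)
W = (1 - m) * np.eye(n) - A.T @ A + B - B.T

# monotonicity is preserved for any structured A and B:
S = np.eye(n) - (W + W.T) / 2
print(np.linalg.eigvalsh(S).min())        # >= m
print(np.round(W, 1))                     # shows inter- and intra-tier coupling
```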
CIFAR-10
  Method             Model size   Acc.
  Neural ODE         172K         55.3±0.3%
  Aug. Neural ODE    172K         58.9±2.8%
  Neural ODE†*       1M           59.9%
  Aug. Neural ODE†*  1M           73.4%
  monDEQ (ours):
    Single conv      172K         74.0±0.1%
    Multi-tier       170K         72.0±0.3%
    Single conv*     854K         82.0±0.3%
    Multi-tier*      1M           89.0±0.3%

SVHN
  Method             Model size   Acc.
  Neural ODE‡        172K         81.0%
  Aug. Neural ODE‡   172K         83.5%
  monDEQ (ours):
    Single conv      172K         88.7±1.1%
    Multi-tier       170K         92.4±0.1%

MNIST
  Method             Model size   Acc.
  Neural ODE‡        84K          96.4%
  Aug. Neural ODE‡   84K          98.2%
  monDEQ (ours):
    Fully connected  84K          98.1±0.1%
    Single conv      84K          99.1±0.1%
    Multi-tier       81K          99.0±0.1%

Table 1: Test accuracy of monDEQ models compared to Neural ODEs and Augmented Neural ODEs. *with data augmentation; †best test accuracy before training diverges; ‡as reported in [10].
Figure 1: Test accuracy of monDEQs during training on CIFAR-10, with NODE [8] and ANODE [10] for comparison. NODE and ANODE curves obtained using code provided by [10]. (Plot: % test accuracy vs. epochs 1-40; curves for Single conv, Multi-tier, NODE, ANODE.)
5 Experiments
To test the expressive power and training stability of monDEQs, we evaluate the monDEQ instantiations described in Section 4 on several image classification benchmarks. We take as a point of comparison the Neural ODE (NODE) [8] and Augmented Neural ODE (ANODE) [10] models, the only other implicit-depth models which guarantee the existence and uniqueness of a solution. We also assess the stability of training standard DEQs of the same form as our monDEQs.
The training process relies upon the operator splitting algorithms derived in Sections 3.4 and 3.5; for each batch of examples, the forward pass of the network involves finding the network fixed point (Algorithm 1 or 2), and the backward pass involves backpropagating the loss gradient through the fixed point (Algorithm 3 or 4). We analyze the convergence properties of both the forward-backward and Peaceman-Rachford operator splitting methods, and use the more efficient Peaceman-Rachford splitting for our model training. For further training details and model architectures see Appendix E. Experiment code can be found at http://github.com/locuslab/monotone_op_net.
Performance on image benchmarks We train small monDEQs on CIFAR-10 [17], SVHN [22], and MNIST [18], with a similar number of parameters as the ODE-based models reported in [8] and [10]. The results (averages over three runs) are shown in Table 1. Training curves for monDEQs, NODE, and ANODE on CIFAR-10 are shown in Figure 1, and additional training curves are shown in Figure F1. Notably, except for the fully-connected model on MNIST, all monDEQs significantly outperform the ODE-based models across datasets. We highlight the performance of the small single-convolution monDEQ on CIFAR-10, which outperforms Augmented Neural ODE by 15.1%.
We also attempt to train standard DEQs of the same structure as our small multi-tier convolutional monDEQ. We train DEQs both with unconstrained W and with W having the monotone parameterization (5), and solve for the fixed point using Broyden’s method as in [5]. All models quickly diverge during the first few epochs of training, even when allowed 300 iterations of Broyden’s method.
Additionally, we train two larger monDEQs on CIFAR-10 with data augmentation. The strong performance (89% test accuracy) of the multi-tier network, in particular, goes a long way towards closing the performance gap with traditional deep networks. For comparison, we train larger NODE and ANODE models with a comparable number of parameters (~1M). These attain higher test accuracy than the smaller models during training, but diverge after 10-30 epochs (see Figure F1).
Efficiency of operator splitting methods We compare the convergence rates of Peaceman-Rachford and forward-backward splitting on a fully trained model, using a large multi-tier monDEQ trained on CIFAR-10. Figure 3 shows convergence for both methods during the forward pass, for a range of α. As the theory suggests, the convergence rates depend strongly on the choice of α. Forward-backward does not converge for α > 0.125, but convergence speed varies inversely with α for α < 0.125. In contrast, Peaceman-Rachford is guaranteed to converge for any α > 0, but the dependence is non-monotonic. We see that, for the optimal choice of α, Peaceman-Rachford can converge much more quickly than forward-backward. The convergence rate also depends on the Lipschitz parameter L of I − W, which we observe increases during training. Peaceman-Rachford therefore requires an increasing number of iterations during both the forward pass (Figure 2) and backward pass (Figure F2).
Finally, we compare the efficiency of monDEQ to that of the ODE-based models. We report the time and number of function evaluations (ODE solver steps or operator splitting iterations) required by the ~170K-parameter models to train on CIFAR-10 for 40 epochs. The monDEQ, Neural ODE, and ANODE training takes respectively 1.4, 4.4, and 3.3 hours, with an average of 20, 96, and 90 function evaluations per minibatch. Note however that training the larger 1M-parameter monDEQ on CIFAR-10 requires 65 epochs and takes 16 hours. All experiments are run on a single RTX 2080 Ti GPU.
6 Conclusion
The connection between monotone operator splitting and implicit network equilibria brings a new suite of tools to the study of implicit-depth networks. The strong performance, efficiency, and guaranteed stability of monDEQ indicate that such networks could become practical alternatives to deep networks, while the flexibility of the framework means that performance can likely be further improved by, e.g. imposing additional structure on W or employing other operator splitting methods. At the same time, we see potential for the study of monDEQs to inform traditional deep learning itself. The guarantees we can derive about what architectures and algorithms work for implicit-depth networks may give us insights into what will work for explicit deep networks.
Broader impact statement
While the main thrust of our work is foundational in nature, we do demonstrate the potential for implicit models to become practical alternatives to traditional deep networks. Owing to their improved memory efficiency, these networks have the potential to further applications of AI methods on edge devices, where they are currently largely impractical. However, the work is still largely algorithmic in nature, and thus the immediate societal-level benefits (or harms) that could result from the specific techniques we propose and demonstrate in this paper are much less clear.
Acknowledgements
Ezra Winston is supported by a grant from the Bosch Center for Artificial Intelligence. | 1. What is the focus and contribution of the paper on neural networks?
2. What are the strengths of the proposed approach, particularly in terms of modeling infinite depth with constant memory?
3. What are the weaknesses of the paper compared to prior works such as DEQ?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
DEQ is a very interesting work, which models infinite depth with a constant memory. DEQ does not save the intermediate hidden layers and updates the weights with only the fixed point. However, it did not resolve whether the fixed point is stable and unique. The paper used the theory of monotone operators and presented a solution to this problem.
Strengths
The paper used the theory of monotone operators and presented a practical solution to obtain unique fixed points.
Weaknesses
Compared with DEQ, the fixed points of MOE are claimed to be much more stable and unique, which needs experiments to justify. |
NIPS | 1. What is the focus and contribution of the paper on fixed-point networks?
2. What are the strengths of the proposed approach, particularly in terms of convergence guarantees and memory efficiency?
3. What are the weaknesses of the paper, especially regarding its limitations in function classes and applicability to certain types of data? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper develops a principled approach for training fixed-point networks. Given input $x$, fixed point networks define the output of a network as a fixed point $z$ of a computational block $f(x, z)$. The authors consider computational blocks consisting of a linear operator and a component-wise non-linearity; for such blocks, the authors propose a parametrization for the block to ensure a sufficient condition for the existence of a fixed point. Then the authors adopt two different operator splitting algorithms for the forward and backward passes of the fixed-point network. For the evaluation, the authors develop several architectures and compare them against Neural ODE. With the same parameter count, the proposed approach outperforms Neural ODE in terms of accuracy and computational efficiency. Additionally, the authors show the role of the fixed point solver hyperparameters and the scalability to more parameters.
Strengths
The original work on deep equilibrium models (DEQ) did not give any convergence guarantees for the underlying iterative process. The paper fills the gap with an alternative parametrization, an elegant and easy-to-implement solution. Similarly to DEQ, Monotone networks use implicit differentiation and do not store the intermediate steps of the underlying solver. As a result, the model training loop is more memory-efficient compared to conventional deep architectures. Another appealing feature is that the considered class of fixed-point networks includes a variety of non-linear activations and, as the authors show, is extendable beyond fully-connected matrices. Monotone networks outperform NeuralODE (another implicit depth architecture that is, unlike DEQ, well-defined) with a similar number of parameters and demonstrate room for further performance improvement.
Weaknesses
Compared to DEQ, Monotone Networks consider a less versatile class of functions (a price to pay for the convergence?). In particular, NeuralODE and DEQ extend to sequential data, but it is not clear how to extend MON to such data.
NIPS | Title
Monotone operator equilibrium networks
Abstract
Implicit-depth models such as Deep Equilibrium Networks have recently been shown to match or exceed the performance of traditional deep networks while being much more memory efficient. However, these models suffer from unstable convergence to a solution and lack guarantees that a solution exists. On the other hand, Neural ODEs, another class of implicit-depth models, do guarantee existence of a unique solution but perform poorly compared with traditional networks. In this paper, we develop a new class of implicit-depth model based on the theory of monotone operators, the Monotone Operator Equilibrium Network (monDEQ). We show the close connection between finding the equilibrium point of an implicit network and solving a form of monotone operator splitting problem, which admits efficient solvers with guaranteed, stable convergence. We then develop a parameterization of the network which ensures that all operators remain monotone, which guarantees the existence of a unique equilibrium point. Finally, we show how to instantiate several versions of these models, and implement the resulting iterative solvers, for structured linear operators such as multi-scale convolutions. The resulting models vastly outperform the Neural ODE-based models while also being more computationally efficient. Code is available at http://github.com/locuslab/monotone_op_net.
1 Introduction
Recent work in deep learning has demonstrated the power of implicit-depth networks, models where features are created not by explicitly iterating some number of nonlinear layers, but by finding a solution to some implicitly defined equation. Instances of such models include the Neural ODE [8], which computes hidden layers as the solution to a continuous-time dynamical system, and the Deep Equilibrium (DEQ) Model [5], which finds a fixed point of a nonlinear dynamical system corresponding to an effectively infinite-depth weight-tied network. These models, which trace back to some of the original work on recurrent backpropagation [2, 23], have recently regained attention since they have been shown to match or even exceed the performance of traditional deep networks in domains such as sequence modeling [5]. At the same time, these models show drastically improved memory efficiency over traditional networks since backpropagation is typically done analytically using the implicit function theorem, without needing to store the intermediate hidden layers.
However, implicit-depth models that perform well require extensive tuning in order to achieve stable convergence to a solution. Obtaining convergence in DEQs requires careful initialization and regularization, which has proven difficult in practice [21]. Moreover, solutions to these models are not guaranteed to exist or be unique, making the output of the models potentially ill-defined. While Neural ODEs [8] do guarantee existence of a unique solution, training remains unstable since the ODE problems can become severely ill-posed [10]. Augmented Neural ODEs [10] improve the stability of Neural ODEs by learning ODEs with simpler flows, but neither model achieves efficient
convergence nor performs well on standard benchmarks. Crucial questions remain about how models can have guaranteed, unique solutions, and what algorithms are most efficient at finding them.
In this paper, we present a new class of implicit-depth equilibrium model, the Monotone Operator Equilibrium Network (monDEQ), which guarantees stable convergence to a unique fixed point.1 The model is based upon the theory of monotone operators [6, 26], and illustrates a close connection between simple fixed-point iteration in weight-tied networks and the solution to a particular form of monotone operator splitting problem. Using this connection, this paper lays the theoretical and practical foundations for such networks. We show how to parameterize networks in a manner that ensures all operators remain monotone, which establishes the existence and uniqueness of the equilibrium point. We show how to backpropagate through such networks using the implicit function theorem; this leads to a corresponding (linear) operator splitting problem for the backward pass, which also is guaranteed to have a unique solution. We then adapt traditional operator splitting methods, such as forward-backward splitting or Peaceman-Rachford splitting, to naturally derive algorithms for efficiently computing these equilibrium points.
Finally, we demonstrate how to practically implement such models and operator splitting methods, in the cases of typical feedforward, fully convolutional, and multi-scale convolutional networks. For convolutional networks, the most efficient fixed-point solution methods require an inversion of the associated linear operator, and we illustrate how to achieve this using the fast Fourier transform. The resulting networks show strong performance on several benchmark tasks, vastly improving upon the accuracy and efficiency of Neural ODEs-based models, the other implicit-depth models where solutions are guaranteed to exist and be unique.
2 Related work
Implicit models in deep learning There has been a growing interest in recent years in implicit layers in deep learning. Instead of specifying the explicit computation to perform, a layer specifies some condition that should hold at the solution to the layer, such as a nonlinear equality, or a differential equation solution. Using the implicit function theorem allows for backpropagating through the layer solutions analytically, making these layers very memory efficient, as they do not need to maintain intermediate iterations of the solution procedure. Recent examples include layers that compute inference in graphical models [15], solve optimization problems [12, 3, 13, 1], execute model-based control policies [4], solve two-player games [20], solve gradient-based optimization for meta-learning [24], and many others.
Stability of fixed-point models The issue of model stability has in fact been at the heart of much work in fixed-point models. The original work on attractor-style recurrent models, trained via recurrent backpropagation [2, 23], precisely attempted to ensure that the forward iteration procedure was stable. And indeed, much of the work in recurrent architectures such as LSTMs has focused on these issues of stability [14]. Recent work has revisited recurrent backpropagation in a similar manner to DEQs, with the similar aim of speeding up the computation of fixed points [19]. And other work has looked at the stability of implicit models [11], with an emphasis on guaranteeing the existence of fixed points, but focused on alternative stability conditions, and considered only relatively small-scale experiments. Other recent work has looked to use control-theoretic methods to ensure the stability of implicit models, [25], though again they consider only small-scale evaluations.
Monotone operators in deep learning Although most work in the field of monotone operators is concerned with general convex analysis, the recent work of [9] does also highlight connections between deep networks and monotone operator problems. Unlike our current work, however, that work focused largely on the fact that many common non-linearities can be expressed via proximal operators, and analyzed traditional networks under the assumptions that certain of the operators were monotone, but did not address conditions for the networks to be monotone or algorithms for solving or backpropagating through the networks.
3 A monotone operator view of fixed-point networks
This section lays out our main methodological and theoretical contribution, a class of equilibrium networks based upon monotone operators. We begin with some preliminaries, then highlight the
1We largely use the terms “fixed point” and “equilibrium point” interchangeably in this work, using fixed point in the context of an iterative procedure, and equilibrium point to refer more broadly to the point itself.
basic connection between the fixed point of an “infinite-depth” network and an associated operator splitting problem; next, we propose a parameterization that guarantees the associated operators to be maximal monotone; finally, we show how to use operator splitting methods to both compute the fixed point and backpropagate through the fixed point efficiently.
3.1 Preliminaries
Monotone operator theory The theory of monotone operators plays a foundational role in convex analysis and optimization. Monotone operators are a natural generalization of monotone functions, which can be used to assess the convergence properties of many forms of iterative fixed-point algorithms. We emphasize that the majority of the work in this paper relies on well-known properties about monotone operators, and we refer to standard references on the topic including [6] and a less formal survey by [26]; we do include a brief recap of the definitions and results we require in Appendix A. Formally, an operator is a subset of the space F ⊆ R^n × R^n; in our setting this will usually correspond to a set-valued or single-valued function. Operator splitting approaches refer to methods for finding a zero in a sum of operators, i.e., find x such that 0 ∈ (F + G)(x). There are many such methods, but the two we will use mainly in this work are forward-backward splitting (eqn. A9 in the Appendix) and Peaceman-Rachford splitting (eqn. A10). As we will see, both finding a network equilibrium point and backpropagating through it can be formulated as operator splitting problems, and different operator splitting methods will lead to different approaches in their application to our subsequent implicit networks.
Deep equilibrium models The monDEQ architecture is closely related to the DEQ model, which parameterizes a “weight-tied, input-injected” network of the form z_{i+1} = g(z_i, x), where x denotes the input to the network, injected at each layer; z_i denotes the hidden layer at depth i; and g denotes a nonlinear function which is the same for each layer (hence the network is weight-tied). The key aspect of the DEQ model is that in this weight-tied setting, instead of forward iteration, we can simply use any root-finding approach to find an equilibrium point of such an iteration z* = g(z*, x). Assuming the model is stable, this equilibrium point corresponds to an “infinite-depth fixed point” of the layer. The monDEQ architecture can be viewed as an instance of a DEQ model, but one that relies on the theory of monotone operators, and a specific parameterization of the network weights, to guarantee the existence of a unique fixed point for the network. Crucially, however, as is the case for DEQs, naive forward iteration of this model is not necessarily stable; we therefore employ operator splitting methods to develop provably (linearly) convergent methods for finding such fixed points.
3.2 Fixed-point networks as operator splitting
As a starting point of our analysis, consider the weight-tied, input-injected network in which x ∈ R^d denotes the input, and z^k ∈ R^n denotes the hidden units at layer k, given by the iteration2
z^{k+1} = σ(W z^k + U x + b)   (1)
where σ : R → R is a nonlinearity applied elementwise, W ∈ R^{n×n} are the hidden unit weights, U ∈ R^{n×d} are the input-injection weights and b ∈ R^n is a bias term. An equilibrium point, or fixed point, of this system is some point z* which remains constant after an update:
z* = σ(W z* + U x + b).   (2)
We begin by observing that it is possible to characterize this equilibrium point exactly as the solution to a certain operator splitting problem, under certain choices of operators and activation σ. This can be formalized in the following theorem, which we prove in Appendix B: Theorem 1. Finding a fixed point of the iteration (1) is equivalent to finding a zero of the operator splitting problem 0 ∈ (F + G)(z*) with the operators
F(z) = (I − W)z − (Ux + b),   G = ∂f   (3)
and σ(·) = prox_f^1(·) for some convex closed proper (CCP) function f, where prox_{αf} denotes the proximal operator
prox_{αf}(x) ≡ argmin_z (1/2)‖x − z‖_2^2 + αf(z).   (4)
2This setting can also be seen as corresponding to a recurrent network with identical inputs at each time (indeed, this is the view of so-called attractor networks [23]). However, because in modern usage recurrent networks typically refer to sequential models with different inputs at each time, we don’t adopt this terminology.
It is also well-established that many common nonlinearities used in deep networks can be represented as proximal operators of CCP functions [7, 9]. For example, the ReLU nonlinearity σ(x) = [x]_+ is the proximal operator of the indicator of the positive orthant f(x) = I{x ≥ 0}, and tanh, sigmoid, and softplus all have close correspondence with proximal operators of simple expressions [7].
In fact, this method establishes that some seemingly unstable iterations can actually still lead to convergent algorithms. ReLU activations, for instance, have traditionally been avoided in iterative models such as recurrent networks, due to exploding or vanishing gradient problems and nonsmoothness. Yet this iteration shows that (with input injection and the above constraint on W ), ReLU operators are perfectly well-suited to these fixed-point iterations.
3.3 Enforcing existence of a unique solution
The above connection is straightforward, but also carries interesting implications for deep learning. Specifically, we can establish the existence and uniqueness of the equilibrium point z* via the simple sufficient criterion that I − W is strongly monotone, or in other words3 I − W ⪰ mI for some m > 0 (see Appendix A). The constraint is by no means a trivial condition. Although many layers obey this condition under typical initialization schemes, during training it is normal for W to move outside this regime. Thus, the first step of the monDEQ architecture is to parameterize W in such a way that it always satisfies this strong monotonicity constraint. Proposition 1. We have I − W ⪰ mI if and only if there exist A, B ∈ R^{n×n} such that
W = (1 − m)I − A^T A + B − B^T.   (5)
We therefore propose to simply parameterize W directly in this form, by defining the A and B matrices directly. While this is an overparameterized form for a dense matrix, we could avoid this issue by, e.g. constraining A to be lower triangular (making it the Cholesky factor of A^T A), and by making B strictly upper triangular; in practice, however, simply using general A and B matrices has little impact upon the performance of the method. The parameterization does notably raise additional complications when dealing with convolutional layers, but we defer this discussion to Section 4.2.
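To make this concrete, here is a minimal NumPy sketch (ours, with illustrative dimensions and margin m, not the paper's codebase) that builds W as in (5) and verifies the strong monotonicity condition numerically; the symmetric part of I − W equals mI + A^T A, so the check always passes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 0.2                                    # hidden size and monotonicity margin
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))

# Parameterization (5): W = (1 - m) I - A^T A + B - B^T.
W = (1 - m) * np.eye(n) - A.T @ A + (B - B.T)

# I - W >= m I holds iff the symmetric part of I - W dominates m I.
S = np.eye(n) - (W + W.T) / 2
print(np.linalg.eigvalsh(S).min() >= m - 1e-9)   # True
```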
3.4 Computing the network fixed point
Given the monDEQ formulation, the first natural question to ask is: how should we compute the equilibrium point z* = σ(Wz* + Ux + b)? Crucially, it can be the case that the simple forward iteration of the network equation (1) does not converge, i.e., the iteration may be unstable. Fortunately, monotone operator splitting leads to a number of iterative methods for finding these fixed points, which are guaranteed to converge under proper conditions. For example, the forward-backward iteration applied to the monotone operator formulation from Theorem 1 results exactly in a damped version of the forward iteration
z^{k+1} = prox_{αf}(z^k − α((I − W)z^k − (Ux + b))) = prox_{αf}((1 − α)z^k + α(Wz^k + Ux + b)).   (6)
This iteration is guaranteed to converge linearly to the fixed point z* provided that α ≤ 2m/L^2, when the operator I − W is Lipschitz and strongly monotone with parameters L (which is simply the operator norm ‖I − W‖_2) and m [26]. A key advantage of the monDEQ formulation is the flexibility to employ alternative operator splitting methods that converge much more quickly to the equilibrium. One such example is Peaceman-Rachford splitting which, when applied to the formulation from Theorem 1, takes the form
u^{k+1/2} = 2z^k − u^k
z^{k+1/2} = (I + α(I − W))^{−1}(u^{k+1/2} + α(Ux + b))
u^{k+1} = 2z^{k+1/2} − u^{k+1/2}
z^{k+1} = prox_{αf}(u^{k+1})   (7)
where we use the explicit form of the resolvents for the two monotone operators of the model. The advantage of Peaceman-Rachford splitting over forward-backward is two-fold: 1) it typically converges in fewer iterations, which is a key bottleneck for many implicit models; and 2) it converges
3For non-symmetric matrices, which of course is typically the case with W, positive definiteness is defined as the positive definiteness of the symmetric component: I − W ⪰ mI ⟺ I − (W + W^T)/2 ⪰ mI.
Algorithm 1 Forward-backward equilibrium solving
z := 0; err := ∞
while err > ε do
    z⁺ := (1 − α)z + α(Wz + Ux + b)
    z⁺ := prox_{αf}(z⁺)
    err := ‖z⁺ − z‖_2 / ‖z⁺‖_2
    z := z⁺
return z

Algorithm 2 Peaceman-Rachford equilibrium solving
z, u := 0; err := ∞; V := (I + α(I − W))^{−1}
while err > ε do
    u^{1/2} := 2z − u
    z^{1/2} := V(u^{1/2} + α(Ux + b))
    u⁺ := 2z^{1/2} − u^{1/2}
    z⁺ := prox_{αf}(u⁺)
    err := ‖z⁺ − z‖_2 / ‖z⁺‖_2
    z, u := z⁺, u⁺
return z
for any α > 0 [26], unlike forward-backward splitting which is dependent on the Lipschitz constant of I − W. The disadvantage of Peaceman-Rachford splitting, however, is that it requires an inverse involving the weight matrix W. It is not immediately clear how to apply such methods if the W matrix involves convolutions or multi-layer models; we discuss these points in Section 4.2. A summary of these methods for computation of the forward equilibrium point is given in Algorithms 1 and 2.
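For illustration, the following is a minimal NumPy sketch (ours, not the paper's codebase) of the damped forward-backward iteration (6) for a ReLU monDEQ; the dimensions, weight scaling, and step size are illustrative choices, with α set safely below the 2m/L^2 bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 10, 0.5
A = rng.normal(size=(n, n)) / n
B = rng.normal(size=(n, n)) / n
W = (1 - m) * np.eye(n) - A.T @ A + (B - B.T)     # parameterization (5)
U, b, x = rng.normal(size=(n, d)), rng.normal(size=n), rng.normal(size=d)

L = np.linalg.norm(np.eye(n) - W, 2)              # Lipschitz constant of I - W
alpha = m / L**2                                  # below the 2m/L^2 bound

z = np.zeros(n)
for _ in range(10_000):                           # damped iteration (6), ReLU prox
    z_new = np.maximum((1 - alpha) * z + alpha * (W @ z + U @ x + b), 0)
    if np.linalg.norm(z_new - z) <= 1e-10 * max(np.linalg.norm(z_new), 1e-12):
        z = z_new
        break
    z = z_new

print(np.linalg.norm(z - np.maximum(W @ z + U @ x + b, 0)))   # ~0: a fixed point
```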
3.5 Backpropagation through the monotone operator layer
Finally, a key challenge for any implicit model is to determine how to perform backpropagation through the layer. As with most implicit models, a potential benefit of the fixed-point conditions we describe is that, by using the implicit function theorem, it is possible to perform backpropagation without storing the intermediate iterates of the operator splitting algorithm in memory, and instead backpropagating directly through the equilibrium point.
To begin, we present a standard approach to differentiating through the fixed point z* using the implicit function theorem. This formulation has some compelling properties for monDEQ, namely the fact that this (sub)gradient will always exist. When training a network via gradient descent, we need to compute the gradients of the loss function
∂ℓ/∂(·) = (∂ℓ/∂z*) · (∂z*/∂(·))   (8)
where (·) denotes some input to the layer or parameters, i.e. W, x, etc. The challenge here is computing (or left-multiplying by) the Jacobian ∂z*/∂(·), since z* is not an explicit function of the inputs. While it would be possible to simply compute gradients through the “unrolled” updates, e.g. z^{k+1} = σ(Wz^k + Ux + b) for forward iteration, this would require storing each intermediate state z^k, a potentially memory-intensive operation. Instead, the following theorem gives an explicit formula for the necessary (sub)gradients. We state the theorem more directly in terms of the operators mentioned in Theorem 1; that is, we use prox_f^1(·) in place of σ(·).
Theorem 2. For the equilibrium point z* = prox_f^1(Wz* + Ux + b), we have
∂ℓ/∂(·) = (∂ℓ/∂z*) (I − JW)^{−1} J (∂(Wz* + Ux + b)/∂(·))   (9)
where
J = D prox_f^1(Wz* + Ux + b)   (10)
denotes the Clarke generalized Jacobian of the nonlinearity evaluated at the point Wz* + Ux + b. Furthermore, for the case that (I − W) ⪰ mI, this derivative always exists.
To apply the theorem in practice to perform reverse-mode differentiation, we need to solve the system
(I − JW)^{−T} (∂ℓ/∂z*)^T.   (11)
The above system is a linear equation and while it is typically computationally infeasible to compute the inverse (I − JW)^{−T} exactly, we could compute a solution to (I − JW)^{−T} v using, e.g., conjugate gradient methods. However, we present an alternative formulation to computing (11) as the solution to a (linear) monotone operator splitting problem:
Algorithm 3 Forward-backward equilibrium backpropagation
u := 0; err := ∞; v := ∂ℓ/∂z*
while err > ε do
    u⁺ := (1 − α)u + αW^T u
    u⁺_i := (u⁺_i + αv_i) / (1 + α(1 + D_ii))   if D_ii < ∞
    u⁺_i := 0                                   if D_ii = ∞
    err := ‖u⁺ − u‖_2 / ‖u⁺‖_2
    u := u⁺
return u

Algorithm 4 Peaceman-Rachford equilibrium backpropagation
z, u := 0; err := ∞; v := ∂ℓ/∂z*; V := (I + α(I − W))^{−1}
while err > ε do
    u^{1/2} := 2z − u
    z^{1/2} := V^T u^{1/2}
    u⁺ := 2z^{1/2} − u^{1/2}
    z⁺_i := (u⁺_i + αv_i) / (1 + α(1 + D_ii))   if D_ii < ∞
    z⁺_i := 0                                   if D_ii = ∞
    err := ‖z⁺ − z‖_2 / ‖z⁺‖_2
    z, u := z⁺, u⁺
return z
Theorem 3. Let z* be a solution to the monotone operator splitting problem defined in Theorem 1, and define J as in (10). Then for v ∈ R^n the solution of the equation
u* = (I − JW)^{−T} v   (12)
is given by
u* = v + W^T ũ*   (13)
where ũ* is a zero of the operator splitting problem 0 ∈ (F̃ + G̃)(ũ*), with operators defined as
F̃(ũ) = (I − W^T)ũ,   G̃(ũ) = Dũ − v   (14)
where D is a diagonal matrix defined by J = (I + D)^{−1} (where we allow for the possibility of D_ii = ∞ for J_ii = 0).
An advantage of this approach when using Peaceman-Rachford splitting is that it allows us to reuse a fast method for multiplying by (I + α(I − W))^{−1} which is required by Peaceman-Rachford during both the forward pass (equilibrium solving) and backward pass (backpropagation) of training a monDEQ. Algorithms detailing both the Peaceman-Rachford and forward-backward solvers for the backpropagation problem (14) are given in Algorithms 3 and 4.
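As a sanity check (ours), the following NumPy sketch verifies Theorem 3 for a ReLU nonlinearity, where J is a 0/1 diagonal matrix and hence D_ii ∈ {0, ∞}: it solves the splitting problem (14) directly on the coordinates where D_ii = 0 and compares the result with a dense solve of (I − JW)^T u = v:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 0.5
A = rng.normal(size=(n, n)) / n
B = rng.normal(size=(n, n)) / n
W = (1 - m) * np.eye(n) - A.T @ A + (B - B.T)

j = rng.integers(0, 2, size=n).astype(float)      # ReLU derivative pattern: J = diag(j)
v = rng.normal(size=n)
active = j > 0                                    # D_ii = 0 here, D_ii = inf elsewhere

# Theorem 3: u~ vanishes where D_ii = inf; on the active rows (D_ii = 0)
# it satisfies u~ - W^T u~ = v, i.e. (I - W^T)_{S,S} u~_S = v_S.
ut = np.zeros(n)
ut[active] = np.linalg.solve((np.eye(n) - W.T)[np.ix_(active, active)], v[active])
u = v + W.T @ ut                                  # equation (13)

u_direct = np.linalg.solve((np.eye(n) - np.diag(j) @ W).T, v)
print(np.linalg.norm(u - u_direct))               # ~0
```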
4 Example monotone operator networks
With the basic foundations from the previous section, we now highlight several different instantiations of the monDEQ architecture. In each of these settings, as in Theorem 1, we will formulate the objective as one of finding a solution to the operator splitting problem 0 ∈ (F + G)(z*) for
F(z) = (I − W)z − (Ux + b),   G = ∂f   (15)
or equivalently as computing an equilibrium point z* = prox_f^1(Wz* + Ux + b).
In each of these settings we need to define what the input and hidden state x and z correspond to, what the W and U operators consist of, and what the function f is that determines the network nonlinearity. Key to the application of monotone operator methods are that 1) we need to constrain the W matrix such that I − W ⪰ mI as described in the previous section and 2) we need a method to compute (or solve) the inverse (I + α(I − W))^{−1}, needed e.g. for Peaceman-Rachford; while this would not be needed if using only forward-backward splitting, we believe that the full power of the monotone operator view is realized precisely when these more involved methods are possible.
4.1 Fully connected networks
The simplest setting, of course, is the case we have largely highlighted above, where x ∈ R^d and z ∈ R^n are unstructured vectors, and W ∈ R^{n×n} and U ∈ R^{n×d} and b ∈ R^n are dense matrices and vectors respectively. As indicated above, we parameterize W directly by A, B ∈ R^{n×n} as in (5). Since the Ux term simply acts as a bias in the iteration, there is no constraint on the form of U.
We can form an inverse directly by simply forming and inverting the matrix I + α(I − W), which has cost O(n^3). Note that this inverse needs to be formed only once, and can be reused over all iterations of the operator splitting method and over an entire batch of examples (but recomputed, of course, when W changes). Any proximal function can be used as the activation: for example the ReLU, though as mentioned there are also close approximations to the sigmoid, tanh, and softplus.
4.2 Convolutional networks
The real power of monDEQs comes with the ability to use more structured linear operators such as convolutions. We let x ∈ R^{ds^2} be a d-channel input of size s × s and z ∈ R^{ns^2} be an n-channel hidden layer. We also let W ∈ R^{ns^2×ns^2} denote the linear form of a 2D convolutional operator and similarly for U ∈ R^{ns^2×ds^2}. As above, W is parameterized by two additional convolutional operators A, B of the same form as W. Note that this implicitly increases the receptive field size of W: if A and B are 3 × 3 convolutions, then W = (1 − m)I − A^T A + B − B^T will have an effective kernel size of 5.
Inversion The benefit of convolutional operators in this setting is the ability to perform efficient inversion via the fast Fourier transform. Specifically, in the case that A and B represent circular convolutions, we can reduce the matrices to block-diagonal form via the discrete Fourier transform (DFT) matrix
A = F_s D_A F_s^*   (16)
where F_s denotes (a permuted form of) the 2D DFT operator and D_A ∈ C^{ns^2×ns^2} is a (complex) block diagonal matrix where each block D_{A,ii} ∈ C^{n×n} corresponds to the DFT at one particular location in the image. In this form, we can efficiently multiply by the inverse of the convolutional operator, noting that
I + α(I − W) = (1 + αm)I + αA^T A − αB + αB^T = F_s((1 + αm)I + αD_A^* D_A − αD_B + αD_B^*)F_s^*.   (17)
The inner term here is itself a block diagonal matrix with complex n × n blocks (each block is also guaranteed to be invertible by the same logic as for the full matrix). Thus, we can multiply a set of hidden units z by the inverse of this matrix by simply inverting each n × n block, taking the fast Fourier transform (FFT) of z, multiplying each corresponding block of F_s z by the corresponding inverse, then taking the inverse FFT. The details are given in Appendix C.
The computational cost of multiplying by this inverse is O(n^2 s^2 log s + n^3 s^2) to compute the FFT of each convolutional filter and precompute the inverses, and then O(bns^2 log s + bn^2 s^2) to multiply by the inverses for a set of hidden units with a minibatch of size b. Note that just computing the convolutions in a normal manner has cost O(bn^2 s^2), so that these computations are on the same order as performing typical forward passes through a network, though empirically 2-3 times slower owing to the relative complexity of performing the necessary FFTs.
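To illustrate the mechanics, here is a single-channel NumPy sketch (our simplification; with one channel the n × n blocks reduce to scalars) that solves (I + α(I − W))z = y for a circular convolution W using two FFTs and a pointwise division:

```python
import numpy as np

rng = np.random.default_rng(0)
s, alpha = 16, 0.5
w = np.zeros((s, s))
w[:3, :3] = 0.1 * rng.normal(size=(3, 3))         # a small 3x3 circular kernel

def conv(z):                                      # circular convolution W z via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(w) * np.fft.fft2(z)))

y = rng.normal(size=(s, s))

# In the Fourier domain I + alpha (I - W) is diagonal, so inversion is pointwise.
w_hat = np.fft.fft2(w)
z = np.real(np.fft.ifft2(np.fft.fft2(y) / (1 + alpha * (1 - w_hat))))

print(np.linalg.norm(z + alpha * (z - conv(z)) - y))   # ~0
```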
One drawback of using the FFT in this manner is that it requires that all convolutions be circular; however, this circular dependence can be avoided using zero-padding, as detailed in Section C.2.
4.3 Forward multi-tier networks
Although a single fully-connected or convolutional operator within a monDEQ can be suitable for small-scale problems, in typical deep learning settings it is common to model hidden units at different hierarchical levels. While monDEQs may seem inherently “single-layer,” we can model this same hierarchical structure by incorporating structure into the W matrix. For example, assuming a convolutional setting, with input x ∈ R^{ds^2} as in the previous section, we could partition z into L different hierarchical resolutions and let W have a multi-tier structure, e.g.
z = [ z_1 ; z_2 ; … ; z_L ],   with z_i ∈ R^{n_i s_i^2},   and

W = [ W_11    0      0     ⋯    0
      W_21   W_22    0     ⋯    0
       0     W_32   W_33   ⋯    0
       ⋮      ⋮      ⋮     ⋱    ⋮
       0      0      0     ⋯   W_LL ]
where z_i denotes the hidden units at level i, an s_i × s_i resolution hidden unit with n_i channels, and where W_ii denotes an n_i channel to n_i channel convolution, and W_{i+1,i} denotes an n_i to n_{i+1} channel, strided convolution. This structure of W allows for both inter- and intra-tier influence.
One challenge is to ensure that we can represent W with the form (1 − m)I − A^T A + B − B^T while still maintaining the above structure, which we achieve by parameterizing each W_ij block appropriately. Another consideration is the inversion of the multi-tier operator, which can be achieved via the FFT similarly as for a single-convolutional W, but with additional complexity arising from the fact that the A_{i+1,i} convolutions are strided. These details are described in Appendix D.
CIFAR-10
Method              Model size   Acc.
Neural ODE          172K         55.3±0.3%
Aug. Neural ODE     172K         58.9±2.8%
Neural ODE†*        1M           59.9%
Aug. Neural ODE†*   1M           73.4%
monDEQ (ours)
  Single conv       172K         74.0±0.1%
  Multi-tier        170K         72.0±0.3%
  Single conv*      854K         82.0±0.3%
  Multi-tier*       1M           89.0±0.3%

SVHN
Method              Model size   Acc.
Neural ODE‡         172K         81.0%
Aug. Neural ODE‡    172K         83.5%
monDEQ (ours)
  Single conv       172K         88.7±1.1%
  Multi-tier        170K         92.4±0.1%

MNIST
Method              Model size   Acc.
Neural ODE‡         84K          96.4%
Aug. Neural ODE‡    84K          98.2%
monDEQ (ours)
  Fully connected   84K          98.1±0.1%
  Single conv       84K          99.1±0.1%
  Multi-tier        81K          99.0±0.1%

Table 1: Test accuracy of monDEQ models compared to Neural ODEs and Augmented Neural ODEs. *with data augmentation; †best test accuracy before training diverges; ‡as reported in [10].
[Figure 1: Test accuracy (%) on CIFAR-10 over training epochs for Single conv and Multi-tier monDEQs, with NODE [8] and ANODE [10] for comparison. NODE and ANODE curves obtained using code provided by [10].]
5 Experiments
To test the expressive power and training stability of monDEQs, we evaluate the monDEQ instantiations described in Section 4 on several image classification benchmarks. We take as a point of comparison the Neural ODE (NODE) [8] and Augmented Neural ODE (ANODE) [10] models, the only other implicit-depth models which guarantee the existence and uniqueness of a solution. We also assess the stability of training standard DEQs of the same form as our monDEQs.
The training process relies upon the operator splitting algorithms derived in Sections 3.4 and 3.5; for each batch of examples, the forward pass of the network involves finding the network fixed point (Algorithm 1 or 2), and the backward pass involves backpropagating the loss gradient through the fixed point (Algorithm 3 or 4). We analyze the convergence properties of both the forward-backward and Peaceman-Rachford operator splitting methods, and use the more efficient Peaceman-Rachford splitting for our model training. For further training details and model architectures see Appendix E. Experiment code can be found at http://github.com/locuslab/monotone_op_net.
Performance on image benchmarks We train small monDEQs on CIFAR-10 [17], SVHN [22], and MNIST [18], with a similar number of parameters as the ODE-based models reported in [8] and [10]. The results (averages over three runs) are shown in Table 1. Training curves for monDEQs, NODE, and ANODE on CIFAR-10 are shown in Figure 1 and additional training curves are shown in Figure F1. Notably, except for the fully-connected model on MNIST, all monDEQs significantly outperform the ODE-based models across datasets. We highlight the performance of the small single convolution monDEQ on CIFAR-10 which outperforms Augmented Neural ODE by 15.1%.
We also attempt to train standard DEQs of the same structure as our small multi-tier convolutional monDEQ. We train DEQs both with unconstrained W and with W having the monotone parameterization (5), and solve for the fixed point using Broyden’s method as in [5]. All models quickly diverge during the first few epochs of training, even when allowed 300 iterations of Broyden’s method.
Additionally, we train two larger monDEQs on CIFAR-10 with data augmentation. The strong performance (89% test accuracy) of the multi-tier network, in particular, goes a long way towards closing the performance gap with traditional deep networks. For comparison, we train larger NODE and ANODE models with a comparable number of parameters (~1M). These attain higher test accuracy than the smaller models during training, but diverge after 10-30 epochs (see Figure F1).
Efficiency of operator splitting methods We compare the convergence rates of Peaceman-Rachford and forward-backward splitting on a fully trained model, using a large multi-tier monDEQ trained on CIFAR-10. Figure 3 shows convergence for both methods during the forward pass, for a range of α. As the theory suggests, the convergence rates depend strongly on the choice of α. Forward-backward does not converge for α > 0.125, but convergence speed varies inversely with α for α < 0.125. In contrast, Peaceman-Rachford is guaranteed to converge for any α > 0 but the dependence is non-monotonic. We see that, for the optimal choice of α, Peaceman-Rachford can converge much more quickly than forward-backward. The convergence rate also depends on the Lipschitz parameter L of I − W, which we observe increases during training. Peaceman-Rachford therefore requires an increasing number of iterations during both the forward pass (Figure 2) and backward pass (Figure F2).
Finally, we compare the efficiency of monDEQ to that of the ODE-based models. We report the time and number of function evaluations (ODE solver steps or operator splitting iterations) required by the ~170k-parameter models to train on CIFAR-10 for 40 epochs. The monDEQ, Neural ODE, and ANODE training takes respectively 1.4, 4.4, and 3.3 hours, with an average of 20, 96, and 90 function evals per minibatch. Note however that training the larger 1M-parameter monDEQ on CIFAR-10 requires 65 epochs and takes 16 hours. All experiments are run on a single RTX 2080 Ti GPU.
6 Conclusion
The connection between monotone operator splitting and implicit network equilibria brings a new suite of tools to the study of implicit-depth networks. The strong performance, efficiency, and guaranteed stability of monDEQ indicate that such networks could become practical alternatives to deep networks, while the flexibility of the framework means that performance can likely be further improved by, e.g. imposing additional structure on W or employing other operator splitting methods. At the same time, we see potential for the study of monDEQs to inform traditional deep learning itself. The guarantees we can derive about what architectures and algorithms work for implicit-depth networks may give us insights into what will work for explicit deep networks.
Broader impact statement
While the main thrust of our work is foundational in nature, we do demonstrate the potential for implicit models to become practical alternatives to traditional deep networks. Owing to their improved memory efficiency, these networks have the potential to further applications of AI methods on edge devices, where they are currently largely impractical. However, the work is still largely algorithmic in nature, and thus the immediate societal-level benefits (or harms) that could result from the specific techniques we propose and demonstrate in this paper are much less clear.
Acknowledgements
Ezra Winston is supported by a grant from the Bosch Center for Artificial Intelligence. | 1. What is the focus and contribution of the paper on implicit depth models?
2. What are the strengths of the proposed approach, particularly in terms of its stability and performance compared to other methods?
3. What are the weaknesses of the paper regarding its comparisons and simulations?
4. Are there any concerns or suggestions regarding the reporting of results and standard errors? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors use the theory of monotone operators to develop a novel implicit-depth model. Unlike previous approaches such as NODE and ANODE, they show that their proposed approach has stable convergence.
Strengths
The most impressive strength of the paper is that the proposed approach significantly outperforms the state-of-the-art results from the NODE and ANODE papers.
Weaknesses
It is great that the authors have compared against NODE and ANODE. However, some comparisons with the results reported in the NODE and ANODE papers are still missing. One more comment about the simulation results: it would be great if the authors could report standard errors for their results, as the NODE and ANODE papers do.
NIPS | Title
Expected Frequency Matrices of Elections: Computation, Geometry, and Preference Learning
Abstract
We use the “map of elections” approach of Szufa et al. (AAMAS-2020) to analyze several well-known vote distributions. For each of them, we give an explicit formula or an efficient algorithm for computing its frequency matrix, which captures the probability that a given candidate appears in a given position in a sampled vote. We use these matrices to draw the “skeleton map” of distributions, evaluate its robustness, and analyze its properties. Finally, we develop a general and unified framework for learning the distribution of real-world preferences using the frequency matrices of established vote distributions.
1 Introduction
Computational social choice is a research area at the intersection of social choice (the science of collective decision-making) and computer science, which focuses on the algorithmic analysis of problems related to preference aggregation and elicitation (Brandt et al., 2013). Many of the early papers in this field were primarily theoretical, focusing on establishing the worst-case complexity of winner determination and strategic behavior under various voting rules—see, e.g., the papers of Hemaspaandra et al. (1997), Dwork et al. (2001), and Conitzer et al. (2007)—but more recent work often combines theoretical investigations with empirical analysis. For example, formal bounds on the running time and/or approximation ratio of a winner determination algorithm can be complemented by experiments that evaluate its performance on realistic instances; see, e.g., the works of Conitzer (2006), Betzler et al. (2014), Faliszewski et al. (2018) and Wang et al. (2019).
However, performing high-quality experiments requires the ability to organize and understand the available data. One way to achieve this is to form a so-called “map of elections,” recently introduced by Szufa et al. (2020) and extended by Boehmer et al. (2021b). The idea is as follows. First, we fix a distance measure between elections. Second, we sample a number of elections from various distributions and real-life datasets—e.g., those collected in PrefLib (Mattei & Walsh, 2013)—and measure the pairwise distances between them. Third, we embed these elections into the 2D plane, mapping each election to a point so that the Euclidean distances between points are approximately equal to the distances between the respective elections. Finally, we plot these points, usually coloring them to indicate their origin (e.g., the distribution from which a given election was sampled); see Figure 2 later in the paper for an example of such a map. A location of an election on a map provides useful information about its properties. For example, Szufa et al. (2020) and Boehmer et al. (2021a,b) have shown that it can be used to predict (a) the Borda score of the winner of the election, (b) the
running time of ILP solvers computing the winners under the Harmonic-Borda multiwinner voting rule, or (c) the robustness of Plurality and Borda winners. Moreover, real-world elections of the same type (such as the ones from politics, sports, or surveys) tend to cluster in the same areas of the map; see also the positions on the map of the datasets collected by Boehmer & Schaar (2022). As such, the map has proven to be a useful framework to analyze the nature of elections and to visualize experimental results in a non-aggregate fashion.
Unfortunately, extending the map to incorporate additional examples and distributions is a challenging task, as the visual representation becomes cluttered and, more importantly, the embedding algorithms, which map elections to points in 2D, find it more difficult to preserve pairwise distances between points as the number of points increases. It is therefore desirable to reduce the number of points in a way that preserves the key features of the framework.
We address this challenge by drawing a map of distributions rather than individual elections, which we call the skeleton map. That is, instead of sampling 20–30 points from each distribution and placing them all on the map, as Szufa et al. (2020) and Boehmer et al. (2021b) do (obtaining around 800 points in total), we create a single point for each distribution. This approach is facilitated by the fact that prior work on the “map of elections” framework represented elections by their frequency matrices, which capture their essential features. The starting point of our work is the observation that this representation extends to distributions in a natural way. Thus, if we can compute the frequency matrix of some distribution D, then, instead of sampling elections from D and creating a point on the map for each sample, we can create a single point for D itself.
Our Contribution. We provide three sets of results. First, for a number of prominent vote distributions, we show how to compute their frequency matrices, by providing an explicit formula or an efficient algorithm. Second, we draw the map of distributions (the skeleton map) and argue for its credibility and robustness. Finally, we use our results to estimate the parameters of the distributions that are closest to the real-world elections considered by Boehmer et al. (2021b). In more detail, we work in the setting of preference learning, where we are given an election and we want to learn the parameters of some distribution, so as to maximize the similarity of the votes sampled from this distribution and the input election. For example, we may be interested in fitting the classic model of Mallows (1957). This model is parameterized by a central vote v and a dispersion parameter φ, which specifies how likely it is to generate a vote at some distance from the central one (alternatively, one may use, e.g., the Plackett–Luce model). Previous works on preference learning typically proposed algorithms to learn the parameters of one specific (parameterized) vote distribution (see, e.g., the works of Lu & Boutilier (2014); Mandhani & Meilǎ (2009); Meila & Chen (2010); Vitelli et al. (2017); Murphy & Martin (2003); Awasthi et al. (2014) for (mixtures of) the Mallows model and the works of Guiver & Snelson (2009); Hunter (2004); Minka (2004); Gormley & Murphy (2008) for (mixtures of) the Plackett–Luce model). Using frequency matrices, we offer a more general approach. Indeed, given an election and a parameterized vote distribution whose frequency matrix we can compute, the task of learning the distribution’s parameters boils down to finding parameters that minimize the distance between the election and the matrices of the distribution. While this minimization problem may be quite challenging, our approach offers a uniform framework for dealing with multiple kinds of distributions at the same time. We find that for the case of the Mallows distribution, our approach learns parameters very similar to those established using maximum likelihood-based approaches. Omitted proofs and discussions are in the appendix. The source code used for the experiments is available in a GitHub repository1.
2 Preliminaries
Given an integer t, we write [t] to denote the set {1, . . . , t}. We interpret a vector x ∈ R^m as an m × 1 matrix (i.e., we use column vectors as the default).
Preference Orders and Elections. Let C be a finite, nonempty set of candidates. We refer to total orders over C as preference orders (or, equivalently, votes), and denote the set of all preference orders over C by L(C). Given a vote v and a candidate c, by pos_v(c) we mean the position of c in v (the top-ranked candidate has position 1, the next one has position 2, and so on). If a candidate a is ranked above another candidate b in vote v, we write v : a ≻ b. Let rev(v) denote the reverse of
1github.com/Project-PRAGMA/Expected-Frequency-Matrices-NeurIPS-2022
vote v. An election E = (C, V ) consists of a set C = {c1, . . . , cm} of candidates and a collection V = (v1, . . . , vn) of votes. Occasionally we refer to the elements of V as voters rather than votes.
Frequency Matrices. Consider an election E = (C, V ) with C = {c1, . . . , cm} and V = (v1, . . . , vn). For each candidate cj and position i ∈ [m], we define #freqE(cj , i) to be the fraction of the votes from V that rank cj in position i. We define the column vector #freqE(cj) to be (#freqE(cj , 1), . . . ,#freqE(cj ,m)) and matrix #freq(E) to consist of vectors #freqE(c1), . . . ,#freqE(cm). We refer to #freq(E) as the frequency matrix of election E. Frequency matrices are bistochastic, i.e., their entries are nonnegative and each of their rows and columns sums up to one.
Example 2.1. Let E = (C, V ) be an election with candidate set C = {a, b, c, d, e} and four voters, v1, v2, v3, and v4. Below, we show the voters’ preference orders (on the left) and the election’s frequency matrix (on the right).
v1 : a ≻ b ≻ c ≻ d ≻ e, v2 : c ≻ b ≻ d ≻ a ≻ e, v3 : d ≻ e ≻ c ≻ b ≻ a, v4 : b ≻ c ≻ a ≻ d ≻ e.
      a    b    c    d    e
1    1/4  1/4  1/4  1/4   0
2     0   1/2  1/4   0   1/4
3    1/4   0   1/2  1/4   0
4    1/4  1/4   0   1/2   0
5    1/4   0    0    0   3/4
Given a vote v, we write #freq(v) to denote the frequency matrix of the election containing this vote only; #freq(v) is a permutation matrix, with a single 1 in each row and in each column. Thus, for an election E = (C, V ) with V = (v1, . . . , vn) we have #freq(E) = (1/n) · Σ_{i=1}^n #freq(v_i).
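As a minimal illustration (ours; candidates encoded as indices 0, …, m−1), the following Python sketch computes the frequency matrix of an election and reproduces the matrix of Example 2.1:

```python
import numpy as np

def frequency_matrix(votes, m):
    """Row i, column j: fraction of votes ranking candidate j in position i+1."""
    F = np.zeros((m, m))
    for vote in votes:
        for pos, cand in enumerate(vote):
            F[pos, cand] += 1
    return F / len(votes)

a, b, c, d, e = range(5)                         # the election of Example 2.1
votes = [[a, b, c, d, e], [c, b, d, a, e], [d, e, c, b, a], [b, c, a, d, e]]
print(frequency_matrix(votes, 5))                # matches the matrix shown above
```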
Compass Matrices. For even m, Boehmer et al. (2021b) defined the following four m × m “compass” matrices, which appear to be extreme on the “map of elections”:
1. The identity matrix, IDm, has ones on the diagonal and zeroes everywhere else (it corresponds to an election where all voters agree on a single preference order).
2. The uniformity matrix, UNm, has all entries equal to 1/m (it corresponds to lack of agreement; each candidate is ranked at each position equally often).
3. The stratification matrix, STm, is partitioned into four quadrangles, where all entries in the top-left and bottom-right quadrangles are equal to 2/m, and all other entries are equal to zero (it corresponds to partial agreement; the voters agree which half of the candidates is superior, but disagree on everything else).
4. The antagonism matrix, ANm, has values 1/2 on both diagonals and zeroes elsewhere (it captures a conflict: it is a matrix of an election where half of the voters rank the candidates in one way and half of the voters rank them in the opposite way).
Below, we show examples of these matrices for m = 4:
UN4 = [1/4 1/4 1/4 1/4     ID4 = [1 0 0 0     ST4 = [1/2 1/2  0   0      AN4 = [1/2  0   0  1/2
       1/4 1/4 1/4 1/4            0 1 0 0            1/2 1/2  0   0              0  1/2 1/2  0
       1/4 1/4 1/4 1/4            0 0 1 0             0   0  1/2 1/2             0  1/2 1/2  0
       1/4 1/4 1/4 1/4]           0 0 0 1]            0   0  1/2 1/2]           1/2  0   0  1/2]
EMD. Let x = (x1, . . . , xn) and y = (y1, . . . , yn) be two vectors with nonnegative real entries that sum up to 1. Their Earth mover’s distance, denoted EMD(x, y), is the cost of transforming x into y using operations of the form: Given indices i, j ∈ [n] and a positive value δ such that xi ≥ δ, at the cost of δ · |i − j|, replace xi with xi − δ and xj with xj + δ (this corresponds to moving δ amount of “earth” from position i to position j). EMD(x, y) can be computed in polynomial time by a standard greedy algorithm.
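The greedy algorithm amounts to a single left-to-right sweep that carries the surplus (or deficit) of earth forward; a minimal sketch (ours):

```python
def emd(x, y):
    """Earth mover's distance between two 1-D distributions on positions 1..n."""
    carry, cost = 0.0, 0.0
    for xi, yi in zip(x, y):
        carry += xi - yi           # earth still to be moved past this position
        cost += abs(carry)         # each carried unit pays 1 per step
    return cost

print(emd([1, 0, 0, 0], [0.25, 0.25, 0.25, 0.25]))   # 1.5
```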
Positionwise Distance (Szufa et al., 2020). Let A = (a_1, . . . , a_m) and B = (b_1, . . . , b_m) be two m × m frequency matrices. Their raw positionwise distance is rawPOS(A,B) = min_{σ∈S_m} Σ_{i=1}^m EMD(a_i, b_{σ(i)}), where S_m denotes the set of all permutations over [m]. We will normalize these distances by (1/3)(m^2 − 1), which Boehmer et al. (2021b, 2022) proved to be the maximum distance between two m × m frequency matrices and the distance between ID_m and UN_m: nPOS(A,B) = rawPOS(A,B) / ((1/3)(m^2 − 1)). For two elections E and F with equal-sized candidate sets, their positionwise distance, raw or normalized, is defined as the positionwise distance between their frequency matrices.
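Computationally, the positionwise distance is an optimal assignment of columns under EMD costs, solvable with the Hungarian method; the sketch below (ours, reusing the emd function above) confirms that nPOS(ID_4, UN_4) = 1, i.e., that the normalization constant is exactly the ID–UN distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def npos(A, B):
    m = A.shape[0]
    cost = np.array([[emd(A[:, i], B[:, j]) for j in range(m)] for i in range(m)])
    rows, cols = linear_sum_assignment(cost)      # optimal column matching
    return cost[rows, cols].sum() / ((m * m - 1) / 3)

m = 4
ID = np.eye(m)
UN = np.full((m, m), 1 / m)
print(npos(ID, UN))                               # 1.0
```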
Paths Between the Compass Matrices. Let X and Y be two compass matrices. Boehmer et al. (2021b) showed that if we take their affine combination Z = αX + (1 − α)Y (0 ≤ α ≤ 1) then nPOS(X,Z) = (1 − α)nPOS(X,Y ) and nPOS(Z, Y ) = αnPOS(X,Y ). Such affine combinations form direct paths between the compass matrices; they are also possible between any two frequency matrices of a given size, not just the compass ones, but may require shuffling the matrices’ columns (Boehmer et al., 2021b).
Structured Domains. We consider two classes of structured elections, single-peaked elections (Black, 1958), and group-separable elections (Inada, 1964). For a discussion of these domains and the motivation behind them, see the original papers and the overviews by Elkind et al. (2017, 2022).
Intuitively, an election is single-peaked if we can order the candidates so that, as each voter considers the candidates in this order (referred to as the societal axis), his or her appreciation first increases and then decreases. The axis may, e.g., correspond to the left-right political spectrum. Definition 2.2. Let v be a vote over C and let ◁ be the societal axis over C. We say that v is single-peaked with respect to ◁ if for every t ∈ [|C|] its t top-ranked candidates form an interval within ◁. An election is single-peaked with respect to ◁ if all its votes are. An election is single-peaked (SP) if it is single-peaked with respect to some axis.
Note that the election from Example 2.1 is single-peaked with respect to the axis a ◁ b ◁ c ◁ d ◁ e.
We also consider group-separable elections, introduced by Inada (1964). For our purposes, it will be convenient to use the tree-based definition of Karpov (2019). Let C = {c1, . . . , cm} be a set of candidates, and consider a rooted, ordered tree T whose leaves are elements of C. The frontier of such a tree is the preference order that ranks the candidates in the order in which they appear in the tree from left to right. A preference order is consistent with a given tree if it can be obtained as its frontier by reversing the order in which the children of some nodes appear. Definition 2.3. An election E = (C, V ) is group-separable if there is a rooted, ordered tree T whose leaves are members of C, such that each vote in V is consistent with T .
The trees from Definition 2.3 form a subclass of clone decomposition trees, which are examples of PQ-trees (Elkind et al., 2012; Booth & Lueker, 1976). Example 2.4. Consider candidate set C = {a, b, c, d}, trees T1, T2, and T3 from Figure 1, and votes v1 : a ≻ b ≻ c ≻ d, v2 : c ≻ d ≻ b ≻ a, and v3 : b ≻ d ≻ c ≻ a. Vote v1 is consistent with each of the trees, v2 is consistent with T2 (reverse the children of y1 and y2), and v3 is consistent with T3 (reverse the children of x1 and x3).
3 Frequency Matrices for Vote Distributions
We show how to compute frequency matrices for several well-known distributions over votes.
3.1 Setup and Interpretation
A vote distribution for a candidate set C is a function D that assigns a probability to each preference order over C. Formally, we require that for each v ∈ L(C) it holds that D(v) ≥ 0 and
Σ_{v∈L(C)} D(v) = 1. We say that a vote v is in the support of D if D(v) > 0. Given such a distribution, we can form an election by repeatedly drawing votes according to the specified probabilities. For example, we can sample each element of L(C) with equal probability; this distribution, which is known as impartial culture (IC), is denoted by D_IC (we omit the candidate set from our notation as it will always be clear from the context). The frequency matrix of a vote distribution D over a candidate set C is #freq(D) = Σ_{v∈L(C)} D(v) · #freq(v). For example, we have #freq(D_IC) = UN. One interpretation of #freq(D) is that the entry for a candidate c_j and a position i is the probability that a vote v sampled from D has c_j in position i (which we denote as P[pos_v(c_j) = i]). Another interpretation is that if we sample a large number of votes then the resulting election’s frequency matrix would be close to #freq(D) with high probability. More formally, if we let M_n be a random variable equal to the frequency matrix of an n-voter election generated according to D, then it holds that lim_{n→∞} E(M_n) = #freq(D).
3.2 Group-Separable Elections
We first consider sampling group-separable votes. Given a rooted tree T whose leaves are labeled by elements of C = {c_1, . . . , c_m}, let D_GS^T be the distribution assigning equal probability to all votes consistent with T, and zero probability to all other votes; one can think of D_GS^T as impartial culture restricted to the group-separable subdomain defined by T. To sample from D_GS^T, we can toss a fair coin for each internal node of T, reversing the order of its children if this coin comes up heads, and output the frontier of the resulting tree. We focus on the following types of trees:
1. Flat(c_1, . . . , c_m) is a tree with a single internal node, whose children, from left to right, are c_1, c_2, . . . , c_m. There are only two preference orders consistent with this tree, c_1 ≻ · · · ≻ c_m and its reverse.
2. Bal(c1, . . . , cm) is a perfectly balanced binary tree with frontier c1, . . . , cm (hence we assume the number m of candidates to be a power of two).
3. CP(c1, . . . , cm) is a binary caterpillar tree: it has internal nodes x1, . . . , xm−1; for each j ∈ [m− 2], xj has cj as the left child and xj+1 as the right one, whereas xm−1 has both cm−1 and cm as children.
The first tree in Figure 1 is flat, the second one is balanced, and the third one is a caterpillar tree. If T is a caterpillar tree, then we refer to D_GS^T as the GS/caterpillar distribution. We use a similar terminology for the other trees. Theorem 3.1. Let F be the frequency matrix of distribution D_GS^T. If T is flat then F = AN, and if it is balanced then F = UN. If T is a caterpillar tree CP(c_1, . . . , c_m), then for each candidate c_j the probability that c_j appears in position i ∈ [m] in a random vote v sampled from D_GS^T is:
(1/2^j) · C(j−1, i−1) · 1[i ≤ j] + (1/2^j) · C(j−1, (i−1)−(m−j)) · 1[i > m−j],
where C(a, b) denotes the binomial coefficient and 1[·] the indicator function.
Proof. The cases of flat and balanced trees are immediate, so we focus on caterpillar trees. Let T = CP(c_1, . . . , c_m) with internal nodes x_1, . . . , x_{m−1}, and consider a candidate c_j and a position i ∈ [m]. Let v be a random variable equal to a vote sampled from D_GS^T. We say that a node x_ℓ, ℓ ∈ [m − 1], is reversed if the order of its children is reversed. Note that for ℓ < r it holds that c_r precedes c_ℓ in the frontier if and only if x_ℓ is reversed. Suppose that x_j is not reversed. Then v ranks c_j above each of c_{j+1}, . . . , c_m. This means that for c_j to be ranked exactly in position i, it must be that j ≥ i and exactly i − 1 nodes among x_1, . . . , x_{j−1} are not reversed. If j ≥ i, the probability that x_j and i − 1 nodes among x_1, . . . , x_{j−1} are not reversed is (1/2^j) · C(j−1, i−1). On the other hand, if x_j is reversed, then v ranks candidates c_{j+1}, . . . , c_m above c_j. As there are m − j of them, for c_j to be ranked exactly in position i it must hold that i > m − j and exactly (i − 1) − (m − j) nodes among x_1, . . . , x_{j−1} are not reversed. This happens with probability (1/2^j) · C(j−1, (i−1)−(m−j)).
Regarding distributions D^T_GS not handled in Theorem 3.1, we can still compute their frequency matrices efficiently.
Theorem 3.2. There is an algorithm that, given a tree T, computes #freq(D^T_GS) using polynomially many arithmetic operations with respect to the number of nodes in T.
3.3 From Caterpillars to Single-Peaked Preferences.
There is a relationship between GS/caterpillar votes and single-peaked ones, which will be very useful when computing one of the frequency matrices in the next section.
Theorem 3.3. Given a ranking v over C = {c1, . . . , cm}, let v̂ be another ranking over C such that, for each j ∈ [m], if cj is ranked in position i in v then ci is ranked in position m−j+1 in v̂. Suppose that v is in the support of D^T_GS, where T = CP(c1, . . . , cm). Then v̂ is single-peaked with respect to c1 ◁ · · · ◁ cm.
There are exactly 2^{m−1} votes in the support of D^T_GS (this follows by simple counting) and there are 2^{m−1} votes that are single-peaked with respect to c1 ◁ · · · ◁ cm. As u ≠ v implies û ≠ v̂, it follows that the mapping v ↦ v̂ is a bijection between all votes in the support of D^T_GS and all votes that are single-peaked with respect to c1 ◁ · · · ◁ cm.
3.4 Single-Peaked Elections
We consider two models of generating single-peaked elections, one due to Walsh (2015) and one due to Conitzer (2009). Let us fix a candidate set C = {c1, . . . , cm} and a societal axis c1 ◁ · · · ◁ cm. Under the Walsh distribution, denoted D^Wal_SP, each vote that is single-peaked with respect to ◁ has equal probability (namely, $\frac{1}{2^{m-1}}$), and all other votes have probability zero. By Theorems 3.1 and 3.3, we immediately obtain the frequency matrix for the Walsh distribution (in short, it is the transpose of the GS/caterpillar matrix).
Corollary 3.4. Consider a candidate set C = {c1, . . . , cm} and an axis c1 ◁ · · · ◁ cm. The probability that candidate cj appears in position i in a vote sampled from D^Wal_SP is:
$\frac{1}{2^{m-i+1}}\binom{m-i}{j-1}\cdot \mathbb{1}_{j\leq m-i+1} \;+\; \frac{1}{2^{m-i+1}}\binom{m-i}{j-i}\cdot \mathbb{1}_{j>i-1}.$
To sample a vote from the Conitzer distribution, D^Con_SP (also known as the random peak distribution), we pick some candidate cj uniformly at random and rank him or her on top. Then we perform m−1 iterations, in each of which we choose (uniformly at random) a candidate directly to the right or the left of the already selected ones, and place him or her in the highest available position in the vote.
Theorem 3.5. Let c1 ◁ · · · ◁ cm be the societal axis, where m is an even number, and let v be a random vote sampled from D^Con_SP for this axis. For j ∈ [m/2] and i ∈ [m] we have:
$P[\mathrm{pos}_v(c_j) = i] = \begin{cases} \frac{2}{2m} & \text{if } i < j,\\ \frac{j+1}{2m} & \text{if } i = j,\\ \frac{1}{2m} & \text{if } j < i < m-j+1,\\ \frac{m-j+1}{2m} & \text{if } i = m-j+1,\\ 0 & \text{if } i+j > m+1. \end{cases}$
Further, for each candidate cj ∈ C and each position i ∈ [m] we have P[pos_v(cj) = i] = P[pos_v(c_{m−j+1}) = i].
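The following sketch (again our own illustration) implements the random-peak sampling procedure and the closed-form probabilities of Theorem 3.5; note that the five cases are exhaustive, with the last one covering exactly the region i + j > m + 1:

```python
import random

def sample_conitzer_vote(m, rng):
    """Random-peak model: pick the peak uniformly at random, then repeatedly
    extend the interval of already-ranked candidates to the left or right."""
    peak = rng.randint(1, m)
    vote, left, right = [peak], peak - 1, peak + 1
    while len(vote) < m:
        go_left = left >= 1 and (right > m or rng.random() < 0.5)
        if go_left:
            vote.append(left); left -= 1
        else:
            vote.append(right); right += 1
    return vote

def conitzer_prob(m, j, i):
    """Closed-form P[pos_v(c_j) = i] from Theorem 3.5 (m even, 1-based)."""
    if j > m // 2:                   # symmetry: c_j behaves like c_{m-j+1}
        j = m - j + 1
    if i < j:
        return 2 / (2 * m)
    if i == j:
        return (j + 1) / (2 * m)
    if i < m - j + 1:
        return 1 / (2 * m)
    if i == m - j + 1:
        return (m - j + 1) / (2 * m)
    return 0.0                       # the remaining region: i + j > m + 1

m, trials, rng = 4, 200_000, random.Random(1)
emp = [[0.0] * (m + 1) for _ in range(m + 1)]
for _ in range(trials):
    for i, cand in enumerate(sample_conitzer_vote(m, rng), start=1):
        emp[i][cand] += 1 / trials
assert all(abs(emp[i][j] - conitzer_prob(m, j, i)) < 5e-3
           for i in range(1, m + 1) for j in range(1, m + 1))
```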
3.5 Mallows Model
Finally, we consider the classic Mallows distribution. It has two parameters, a central vote v* over m candidates and a dispersion parameter φ ∈ [0, 1]. The probability of sampling a vote v from this distribution (denoted D^{v*,φ}_Mal) is $D^{v^*,\phi}_{Mal}(v) = \frac{1}{Z}\,\phi^{\kappa(v,v^*)}$, where $Z = 1\cdot(1+\phi)\cdot(1+\phi+\phi^{2})\cdots(1+\cdots+\phi^{m-1})$ is a normalizing constant and κ(v, v*) is the swap distance between v and v* (i.e., the number of swaps of adjacent candidates needed to transform v into v*). In our experiments, we consider a new parameterization, introduced by Boehmer et al. (2021b). It uses a normalized dispersion parameter norm-φ, which is converted to a value of φ so that the expected swap distance between the central vote v* and a sampled vote v is norm-φ/2 times the maximum swap distance between two votes (so, norm-φ = 1 is equivalent to IC and for norm-φ = 0.5 we get elections that lie close to the middle of the UN–ID path).
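For intuition, a standard way to sample from this model is the repeated-insertion method (see, e.g., Lu & Boutilier, 2014); the sketch below (our own illustration, not the paper's code) implements it together with the probability mass function from the definition above:

```python
import random
from math import prod

def sample_mallows_vote(central, phi, rng=random.Random(0)):
    """Repeated insertion: the j-th candidate of the central vote is inserted
    at position i in {1, ..., j} with probability
    phi**(j - i) / (1 + phi + ... + phi**(j - 1)),
    which creates exactly j - i new swaps relative to the central vote."""
    vote = []
    for j, cand in enumerate(central, start=1):
        weights = [phi ** (j - i) for i in range(1, j + 1)]
        pos = rng.choices(range(j), weights=weights)[0]  # 0-based insertion slot
        vote.insert(pos, cand)
    return vote

def mallows_pmf(vote, central, phi):
    """D^{v*,phi}_Mal(v) = phi**kappa(v, v*) / Z, with kappa the swap distance."""
    m = len(central)
    rank = {c: t for t, c in enumerate(central)}
    kappa = sum(1 for a in range(m) for b in range(a + 1, m)
                if rank[vote[a]] > rank[vote[b]])
    Z = prod(sum(phi ** t for t in range(j)) for j in range(1, m + 1))
    return phi ** kappa / Z
```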
Our goal is now to compute the frequency matrix of D^{v*,φ}_Mal. That is, given the candidate ranked in position j in the central vote, we want to compute the probability that he or she appears in a given position i ∈ [m] in the sampled vote. Given a positive integer m, consider the candidate set C(m) = {c1, . . . , cm} and the central vote v*_m : c1 ≻ · · · ≻ cm. Fix a candidate cj ∈ C(m), and a position i ∈ [m]. For every integer k between 0 and m(m−1)/2, let S(m, k) be the number of votes in L(C(m)) that are at swap distance k from v*_m, and define T(m, k, j, i) to be the number of such votes that have cj in position i. One can compute S(m, k) in time polynomial in m (OEIS Foundation Inc., 2020); using S(m, k), we show that the same holds for T(m, k, j, i).
Lemma 3.6. There is an algorithm that computes T(m, k, j, i) in polynomial time with respect to m.
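S(m, k) is the classic count of permutations of [m] with exactly k inversions and obeys the recurrence S(n, k) = Σ_{t=0}^{n−1} S(n−1, k−t), since inserting the n-th element adds between 0 and n−1 new inversions. A minimal DP sketch (our own illustration; the more involved count T(m, k, j, i) of Lemma 3.6 is deferred to the appendix):

```python
from math import factorial

def inversion_counts(m):
    """S(m, k) for k = 0, ..., m(m-1)/2: the number of permutations of [m]
    with exactly k inversions; prefix sums make each DP step O(1) per cell."""
    S = [1]                                    # base case: S(1, 0) = 1
    for n in range(2, m + 1):
        pref = [0]
        for val in S:
            pref.append(pref[-1] + val)        # prefix sums of S(n-1, .)
        S = [pref[min(k, len(pref) - 2) + 1] - pref[max(0, k - (n - 1))]
             for k in range(n * (n - 1) // 2 + 1)]
    return S

assert inversion_counts(3) == [1, 2, 2, 1]
assert sum(inversion_counts(6)) == factorial(6)
```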
We can now express the probability of sampling a vote v, where the candidate ranked in position j in the central vote v* ends up in position i under D^{v*,φ}_Mal, as:
$f_m(\phi, j, i) = \frac{1}{Z}\sum_{k=0}^{m(m-1)/2} T(m, k, j, i)\,\phi^{k}.$ (1)
The correctness follows from the definitions of T and D^{v*,φ}_Mal. By Lemma 3.6, we have the following.
Theorem 3.7. There exists an algorithm that, given a number m of candidates, a vote v*, and a parameter φ, computes the frequency matrix of D^{v*,φ}_Mal using polynomially many operations in m.
Note that Equation (1) only depends on φ, j and i (and, naturally, on m). Using this fact, we can also compute frequency matrices for several variants of the Mallows distribution.
Remark 3.8. Given a vote v, two dispersion parameters φ and ψ, and a probability p ∈ [0, 1], we define the distribution p-D^{v,φ,ψ}_Mal as p · D^{v,φ}_Mal + (1−p) · D^{rev(v),ψ}_Mal, i.e., with probability p we sample a vote from D^{v,φ}_Mal and with probability 1−p we sample a vote from D^{rev(v),ψ}_Mal. The probability that candidate cj appears in position i in the resulting vote is p · f_m(φ, j, i) + (1−p) · f_m(ψ, m−j+1, i).
Remark 3.9. Consider a candidate set C = {c1, . . . , cm}. Given a vote distribution D over L(C) and a parameter φ, define a new distribution D′ as follows: draw a vote v̂ according to D and then output a vote v sampled from D^{v̂,φ}_Mal; indeed, such models are quite natural, see, e.g., the work of Kenig & Kimelfeld (2019). For each t ∈ [m], let g(j, t) be the probability that cj appears in position t in a vote sampled from D. The probability that cj appears in position i ∈ [m] in a vote sampled from D′ is $\sum_{t=1}^{m} g(j, t) \cdot f_m(\phi, t, i)$. In terms of matrix multiplication, this means that #freq(D′) = #freq(D^{v*,φ}_Mal) · #freq(D), where v* is c1 ≻ · · · ≻ cm. We write φ-Conitzer (φ-Walsh) to refer to this model where we use the Conitzer (Walsh) distribution as the underlying one and normalized dispersion parameter φ.
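Both remarks translate directly into matrix arithmetic. A minimal sketch, assuming hypothetical helpers mallows_matrix(m, phi) and, e.g., conitzer_matrix(m) that return the corresponding m × m frequency matrices (rows indexed by positions, columns by candidates), built from the formulas above:

```python
import numpy as np

def mixed_mallows_matrix(m, p, phi, psi, mallows_matrix):
    """Remark 3.8: column j mixes f_m(phi, j, i) with f_m(psi, m-j+1, i);
    reversing the central vote amounts to flipping the column order."""
    return p * mallows_matrix(m, phi) + (1 - p) * mallows_matrix(m, psi)[:, ::-1]

def phi_underlying_matrix(m, phi, mallows_matrix, base_matrix):
    """Remark 3.9: #freq(D') = #freq(D^{v*,phi}_Mal) @ #freq(D); with the
    Conitzer (Walsh) matrix as base_matrix this yields phi-Conitzer (phi-Walsh)."""
    return mallows_matrix(m, phi) @ base_matrix(m)
```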
4 Skeleton Map
Our goal in this section is to form what we call a skeleton map of vote distributions (skeleton map, for short), evaluate its quality and robustness, and compare it to the map of Boehmer et al. (2021b). Throughout this section, whenever we speak of a distance between elections or matrices, we mean the positionwise distance (occasionally we will also refer to the Euclidean distances on our maps, but we will always make this explicit). Let Φ = {0, 0.05, 0.1, . . . , 1} be a set of normalized dispersion parameters that we will be using for Mallows-based distributions in this section.
We form the skeleton map following the general approach of Szufa et al. (2020) and Boehmer et al. (2021b). For a given number of candidates, we consider the four compass matrices (UN, ID, AN, ST) and paths between each matrix pair consisting of their convex combinations (gray dots), the frequency matrices of the Mallows distribution with normalized dispersion parameters from Φ (blue triangles), and the frequency matrices of the Conitzer (CON), Walsh (WAL), and GS/caterpillar (CAT) distributions. Moreover, we add the frequency matrices of the following vote distributions (we again use the dispersion parameters from Φ): (i) the distribution 1/2-D^{v,φ,φ}_Mal as defined in Remark 3.8 (red triangles), (ii) the distribution where with equal probability we mix the standard Mallows distribution and 1/2-D^{v,φ,φ}_Mal (green triangles), and (iii) the φ-Conitzer and φ-Walsh distributions as defined in Remark 3.9 (magenta and orange crosses). For each pair of these matrices we compute their positionwise distance. Then we find an embedding of the matrices into a 2D plane, so that each matrix is a point and the Euclidean distances between these points are as similar to the positionwise distances as possible (we use the MDS algorithm, as implemented in the Python sklearn.manifold.MDS package). In Figure 3 we show our map for the case of 10 candidates (the
lines between some points/matrices show their positionwise distances; to maintain clarity, we only provide some of them).
Figure 3: The skeleton map with 10 candidates. We have MID = 1/2 AN + 1/2 ID. Each point labeled with a number is a real-world election as described in Section 5.
Figure 4: In the top-right part, we show the normalized positionwise distances. In the bottom-left one, we show the embedding misrepresentation ratios.
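A minimal sketch of the embedding step just described, assuming dist is the precomputed square array of pairwise normalized positionwise distances between all matrices placed on the map:

```python
from sklearn.manifold import MDS

def embed(dist):
    """dist: symmetric (k x k) array of pairwise nPOS distances between the
    frequency matrices on the map; returns one 2D point per matrix, with
    Euclidean distances approximating the positionwise ones as well as MDS
    allows."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)
```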
We now verify the credibility of the skeleton map. As the map does not have many points, we expect its embedding to truly reflect the positionwise distances between the matrices. This, indeed, seems to be the case, although some distances are represented (much) more accurately than others. In Figure 4 we provide the following data for a number of matrices (for m = 10; matrix M2W is the Mallows matrix in our data set that is closest to the Walsh matrix). In the top-right part (the white-orange area), we give positionwise distances between the matrices, and in the bottom-left part (the blue area), for each pair of matrices X and Y we report the misrepresentation ratio Euc(X,Y)/nPOS(X,Y), where Euc(X,Y) is the Euclidean distance between X and Y in the embedding, normalized by the Euclidean distance between ID and UN. The closer these ratios are to 1, the more accurate is the embedding. The misrepresentation ratios are typically between 0.8 and 1.15, with many of them between 0.9 and 1.05. Thus, in most cases, the map is quite accurate and offers good intuition about the relations between the matrices. Yet, some distances are represented particularly badly. As an extreme example, the Euclidean distance between the Walsh matrix and the closest Mallows matrix, M2W, is off by almost a factor of 8 (these matrices are close, but not as close as the map suggests). Thus, while one always has to verify claims suggested by the skeleton map, we view it as quite credible. This conclusion is particularly valuable when we compare the skeleton map and the map of Boehmer et al. (2021b), shown in Figure 2. The two maps are similar, and analogous points (mostly) appear in analogous positions. Perhaps the biggest difference is the location of the Conitzer matrix on the skeleton map and of the Conitzer elections in the map of Boehmer et al., but even this difference is not huge. We remark that the Conitzer matrix is closer to UN and AN than to ID and ST, whereas for the Walsh matrix the opposite is true. Boehmer et al. (2021b) make a similar observation; our results allow us to make this claim formal. In Appendix E, we analyze the robustness of the skeleton map with respect to varying the number of candidates. We find that except for pairs including the Walsh or GS/caterpillar matrices, which "travel" on the map as the number of candidates increases, the distance between each pair of matrices in the skeleton map stays nearly constant.
5 Learning Vote Distributions
We demonstrate how the positionwise distance and frequency matrices can be used to fit vote distributions to given real-world elections. Specifically, we consider the Mallows model (D^{v,φ}_Mal) and the φ-Conitzer and φ-Walsh models. Naturally, we could use more distributions, but we focus on showcasing the technique and the general unified approach. Among other results, we verify that for the Mallows model our approach is strongly correlated with existing maximum-likelihood approaches. Moreover, unlike in previous works, we compare the capabilities of different distributions to fit the given elections. We remark that if we do not have an algorithm for computing the frequency matrix of a given vote distribution, we can obtain an approximate matrix by sampling sufficiently many votes from this distribution. In principle, it is also possible to deal with distributions
over elections that do not correspond to vote distributions and hence are not captured by expected frequency matrices (as is the case, e.g., for the Euclidean models where candidates do not have fixed positions; see the work of Szufa et al. (2020) for examples of such models in the context of the map of elections): If we want to compute the distance of such a distribution, we sample sufficiently many elections and compute their average distance from the input one. However, it remains unclear how robust this approach is.
Approach. To fit our vote distributions to a given election, we compute the election's distance to the frequency matrices of D^{v,φ}_Mal, φ-Conitzer, and φ-Walsh, for φ ∈ {0, 0.001, . . . , 1}. We select the distribution corresponding to the closest matrix.
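In code, this fitting step is a one-dimensional grid search per model. A sketch under stated assumptions: npos is an assumed helper implementing the normalized positionwise distance, and each entry of models maps a model name to a (hypothetical) constructor phi -> frequency matrix, such as those described in Section 3:

```python
import numpy as np

def fit_distribution(election_matrix, models, grid=np.arange(0.0, 1.0005, 0.001)):
    """models: dict mapping a model name to a callable phi -> frequency matrix.
    Returns the (name, phi, distance) triple whose matrix is nPOS-closest to
    the given election's frequency matrix."""
    best = (None, None, float("inf"))
    for name, matrix_of in models.items():
        for phi in grid:
            d = npos(election_matrix, matrix_of(phi))  # assumed helper
            if d < best[2]:
                best = (name, float(phi), d)
    return best
```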
Data. We consider elections from the real-world datasets used by Boehmer et al. (2021b). They generated 15 elections with 10 candidates and 100 voters (with strict preferences) from each of the eleven different real-world election datasets (so, altogether, they generated 165 elections, most of them from Preflib (Mattei & Walsh, 2013)). They used four datasets of political elections (from North Dublin (Irish), various non-profit and professional organizations (ERS), and city council elections from Glasgow and Aspen), four datasets of sport-based elections (from Tour de France (TDF), Giro d’Italia (GDI), speed skating, and figure skating) and three datasets with survey-based elections (from preferences over T-shirt designs, sushi, and cities). We present the results of our analysis for seven illustrative and particularly interesting elections in Table 1 and also include them in our skeleton map from Figure 3.
Basic Test. There is a standard maximum-likelihood estimator (MLE; based on Kemeny voting (Mandhani & Meilǎ, 2009)) that given an election provides the most likely dispersion parameter of the Mallows distribution that might have generated this election. To test our approach, we compared the parameters provided by our approach and by the MLE for our 165 elections and found them to be highly correlated (with Pearson correlation coefficient around 0.97). In particular, the average absolute difference between the dispersion parameter calculated by our approach and the MLE is only 0.02. See Appendix F for details.
Fitting Real-World Elections. Next, we consider the capabilities of D^{v,φ}_Mal, φ-Conitzer, and φ-Walsh to fit the real-world elections of Boehmer et al. (2021b). Overall, we find that these three vote distributions have some ability to capture the considered elections, but it certainly is not perfect. Indeed, the average normalized distance of these elections to the frequency matrix of the closest distribution is 0.14. To illustrate that some distance is to be expected here, we mention that the average distance of an election sampled from impartial culture (D_IC, with 10 candidates and 100 voters) to the distribution's expected frequency matrix is 0.09 (see Appendix E.4 for a discussion of this and how it may serve as an estimator for the "variance of a distribution"). There are also some elections that are not captured by any of the considered distributions to an acceptable degree; examples of this are elections nr. 1 and nr. 2, which are at distance at least 0.32 and 0.25 from all our distributions, respectively. Remarkably, while coming from the same dataset, elections nr. 1 and nr. 2 are still quite different from each other and, accordingly, the computed dispersion parameter is also quite different. It remains a challenge to find distributions capturing such elections.
Comparing the power of the three considered models, nearly all of our elections are best captured by the Mallows model rather than by φ-Conitzer or φ-Walsh. There are only twenty elections that are closer to φ-Walsh or φ-Conitzer than to a Mallows model (election nr. 3 is the most extreme example), and, unsurprisingly, both φ-Walsh and φ-Conitzer perform particularly badly at capturing elections close to ID (see election nr. 4). That is, φ-Conitzer and φ-Walsh are not needed to ensure good coverage of the space of elections; the average normalized distance of our elections to the closest Mallows model is only 0.0007 higher than their distance to the closest distribution (elections nr. 4–6 are three examples of elections which are well captured by the Mallows model and distributed over the entire map).² Nevertheless, φ-Walsh is also surprisingly powerful, as the average normalized distance of our elections to the closest φ-Walsh distribution is only 0.03 higher than their distance to the closest distribution (however, this might also be due to the fact that most of the considered real-world elections fall into the same area of the map, which φ-Walsh happens to capture particularly well (Boehmer et al., 2021b)). φ-Conitzer performs considerably worse: there are only three elections for which it produces a (slightly) better result than φ-Walsh.
Moreover, our results also emphasize the complex nature of the space of elections: Election nr. 7 is very close to D^{v,0.95}_Mal, hinting that its votes are quite chaotic. At the same time, this election is very close to the 0.63-Conitzer and 0.69-Walsh distributions, which suggests at least a certain level of structure among its votes (because votes from the Conitzer and Walsh distributions are very structured, and the Mallows filter with dispersion between 0.63 and 0.69 does not destroy this structure fully). However, as witnessed by the fact that the frequency matrix of GS/balanced (which is highly structured) is UN, such phenomena can happen. Lastly, note that most of our datasets are quite "homogeneous", in that the closest distributions for elections from the dataset are similar and also at a similar distance. However, there are also clear exceptions, for instance, elections nr. 1 and nr. 4 from the figure skating dataset. Moreover, there are two elections from the speed skating dataset where one election is captured best by D^{v,0.76}_Mal and the other by D^{v,0.32}_Mal.
6 Summary
We have computed the frequency matrices (Szufa et al., 2020; Boehmer et al., 2021b) of several well-known distributions of votes. Using them, we have drawn a “skeleton map”, which shows how these distributions relate to each other, and we have analyzed its properties. Moreover, we have demonstrated how our results can be used to fit vote distributions to capture real-world elections.
For future work, it would be interesting to compute the frequency matrices of further popular vote distributions, such as the Plackett–Luce model (we conjecture that its frequency matrix is computable in polynomial time). It would also be interesting to use our approach to fit more complex models, such as mixtures of Mallows models, to real-world elections. Further, it may be interesting to use expected frequency matrices to reason about the asymptotic behavior of our models. For example, it might be possible to formally show where, in the limit, the matrices of our models end up on the map as we increase the number of candidates.
Acknowledgments
NB was supported by the DFG project MaMu (NI 369/19) and by the DFG project ComSoc-MPMS (NI 369/22). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854).
²For each election, we also computed the closest frequency matrix of two mixtures of Mallows models with reversed central votes p-D^{v,φ,ψ}_Mal using our approach. However, this only decreased the average minimum distance by around 0.02, with the probability p of flipping the central vote being (close to) zero for most elections. | 1. What is the focus and contribution of the paper regarding vote distributions?
2. What are the strengths of the proposed approach, particularly in its connection to previous works?
3. Are there any concerns or weaknesses in the paper, especially regarding its claims and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any potential societal implications of the work that the authors have not discussed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
Note: I have reviewed a previous submission of this paper at a previous conference.
This paper continues a recent line of work aimed at identifying relationships between many different vote distributions. Where previous work used frequency matrices to measure the difference between sampled elections from given distributions, this paper introduces a very natural extension of the concept and applies the frequency matrices to the vote distributions themselves.
The paper describes a number of election structures (single-peaked, group-separable, Mallows) and finds the frequency matrix, or a polynomial-time formula for generating the matrix, for each distribution. Using these frequency matrices a new map of elections is generated. This "skeleton map" strengthens and confirms prior work and is shown to generally represent the distance between elections well. Finally, Mallows models are generated for elections representing real-world data and placed on the map. These also have some similarity to prior work; however, the authors find it difficult to generate a distribution that perfectly matches each real election. Some promising potential future work is identified while concluding.
Strengths And Weaknesses
The paper represents a novel addition to a recent series of papers. This approach of generalizing from sampled data to comparing entire distributions could also have potential uses in other domains. The work is well written and does a very good job of connecting itself to the prior results in this line of research. While the results are moderately complex, I found them to be explained and structured quite clearly.
Overall I find the paper fairly strong and have no major issues with it. Due to space limitations, some of the figures (particularly Fig 2) are rather small and difficult to read. I am glad to see that most of the specific, minor changes I have previously suggested have been fixed.
Questions
Can you discuss why this paper was not previously accepted and how it has changed since then?
Tiny issue: Line 290 misspells "Conitzer" as "Cointzer"
Limitations
The authors have not discussed the potential societal impact of their work. While the contribution appears quite far removed from any possible real-world impact, perhaps one of the many appendices could be used to briefly imagine how this work could be misused somewhere down the line?
NIPS | Title
Expected Frequency Matrices of Elections: Computation, Geometry, and Preference Learning
Abstract
We use the “map of elections” approach of Szufa et al. (AAMAS-2020) to analyze several well-known vote distributions. For each of them, we give an explicit formula or an efficient algorithm for computing its frequency matrix, which captures the probability that a given candidate appears in a given position in a sampled vote. We use these matrices to draw the “skeleton map” of distributions, evaluate its robustness, and analyze its properties. Finally, we develop a general and unified framework for learning the distribution of real-world preferences using the frequency matrices of established vote distributions.
1 Introduction
Computational social choice is a research area at the intersection of social choice (the science of collective decision-making) and computer science, which focuses on the algorithmic analysis of problems related to preference aggregation and elicitation (Brandt et al., 2013). Many of the early papers in this field were primarily theoretical, focusing on establishing the worst-case complexity of winner determination and strategic behavior under various voting rules—see, e.g., the papers of Hemaspaandra et al. (1997), Dwork et al. (2001), and Conitzer et al. (2007)—but more recent work often combines theoretical investigations with empirical analysis. For example, formal bounds on the running time and/or approximation ratio of a winner determination algorithm can be complemented by experiments that evaluate its performance on realistic instances; see, e.g., the works of Conitzer (2006), Betzler et al. (2014), Faliszewski et al. (2018) and Wang et al. (2019).
However, performing high-quality experiments requires the ability to organize and understand the available data. One way to achieve this is to form a so-called “map of elections,” recently introduced by Szufa et al. (2020) and extended by Boehmer et al. (2021b). The idea is as follows. First, we fix a distance measure between elections. Second, we sample a number of elections from various distributions and real-life datasets—e.g., those collected in PrefLib (Mattei & Walsh, 2013)—and measure the pairwise distances between them. Third, we embed these elections into the 2D plane, mapping each election to a point so that the Euclidean distances between points are approximately equal to the distances between the respective elections. Finally, we plot these points, usually coloring them to indicate their origin (e.g., the distribution from which a given election was sampled); see Figure 2 later in the paper for an example of such a map. A location of an election on a map provides useful information about its properties. For example, Szufa et al. (2020) and Boehmer et al. (2021a,b) have shown that it can be used to predict (a) the Borda score of the winner of the election, (b) the
running time of ILP solvers computing the winners under the Harmonic-Borda multiwinner voting rule, or (c) the robustness of Plurality and Borda winners. Moreover, real-world elections of the same type (such as the ones from politics, sports, or surveys) tend to cluster in the same areas of the map; see also the positions on the map of the datasets collected by Boehmer & Schaar (2022). As such, the map has proven to be a useful framework to analyze the nature of elections and to visualize experimental results in a non-aggregate fashion.
Unfortunately, extending the map to incorporate additional examples and distributions is a challenging task, as the visual representation becomes cluttered and, more importantly, the embedding algorithms, which map elections to points in 2D, find it more difficult to preserve pairwise distances between points as the number of points increases. It is therefore desirable to reduce the number of points in a way that preserves the key features of the framework.
We address this challenge by drawing a map of distributions rather than individual elections, which we call the skeleton map. That is, instead of sampling 20–30 points from each distribution and placing them all on the map, as Szufa et al. (2020) and Boehmer et al. (2021b) do (obtaining around 800 points in total), we create a single point for each distribution. This approach is facilitated by the fact that prior work on the “map of elections” framework represented elections by their frequency matrices, which capture their essential features. The starting point of our work is the observation that this representation extends to distributions in a natural way. Thus, if we can compute the frequency matrix of some distribution D, then, instead of sampling elections from D and creating a point on the map for each sample, we can create a single point for D itself.
Our Contribution. We provide three sets of results. First, for a number of prominent vote distributions, we show how to compute their frequency matrices, by providing an explicit formula or an efficient algorithm. Second, we draw the map of distributions (the skeleton map) and argue for its credibility and robustness. Finally, we use our results to estimate the parameters of the distributions that are closest to the real-world elections considered by Boehmer et al. (2021b). In more detail, we work in the setting of preference learning, where we are given an election and we want to learn the parameters of some distribution, so as to maximize the similarity of the votes sampled from this distribution and the input election. For example, we may be interested in fitting the classic model of Mallows (1957). This model is parameterized by a central vote v and a dispersion parameter φ, which specifies how likely it is to generate a vote at some distance from the central one (alternatively, one may use, e.g., the Plackett–Luce model). Previous works on preference learning typically proposed algorithms to learn the parameters of one specific (parameterized) vote distribution (see, e.g., the works of Lu & Boutilier (2014); Mandhani & Meilǎ (2009); Meila & Chen (2010); Vitelli et al. (2017); Murphy & Martin (2003); Awasthi et al. (2014) for (mixtures of) the Mallows model and the works of Guiver & Snelson (2009); Hunter (2004); Minka (2004); Gormley & Murphy (2008) for (mixtures of) the Plackett–Luce model). Using frequency matrices, we offer a more general approach. Indeed, given an election and a parameterized vote distribution whose frequency matrix we can compute, the task of learning the distribution's parameters boils down to finding parameters that minimize the distance between the election and the matrices of the distribution. While this minimization problem may be quite challenging, our approach offers a uniform framework for dealing with multiple kinds of distributions at the same time. We find that for the case of the Mallows distribution, our approach learns parameters very similar to those established using maximum likelihood-based approaches. Omitted proofs and discussions are in the appendix. The source code used for the experiments is available in a GitHub repository.¹
¹github.com/Project-PRAGMA/Expected-Frequency-Matrices-NeurIPS-2022
2 Preliminaries
Given an integer t, we write [t] to denote the set {1, . . . , t}. We interpret a vector x ∈ R^m as an m × 1 matrix (i.e., we use column vectors as the default).
Preference Orders and Elections. Let C be a finite, nonempty set of candidates. We refer to total orders over C as preference orders (or, equivalently, votes), and denote the set of all preference orders over C by L(C). Given a vote v and a candidate c, by pos_v(c) we mean the position of c in v (the top-ranked candidate has position 1, the next one has position 2, and so on). If a candidate a is ranked above another candidate b in vote v, we write v : a ≻ b. Let rev(v) denote the reverse of
vote v. An election E = (C, V ) consists of a set C = {c1, . . . , cm} of candidates and a collection V = (v1, . . . , vn) of votes. Occasionally we refer to the elements of V as voters rather than votes.
Frequency Matrices. Consider an election E = (C, V) with C = {c1, . . . , cm} and V = (v1, . . . , vn). For each candidate cj and position i ∈ [m], we define #freq_E(cj, i) to be the fraction of the votes from V that rank cj in position i. We define the column vector #freq_E(cj) to be (#freq_E(cj, 1), . . . , #freq_E(cj, m)) and the matrix #freq(E) to consist of vectors #freq_E(c1), . . . , #freq_E(cm). We refer to #freq(E) as the frequency matrix of election E. Frequency matrices are bistochastic, i.e., their entries are nonnegative and each of their rows and columns sums up to one.
Example 2.1. Let E = (C, V) be an election with candidate set C = {a, b, c, d, e} and four voters, v1, v2, v3, and v4. Below, we show the voters' preference orders (on the left) and the election's frequency matrix (on the right).

v1 : a ≻ b ≻ c ≻ d ≻ e,  v2 : c ≻ b ≻ d ≻ a ≻ e,  v3 : d ≻ e ≻ c ≻ b ≻ a,  v4 : b ≻ c ≻ a ≻ d ≻ e.

      a    b    c    d    e
1   1/4  1/4  1/4  1/4    0
2     0  1/2  1/4    0  1/4
3   1/4    0  1/2  1/4    0
4   1/4  1/4    0  1/2    0
5   1/4    0    0    0  3/4

Given a vote v, we write #freq(v) to denote the frequency matrix of the election containing this vote only; #freq(v) is a permutation matrix, with a single 1 in each row and in each column. Thus, for an election E = (C, V) with V = (v1, . . . , vn) we have $\#\mathrm{freq}(E) = \frac{1}{n}\sum_{i=1}^{n} \#\mathrm{freq}(v_i)$.
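A small sketch (our own illustration) that computes #freq(E) directly from the definition and reproduces the matrix of Example 2.1:

```python
import numpy as np

def frequency_matrix(votes, candidates):
    """Rows are positions (top = row 0), columns are candidates; entry (i, j)
    is the fraction of votes that rank candidate j in position i."""
    m, n = len(candidates), len(votes)
    col = {c: j for j, c in enumerate(candidates)}
    F = np.zeros((m, m))
    for vote in votes:
        for pos, cand in enumerate(vote):
            F[pos, col[cand]] += 1 / n
    return F

votes = [list("abcde"), list("cbdae"), list("decba"), list("bcade")]
print(frequency_matrix(votes, list("abcde")))   # matches the matrix above
```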
Compass Matrices. For even m, Boehmer et al. (2021b) defined the following four m × m “compass” matrices, which appear to be extreme on the “map of elections”:
1. The identity matrix, ID_m, has ones on the diagonal and zeroes everywhere else (it corresponds to an election where all voters agree on a single preference order).
2. The uniformity matrix, UN_m, has all entries equal to 1/m (it corresponds to a lack of agreement; each candidate is ranked at each position equally often).
3. The stratification matrix, ST_m, is partitioned into four quadrangles, where all entries in the top-left and bottom-right quadrangles are equal to 2/m, and all other entries are equal to zero (it corresponds to partial agreement; the voters agree which half of the candidates is superior, but disagree on everything else).
4. The antagonism matrix, AN_m, has values 1/2 on both diagonals and zeroes elsewhere (it captures a conflict: it is a matrix of an election where half of the voters rank the candidates in one way and half of the voters rank them in the opposite way).
Below, we show examples of these matrices for m = 4:
$\mathrm{UN}_4 = \begin{pmatrix} 1/4 & 1/4 & 1/4 & 1/4\\ 1/4 & 1/4 & 1/4 & 1/4\\ 1/4 & 1/4 & 1/4 & 1/4\\ 1/4 & 1/4 & 1/4 & 1/4 \end{pmatrix},\; \mathrm{ID}_4 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},\; \mathrm{ST}_4 = \begin{pmatrix} 1/2 & 1/2 & 0 & 0\\ 1/2 & 1/2 & 0 & 0\\ 0 & 0 & 1/2 & 1/2\\ 0 & 0 & 1/2 & 1/2 \end{pmatrix},\; \mathrm{AN}_4 = \begin{pmatrix} 1/2 & 0 & 0 & 1/2\\ 0 & 1/2 & 1/2 & 0\\ 0 & 1/2 & 1/2 & 0\\ 1/2 & 0 & 0 & 1/2 \end{pmatrix}.$
We omit the subscript in the names of these matrices if its value is clear from the context or irrelevant.
EMD. Let x = (x1, . . . , xn) and y = (y1, . . . , yn) be two vectors with nonnegative real entries that sum up to 1. Their Earth mover’s distance, denoted EMD(x, y), is the cost of transforming x into y using operations of the form: Given indices i, j ∈ [n] and a positive value δ such that xi ≥ δ, at the cost of δ · |i − j|, replace xi with xi − δ and xj with xj + δ (this corresponds to moving δ amount of “earth” from position i to position j). EMD(x, y) can be computed in polynomial time by a standard greedy algorithm.
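Because the positions lie on a line with unit spacing, the greedy computation reduces to summing absolute cumulative differences; a short sketch of this standard equivalence (our own illustration):

```python
def emd(x, y):
    """Earth mover's distance between two nonnegative vectors with equal sums;
    the running surplus `carry` is exactly the amount of earth the greedy
    algorithm pushes across each boundary between adjacent positions."""
    total, carry = 0.0, 0.0
    for xi, yi in zip(x, y):
        carry += xi - yi
        total += abs(carry)
    return total

assert emd((1, 0, 0), (0, 0, 1)) == 2.0   # one unit of earth moved two steps
```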
Positionwise Distance (Szufa et al., 2020). Let A = (a1, . . . , am) and B = (b1, . . . , bm) be two m × m frequency matrices. Their raw positionwise distance is $\mathrm{rawPOS}(A,B) = \min_{\sigma\in S_m} \sum_{i=1}^{m} \mathrm{EMD}(a_i, b_{\sigma(i)})$, where S_m denotes the set of all permutations over [m]. We normalize these distances by $\frac{1}{3}(m^2-1)$, which Boehmer et al. (2021b, 2022) proved to be the maximum distance between two m × m frequency matrices and the distance between ID_m and UN_m: $\mathrm{nPOS}(A,B) = \frac{\mathrm{rawPOS}(A,B)}{\frac{1}{3}(m^2-1)}$. For two elections E and F with equal-sized candidate sets, their positionwise distance, raw or normalized, is defined as the positionwise distance between their frequency matrices.
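The minimization over σ is an assignment problem on the candidate columns, so rawPOS can be computed exactly with the Hungarian method; a sketch (our own illustration, reusing the cumulative-sum formulation of EMD from the previous sketch):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(x, y):
    # cumulative-sum formulation, as in the previous sketch
    return np.abs(np.cumsum(np.asarray(x, float) - np.asarray(y, float))).sum()

def raw_positionwise(A, B):
    """rawPOS(A, B): optimal matching of candidate columns by summed EMDs."""
    m = A.shape[1]
    cost = np.array([[emd(A[:, i], B[:, j]) for j in range(m)]
                     for i in range(m)])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def npos(A, B):
    m = A.shape[1]
    return raw_positionwise(A, B) / ((m * m - 1) / 3)
```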
Paths Between the Compass Matrices. Let X and Y be two compass matrices. Boehmer et al. (2021b) showed that if we take their affine combination Z = αX + (1 − α)Y (0 ≤ α ≤ 1) then nPOS(X,Z) = (1 − α)nPOS(X,Y ) and nPOS(Z, Y ) = αnPOS(X,Y ). Such affine combinations form direct paths between the compass matrices; they are also possible between any two frequency matrices of a given size, not just the compass ones, but may require shuffling the matrices’ columns (Boehmer et al., 2021b).
Structured Domains. We consider two classes of structured elections, single-peaked elections (Black, 1958), and group-separable elections (Inada, 1964). For a discussion of these domains and the motivation behind them, see the original papers and the overviews by Elkind et al. (2017, 2022).
Intuitively, an election is single-peaked if we can order the candidates so that, as each voter considers the candidates in this order (referred to as the societal axis), his or her appreciation first increases and then decreases. The axis may, e.g., correspond to the left-right political spectrum.
Definition 2.2. Let v be a vote over C and let ◁ be a societal axis over C. We say that v is single-peaked with respect to ◁ if for every t ∈ [|C|] its t top-ranked candidates form an interval within ◁. An election is single-peaked with respect to ◁ if all its votes are. An election is single-peaked (SP) if it is single-peaked with respect to some axis.
Note that the election from Example 2.1 is single-peaked with respect to the axis a ◁ b ◁ c ◁ d ◁ e.
We also consider group-separable elections, introduced by Inada (1964). For our purposes, it will be convenient to use the tree-based definition of Karpov (2019). Let C = {c1, . . . , cm} be a set of candidates, and consider a rooted, ordered tree T whose leaves are elements of C. The frontier of such a tree is the preference order that ranks the candidates in the order in which they appear in the tree from left to right. A preference order is consistent with a given tree if it can be obtained as its frontier by reversing the order in which the children of some nodes appear. Definition 2.3. An election E = (C, V ) is group-separable if there is a rooted, ordered tree T whose leaves are members of C, such that each vote in V is consistent with T .
The trees from Definition 2.3 form a subclass of clone decomposition trees, which are examples of PQ-trees (Elkind et al., 2012; Booth & Lueker, 1976).
Example 2.4. Consider candidate set C = {a, b, c, d}, trees T1, T2, and T3 from Figure 1, and votes v1 : a ≻ b ≻ c ≻ d, v2 : c ≻ d ≻ b ≻ a, and v3 : b ≻ d ≻ c ≻ a. Vote v1 is consistent with each of the trees, v2 is consistent with T2 (reverse the children of y1 and y2), and v3 is consistent with T3 (reverse the children of x1 and x3).
3 Frequency Matrices for Vote Distributions
We show how to compute frequency matrices for several well-known distributions over votes.
3.1 Setup and Interpretation
A vote distribution for a candidate set C is a function D that assigns a probability to each preference order over C. Formally, we require that for each v ∈ L(C) it holds that D(v) ≥ 0 and
$\sum_{v\in L(C)} D(v) = 1$. We say that a vote v is in the support of D if D(v) > 0. Given such a distribution, we can form an election by repeatedly drawing votes according to the specified probabilities. For example, we can sample each element of L(C) with equal probability; this distribution, which is known as impartial culture (IC), is denoted by D_IC (we omit the candidate set from our notation as it will always be clear from the context). The frequency matrix of a vote distribution D over a candidate set C is $\#\mathrm{freq}(D) = \sum_{v\in L(C)} D(v)\cdot \#\mathrm{freq}(v)$. For example, we have #freq(D_IC) = UN. One interpretation of #freq(D) is that the entry for a candidate cj and a position i is the probability that a vote v sampled from D has cj in position i (which we denote as P[pos_v(cj) = i]). Another interpretation is that if we sample a large number of votes, then the resulting election's frequency matrix would be close to #freq(D) with high probability. More formally, if we let M_n be a random variable equal to the frequency matrix of an n-voter election generated according to D, then it holds that $\lim_{n\to\infty} \mathbb{E}(M_n) = \#\mathrm{freq}(D)$.
3.2 Group-Separable Elections
We first consider sampling group-separable votes. Given a rooted tree T whose leaves are labeled by elements of C = {c1, . . . , cm}, let D^T_GS be the distribution assigning equal probability to all votes consistent with T, and zero probability to all other votes; one can think of D^T_GS as impartial culture restricted to the group-separable subdomain defined by T. To sample from D^T_GS, we can toss a fair coin for each internal node of T, reversing the order of its children if this coin comes up heads, and output the frontier of the resulting tree. We focus on the following types of trees:
1. Flat(c1, . . . , cm) is a tree with a single internal node, whose children, from left to right, are c1, c2, . . . , cm. There are only two preference orders consistent with this tree, c1 ≻ · · · ≻ cm and its reverse.
2. Bal(c1, . . . , cm) is a perfectly balanced binary tree with frontier c1, . . . , cm (hence we assume the number m of candidates to be a power of two).
3. CP(c1, . . . , cm) is a binary caterpillar tree: it has internal nodes x1, . . . , xm−1; for each j ∈ [m− 2], xj has cj as the left child and xj+1 as the right one, whereas xm−1 has both cm−1 and cm as children.
The first tree in Figure 1 is flat, the second one is balanced, and the third one is a caterpillar tree. If T is a caterpillar tree, then we refer to D^T_GS as the GS/caterpillar distribution. We use a similar terminology for the other trees.
Theorem 3.1. Let F be the frequency matrix of distribution D^T_GS. If T is flat then F = AN, and if it is balanced then F = UN. If T is a caterpillar tree CP(c1, . . . , cm), then for each candidate cj the probability that cj appears in position i ∈ [m] in a random vote v sampled from D^T_GS is:
$\frac{1}{2^{j}}\binom{j-1}{i-1}\cdot \mathbb{1}_{i\leq j} \;+\; \frac{1}{2^{j}}\binom{j-1}{(i-1)-(m-j)}\cdot \mathbb{1}_{i>m-j}.$
Proof. The cases of flat and balanced trees are immediate, so we focus on caterpillar trees. Let T = CP(c1, . . . , cm) with internal nodes x1, . . . , xm−1, and consider a candidate cj and a position i ∈ [m]. Let v be a random variable equal to a vote sampled from D^T_GS. We say that a node xℓ, ℓ ∈ [m−1], is reversed if the order of its children is reversed. Note that for ℓ < r it holds that cr precedes cℓ in the frontier if and only if xℓ is reversed. Suppose that xj is not reversed. Then v ranks cj above each of cj+1, . . . , cm. This means that for cj to be ranked exactly in position i, it must be that j ≥ i and exactly i−1 nodes among x1, . . . , xj−1 are not reversed. If j ≥ i, the probability that xj and exactly i−1 nodes among x1, . . . , xj−1 are not reversed is $\frac{1}{2^{j}}\binom{j-1}{i-1}$. On the other hand, if xj is reversed, then v ranks candidates cj+1, . . . , cm above cj. As there are m−j of them, for cj to be ranked exactly in position i it must hold that i > m−j and exactly (i−1)−(m−j) nodes among x1, . . . , xj−1 are not reversed. This happens with probability $\frac{1}{2^{j}}\binom{j-1}{(i-1)-(m-j)}$.
Regarding distributions D^T_GS not handled in Theorem 3.1, we can still compute their frequency matrices efficiently.
Theorem 3.2. There is an algorithm that, given a tree T, computes #freq(D^T_GS) using polynomially many arithmetic operations with respect to the number of nodes in T.
3.3 From Caterpillars to Single-Peaked Preferences.
There is a relationship between GS/caterpillar votes and single-peaked ones, which will be very useful when computing one of the frequency matrices in the next section.
Theorem 3.3. Given a ranking v over C = {c1, . . . , cm}, let v̂ be another ranking over C such that, for each j ∈ [m], if cj is ranked in position i in v then ci is ranked in position m−j+1 in v̂. Suppose that v is in the support of D^T_GS, where T = CP(c1, . . . , cm). Then v̂ is single-peaked with respect to c1 ◁ · · · ◁ cm.
There are exactly 2^{m−1} votes in the support of D^T_GS (this follows by simple counting) and there are 2^{m−1} votes that are single-peaked with respect to c1 ◁ · · · ◁ cm. As u ≠ v implies û ≠ v̂, it follows that the mapping v ↦ v̂ is a bijection between all votes in the support of D^T_GS and all votes that are single-peaked with respect to c1 ◁ · · · ◁ cm.
3.4 Single-Peaked Elections
We consider two models of generating single-peaked elections, one due to Walsh (2015) and one due to Conitzer (2009). Let us fix a candidate set C = {c1, . . . , cm} and a societal axis c1 ◁ · · · ◁ cm. Under the Walsh distribution, denoted D^Wal_SP, each vote that is single-peaked with respect to ◁ has equal probability (namely, $\frac{1}{2^{m-1}}$), and all other votes have probability zero. By Theorems 3.1 and 3.3, we immediately obtain the frequency matrix for the Walsh distribution (in short, it is the transpose of the GS/caterpillar matrix).
Corollary 3.4. Consider a candidate set C = {c1, . . . , cm} and an axis c1 ◁ · · · ◁ cm. The probability that candidate cj appears in position i in a vote sampled from D^Wal_SP is:
$\frac{1}{2^{m-i+1}}\binom{m-i}{j-1}\cdot \mathbb{1}_{j\leq m-i+1} \;+\; \frac{1}{2^{m-i+1}}\binom{m-i}{j-i}\cdot \mathbb{1}_{j>i-1}.$
To sample a vote from the Conitzer distribution, D^Con_SP (also known as the random peak distribution), we pick some candidate cj uniformly at random and rank him or her on top. Then we perform m−1 iterations, in each of which we choose (uniformly at random) a candidate directly to the right or the left of the already selected ones, and place him or her in the highest available position in the vote.
Theorem 3.5. Let c1 ◁ · · · ◁ cm be the societal axis, where m is an even number, and let v be a random vote sampled from D^Con_SP for this axis. For j ∈ [m/2] and i ∈ [m] we have:
$P[\mathrm{pos}_v(c_j) = i] = \begin{cases} \frac{2}{2m} & \text{if } i < j,\\ \frac{j+1}{2m} & \text{if } i = j,\\ \frac{1}{2m} & \text{if } j < i < m-j+1,\\ \frac{m-j+1}{2m} & \text{if } i = m-j+1,\\ 0 & \text{if } i+j > m+1. \end{cases}$
Further, for each candidate cj ∈ C and each position i ∈ [m] we have P[pos_v(cj) = i] = P[pos_v(c_{m−j+1}) = i].
3.5 Mallows Model
Finally, we consider the classic Mallows distribution. It has two parameters, a central vote v* over m candidates and a dispersion parameter φ ∈ [0, 1]. The probability of sampling a vote v from this distribution (denoted D^{v*,φ}_Mal) is $D^{v^*,\phi}_{Mal}(v) = \frac{1}{Z}\,\phi^{\kappa(v,v^*)}$, where $Z = 1\cdot(1+\phi)\cdot(1+\phi+\phi^{2})\cdots(1+\cdots+\phi^{m-1})$ is a normalizing constant and κ(v, v*) is the swap distance between v and v* (i.e., the number of swaps of adjacent candidates needed to transform v into v*). In our experiments, we consider a new parameterization, introduced by Boehmer et al. (2021b). It uses a normalized dispersion parameter norm-φ, which is converted to a value of φ so that the expected swap distance between the central vote v* and a sampled vote v is norm-φ/2 times the maximum swap distance between two votes (so, norm-φ = 1 is equivalent to IC and for norm-φ = 0.5 we get elections that lie close to the middle of the UN–ID path).
Our goal is now to compute the frequency matrix of D^{v*,φ}_Mal. That is, given the candidate ranked in position j in the central vote, we want to compute the probability that he or she appears in a given position i ∈ [m] in the sampled vote. Given a positive integer m, consider the candidate set C(m) = {c1, . . . , cm} and the central vote v*_m : c1 ≻ · · · ≻ cm. Fix a candidate cj ∈ C(m), and a position i ∈ [m]. For every integer k between 0 and m(m−1)/2, let S(m, k) be the number of votes in L(C(m)) that are at swap distance k from v*_m, and define T(m, k, j, i) to be the number of such votes that have cj in position i. One can compute S(m, k) in time polynomial in m (OEIS Foundation Inc., 2020); using S(m, k), we show that the same holds for T(m, k, j, i).
Lemma 3.6. There is an algorithm that computes T(m, k, j, i) in polynomial time with respect to m.
We can now express the probability of sampling a vote v, where the candidate ranked in position j in the central vote v* ends up in position i under D^{v*,φ}_Mal, as:
$f_m(\phi, j, i) = \frac{1}{Z}\sum_{k=0}^{m(m-1)/2} T(m, k, j, i)\,\phi^{k}.$ (1)
The correctness follows from the definitions of T and D^{v*,φ}_Mal. By Lemma 3.6, we have the following.
Theorem 3.7. There exists an algorithm that, given a number m of candidates, a vote v*, and a parameter φ, computes the frequency matrix of D^{v*,φ}_Mal using polynomially many operations in m.
Note that Equation (1) only depends on φ, j and i (and, naturally, on m). Using this fact, we can also compute frequency matrices for several variants of the Mallows distribution.
Remark 3.8. Given a vote v, two dispersion parameters φ and ψ, and a probability p ∈ [0, 1], we define the distribution p-D^{v,φ,ψ}_Mal as p · D^{v,φ}_Mal + (1−p) · D^{rev(v),ψ}_Mal, i.e., with probability p we sample a vote from D^{v,φ}_Mal and with probability 1−p we sample a vote from D^{rev(v),ψ}_Mal. The probability that candidate cj appears in position i in the resulting vote is p · f_m(φ, j, i) + (1−p) · f_m(ψ, m−j+1, i).
Remark 3.9. Consider a candidate set C = {c1, . . . , cm}. Given a vote distribution D over L(C) and a parameter φ, define a new distribution D′ as follows: draw a vote v̂ according to D and then output a vote v sampled from D^{v̂,φ}_Mal; indeed, such models are quite natural, see, e.g., the work of Kenig & Kimelfeld (2019). For each t ∈ [m], let g(j, t) be the probability that cj appears in position t in a vote sampled from D. The probability that cj appears in position i ∈ [m] in a vote sampled from D′ is $\sum_{t=1}^{m} g(j, t) \cdot f_m(\phi, t, i)$. In terms of matrix multiplication, this means that #freq(D′) = #freq(D^{v*,φ}_Mal) · #freq(D), where v* is c1 ≻ · · · ≻ cm. We write φ-Conitzer (φ-Walsh) to refer to this model where we use the Conitzer (Walsh) distribution as the underlying one and normalized dispersion parameter φ.
4 Skeleton Map
Our goal in this section is to form what we call a skeleton map of vote distributions (skeleton map, for short), evaluate its quality and robustness, and compare it to the map of Boehmer et al. (2021b). Throughout this section, whenever we speak of a distance between elections or matrices, we mean the positionwise distance (occasionally we will also refer to the Euclidean distances on our maps, but we will always make this explicit). Let Φ = {0, 0.05, 0.1, . . . , 1} be a set of normalized dispersion parameters that we will be using for Mallows-based distributions in this section.
We form the skeleton map following the general approach of Szufa et al. (2020) and Boehmer et al. (2021b). For a given number of candidates, we consider the four compass matrices (UN, ID, AN, ST) and paths between each matrix pair consisting of their convex combinations (gray dots), the frequency matrices of the Mallows distribution with normalized dispersion parameters from Φ (blue triangles), and the frequency matrices of the Conitzer (CON), Walsh (WAL), and GS/caterpillar (CAT) distributions. Moreover, we add the frequency matrices of the following vote distributions (we again use the dispersion parameters from Φ): (i) the distribution 1/2-D^{v,φ,φ}_Mal as defined in Remark 3.8 (red triangles), (ii) the distribution where with equal probability we mix the standard Mallows distribution and 1/2-D^{v,φ,φ}_Mal (green triangles), and (iii) the φ-Conitzer and φ-Walsh distributions as defined in Remark 3.9 (magenta and orange crosses). For each pair of these matrices we compute their positionwise distance. Then we find an embedding of the matrices into a 2D plane, so that each matrix is a point and the Euclidean distances between these points are as similar to the positionwise distances as possible (we use the MDS algorithm, as implemented in the Python sklearn.manifold.MDS package). In Figure 3 we show our map for the case of 10 candidates (the
lines between some points/matrices show their positionwise distances; to maintain clarity, we only provide some of them).
Figure 3: The skeleton map with 10 candidates. We have MID = 1/2 AN + 1/2 ID. Each point labeled with a number is a real-world election as described in Section 5.
Figure 4: In the top-right part, we show the normalized positionwise distances. In the bottom-left one, we show the embedding misrepresentation ratios.
We now verify the credibility of the skeleton map. As the map does not have many points, we expect its embedding to truly reflect the positionwise distances between the matrices. This, indeed, seems to be the case, although some distances are represented (much) more accurately than others. In Figure 4 we provide the following data for a number of matrices (for m = 10; matrix M2W is the Mallows matrix in our data set that is closest to the Walsh matrix). In the top-right part (the white-orange area), we give positionwise distances between the matrices, and in the bottom-left part (the blue area), for each pair of matrices X and Y we report the misrepresentation ratio Euc(X,Y)/nPOS(X,Y), where Euc(X,Y) is the Euclidean distance between X and Y in the embedding, normalized by the Euclidean distance between ID and UN. The closer these ratios are to 1, the more accurate is the embedding. The misrepresentation ratios are typically between 0.8 and 1.15, with many of them between 0.9 and 1.05. Thus, in most cases, the map is quite accurate and offers good intuition about the relations between the matrices. Yet, some distances are represented particularly badly. As an extreme example, the Euclidean distance between the Walsh matrix and the closest Mallows matrix, M2W, is off by almost a factor of 8 (these matrices are close, but not as close as the map suggests). Thus, while one always has to verify claims suggested by the skeleton map, we view it as quite credible. This conclusion is particularly valuable when we compare the skeleton map and the map of Boehmer et al. (2021b), shown in Figure 2. The two maps are similar, and analogous points (mostly) appear in analogous positions. Perhaps the biggest difference is the location of the Conitzer matrix on the skeleton map and of the Conitzer elections in the map of Boehmer et al., but even this difference is not huge. We remark that the Conitzer matrix is closer to UN and AN than to ID and ST, whereas for the Walsh matrix the opposite is true. Boehmer et al. (2021b) make a similar observation; our results allow us to make this claim formal. In Appendix E, we analyze the robustness of the skeleton map with respect to varying the number of candidates. We find that except for pairs including the Walsh or GS/caterpillar matrices, which "travel" on the map as the number of candidates increases, the distance between each pair of matrices in the skeleton map stays nearly constant.
5 Learning Vote Distributions
We demonstrate how the positionwise distance and frequency matrices can be used to fit vote distributions to given real-world elections. Specifically, we consider the Mallows model (D^{v,φ}_Mal) and the φ-Conitzer and φ-Walsh models. Naturally, we could use more distributions, but we focus on showcasing the technique and the general unified approach. Among other results, we verify that for the Mallows model our approach is strongly correlated with existing maximum-likelihood approaches. Moreover, unlike in previous works, we compare the capabilities of different distributions to fit the given elections. We remark that if we do not have an algorithm for computing the frequency matrix of a given vote distribution, we can obtain an approximate matrix by sampling sufficiently many votes from this distribution. In principle, it is also possible to deal with distributions
over elections that do not correspond to vote distributions and hence are not captured by expected frequency matrices (as is the case, e.g., for the Euclidean models where candidates do not have fixed positions; see the work of Szufa et al. (2020) for examples of such models in the context of the map of elections): If we want to compute the distance of such a distribution, we sample sufficiently many elections and compute their average distance from the input one. However, it remains unclear how robust this approach is.
Approach. To fit our vote distributions to a given election, we compute the election's distance to the frequency matrices of D^{v,φ}_Mal, φ-Conitzer, and φ-Walsh, for φ ∈ {0, 0.001, . . . , 1}. We select the distribution corresponding to the closest matrix.
Data. We consider elections from the real-world datasets used by Boehmer et al. (2021b). They generated 15 elections with 10 candidates and 100 voters (with strict preferences) from each of the eleven different real-world election datasets (so, altogether, they generated 165 elections, most of them from Preflib (Mattei & Walsh, 2013)). They used four datasets of political elections (from North Dublin (Irish), various non-profit and professional organizations (ERS), and city council elections from Glasgow and Aspen), four datasets of sport-based elections (from Tour de France (TDF), Giro d’Italia (GDI), speed skating, and figure skating) and three datasets with survey-based elections (from preferences over T-shirt designs, sushi, and cities). We present the results of our analysis for seven illustrative and particularly interesting elections in Table 1 and also include them in our skeleton map from Figure 3.
Basic Test. There is a standard maximum-likelihood estimator (MLE; based on Kemeny voting (Mandhani & Meilǎ, 2009)) that given an election provides the most likely dispersion parameter of the Mallows distribution that might have generated this election. To test our approach, we compared the parameters provided by our approach and by the MLE for our 165 elections and found them to be highly correlated (with Pearson correlation coefficient around 0.97). In particular, the average absolute difference between the dispersion parameter calculated by our approach and the MLE is only 0.02. See Appendix F for details.
Fitting Real-World Elections. Next, we consider the capabilities of Dv,φMal, φ-Conitzer, and φWalsh to fit the real-world elections of Boehmer et al. (2021b). Overall, we find that these three vote distributions have some ability to capture the considered elections, but it certainly is not perfect. Indeed, the average normalized distance of these elections to the frequency matrix of the closest distribution is 0.14. To illustrate that some distance is to be expected here, we mention that the average distance of an election sampled from impartial culture (DIC, with 10 candidates and 100 voters) to the distribution’s expected frequency matrix is 0.09 (see Appendix E.4 for a discussion of this and how it may serve as an estimator for the “variance of a distribution”). There are also some elections that are not captured by any of the considered distributions to an acceptable degree; examples of this are elections nr. 1 and nr. 2, which are at distance at least 0.32 and 0.25 from all our distributions, respectively. Remarkably, while coming from the same dataset, elections nr. 1 and nr. 2 are still quite different from each other and, accordingly, the computed dispersion parameter is also quite different. It remains a challenge to find distributions capturing such elections.
Comparing the power of the three considered models, nearly all of our elections are best captured by the Mallows model rather than φ-Conitzer or φ-Walsh. There are only twenty elections that are closer to φ-Walsh or φ-Conitzer than to a Mallows model (election nr. 3 is the most extreme example), and, unsurprisingly, both φ-Walsh and φ-Conitzer perform particularly badly at capturing elections close to ID (see election nr. 4). That is, φ-Conitzer and φ-Walsh are not needed to ensure good coverage of the space of elections; the average normalized distance of our elections to the closest Mallows model is only 0.0007 higher than their distance to the closest distribution (elections nr. 3-6 are three examples of elections which are well captured by the Mallows model and distributed over the entire map).2 Nevertheless, φ-Walsh is also surprisingly powerful, as the average normalized distance of our elections to the closest φ-Walsh distribution is only 0.03 higher than their distance to the closest distribution (however, this might be also due to the fact that most of the considered real-world elections fall into the same area of the map, which φ-Walsh happens to capture particularly well (Boehmer et al., 2021b)). φ-Conitzer performs considerably worse: there are only three elections for which it produces a (slightly) better result than φ-Walsh.
Moreover, our results also emphasize the complex nature of the space of elections: Election nr. 7 is very close to Dv,0.95Mal , hinting that its votes are quite chaotic. At the same time, this election is very close to 0.63-Conitzer and 0.69-Walsh distributions, which suggests at least a certain level of structure among its votes (because votes from Conitzer and Walsh distributions are very structured, and the Mallows filter with dispersion between 0.63 and 0.69 does not destroy this structure fully). However, as witnessed by the fact that the frequency matrix of GS/balanced (which is highly structured) is UN, such phenomena can happen. Lastly, note that most of our datasets are quite “homogenous”, in that the closest distributions for elections from the dataset are similar and also at a similar distance. However, there are also clear exceptions, for instance, elections nr. 1 and nr. 4 from the figure skating dataset. Moreover, there are two elections from the speed skating dataset where one election is captured best by Dv,0.76Mal and the other by D v,0.32 Mal .
6 Summary
We have computed the frequency matrices (Szufa et al., 2020; Boehmer et al., 2021b) of several well-known distributions of votes. Using them, we have drawn a “skeleton map”, which shows how these distributions relate to each other, and we have analyzed its properties. Moreover, we have demonstrated how our results can be used to fit vote distributions to capture real-world elections.
For future work, it would be interesting to compute the frequency matrices of further popular vote distributions, such as the Plackett–Luce model (we conjecture that its frequency matrix is computable in polynomial time). It would also be interesting to use our approach to fit more complex models, such as mixtures of Mallows models, to real-world elections. Further, it may be interesting to use expected frequency matrices to reason about the asymptotic behavior of our models. For example, it might be possible to formally show where, in the limit, do the matrices of our models end up on the map as we increase the number of candidates.
Acknowledgments
NB was supported by the DFG project MaMu (NI 369/19) and by the DFG project ComSoc-MPMS (NI 369/22). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854).
2For each election, we also computed the closest frequency matrix of two mixtures of Mallows models with reversed central votes p-Dv,φ,ψMal using our approach. However, this only decreased the average minimum distance by around 0.02, with the probability p of flipping the central vote being (close to) zero for most elections. | 1. What is the focus and contribution of the paper regarding the "map of elections"?
2. What are the strengths of the proposed approach, particularly in terms of its utility and simplicity?
3. What are the weaknesses of the paper, especially regarding its reliance on heuristics?
4. Do you have any questions or concerns regarding the paper's methodology or presentation? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors build on recent work on a very exciting "map of elections" by showing how to compute the frequency matrix of various vote distributions. They then use these frequency matrices to produce a "skeleton map" of distributions that is closely related to, and visually much simpler than, the "map of elections" approach. Lastly, they show that they can use frequency matrices to estimate parameters of real-world election data (based on nearest distributions).
Strengths And Weaknesses
Strengths:
The "map of elections" is a remarkably useful and insightful tool when dealing with real-world elections, and having a "skeleton map" like this (that doesn't rely on sampling individual instances) is a very useful addition.
I particularly appreciated that this skeleton map allows us to learn parameters of real-world elections (or approximations thereof).
The sampling results / algorithms are intuitively presented and clear to the reader (despite the fact that the results are quite nontrivial).
Weaknesses:
I am not overly familiar with the background behind the map of elections (beyond seeing it as a very useful tool), and it seems like the whole area is built upon many heuristics that seem to work pretty well but lack robust theoretical justification. This is perhaps not so much of a downside to me (relative to others) but is definitely something worth thinking about.
Questions
Line 22: what is eu?
Limitations
Yes |
NIPS | Title
Expected Frequency Matrices of Elections: Computation, Geometry, and Preference Learning
Abstract
We use the “map of elections” approach of Szufa et al. (AAMAS-2020) to analyze several well-known vote distributions. For each of them, we give an explicit formula or an efficient algorithm for computing its frequency matrix, which captures the probability that a given candidate appears in a given position in a sampled vote. We use these matrices to draw the “skeleton map” of distributions, evaluate its robustness, and analyze its properties. Finally, we develop a general and unified framework for learning the distribution of real-world preferences using the frequency matrices of established vote distributions.
1 Introduction
Computational social choice is a research area at the intersection of social choice (the science of collective decision-making) and computer science, which focuses on the algorithmic analysis of problems related to preference aggregation and elicitation (Brandt et al., 2013). Many of the early papers in this field were primarily theoretical, focusing on establishing the worst-case complexity of winner determination and strategic behavior under various voting rules—see, e.g., the papers of Hemaspaandra et al. (1997), Dwork et al. (2001), and Conitzer et al. (2007)—but more recent work often combines theoretical investigations with empirical analysis. For example, formal bounds on the running time and/or approximation ratio of a winner determination algorithm can be complemented by experiments that evaluate its performance on realistic instances; see, e.g., the works of Conitzer (2006), Betzler et al. (2014), Faliszewski et al. (2018) and Wang et al. (2019).
However, performing high-quality experiments requires the ability to organize and understand the available data. One way to achieve this is to form a so-called “map of elections,” recently introduced by Szufa et al. (2020) and extended by Boehmer et al. (2021b). The idea is as follows. First, we fix a distance measure between elections. Second, we sample a number of elections from various distributions and real-life datasets—e.g., those collected in PrefLib (Mattei & Walsh, 2013)—and measure the pairwise distances between them. Third, we embed these elections into the 2D plane, mapping each election to a point so that the Euclidean distances between points are approximately equal to the distances between the respective elections. Finally, we plot these points, usually coloring them to indicate their origin (e.g., the distribution from which a given election was sampled); see Figure 2 later in the paper for an example of such a map. A location of an election on a map provides useful information about its properties. For example, Szufa et al. (2020) and Boehmer et al. (2021a,b) have shown that it can be used to predict (a) the Borda score of the winner of the election, (b) the
running time of ILP solvers computing the winners under the Harmonic-Borda multiwinner voting rule, or (c) the robustness of Plurality and Borda winners. Moreover, real-world elections of the same type (such as the ones from politics, sports, or surveys) tend to cluster in the same areas of the map; see also the positions on the map of the datasets collected by Boehmer & Schaar (2022). As such, the map has proven to be a useful framework to analyze the nature of elections and to visualize experimental results in a non-aggregate fashion.
Unfortunately, extending the map to incorporate additional examples and distributions is a challenging task, as the visual representation becomes cluttered and, more importantly, the embedding algorithms, which map elections to points in 2D, find it more difficult to preserve pairwise distances between points as the number of points increases. It is therefore desirable to reduce the number of points in a way that preserves the key features of the framework.
We address this challenge by drawing a map of distributions rather than individual elections, which we call the skeleton map. That is, instead of sampling 20–30 points from each distribution and placing them all on the map, as Szufa et al. (2020) and Boehmer et al. (2021b) do (obtaining around 800 points in total), we create a single point for each distribution. This approach is facilitated by the fact that prior work on the “map of elections” framework represented elections by their frequency matrices, which capture their essential features. The starting point of our work is the observation that this representation extends to distributions in a natural way. Thus, if we can compute the frequency matrix of some distribution D, then, instead of sampling elections from D and creating a point on the map for each sample, we can create a single point for D itself.
Our Contribution. We provide three sets of results. First, for a number of prominent vote distributions, we show how to compute their frequency matrices, by providing an explicit formula or an efficient algorithm. Second, we draw the map of distributions (the skeleton map) and argue for its credibility and robustness. Finally, we use our results to estimate the parameters of the distributions that are closest to the real-world elections considered by Boehmer et al. (2021b). In more detail, we work in the setting of preference learning, where we are given an election and we want to learn the parameters of some distribution, so as to maximize the similarity of the votes sampled from this distribution and the input election. For example, we may be interested in fitting the classic model of Mallows (1957). This model is parameterized by a central vote v and a dispersion parameter φ, which specifies how likely it is to generate a vote at some distance from the central one (alternatively, one may use, e.g., the Plackett–Luce model). Previous works on preference learning typically proposed algorithms to learn the parameters of one specific (parameterized) vote distribution (see, e.g., the works of Lu & Boutilier (2014); Mandhani & Meilǎ (2009); Meila & Chen (2010); Vitelli et al. (2017); Murphy & Martin (2003); Awasthi et al. (2014) for (mixtures of) the Mallows model and the works of Guiver & Snelson (2009); Hunter (2004); Minka (2004); Gormley & Murphy (2008) for (mixtures of) the Plackett–Luce model). Using frequency matrices, we offer a more general approach. Indeed, given an election and a parameterized vote distribution whose frequency matrix we can compute, the task of learning the distribution’s parameters boils down to finding parameters that minimize the distance between the election and the matrices of the distribution. While this minimization problem may be quite challenging, our approach offers a uniform framework for dealing with multiple kinds of distributions at the same time. We find that for the case of the Mallows distribution, our approach learns parameters very similar to those established using maximum likelihood-based approaches. Omitted proofs and discussions are in the appendix. The source code used for the experiments is available in a GitHub repository1.
2 Preliminaries
Given an integer t, we write [t] to denote the set {1, . . . , t}. We interpret a vector x ∈ R^m as an m × 1 matrix (i.e., we use column vectors as the default).
Preference Orders and Elections. Let C be a finite, nonempty set of candidates. We refer to total orders over C as preference orders (or, equivalently, votes), and denote the set of all preference orders over C by L(C). Given a vote v and a candidate c, by pos_v(c) we mean the position of c in v (the top-ranked candidate has position 1, the next one has position 2, and so on). If a candidate a is ranked above another candidate b in vote v, we write v : a ≻ b. Let rev(v) denote the reverse of vote v. An election E = (C, V) consists of a set C = {c_1, . . . , c_m} of candidates and a collection V = (v_1, . . . , v_n) of votes. Occasionally we refer to the elements of V as voters rather than votes.

1 github.com/Project-PRAGMA/Expected-Frequency-Matrices-NeurIPS-2022
Frequency Matrices. Consider an election E = (C, V) with C = {c_1, . . . , c_m} and V = (v_1, . . . , v_n). For each candidate c_j and position i ∈ [m], we define freq_E(c_j, i) to be the fraction of the votes from V that rank c_j in position i. We define the column vector freq_E(c_j) to be (freq_E(c_j, 1), . . . , freq_E(c_j, m)) and the matrix freq(E) to consist of the vectors freq_E(c_1), . . . , freq_E(c_m). We refer to freq(E) as the frequency matrix of election E. Frequency matrices are bistochastic, i.e., their entries are nonnegative and each of their rows and columns sums up to one.
Example 2.1. Let E = (C, V) be an election with candidate set C = {a, b, c, d, e} and four voters, v1, v2, v3, and v4. Below, we show the voters' preference orders, followed by the election's frequency matrix (rows are positions, columns are candidates).

v1 : a ≻ b ≻ c ≻ d ≻ e,   v2 : c ≻ b ≻ d ≻ a ≻ e,   v3 : d ≻ e ≻ c ≻ b ≻ a,   v4 : b ≻ c ≻ a ≻ d ≻ e.

         a     b     c     d     e
pos 1   1/4   1/4   1/4   1/4    0
pos 2    0    1/2   1/4    0    1/4
pos 3   1/4    0    1/2   1/4    0
pos 4   1/4   1/4    0    1/2    0
pos 5   1/4    0     0     0    3/4
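Given a vote v, we write freq(v) to denote the frequency matrix of the election containing this vote only; freq(v) is a permutation matrix, with a single 1 in each row and in each column. Thus, for an election E = (C, V) with V = (v_1, . . . , v_n) we have freq(E) = (1/n) · Σ_{i=1}^{n} freq(v_i).

To make the definition concrete, here is a short Python sketch (our own illustration, not code from the paper's repository) that computes the frequency matrix of an election and reproduces the matrix of Example 2.1.

import numpy as np

def frequency_matrix(candidates, votes):
    # Rows index positions 1..m, columns index candidates; entry (i, j) is
    # the fraction of votes that rank candidate j in position i.
    m, n = len(candidates), len(votes)
    col = {c: j for j, c in enumerate(candidates)}
    F = np.zeros((m, m))
    for vote in votes:
        for pos, c in enumerate(vote):      # pos 0 corresponds to position 1
            F[pos, col[c]] += 1.0 / n
    return F

candidates = ["a", "b", "c", "d", "e"]
votes = [list("abcde"), list("cbdae"), list("decba"), list("bcade")]
F = frequency_matrix(candidates, votes)
assert np.allclose(F.sum(axis=0), 1) and np.allclose(F.sum(axis=1), 1)  # bistochastic
assert F[4, 4] == 3 / 4  # candidate e is ranked last in three of four votes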
Compass Matrices. For even m, Boehmer et al. (2021b) defined the following four m × m “compass” matrices, which appear to be extreme on the “map of elections”:
1. The identity matrix, ID_m, has ones on the diagonal and zeroes everywhere else (it corresponds to an election where all voters agree on a single preference order).

2. The uniformity matrix, UN_m, has all entries equal to 1/m (it corresponds to a lack of agreement; each candidate is ranked at each position equally often).

3. The stratification matrix, ST_m, is partitioned into four quadrangles, where all entries in the top-left and bottom-right quadrangles are equal to 2/m, and all other entries are equal to zero (it corresponds to partial agreement; the voters agree which half of the candidates is superior, but disagree on everything else).

4. The antagonism matrix, AN_m, has values 1/2 on both diagonals and zeroes elsewhere (it captures a conflict: it is the matrix of an election where half of the voters rank the candidates in one way and half of the voters rank them in the opposite way).
Below, we show examples of these matrices for m = 4:
UN4 = [1/4 1/4 1/4 1/4; 1/4 1/4 1/4 1/4; 1/4 1/4 1/4 1/4; 1/4 1/4 1/4 1/4],
ID4 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1],
ST4 = [1/2 1/2 0 0; 1/2 1/2 0 0; 0 0 1/2 1/2; 0 0 1/2 1/2],
AN4 = [1/2 0 0 1/2; 0 1/2 1/2 0; 0 1/2 1/2 0; 1/2 0 0 1/2].

We omit the subscript in the names of these matrices if its value is clear from the context or irrelevant.
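These matrices are straightforward to generate for any even m; the following sketch (ours, for illustration) reproduces the 4 × 4 examples above.

import numpy as np

def compass_matrices(m):
    # Assumes m is even, as in the definitions of Boehmer et al. (2021b).
    UN = np.full((m, m), 1.0 / m)
    ID = np.eye(m)
    ST = np.zeros((m, m))
    ST[: m // 2, : m // 2] = 2.0 / m               # top-left quadrangle
    ST[m // 2 :, m // 2 :] = 2.0 / m               # bottom-right quadrangle
    AN = (np.eye(m) + np.fliplr(np.eye(m))) / 2.0  # both diagonals
    return UN, ID, ST, AN

UN4, ID4, ST4, AN4 = compass_matrices(4)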
EMD. Let x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n) be two vectors with nonnegative real entries that sum up to 1. Their Earth mover's distance, denoted EMD(x, y), is the cost of transforming x into y using operations of the following form: given indices i, j ∈ [n] and a positive value δ such that x_i ≥ δ, at the cost of δ · |i − j|, replace x_i with x_i − δ and x_j with x_j + δ (this corresponds to moving δ units of "earth" from position i to position j). EMD(x, y) can be computed in polynomial time by a standard greedy algorithm.
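For vectors indexed by positions 1, . . . , n with cost |i − j|, the greedy algorithm amounts to sweeping left to right and carrying surplus earth forward, which equals the sum of absolute prefix sums of x − y. A minimal sketch (ours):

import numpy as np

def emd_1d(x, y):
    # EMD between two distributions over positions 1..n with cost |i - j|;
    # the k-th prefix sum of x - y is exactly the amount of earth carried
    # across the boundary between positions k and k + 1.
    return float(np.abs(np.cumsum(np.asarray(x) - np.asarray(y))).sum())

assert emd_1d([1, 0, 0], [0, 0, 1]) == 2.0  # move all mass two positions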
Positionwise Distance (Szufa et al., 2020). Let A = (a_1, . . . , a_m) and B = (b_1, . . . , b_m) be two m × m frequency matrices. Their raw positionwise distance is

rawPOS(A, B) = min_{σ ∈ S_m} Σ_{i=1}^{m} EMD(a_i, b_{σ(i)}),

where S_m denotes the set of all permutations over [m]. We will normalize these distances by (1/3)(m² − 1), which Boehmer et al. (2021b, 2022) proved to be both the maximum distance between two m × m frequency matrices and the distance between ID_m and UN_m:

nPOS(A, B) = rawPOS(A, B) / ((1/3)(m² − 1)).

For two elections E and F with equal-sized candidate sets, their positionwise distance, raw or normalized, is defined as the positionwise distance between their frequency matrices.
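Since the minimization over σ is an assignment problem on the m × m matrix of column-wise EMDs, rawPOS can be computed with a Hungarian-method solver instead of enumerating all m! permutations. A possible implementation (ours, reusing emd_1d from the EMD sketch above):

import numpy as np
from scipy.optimize import linear_sum_assignment

def positionwise_distance(A, B, normalize=True):
    # A and B are m x m frequency matrices; columns are candidate vectors.
    m = A.shape[0]
    cost = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            cost[i, j] = emd_1d(A[:, i], B[:, j])
    rows, cols = linear_sum_assignment(cost)   # optimal candidate matching
    raw = cost[rows, cols].sum()
    return raw / ((m * m - 1) / 3.0) if normalize else raw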
Paths Between the Compass Matrices. Let X and Y be two compass matrices. Boehmer et al. (2021b) showed that if we take their affine combination Z = αX + (1 − α)Y (0 ≤ α ≤ 1), then nPOS(X, Z) = (1 − α) · nPOS(X, Y) and nPOS(Z, Y) = α · nPOS(X, Y). Such affine combinations form direct paths between the compass matrices; they are also possible between any two frequency matrices of a given size, not just the compass ones, but may require shuffling the matrices' columns (Boehmer et al., 2021b).
Structured Domains. We consider two classes of structured elections, single-peaked elections (Black, 1958), and group-separable elections (Inada, 1964). For a discussion of these domains and the motivation behind them, see the original papers and the overviews by Elkind et al. (2017, 2022).
Intuitively, an election is single-peaked if we can order the candidates so that, as each voter considers the candidates in this order (referred to as the societal axis), his or her appreciation first increases and then decreases. The axis may, e.g., correspond to the left-right political spectrum.

Definition 2.2. Let v be a vote over C and let ◁ be the societal axis over C. We say that v is single-peaked with respect to ◁ if for every t ∈ [|C|] its t top-ranked candidates form an interval within ◁. An election is single-peaked with respect to ◁ if all its votes are. An election is single-peaked (SP) if it is single-peaked with respect to some axis.

Note that the election from Example 2.1 is single-peaked with respect to the axis a ◁ b ◁ c ◁ d ◁ e.
We also consider group-separable elections, introduced by Inada (1964). For our purposes, it will be convenient to use the tree-based definition of Karpov (2019). Let C = {c1, . . . , cm} be a set of candidates, and consider a rooted, ordered tree T whose leaves are elements of C. The frontier of such a tree is the preference order that ranks the candidates in the order in which they appear in the tree from left to right. A preference order is consistent with a given tree if it can be obtained as its frontier by reversing the order in which the children of some nodes appear. Definition 2.3. An election E = (C, V ) is group-separable if there is a rooted, ordered tree T whose leaves are members of C, such that each vote in V is consistent with T .
The trees from Definition 2.3 form a subclass of clone decomposition trees, which are examples of PQ-trees (Elkind et al., 2012; Booth & Lueker, 1976).

Example 2.4. Consider the candidate set C = {a, b, c, d}, trees T_1, T_2, and T_3 from Figure 1, and votes v1 : a ≻ b ≻ c ≻ d, v2 : c ≻ d ≻ b ≻ a, and v3 : b ≻ d ≻ c ≻ a. Vote v1 is consistent with each of the trees, v2 is consistent with T_2 (reverse the children of y_1 and y_2), and v3 is consistent with T_3 (reverse the children of x_1 and x_3).
3 Frequency Matrices for Vote Distributions
We show how to compute frequency matrices for several well-known distributions over votes.
3.1 Setup and Interpretation
A vote distribution for a candidate set C is a function D that assigns a probability to each preference order over C. Formally, we require that D(v) ≥ 0 for each v ∈ L(C) and that Σ_{v ∈ L(C)} D(v) = 1. We say that a vote v is in the support of D if D(v) > 0. Given such a distribution, we can form an election by repeatedly drawing votes according to the specified probabilities. For example, we can sample each element of L(C) with equal probability; this distribution, known as impartial culture (IC), is denoted by D_IC (we omit the candidate set from our notation as it will always be clear from the context). The frequency matrix of a vote distribution D over a candidate set C is freq(D) = Σ_{v ∈ L(C)} D(v) · freq(v). For example, we have freq(D_IC) = UN. One interpretation of freq(D) is that the entry for a candidate c_j and a position i is the probability that a vote v sampled from D has c_j in position i (which we denote by P[pos_v(c_j) = i]). Another interpretation is that if we sample a large number of votes, then the resulting election's frequency matrix is close to freq(D) with high probability. More formally, if we let M_n be a random variable equal to the frequency matrix of an n-voter election generated according to D, then lim_{n→∞} E(M_n) = freq(D).
3.2 Group-Separable Elections
We first consider sampling group-separable votes. Given a rooted tree T whose leaves are labeled by elements of C = {c_1, . . . , c_m}, let D^T_GS be the distribution assigning equal probability to all votes consistent with T, and zero probability to all other votes; one can think of D^T_GS as impartial culture restricted to the group-separable subdomain defined by T. To sample from D^T_GS, we can toss a fair coin for each internal node of T, reversing the order of its children if the coin comes up heads, and output the frontier of the resulting tree. We focus on the following types of trees:
1. Flat(c_1, . . . , c_m) is a tree with a single internal node, whose children, from left to right, are c_1, c_2, . . . , c_m. There are only two preference orders consistent with this tree, c_1 ≻ · · · ≻ c_m and its reverse.

2. Bal(c_1, . . . , c_m) is a perfectly balanced binary tree with frontier c_1, . . . , c_m (hence we assume the number m of candidates to be a power of two).

3. CP(c_1, . . . , c_m) is a binary caterpillar tree: it has internal nodes x_1, . . . , x_{m−1}; for each j ∈ [m−2], x_j has c_j as the left child and x_{j+1} as the right one, whereas x_{m−1} has both c_{m−1} and c_m as children.
The first tree in Figure 1 is flat, the second one is balanced, and the third one is a caterpillar tree. If T is a caterpillar tree, then we refer to D^T_GS as the GS/caterpillar distribution. We use a similar terminology for the other trees.

Theorem 3.1. Let F be the frequency matrix of distribution D^T_GS. If T is flat, then F = AN, and if it is balanced, then F = UN. If T is a caterpillar tree CP(c_1, . . . , c_m), then for each candidate c_j the probability that c_j appears in position i ∈ [m] in a random vote v sampled from D^T_GS is:

(1/2^j) · C(j−1, i−1) · 1[i ≤ j]  +  (1/2^j) · C(j−1, (i−1)−(m−j)) · 1[i > m−j],

where C(·, ·) denotes the binomial coefficient and 1[·] the indicator function.
Proof. The cases of flat and balanced trees are immediate, so we focus on caterpillar trees. Let T = CP(c_1, . . . , c_m) with internal nodes x_1, . . . , x_{m−1}, and consider a candidate c_j and a position i ∈ [m]. Let v be a random variable equal to a vote sampled from D^T_GS. We say that a node x_ℓ, ℓ ∈ [m−1], is reversed if the order of its children is reversed. Note that for ℓ < r it holds that c_r precedes c_ℓ in the frontier if and only if x_ℓ is reversed. Suppose that x_j is not reversed. Then v ranks c_j above each of c_{j+1}, . . . , c_m. This means that for c_j to be ranked exactly in position i, it must be that j ≥ i and exactly i − 1 nodes among x_1, . . . , x_{j−1} are not reversed. If j ≥ i, the probability that x_j is not reversed and exactly i − 1 nodes among x_1, . . . , x_{j−1} are not reversed is (1/2^j) · C(j−1, i−1). On the other hand, if x_j is reversed, then v ranks candidates c_{j+1}, . . . , c_m above c_j. As there are m − j of them, for c_j to be ranked exactly in position i it must hold that i > m − j and exactly (i − 1) − (m − j) nodes among x_1, . . . , x_{j−1} are not reversed. This happens with probability (1/2^j) · C(j−1, (i−1)−(m−j)).
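Both the sampling procedure and the formula of Theorem 3.1 take only a few lines of code; the sketch below (our illustration, with helper names of our choosing) makes the theorem easy to verify by Monte Carlo simulation.

import random
from math import comb

def sample_caterpillar_vote(m, rng=random):
    # Caterpillar tree CP(c_1, ..., c_m): node x_j has left child c_j and
    # right child x_{j+1} (x_{m-1} has children c_{m-1} and c_m). One fair
    # coin per internal node decides whether its children are reversed.
    frontier = [m]                     # deepest leaf, candidate c_m
    for j in range(m - 1, 0, -1):      # nodes x_{m-1}, ..., x_1
        if rng.random() < 0.5:
            frontier = [j] + frontier  # x_j not reversed: c_j comes first
        else:
            frontier = frontier + [j]  # x_j reversed: c_j comes last
    return frontier                    # vote, top position first

def caterpillar_prob(m, j, i):
    # Theorem 3.1: P[candidate c_j lands in position i] (1-based indices).
    p = 0.0
    if i <= j:
        p += comb(j - 1, i - 1) / 2 ** j
    if i > m - j:
        p += comb(j - 1, (i - 1) - (m - j)) / 2 ** j
    return p

# Empirical position frequencies over many samples converge to
# caterpillar_prob(m, j, i); each row and column of the matrix sums to 1.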
Regarding distributions D^T_GS not handled in Theorem 3.1, we can still compute their frequency matrices efficiently.

Theorem 3.2. There is an algorithm that, given a tree T, computes freq(D^T_GS) using polynomially many arithmetic operations with respect to the number of nodes in T.
3.3 From Caterpillars to Single-Peaked Preferences.
There is a relationship between GS/caterpillar votes and single-peaked ones, which will be very useful when computing one of the frequency matrices in the next section.

Theorem 3.3. Given a ranking v over C = {c_1, . . . , c_m}, let v̂ be another ranking over C such that, for each j ∈ [m], if c_j is ranked in position i in v, then c_i is ranked in position m − j + 1 in v̂. Suppose that v is in the support of D^T_GS, where T = CP(c_1, . . . , c_m). Then v̂ is single-peaked with respect to c_1 ◁ · · · ◁ c_m.

There are exactly 2^{m−1} votes in the support of D^T_GS (this follows by simple counting) and there are 2^{m−1} votes that are single-peaked with respect to c_1 ◁ · · · ◁ c_m. As u ≠ v implies û ≠ v̂, it follows that the mapping v ↦ v̂ is a bijection between all votes in the support of D^T_GS and all votes that are single-peaked with respect to c_1 ◁ · · · ◁ c_m.
3.4 Single-Peaked Elections
We consider two models of generating single-peaked elections, one due to Walsh (2015) and one due to Conitzer (2009). Let us fix a candidate set C = {c_1, . . . , c_m} and a societal axis c_1 ◁ · · · ◁ c_m. Under the Walsh distribution, denoted D^Wal_SP, each vote that is single-peaked with respect to ◁ has equal probability (namely, 1/2^{m−1}), and all other votes have probability zero. By Theorems 3.1 and 3.3, we immediately obtain the frequency matrix for the Walsh distribution (in short, it is the transposed matrix of the GS/caterpillar distribution).

Corollary 3.4. Consider a candidate set C = {c_1, . . . , c_m} and an axis c_1 ◁ · · · ◁ c_m. The probability that candidate c_j appears in position i in a vote sampled from D^Wal_SP is:

(1/2^{m−i+1}) · C(m−i, j−1) · 1[j ≤ m−i+1]  +  (1/2^{m−i+1}) · C(m−i, j−i) · 1[j > i−1].
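A direct implementation of Corollary 3.4 (our sketch; comb is the binomial coefficient):

from math import comb

def walsh_prob(m, j, i):
    # Probability that candidate c_j takes position i in a Walsh vote,
    # on the axis c_1 < ... < c_m, per Corollary 3.4 (1-based indices).
    p = 0.0
    if j <= m - i + 1:
        p += comb(m - i, j - 1) / 2 ** (m - i + 1)
    if j > i - 1:
        p += comb(m - i, j - i) / 2 ** (m - i + 1)
    return p

m = 8  # sanity check: for each position i the probabilities sum to 1
assert all(abs(sum(walsh_prob(m, j, i) for j in range(1, m + 1)) - 1) < 1e-12
           for i in range(1, m + 1))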
To sample a vote from the Conitzer distribution, D^Con_SP (also known as the random peak distribution), we pick some candidate c_j uniformly at random and rank him or her on top. Then we perform m − 1 iterations, in each of which we choose (uniformly at random) a candidate directly to the right or the left of the already selected ones, and place him or her in the highest available position in the vote.

Theorem 3.5. Let c_1 ◁ · · · ◁ c_m be the societal axis, where m is an even number, and let v be a random vote sampled from D^Con_SP for this axis. For j ∈ [m/2] and i ∈ [m] we have:

P[pos_v(c_j) = i] =
    2/(2m)            if i < j,
    (j + 1)/(2m)      if i = j,
    1/(2m)            if j < i < m − j + 1,
    (m − j + 1)/(2m)  if i = m − j + 1,
    0                 otherwise (i.e., if i + j > m + 1).

Further, for each candidate c_j ∈ C and each position i ∈ [m] we have P[pos_v(c_j) = i] = P[pos_v(c_{m−j+1}) = i].
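The random-peak process described before Theorem 3.5 translates directly into code; the following is our illustrative sketch.

import random

def sample_conitzer_vote(m, rng=random):
    # Pick a peak uniformly on the axis c_1 < ... < c_m, then repeatedly
    # extend the ranked interval one step left or right, uniformly at
    # random whenever both directions are available.
    left = right = rng.randrange(1, m + 1)   # the peak candidate
    vote = [left]
    while len(vote) < m:
        go_left = left > 1 and (right == m or rng.random() < 0.5)
        if go_left:
            left -= 1
            vote.append(left)
        else:
            right += 1
            vote.append(right)
    return vote  # vote, top position first

# Empirical frequencies from many samples match the piecewise formula of
# Theorem 3.5 (and its symmetry between c_j and c_{m-j+1}).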
3.5 Mallows Model
Finally, we consider the classic Mallows distribution. It has two parameters, a central vote v* over m candidates and a dispersion parameter φ ∈ [0, 1]. The probability of sampling a vote v from this distribution (denoted D^{v*,φ}_Mal) is:

D^{v*,φ}_Mal(v) = (1/Z) · φ^{κ(v, v*)},

where Z = 1 · (1 + φ) · (1 + φ + φ²) · · · (1 + φ + · · · + φ^{m−1}) is a normalizing constant and κ(v, v*) is the swap distance between v and v* (i.e., the number of swaps of adjacent candidates needed to transform v into v*). In our experiments, we consider a new parameterization, introduced by Boehmer et al. (2021b). It uses a normalized dispersion parameter norm-φ, which is converted to a value of φ such that the expected swap distance between the central vote v* and a sampled vote v is (norm-φ)/2 times the maximum swap distance between two votes (so, norm-φ = 1 is equivalent to IC, and for norm-φ = 0.5 we get elections that lie close to the middle of the UN–ID path).
Our goal is now to compute the frequency matrix of D^{v*,φ}_Mal. That is, given the candidate ranked in position j in the central vote, we want to compute the probability that he or she appears in a given position i ∈ [m] in the sampled vote. Given a positive integer m, consider the candidate set C(m) = {c_1, . . . , c_m} and the central vote v*_m : c_1 ≻ · · · ≻ c_m. Fix a candidate c_j ∈ C(m) and a position i ∈ [m]. For every integer k between 0 and m(m−1)/2, let S(m, k) be the number of votes in L(C(m)) that are at swap distance k from v*_m, and define T(m, k, j, i) to be the number of such votes that have c_j in position i. One can compute S(m, k) in time polynomial in m (OEIS Foundation Inc., 2020); using S(m, k), we show that the same holds for T(m, k, j, i).
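The numbers S(m, k) are the classical Mahonian numbers (permutations of [m] with exactly k inversions), and the standard insertion recurrence gives a simple polynomial-time dynamic program. The sketch below is our illustration of that recurrence; it does not implement the algorithm for T(m, k, j, i) from Lemma 3.6.

def mahonian(m):
    # Returns [S(m, 0), ..., S(m, m(m-1)/2)]: the number of votes at swap
    # distance exactly k from a fixed vote over m candidates.
    S = [1]                                    # base case m = 1
    for size in range(2, m + 1):
        # Inserting the new largest item creates between 0 and size - 1
        # additional inversions, each choice contributing S(size - 1, .).
        new = [0] * (len(S) + size - 1)
        for k in range(len(new)):
            new[k] = sum(S[k - t] for t in range(size) if 0 <= k - t < len(S))
        S = new
    return S

assert mahonian(3) == [1, 2, 2, 1] and sum(mahonian(4)) == 24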
Lemma 3.6. There is an algorithm that computes T (m, k, j, i) in polynomial time with respect to m.
We can now express the probability that the candidate ranked in position j in the central vote v* ends up in position i in a vote sampled from D^{v*,φ}_Mal as:

f_m(φ, j, i) = (1/Z) · Σ_{k=0}^{m(m−1)/2} T(m, k, j, i) · φ^k.    (1)

The correctness follows from the definitions of T and D^{v*,φ}_Mal. By Lemma 3.6, we have the following.

Theorem 3.7. There exists an algorithm that, given a number m of candidates, a vote v*, and a parameter φ, computes the frequency matrix of D^{v*,φ}_Mal using polynomially many operations in m.
Note that Equation (1) only depends on φ, j and i (and, naturally, on m). Using this fact, we can also compute frequency matrices for several variants of the Mallows distribution.
Remark 3.8. Given a vote v, two dispersion parameters φ and ψ, and a probability p ∈ [0, 1], we define the distribution p-D^{v,φ,ψ}_Mal as p · D^{v,φ}_Mal + (1 − p) · D^{rev(v),ψ}_Mal; i.e., with probability p we sample a vote from D^{v,φ}_Mal and with probability 1 − p we sample a vote from D^{rev(v),ψ}_Mal. The probability that candidate c_j appears in position i in the resulting vote is p · f_m(φ, j, i) + (1 − p) · f_m(ψ, m − j + 1, i).

Remark 3.9. Consider a candidate set C = {c_1, . . . , c_m}. Given a vote distribution D over L(C) and a parameter φ, define a new distribution D′ as follows: draw a vote v̂ according to D and then output a vote v sampled from D^{v̂,φ}_Mal; indeed, such models are quite natural, see, e.g., the work of Kenig & Kimelfeld (2019). For each t ∈ [m], let g(j, t) be the probability that c_j appears in position t in a vote sampled from D. The probability that c_j appears in position i ∈ [m] in a vote sampled from D′ is Σ_{t=1}^{m} g(j, t) · f_m(φ, t, i). In terms of matrix multiplication, this means that freq(D′) = freq(D^{v*,φ}_Mal) · freq(D), where v* is c_1 ≻ · · · ≻ c_m. We write φ-Conitzer (φ-Walsh) to refer to this model where we use the Conitzer (Walsh) distribution as the underlying one and normalized dispersion parameter φ.
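In code, Remark 3.9 is a single matrix product once both frequency matrices are available (e.g., from Theorem 3.7 and Theorem 3.5); the sketch below is ours.

import numpy as np

def mallows_filtered_matrix(mallows_matrix, base_matrix):
    # freq(D') = freq(Mallows) @ freq(D); in both inputs, rows index
    # positions and columns index candidates.
    return np.asarray(mallows_matrix) @ np.asarray(base_matrix)

# Sanity check: since frequency matrices are bistochastic, filtering the
# impartial-culture matrix UN leaves it unchanged (every entry stays 1/m).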
4 Skeleton Map
Our goal in this section is to form what we call a skeleton map of vote distributions (skeleton map, for short), evaluate its quality and robustness, and compare it to the map of Boehmer et al. (2021b). Throughout this section, whenever we speak of a distance between elections or matrices, we mean the positionwise distance (occasionally we will also refer to the Euclidean distances on our maps, but we will always make this explicit). Let Φ = {0, 0.05, 0.1, . . . , 1} be a set of normalized dispersion parameters that we will be using for Mallows-based distributions in this section.
We form the skeleton map following the general approach of Szufa et al. (2020) and Boehmer et al. (2021b). For a given number of candidates, we consider the four compass matrices (UN, ID, AN, ST) and paths between each matrix pair consisting of their convex combinations (gray dots), the frequency matrices of the Mallows distribution with normalized dispersion parameters from Φ (blue triangles), and the frequency matrices of the Conitzer (CON), Walsh (WAL), and GS/caterpillar (CAT) distributions. Moreover, we add the frequency matrices of the following vote distributions (we again use the dispersion parameters from Φ): (i) the distribution 1/2-D^{v,φ,φ}_Mal as defined in Remark 3.8 (red triangles), (ii) the distribution where with equal probability we mix the standard Mallows distribution and 1/2-D^{v,φ,φ}_Mal (green triangles), and (iii) the φ-Conitzer and φ-Walsh distributions as defined in Remark 3.9 (magenta and orange crosses). For each pair of these matrices we compute their positionwise distance. Then we find an embedding of the matrices into a 2D plane, so that each matrix is a point and the Euclidean distances between these points are as similar to the positionwise distances as possible (we use the MDS algorithm, as implemented in the Python sklearn.manifold.MDS package). In Figure 3 we show our map for the case of 10 candidates (the lines between some points/matrices show their positionwise distances; to maintain clarity, we only provide some of them).

Figure 3: The skeleton map with 10 candidates. We have MID = 1/2·AN + 1/2·ID. Each point labeled with a number is a real-world election as described in Section 5.

Figure 4: In the top-right part, we show the normalized positionwise distances. In the bottom-left one, we show the embedding misrepresentation ratios.
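The embedding step can be sketched as follows (our illustration, reusing positionwise_distance from the Section 2 sketch; the MDS call mirrors the package named in the text):

import numpy as np
from sklearn.manifold import MDS

def skeleton_map(matrices, names):
    # Pairwise positionwise distances -> 2D points for the skeleton map.
    n = len(matrices)
    D = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            D[a, b] = D[b, a] = positionwise_distance(matrices[a], matrices[b])
    points = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    return dict(zip(names, points))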
We now verify the credibility of the skeleton map. As the map does not have many points, we expect its embedding to truly reflect the positionwise distances between the matrices. This, indeed, seems to be the case, although some distances are represented (much) more accurately than others. In Figure 4 we provide the following data for a number of matrices (for m = 10; matrix M2W is the Mallows matrix in our data set that is closest to the Walsh matrix). In the top-right part (the white-orange area), we give positionwise distances between the matrices, and in the bottom-left part (the blue area), for each pair of matrices X and Y we report the misrepresentation ratio Euc(X, Y)/nPOS(X, Y), where Euc(X, Y) is the Euclidean distance between X and Y in the embedding, normalized by the Euclidean distance between ID and UN. The closer these ratios are to 1, the more accurate the embedding is. The misrepresentation ratios are typically between 0.8 and 1.15, with many of them between 0.9 and 1.05. Thus, in most cases, the map is quite accurate and offers good intuition about the relations between the matrices. Yet, some distances are represented particularly badly. As an extreme example, the Euclidean distance between the Walsh matrix and the closest Mallows matrix, M2W, is off by almost a factor of 8 (these matrices are close, but not as close as the map suggests). Thus, while one always has to verify claims suggested by the skeleton map, we view it as quite credible. This conclusion is particularly valuable when we compare the skeleton map and the map of Boehmer et al. (2021b), shown in Figure 2. The two maps are similar, and analogous points (mostly) appear in analogous positions. Perhaps the biggest difference is the location of the Conitzer matrix on the skeleton map and of the Conitzer elections in the map of Boehmer et al., but even this difference is not huge. We remark that the Conitzer matrix is closer to UN and AN than to ID and ST, whereas for the Walsh matrix the opposite is true. Boehmer et al. (2021b) make a similar observation; our results allow us to make this claim formal. In Appendix E, we analyze the robustness of the skeleton map with respect to varying the number of candidates. We find that, except for pairs including the Walsh or GS/caterpillar matrices, which "travel" on the map as the number of candidates increases, the distance between each pair of matrices in the skeleton map stays nearly constant.
5 Learning Vote Distributions
We demonstrate how the positionwise distance and frequency matrices can be used to fit vote distributions to given real-world elections. Specifically, we consider the Mallows model (D^{v,φ}_Mal) and the φ-Conitzer and φ-Walsh models. Naturally, we could use more distributions, but we focus on showcasing the technique and the general unified approach. Among other results, we verify that for the Mallows model our approach is strongly correlated with existing maximum-likelihood approaches. Moreover, unlike in previous works, we compare the capabilities of different distributions to fit the given elections. We remark that if we do not have an algorithm for computing the frequency matrix of a given vote distribution, we can obtain an approximate matrix by sampling sufficiently many votes from this distribution. In principle, it is also possible to deal with distributions
over elections that do not correspond to vote distributions and hence are not captured by expected frequency matrices (as is the case, e.g., for the Euclidean models where candidates do not have fixed positions; see the work of Szufa et al. (2020) for examples of such models in the context of the map of elections): If we want to compute the distance of such a distribution, we sample sufficiently many elections and compute their average distance from the input one. However, it remains unclear how robust this approach is.
Approach. To fit our vote distributions to a given election, we compute the election's distance to the frequency matrices of D^{v,φ}_Mal, φ-Conitzer, and φ-Walsh, for φ ∈ {0, 0.001, . . . , 1}. We select the distribution corresponding to the closest matrix.
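In code, this fitting step is a grid search; the sketch below is our illustration, where model_matrix_fn is a hypothetical callable returning a model's frequency matrix for a given normalized dispersion parameter (and positionwise_distance is the Section 2 sketch).

import numpy as np

def fit_dispersion(election_matrix, model_matrix_fn,
                   grid=np.arange(0, 1.001, 0.001)):
    # Return the dispersion parameter phi whose model frequency matrix is
    # positionwise-closest to the given election, plus that distance.
    best_phi, best_dist = None, float("inf")
    for phi in grid:
        d = positionwise_distance(election_matrix, model_matrix_fn(phi))
        if d < best_dist:
            best_phi, best_dist = float(phi), d
    return best_phi, best_dist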
Data. We consider elections from the real-world datasets used by Boehmer et al. (2021b). They generated 15 elections with 10 candidates and 100 voters (with strict preferences) from each of eleven different real-world election datasets (so, altogether, they generated 165 elections, most of them from PrefLib (Mattei & Walsh, 2013)). They used four datasets of political elections (from North Dublin (Irish), various non-profit and professional organizations (ERS), and city council elections from Glasgow and Aspen), four datasets of sport-based elections (from Tour de France (TDF), Giro d'Italia (GDI), speed skating, and figure skating), and three datasets with survey-based elections (from preferences over T-shirt designs, sushi, and cities). We present the results of our analysis for seven illustrative and particularly interesting elections in Table 1 and also include them in our skeleton map from Figure 3.
Basic Test. There is a standard maximum-likelihood estimator (MLE; based on Kemeny voting (Mandhani & Meilǎ, 2009)) that given an election provides the most likely dispersion parameter of the Mallows distribution that might have generated this election. To test our approach, we compared the parameters provided by our approach and by the MLE for our 165 elections and found them to be highly correlated (with Pearson correlation coefficient around 0.97). In particular, the average absolute difference between the dispersion parameter calculated by our approach and the MLE is only 0.02. See Appendix F for details.
Fitting Real-World Elections. Next, we consider the capabilities of D^{v,φ}_Mal, φ-Conitzer, and φ-Walsh to fit the real-world elections of Boehmer et al. (2021b). Overall, we find that these three vote distributions have some ability to capture the considered elections, but it certainly is not perfect. Indeed, the average normalized distance of these elections to the frequency matrix of the closest distribution is 0.14. To illustrate that some distance is to be expected here, we mention that the average distance of an election sampled from impartial culture (D_IC, with 10 candidates and 100 voters) to the distribution's expected frequency matrix is 0.09 (see Appendix E.4 for a discussion of this and of how it may serve as an estimator for the "variance of a distribution"). There are also some elections that are not captured by any of the considered distributions to an acceptable degree; examples of this are elections nr. 1 and nr. 2, which are at distance at least 0.32 and 0.25, respectively, from all our distributions. Remarkably, while coming from the same dataset, elections nr. 1 and nr. 2 are still quite different from each other and, accordingly, the computed dispersion parameter is also quite different. It remains a challenge to find distributions capturing such elections.
Comparing the power of the three considered models, nearly all of our elections are best captured by the Mallows model rather than by φ-Conitzer or φ-Walsh. There are only twenty elections that are closer to φ-Walsh or φ-Conitzer than to a Mallows model (election nr. 3 is the most extreme example), and, unsurprisingly, both φ-Walsh and φ-Conitzer perform particularly badly at capturing elections close to ID (see election nr. 4). That is, φ-Conitzer and φ-Walsh are not needed to ensure good coverage of the space of elections; the average normalized distance of our elections to the closest Mallows model is only 0.0007 higher than their distance to the closest distribution (elections nr. 3–6 are examples of elections that are well captured by the Mallows model and distributed over the entire map).2 Nevertheless, φ-Walsh is also surprisingly powerful, as the average normalized distance of our elections to the closest φ-Walsh distribution is only 0.03 higher than their distance to the closest distribution (however, this might also be due to the fact that most of the considered real-world elections fall into the same area of the map, which φ-Walsh happens to capture particularly well (Boehmer et al., 2021b)). φ-Conitzer performs considerably worse: there are only three elections for which it produces a (slightly) better result than φ-Walsh.
Moreover, our results also emphasize the complex nature of the space of elections: Election nr. 7 is very close to D^{v,0.95}_Mal, hinting that its votes are quite chaotic. At the same time, this election is very close to the 0.63-Conitzer and 0.69-Walsh distributions, which suggests at least a certain level of structure among its votes (because votes from the Conitzer and Walsh distributions are very structured, and the Mallows filter with dispersion between 0.63 and 0.69 does not destroy this structure fully). However, as witnessed by the fact that the frequency matrix of GS/balanced (which is highly structured) is UN, such phenomena can happen. Lastly, note that most of our datasets are quite "homogeneous", in that the closest distributions for elections from the same dataset are similar and also at a similar distance. However, there are also clear exceptions, for instance, elections nr. 1 and nr. 4 from the figure skating dataset. Moreover, there are two elections from the speed skating dataset where one election is captured best by D^{v,0.76}_Mal and the other by D^{v,0.32}_Mal.
6 Summary
We have computed the frequency matrices (Szufa et al., 2020; Boehmer et al., 2021b) of several well-known distributions of votes. Using them, we have drawn a “skeleton map”, which shows how these distributions relate to each other, and we have analyzed its properties. Moreover, we have demonstrated how our results can be used to fit vote distributions to capture real-world elections.
For future work, it would be interesting to compute the frequency matrices of further popular vote distributions, such as the Plackett–Luce model (we conjecture that its frequency matrix is computable in polynomial time). It would also be interesting to use our approach to fit more complex models, such as mixtures of Mallows models, to real-world elections. Further, it may be interesting to use expected frequency matrices to reason about the asymptotic behavior of our models. For example, it might be possible to formally show where, in the limit, the matrices of our models end up on the map as we increase the number of candidates.
Acknowledgments
NB was supported by the DFG project MaMu (NI 369/19) and by the DFG project ComSoc-MPMS (NI 369/22). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854).
2 For each election, we also computed the closest frequency matrix of a mixture of two Mallows models with reversed central votes, p-D^{v,φ,ψ}_Mal, using our approach. However, this only decreased the average minimum distance by around 0.02, with the probability p of flipping the central vote being (close to) zero for most elections. | 1. What is the focus and contribution of the paper regarding "map of elections" and "skeleton map"?
2. What are the strengths of the proposed approach, particularly in terms of simplifying the map while preserving key features?
3. What are the weaknesses of the paper, especially regarding the lack of experiments showing its superiority over previous works?
4. Do you have any concerns about the representation of vote distributions by frequency matrices compared to previous methods?
5. What are the limitations of the paper, such as the potential applicability of the frequency matrix calculation beyond the context of the paper? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
Previously, Szufa et al. (2020) and Boehmer et al. (2021) proposed the "map of elections" both as a visualization tool to group similar elections and as a demonstration that elections close to each other on the map may have similar features. This paper introduces the "skeleton map", which is a new type of "map of elections". The main difference is that in the skeleton map, each point is a distribution instead of a single election. This change simplifies the map while preserving the key features. In order to represent each election / vote distribution as a point, this paper studies a number of prominent vote distributions and either gives an analytical formula for the frequency matrix or provides a polynomial-time algorithm to calculate it. The authors ran experiments to see how the distances between different vote distributions change when the number of candidates changes. Finally, they use real-life election datasets to verify that the vote distributions they consider (Mallows, Conitzer, Walsh with different parameters) can fit some real vote datasets well, but still have their limits.
Strengths And Weaknesses
I have previously reviewed another version of this paper.
Originality: This work builds on two previous works: Szufa et al. (2020), which proposed the "map of elections", and Boehmer et al. (2021), which defined compass matrices and the normalized positionwise distance. The authors did thorough work providing algorithms to compute the frequency matrices for many vote distributions. The "skeleton map" is a nice simplification that keeps only one point for each distribution instead of one point for each election.
Quality: The paper is well organized. Concepts and algorithms are clearly defined, proved, and elaborated. Experiments and analysis of results are well presented. Source code is provided. My concern is that I did not find experiments showing this new method performs better than previous work.
Clarity: This paper is well written and organized.
Significance: My main concern is whether representing vote distributions by frequency matrices is better (in the sense that it fits better with real-life datasets) than previous work that represents vote distributions by sampling elections from them.
Questions
On line 20: "However, performing high-quality experiments requires the ability to organize and understand the available data." Could you elaborate on this? How do the "map of elections" and "skeleton map" help experiments?
For the frequency matrix calculation, I wonder if the results could be used in general beyond the context of this paper?
Do you know whether your method in this paper is better than previous work? See my comments in “Strengths And Weaknesses”.
Limitations
It would be nice if the frequency matrix calculation could be used beyond the context in this paper.
Again, my main concern is how this paper compares with previous papers on fitting real life datasets. |
NIPS | Title
Order-Invariant Cardinality Estimators Are Differentially Private
Abstract
We consider privacy in the context of streaming algorithms for cardinality estimation. We show that a large class of algorithms all satisfy ε-differential privacy, so long as (a) the algorithm is combined with a simple down-sampling procedure, and (b) the input stream cardinality is Ω(k/ε). Here, k is a certain parameter of the sketch that is always at most the sketch size in bits, but is typically much smaller. We also show that, even with no modification, algorithms in our class satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality. Our analysis applies to essentially all popular cardinality estimation algorithms, and substantially generalizes and tightens privacy bounds from earlier works. Our approach is faster and exhibits a better utility-space tradeoff than prior art.
1 Introduction
Cardinality estimation, or the distinct counting problem, is a fundamental data analysis task. Typical applications are found in network traffic monitoring [9], query optimization [20], and counting unique search engine queries [14]. A key challenge is to perform this estimation in small space while processing each data item quickly. Typical approaches for solving this problem at scale involve data sketches such as the Flajolet-Martin (FM85) sketch [12], HyperLogLog (HLL) [11], and Bottom-k [2, 6, 3]. All of these provide approximate cardinality estimates while using bounded space.
While research has historically focused on the accuracy, speed, and space usage of these sketches, recent work examines their privacy guarantees. These privacy-preserving properties have grown in importance as companies have built tools that can grant an appropriate level of privacy to different people and scenarios. The tools aid in satisfying users’ demand for better data stewardship, while also ensuring compliance with regulatory requirements.
We show that all cardinality estimators in a class of hash-based, order-invariant sketches with bounded size are ε-differentially private (DP) so long as the algorithm is combined with a simple down-sampling procedure and the true cardinality satisfies a mild lower bound. This lower bound requirement can be guaranteed to hold by inserting sufficiently many "phantom elements" into the stream when initializing the sketch. We also show that, even with no modification, algorithms in our class satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality.
Our novel analysis has significant benefits. First, prior works on differentially private cardinality estimation have analyzed only specific sketches [23, 25, 5, 22]. Moreover, many of the sketches analyzed (e.g., [23, 22]), while reminiscent of sketches used in practice, in fact differ from practical
sketches in important ways. For example, Smith et al. [22] analyze a variant of HLL that Section 4 shows has an update time that can be k times slower than an HLL sketch with k buckets.
While our analysis covers an entire class of sketches at once, our error analysis improves upon prior work in many cases when specialized to specific sketches. For example, our analysis yields tighter privacy bounds for HLL than the one given in [5], yielding both an ε-DP guarantee (rather than an (ε, δ)-DP guarantee) and tighter bounds on the failure probability δ—see Section 4 for details. Crucially, the class of sketches we analyze captures many (in fact, almost all to our knowledge) of the sketches that are actually used in practice. This means that existing systems can be used in contexts requiring privacy, either without modification if streams are guaranteed to satisfy the mild cardinality lower bound we require, or with the simple pre-processing step described above if such cardinality lower bounds may not hold. Thus, existing data infrastructure can be easily modified to provide DP guarantees, and in fact existing sketches can be easily migrated to DP summaries.
1.1 Related work
One perspective is that cardinality estimators cannot simultaneously preserve privacy and offer good utility [7]. However, this impossibility result applies only when an adversary can create and merge an arbitrary number of sketches, effectively observing an item's value many times. It does not address the privacy of one sketch itself.
Other works have studied more realistic models where either the hashes are public, but private noise is added to the sketch [23, 17, 25], or the hashes are secret [5] (i.e., not known to the adversary who is trying to "break" privacy). This latter setting turns out to permit less noisy cardinality estimates. Past works study specific sketches or variants of a sketch. For example, Smith et al. [22] show that an HLL-type sketch is ε-DP, while [25] modifies the FM85 sketch using coordinated sampling, which is also based on a private hash. Variants of both models are analyzed by Choi et al. [5], who show (amongst other contributions) a similar result to [22], establishing that an FM85-type sketch is differentially private. Like these prior works, we focus on the setting where the hash functions are kept secret from the adversary. A related problem of differentially private estimation of cardinalities under set operations is studied by [18], but they assume the inputs to each sketch are already de-duplicated.
There is one standard caveat: following prior works [22, 5] our privacy analysis assumes a perfectly random hash function. One can remove this assumption both in theory and practice by using a cryptographic hash function. This will yield a sketch that satisfies either a computational variant of differential privacy called SIM-CDP, or standard information-theoretic notions of differential privacy under the assumption that the hash function fools space-bounded computations [22, Section 2.3].
Other works also consider the privacy-preserving properties of common L_p functions over data streams. For p = 2, these include fast dimensionality reduction [4, 24] and least-squares regression [21]. Meanwhile, for 0 < p ≤ 1, frequency-moment estimation has also been studied [26]. Our focus is solely the cardinality estimation problem, i.e., the case p = 0.
1.2 Preliminaries
More formally, we consider the following problem.
Problem Definition. Let D = {x_1, . . . , x_n} denote a stream of samples with each identifier x_i coming from a large universe U, e.g., of size 2^64. The objective is to estimate the cardinality, or number of distinct identifiers, of D using an algorithm S which is given privacy parameters ε, δ ≥ 0 and a space bound b, measured in bits.

Definition 1.1 (Differential Privacy [8]). A randomized algorithm S is (ε, δ)-differentially private ((ε, δ)-DP for short, or pure ε-DP if δ = 0) if for any pair of data sets D, D′ that differ in one record and for all sets of outcomes Z in the range of S,

Pr(S(D′) ∈ Z) ≤ e^ε · Pr(S(D) ∈ Z) + δ,

with the probability taken over the internal randomness of the algorithm S.
Rather than analyzing any specific sketching algorithm, we analyze a natural class of randomized distinct counting sketches. Algorithms in this class operate in the following manner: each time a new stream item i arrives, i is hashed using some uniform random hash function h, and then h(i) is used to update the sketch, i.e., the update procedure depends only on h(i), and is otherwise independent of
i. Our analysis applies to any such algorithm that depends only on the set of observed hash values. Equivalently, the sketch state is invariant both to the order in which stream items arrive, and to item duplication.² We call this class of algorithms hash-based, order-invariant cardinality estimators. Note that for any hash-based, order-invariant cardinality estimator, the distribution of the sketch depends only on the cardinality of the stream. All distinct counting sketches of which we are aware that are invariant to permutations of the input data are included in this class. This includes FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL, as shown in Section 4.

Definition 1.2 (Hash-Based, Order-Invariant Cardinality Estimators). Any sketching algorithm that depends only on the set of hash values of stream items under a uniform random hash function is a hash-based, order-invariant cardinality estimator. We denote this class of algorithms by C.
We denote a sketching algorithm with internal randomness r by Sr (for hash-based algorithms, r specifies the random hash function used). The algorithm takes a data set D and generates a data structure Sr(D) that is used to estimate the cardinality. We refer to this structure as the state of the sketch, or simply the sketch, and the values it can take by s ∈ Ω. Sketches are first initialized and then items are inserted into the sketch with an add operation that may or may not change the sketch state.
The size of the sketch is a crucial constraint, and we denote the space consumption in bits by b. For example, FM85 consists of k bitmaps of length ℓ. Thus, its state s ∈ Ω = {0, 1}^{k×ℓ}. Typically, ℓ = 32, so that b = 32k. Further examples are given in Section 4. Our goal is to prove such sketches are differentially private.
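To make the abstraction concrete, the following is a minimal Python sketch of one member of this class, a Bottom-k estimator (analyzed further in Section 4). This rendering is ours: the SHA-256-based hash stands in for the perfectly random hash function assumed by our analysis, and the class and variable names are illustrative rather than taken from any particular implementation.

import hashlib

class BottomK:
    """A hash-based, order-invariant cardinality estimator: the state is
    the set of the k smallest observed hash values, so it is invariant to
    stream order and to item duplication."""

    def __init__(self, k, seed):
        self.k = k
        self.seed = seed       # internal randomness r (kept secret)
        self.mins = []         # sketch state s: k smallest hashes, sorted

    def _hash(self, x):
        # Stand-in for a uniform random hash h : U -> [0, 1).
        d = hashlib.sha256(f"{self.seed}:{x}".encode()).digest()
        return int.from_bytes(d[:8], "big") / 2**64

    def add(self, x):
        h = self._hash(x)
        if h in self.mins:     # duplicates never change the state
            return
        if len(self.mins) < self.k or h < self.mins[-1]:
            self.mins = sorted(self.mins + [h])[: self.k]

    def estimate(self):
        if len(self.mins) < self.k:
            return float(len(self.mins))   # small-cardinality regime: exact up to collisions
        return (self.k - 1) / self.mins[-1]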
2 Hash-Based Order-Invariant Estimators are Private
The distribution of any hash-based, order-invariant cardinality estimator depends only on the cardinality of the input stream, so without loss of generality we assume the input is D = {1, . . . , n}. Denote the set D \ {i} by D_−i for i ∈ D and a sketching algorithm with internal randomness r by S_r(D). By definition, for an ε-differential privacy guarantee, we must show that the Bayes factor comparing the hypothesis i ∈ D versus i ∉ D is appropriately bounded:

e^{−ε} < Pr_r(S_r(D) = s) / Pr_r(S_r(D_−i) = s) < e^{ε}  for all s ∈ Ω, i ∈ D. (1)
Overview of privacy results. The main result in our analysis bounds the privacy loss of a hash-based, order-invariant sketch in terms of just two sketch-specific quantities. Both quantities intuitively capture how sensitive the sketch is to the removal or insertion of a single item from the data stream.
The first quantity is a bound k_max on the number of items that would change the sketch if removed from the stream. Denote the items whose removal from the data set changes the sketch by

𝒦_r := {i ∈ D : S_r(D_−i) ≠ S_r(D)}. (2)

Denote its cardinality by K_r := |𝒦_r| and the upper bound by k_max = sup_r K_r. The second quantity is a bound on a "sampling" probability. Let π(s) be the probability that a newly inserted item would change a sketch in state s,
π(s) := Pr_r(S_r(D) ≠ S_r(D_−i) | S_r(D_−i) = s). (3)
Although a sketch generally does not store explicit samples, conceptually, it can be helpful to think of π(s) as the probability that an as-yet-unseen item i gets “sampled” by a sketch in state s. We upper bound π* := sup_{s∈Ω} π(s) to limit the influence of items added to the stream.
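For a concrete illustration with the Bottom-k sketch of Section 4 (this instantiation is our exposition, though it matches the estimator discussed there): if the state s stores the k smallest hash values v_1 ≤ · · · ≤ v_k, then a new item changes the sketch exactly when its hash falls below v_k, so, ignoring hash collisions, π(s) = v_k. The standard estimator N̂(s) = (k − 1)/π(s) is then a direct function of this sampling probability, and K_r is at most k since only the items owning the k stored hashes can change the sketch when removed.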
The main sub-result in our analysis (Theorem 2.4) roughly states that the sketch is ε-DP so long as (a) the sampling probability π* < 1 − e^{−ε} is small enough, and (b) the stream cardinality n > k_max/(1 − e^{−ε}) = Θ(k_max/ε) is large enough.
We show Property (a) is a necessary condition for any ε-DP algorithm if the algorithm works over data universes of unbounded size. Unfortunately, Property (a) does not directly hold for natural sketching algorithms. But we show (Section 2.2) that, by applying a simple down-sampling procedure, any hash-based, order-invariant algorithm can be modified to satisfy (a).

²A sketch is duplication-invariant if and only if its state when run on any stream σ is identical to its state when run on the stream σ′ in which all elements of the stream σ appear exactly once.
Furthermore, Section 4 shows common sketches satisfy Property (a) with high probability, thus providing (ε, δ)-DP guarantees for sufficiently large cardinalities. Compared to [5], these guarantees are tighter, more precise, and more general, as they establish that the failure probability δ decays exponentially with n, provide explicit formulas for δ, and apply to a range of sketches rather than just HLL.
Overview of the analysis. The definition of ε-DP requires bounding the Bayes factor in Equation (1). The challenge is that the numerator and denominator may not be easy to compute by themselves. However, the ratio is similar in form to a conditional probability involving only one insertion. Our main trick re-expresses this Bayes factor as a sum of conditional probabilities involving a single insertion. Since the denominator Pr_r(S_r(D_−i) = s) involves a specific item i which may change the sketch, we instead consider the smallest item J_r whose removal does not change the sketch. This allows us to re-express the numerator in terms of a conditional probability Pr_r(S_r(D) = s ∧ J_r = j) = Pr_r(J_r = j | S_r(D_−j) = s) Pr_r(S_r(D_−j) = s) involving only a single insertion plus a nuisance term Pr_r(S_r(D_−j) = s). The symmetry of items gives that the nuisance term equals the denominator, Pr_r(S_r(D_−j) = s) = Pr_r(S_r(D_−i) = s), thus allowing us to eliminate it.

Lemma 2.1. Suppose n > sup_r K_r. Then Pr_r(K_r = n) = 0, and
Pr_r(S_r(D) = s) / Pr_r(S_r(D_−i) = s) = Σ_{j∈D} Pr_r(J_r = j | S_r(D_−j) = s). (4)
By further conditioning on the total number of items that, when removed, can change the sketch, we obtain conditional probabilities that are simple to calculate. A combinatorial argument simplifies the resulting expression and gives us two factors in Lemma 2.2, one involving the sampling probability for new items π(s) given a sketch in state s and the other being an expectation involving Kr. This identifies the two quantities that must be controlled in order for a sketch to be -DP.
Lemma 2.2. Under the same assumptions as Lemma 2.1,

Σ_{j∈D} Pr_r(J_r = j | S_r(D_−j) = s) = (1 − π(s)) · E_r[ 1 + K_r/(n − K_r + 1) | S_r(D_−1) = s ]. (5)
To show that all hash-based, order-invariant sketching algorithms can be made ε-DP, we show that K_r can always be bounded by the maximum size of the sketch in bits. Thus, if a sketch is combined with a downsampling procedure to ensure π(s) is sufficiently small, one satisfies both of the properties that are sufficient for an ε-DP guarantee.

Having established (5), we can derive a result showing that a hash-based, order-invariant sketch is ε-DP so long as the stream cardinality is large enough and sup_{s∈Ω} π(s) is not too close to 1.
Corollary 2.3. Let Ω denote the set of all possible states of a hash-based, order-invariant distinct counting sketching algorithm. When run on a stream of cardinality n > sup_r K_r, the sketch output by the algorithm satisfies ε-DP if

π_0 := 1 − e^{−ε} > sup_{s∈Ω} π(s), and (6)

e^{ε} > 1 + E_r[ K_r/(n − K_r + 1) | S_r(D_−1) = s ] for all sketch states s ∈ Ω. (7)

Furthermore, if the data stream D consists of items from a universe U of unbounded size, Condition (6) is necessarily satisfied by any sketching algorithm satisfying ε-DP.
The above corollary may be difficult to apply directly since the expectation in Condition (7) is often difficult to compute and depends on the unknown cardinality n. Our main result provides sufficient criteria to ensure that Condition (7) holds. The criteria are expressed in terms of a minimum cardinality n_0 and a sketch-dependent constant k_max. This constant k_max is a bound on the maximum number of items which change the sketch when removed. That is, for all input streams D and all r, k_max ≥ |𝒦_r|. We derive k_max for a number of popular sketch algorithms in Section 4.
Theorem 2.4. Consider any hash-based, order-invariant distinct counting sketch. The sketch output by the algorithm satisfies an ε-DP guarantee if

sup_{s∈Ω} π(s) < π_0 := 1 − e^{−ε} (8)

and there are strictly greater than n_0 := k_max/(1 − e^{−ε}) unique items in the stream. (9)
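As a worked instance of Theorem 2.4 (the parameter values are illustrative; ε = ln 2 is the budget used in our experiments, and k_max = k = 1024 matches, e.g., a Bottom-k or HLL sketch with 1024 bins, per Section 4): π_0 = 1 − e^{−ln 2} = 1/2, so the theorem requires sup_{s∈Ω} π(s) < 1/2 and a stream with strictly more than n_0 = 1024/(1/2) = 2048 distinct items.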
Later, we explain how to modify existing sketching algorithms in a black-box way to satisfy these conditions. If left unmodified, most sketching algorithms used in practice allow for some sketch values s ∈ Ω which violate Condition (8), i.e., π(s) > 1 − e^{−ε}. We call such sketch values “privacy-violating”. Fortunately, such values turn out to arise with only tiny probability. The next theorem states that, so long as this probability is smaller than δ, the sketch satisfies (ε, δ)-DP without modification. The proof of Theorem 2.5 follows immediately from Theorem 2.4.

Theorem 2.5. Let n_0 be as in Theorem 2.4. Given a hash-based, order-invariant distinct counting sketch with bounded size, let Ω′ be the set of sketch states s such that π(s) ≥ π_0. If the input stream D has cardinality n > n_0, then the sketch is (ε, δ)-differentially private where δ = Pr_r(S_r(D) ∈ Ω′).
2.1 Constructing Sketches Satisfying Approximate Differential Privacy: Algorithm 1a
Theorem 2.5 states that, when run on a stream with n ≥ n_0 distinct items, any hash-based, order-invariant algorithm (see Algorithm 1a) automatically satisfies (ε, δ)-differential privacy, where δ denotes the probability that the final sketch state s is “privacy-violating”, i.e., π(s) > π_0 = 1 − e^{−ε}. In Section 4, we provide concrete bounds on δ for specific algorithms. In all cases considered, δ falls exponentially with respect to the cardinality n. Thus, high privacy is achieved with high probability so long as the stream is large.
We now outline how to derive a bound for a specific sketch. We can prove the desired bound on δ by analyzing sketches in a manner similar to the coupon collector problem. Assuming a perfect, random hash function, the hash values of a universe of items define a probability space. We can identify v ≤ k_max events or coupons, C_1, . . . , C_v, such that π(s) is guaranteed to be less than π_0 after all events have occurred. Thus, if all coupons are collected, the sketch satisfies the requirement to be ε-DP. As the cardinality n grows, the probability that a particular coupon remains missing decreases exponentially. A simple union bound shows that the probability δ that any coupon is missing decreases exponentially with n.
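Written out, the union bound takes the following form, where p_j denotes the probability (notation introduced here for exposition) that a single fresh distinct item triggers coupon C_j, and the last step uses (1 − p)^n ≤ e^{−pn}:

δ ≤ Σ_{j=1}^{v} Pr(C_j has not occurred after n distinct items) = Σ_{j=1}^{v} (1 − p_j)^n ≤ v · exp(−n · min_j p_j).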
For more intuition as to why unmodified sketches satisfy an (ε, δ)-DP guarantee when the cardinality is large, we note that the inclusion probability π(s) is closely tied to the cardinality estimate in most sketching algorithms. For example, the cardinality estimators used in HLL and KMV are inversely proportional to the sampling probability π(s), i.e., N̂(s) ∝ 1/π(s), while for LPCA and Adaptive Sampling, the cardinality estimators are monotonically decreasing with respect to π(s). Thus, for most sketching algorithms, when run on a stream of sufficiently large cardinality, the resulting sketch is privacy-violating only when the cardinality estimate is also inaccurate. Theorem 2.6 is useful when analyzing the privacy of such algorithms, as it characterizes the probability δ of a “privacy violation” in terms of the probability that the returned estimate, N̂(S_r(D)), is lower than some threshold Ñ(π_0).

Theorem 2.6. Let S_r be a sketching algorithm with estimator N̂(S_r). Suppose n ≥ n_0 and the estimate returned on sketch s is a strictly decreasing function of π(s), so that N̂(s) = Ñ(π(s)) for some function Ñ. Then S_r is (ε, δ)-DP where δ = Pr_r(N̂(S_r(D)) < Ñ(π_0)).
2.2 Constructing Sketches Satisfying Pure Differential Privacy: Algorithm 1b - 1c
Theorem 2.4 guarantees an ε-DP sketch if (8) and (9) hold. Condition (8) requires that sup_{s∈Ω} π(s) < 1 − e^{−ε}, i.e., the “sampling probability” of the sketching algorithm is sufficiently small regardless of the sketch’s state s. Meanwhile, (9) requires that the input cardinality is sufficiently large.
We show that any hash-based, order-invariant distinct counting sketching algorithm can satisfy these two conditions by adding a simple pre-processing step which does two things. First, it “downsamples” the input stream by hashing each input, interpreting the hash values as numbers in [0, 1], and simply ignoring items whose hashes are larger than π_0. The downsampling hash must be independent of the hash used by the sketching algorithm itself. This ensures that Condition (8) is satisfied, as each input item has maximum sampling probability π_0.
BASE(items, ε):
  S ← InitSketch()
  for x ∈ items do
    S.add(x)
  return N̂(S)
(a) (ε, δ)-DP for n ≥ n_0.

DPSKETCHLARGESET(items, ε):
  S ← InitSketch()
  π_0 ← 1 − e^{−ε}
  for x ∈ items do
    if hash(x) < π_0 then S.add(x)
  return N̂(S)/π_0
(b) (ε, 0)-DP for n ≥ n_0.

DPSKETCHANYSET(items, ε):
  S, n_0 ← DPInitSketch(ε)
  π_0 ← 1 − e^{−ε}
  for x ∈ items do
    if hash(x) < π_0 then S.add(x)
  return N̂(S)/π_0 − n_0
(c) (ε, 0)-DP for n ≥ 1.
Algorithms 1: Differentially private cardinality estimation algorithms from black-box sketches. The function InitSketch() initializes a black-box sketch. The uniform random hash function hash(x) is chosen independently of any hash in the black-box sketch and is interpreted as a real in [0, 1]. The cardinality estimate returned by sketch S is denoted N̂(S). DPInitSketch is given in Algorithm 2a.
If there is an a priori guarantee that the number of distinct items n is greater than n_0 = k_max/(1 − e^{−ε}), then (9) is trivially satisfied. Pseudocode for the resulting ε-DP algorithm is given in Algorithm 1b. If there is no such guarantee, then the preprocessing step adds n_0 items to the input stream to satisfy (9). To ensure unbiasedness, these n_0 items must (i) be distinct from any items in the “real” stream, and (ii) be downsampled as per the first modification. An unbiased estimate of the cardinality of the unmodified stream can then be easily recovered from the sketch via a post-processing correction. Pseudocode for the modified algorithm, which is guaranteed to satisfy ε-DP, is given in Algorithm 1c.
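A minimal Python rendering of Algorithms 1b and 1c around a black-box base sketch may help fix ideas; it is a sketch under stated assumptions, not the reference implementation. The downsampling hash is independent of the base sketch’s hash, as required, and the artificial-item identifiers are an illustrative choice (any identifiers guaranteed distinct from real stream items would do).

import hashlib, math

def _downsample_hash(x, salt="downsample"):
    # Independent uniform hash into [0, 1), separate from the base sketch's hash.
    d = hashlib.sha256(f"{salt}:{x}".encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def dp_sketch_large_set(items, eps, make_sketch):
    """Algorithm 1b: eps-DP when the stream has n >= n_0 distinct items."""
    pi0 = 1.0 - math.exp(-eps)
    sketch = make_sketch()
    for x in items:
        if _downsample_hash(x) < pi0:    # Condition (8): cap sampling prob at pi0
            sketch.add(x)
    return sketch.estimate() / pi0

def dp_sketch_any_set(items, eps, k_max, make_sketch):
    """Algorithm 1c: eps-DP for any cardinality n >= 1."""
    pi0 = 1.0 - math.exp(-eps)
    n0 = math.ceil(k_max / pi0)
    sketch = make_sketch()
    # Pad with n0 artificial items, distinct from all real items and
    # downsampled exactly like real items, to satisfy Condition (9).
    for j in range(n0):
        x = ("__artificial__", j)
        if _downsample_hash(x) < pi0:
            sketch.add(x)
    for x in items:
        if _downsample_hash(x) < pi0:
            sketch.add(x)
    return sketch.estimate() / pi0 - n0  # unbiased post-processing correction

For example, dp_sketch_any_set((f"user{i}" for i in range(5000)), math.log(2), k_max=64, make_sketch=lambda: BottomK(64, seed=7)) returns an unbiased estimate near 5000 while satisfying (ln 2)-DP; in deployment the seed must, of course, be secret and random.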
Corollary 2.7. The functions DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c) yield ε-DP distinct counting sketches provided that n ≥ n_0 and n ≥ 1, respectively.
2.3 Constructing ε-DP Sketches from Existing Sketches: Algorithm 3, Appendix A.1
As regulations change and new ones are added, existing data may need to be appropriately anonymized. However, if the data has already been sketched, the underlying data may no longer be available, and even if it is retained, it may be too costly to reprocess it all. Our theory allows these sketches to be directly converted into differentially private sketches when the sketch supports a merge procedure. Using the merge procedure to achieve ε-differential privacy yields more useful estimates than the naive approach of simply adding Laplace noise to cardinality estimates in proportion to the global sensitivity.
The algorithm assumes it is possible to take a sketch S_r(D_1) of a stream D_1 and a sketch S_r(D_2) of a stream D_2, and “merge” them to get a sketch of the concatenation of the two streams D_1 ◦ D_2. This is the case for most practical hash-based, order-invariant distinct count sketches. Denote the merge of sketches S_r(D_1) and S_r(D_2) by S_r(D_1) ∪ S_r(D_2). In this setting, we think of the existing non-private sketch S_r(D_1) being converted to a sketch that satisfies ε-DP by Algorithm 3 (see pseudocode in Appendix A.1). Since sketch S_r(D_1) is already constructed, items cannot be first downsampled in the build phase the way they are in Algorithms 1b-1c. To achieve ε-DP, Algorithm 3 constructs a noisily initialized sketch, S_r(D_2), which satisfies both the downsampling condition (Condition (8)) and the minimum stream cardinality requirement (Condition (9)), and returns the merged sketch S_r(D_1) ∪ S_r(D_2). Hence, the sketch will satisfy both conditions for ε-DP, as shown in Corollary A.3. This merge-based procedure typically adds no additional error to the estimates for large cardinalities. In contrast, the naive approach of adding Laplace noise can add significant noise since the sensitivity can be very large. For example, HLL’s estimator is of the form N̂_HLL(s) = α/π(s), where α is a constant and s is the sketch. One item can update a bin to the maximum value, so that the updated sketch s′ has sampling probability π(s′) < π(s)(1 − 1/k). The sensitivity of the cardinality estimate is thus at least N̂_HLL(s)/k. Given that the cardinality estimate, and hence the sensitivity, can be arbitrarily large when n ≥ k, the naive approach is unworkable for achieving ε-DP.
3 The Utility of Private Sketches
When processing a data set with n unique items, denote the expectation and variance of a sketch’s estimator by E_n(N̂) and Var_n(N̂), respectively. We show that our algorithms all yield unbiased estimates. Furthermore, we show that for Algorithms 1a-1c, if the base sketch satisfies a relative error guarantee (defined below), the DP sketches add no additional error asymptotically.
Establishing unbiasedness. To analyze the expectation and variance of each algorithm’s estimator, N̂(S(D)), note that each estimator uses a ‘base estimate’ N̂_base from the base sketch S and has the form N̂(S(D)) = N̂_base/p − V, where p is the downsampling probability and V is the number of artificial items added. This allows us to express expectations and variances via the variance of the base estimator.
Theorem 3.1. Consider a base sketching algorithm S ∈ C with an unbiased estimator N̂_base for the cardinality of items added to the base sketch. Then Algorithms 1(a)-(c) and 3 yield unbiased estimators.
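To sketch the argument for Algorithms 1b and 1c (our exposition, where V is a fixed count of artificial items): the number M of distinct items surviving downsampling is Binomial(n + V, p), and an unbiased base estimator gives E[N̂_base | M] = M, so E[N̂_base] = p(n + V) and hence E[N̂] = E[N̂_base]/p − V = (n + V) − V = n.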
Bounding the variance. Theorem 3.1 yields a clean expression for the variance of our private algorithms. Namely, Var[N̂(S_r(D))] = E[Var(N̂_base/p | V)], as shown in Corollary B.1. The expression is a consequence of the law of total variance and the fact that the estimators are unbiased.
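Spelled out, this is the standard variance decomposition: Var[N̂] = E[Var(N̂ | V)] + Var(E[N̂ | V]) = E[Var(N̂_base/p | V)] + Var(n) = E[Var(N̂_base/p | V)], where the second term vanishes because unbiasedness gives E[N̂ | V] = n for every value of V, a constant with zero variance.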
We say that the base sketch satisfies a relative-error guarantee if, with high probability, the estimate returned by the sketching algorithm when run on a stream of cardinality n is (1 ± 1/√c)·n for some constant c > 0. Let N̂_base,n denote the cardinality estimate when the base algorithm is run on a stream of cardinality n, as opposed to N̂_base, the cardinality estimate produced by the base sketch on the sub-sampled stream used in our private sketches DPSketchLargeSet (Algorithm 1b) and DPSketchAnySet (Algorithm 1c). The relative error guarantee is satisfied when Var_n(N̂_base,n) < n²/c; this is an immediate consequence of Chebyshev’s inequality.
When the number of artificially added items V is constant, as in Algorithms 1b and 1c, Corollary B.1 provides a precise expression for the variance of the differentially private sketch. In Theorem 3.2 below, we use this expression to establish that the modification of the base algorithm to an ε-DP sketch as per Algorithms 1b and 1c satisfies the exact same relative error guarantee asymptotically. In other words, the additional error due to any pre-processing (down-sampling and possibly adding artificial items) is insignificant for large cardinalities n.
Theorem 3.2. Suppose N̂_base,n satisfies a relative error guarantee, Var_n(N̂_base,n) < n²/c, for all n and for some constant c. Let v = 0 for Algorithm 1b and v = n_0 for Algorithm 1c. Then Algorithms 1b and 1c satisfy

Var_n(N̂) ≤ (n + v)²/c + (n + v)(v + π_0^{−1}) k_max = (n + v)²/c + O(n), (10)

so that Var_n(N̂)/Var_n(N̂_base,n) → 1 as n → ∞.
In Corollary B.2 we prove an analogous result for Algorithm 3, which merges non-private and noisy sketches to produce a private sketch. Informally, the result is comparable to (10), albeit with v ≥ n_0. This is because, in Algorithm 3, the number of artificial items added, V, is a random variable. We ensure that the algorithm satisfies a utility guarantee by bounding V with high probability. This is equivalent to showing that the base sketching algorithm satisfies an (ε, δ)-DP guarantee, since for any n* ≥ n_0 and dataset D* with |D*| = n*, (ε, δ_{n*})-DP ensures δ_{n*} > Pr_r(π(S_r(D*)) > π_0) = Pr_r(V > n*), which follows from the definition of V in Algorithm 2b.
4 Examples of Hash-based, Order-Invariant Cardinality Estimators
We now provide (ε, δ)-DP results for a select group of sketches: FM85, LPCA, Bottom-k, Adaptive Sampling, and HLL. The (ε, δ)-DP results in this section operate in the Algorithm 1a setting, with no modification to the base sketching algorithm. Recall that the quantities of interest are the number of bins used in the sketch, k; the size of the sketch in bits, b; and the number of items whose absence changes the sketch, k_max. From Section 2 and Lemma A.1 we know that k_max ≤ b, but for several common sketches we show a stronger bound of k_max = k. The relationship between these parameters for various sketching algorithms is summarized in Table 1. Table 2, Appendix C, details our improvements over [22, 5] in both privacy and utility.
We remind the reader that, per (6), π_0 = 1 − e^{−ε}, and per (9), n_0 = k_max/(1 − e^{−ε}). Furthermore, recall that once we bound the parameter k_max for any given hash-based, order-invariant sketching algorithm, Corollary 2.7 states that the derived Algorithms 1b-1c satisfy ε-DP provided that n ≥ n_0 and n ≥ 1, respectively. Accordingly, in the rest of this section, we bound k_max for each example sketch of interest, which has the consequences for pure ε-differential privacy delineated above.
Flajolet-Martin ’85 The FM85 sketch, often called Probabilistic Counting with Stochastic Averaging (PCSA), consists of k bitmaps B_i of length ℓ. Each item is hashed to a bitmap and index (B_i, G_i) and sets the indexed bit in the bitmap to 1. The chosen bitmap is uniform amongst the k bitmaps and the index G_i ∼ Geometric(1/2). If ℓ is the length of each bitmap, then the total number of bits used by the sketch is b = kℓ and k_max = kℓ for all seeds r. A typical value for ℓ is 32 bits, as used in Table 1. Past work [25] proposed an ε-DP version of FM85 using a similar subsampling idea combined with random bit flips.

Theorem 4.1. Let v = ⌈−log₂ π_0⌉ and π̃_0 := 2^{−v} ∈ (π_0/2, π_0]. If n ≥ n_0, then the FM85 sketch is (ε, δ)-DP with δ ≤ kv · exp(−π̃_0 · n/k).
For any k, FM85 has k_max ∈ {32k, 64k}. This is worse than all other sketches we study, which have k_max = k, so FM85 needs a larger minimum number of items n_0 to ensure the sketch is (ε, δ)-DP.
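A minimal Python version of the FM85/PCSA update rule just described may be useful (deriving the bitmap choice and geometric index from a single hash is one standard realization of the pair (B_i, G_i); the names and hash choice are ours, and the estimator is omitted):

import hashlib

class FM85:
    def __init__(self, k, ell=32, seed=0):
        self.k, self.ell, self.seed = k, ell, seed
        self.bits = [[0] * ell for _ in range(k)]   # k bitmaps of length ell

    def add(self, x):
        d = hashlib.sha256(f"{self.seed}:{x}".encode()).digest()
        h = int.from_bytes(d[:8], "big")
        b = h % self.k              # uniform bitmap index B_i
        rest = h // self.k
        g = 0                       # G_i ~ Geometric(1/2): lowest set bit position
        while g < self.ell - 1 and not (rest >> g) & 1:
            g += 1
        self.bits[b][g] = 1         # set the indexed bit; duplicates are no-ops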
LPCA The Linear Probabilistic Counting Algorithm (LPCA) consists of a length-k bitmap. Each item is hashed to an index and sets its bit to 1. If B is the number of 1 bits, the LPCA cardinality estimate is N̂_LPCA = −k log(1 − B/k) = −k log π(S_r(D)). Trivially, k_max = k. Since all bits are expected to be 1 after processing roughly k log k distinct items, the capacity of the sketch is bounded. To estimate larger cardinalities, one first downsamples the distinct items with some sampling probability p. To ensure the sketch satisfies an ε-DP guarantee, one simply ensures p ≤ π_0. In this case, our analysis shows that LPCA is differentially private with no modifications if the cardinality is sufficiently large. Otherwise, since the estimator N̂(s) is a function of the sampling probability π(s), Theorem 2.6 provides an (ε, δ) guarantee in terms of N̂.

Theorem 4.2. Consider a LPCA sketch with k bits and downsampling probability p. If p < π_0 and n > k/(1 − e^{−ε}), then LPCA is ε-DP. Otherwise, let b_0 = ⌈k(1 − π_0/p)⌉, π̃_0 = b_0/k, and let µ_0 be the expected number of items inserted to fill b_0 bits in the sketch. Then, LPCA is (ε, δ)-DP if n > µ_0 with

δ = Pr_r(B < b_0) < (µ_0/n) · exp(−(π̃_0/µ_0) · n) / exp(−π̃_0) (11)

where B is the number of filled bits in the sketch. Furthermore, µ_0 < Ñ(π̃_0), where Ñ(π̃) = −(k/p) log(1 − π̃) is the cardinality estimate of the sketch when the sampling probability is π̃.
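A minimal Python version of LPCA with downsampling, following the description above (the hash realization, names, and the 1/p scaling in the estimator, which inverts the downsampling, are our rendering):

import hashlib, math

class LPCA:
    def __init__(self, k, p=1.0, seed=0):
        self.k, self.p, self.seed = k, p, seed
        self.bitmap = [0] * k

    def _hash01(self, x, salt):
        d = hashlib.sha256(f"{salt}:{self.seed}:{x}".encode()).digest()
        return int.from_bytes(d[:8], "big") / 2**64

    def add(self, x):
        if self._hash01(x, "sample") < self.p:            # downsample at rate p
            self.bitmap[int(self._hash01(x, "bin") * self.k)] = 1

    def estimate(self):
        b = sum(self.bitmap)                              # B: number of 1 bits
        if b == self.k:
            return float("inf")                           # sketch saturated
        return -(self.k / self.p) * math.log(1.0 - b / self.k)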
Bottom-k (also known as MinCount or KMV) sketches store the k smallest hash values. Removing an item changes the sketch if and only if (1) the item’s hash value is one of these k, and (2) it does not collide with another item’s hash value. Thus, k_max = k. Typically, the output size of the hash function is large enough to ensure that the collision probability is negligible, so for practical purposes k_max = k exactly. Since the Bottom-k estimator N̂(s) = (k − 1)/π(s) is a function of the update probability π(s), Theorem 2.6 gives an (ε, δ)-DP guarantee in terms of the cardinality estimate by coupon collecting; Theorem 4.3 tightens this bound on δ for a stronger (ε, δ)-DP guarantee.
³This approximation holds for n < k. A better approximation of the error is √(k(exp(n/k) − n/k − 1)).
Theorem 4.3. Consider Bottom-k with k minimum values. Given ε > 0, let π_0, n_0 be the corresponding subsampling probability and minimum cardinality that ensure the modified Bottom-k sketch is (ε, 0)-DP. When run on streams of cardinality n ≥ n_0, the unmodified sketch is (ε, δ)-DP, where δ = Pr(X ≤ k) < exp(−n·α_n), with X ∼ Binomial(n, π_0) and α_n = (1/2) · (π_0 − k/n)² / (π_0(1 − π_0) + 1/(3n²)) → (1/2) · π_0/(1 − π_0) as n → ∞.
The closely related Adaptive Sampling sketch has the same privacy behavior as a Bottom-k sketch. Rather than storing exactly k hashes, the algorithm maintains a threshold p and stores up to k hash values beneath p. Once the sketch size exceeds k, the threshold is halved and only hashes less than p/2 are kept. Since at most k hashes are stored, and the sketch is modified only if one of these hashes is removed, the maximum number of items that can modify the sketch by removal is k_max = k.

Corollary 4.4. For any size k and cardinality n, if a Bottom-k sketch is (ε, δ)-DP, then a maximum-size-k Adaptive Sampling sketch is (ε, δ)-DP with the same ε and δ.
HyperLogLog (HLL) hashes each item to a bin and value (B_i, G_i). Within each bin, it takes the maximum value, so each bin is a form of Bottom-1 sketch. If there are k bins, then k_max = k.
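For completeness, a minimal Python rendering of the HLL update just described, with each bin keeping the maximum geometric value it has seen (the hash realization and names are ours; the estimator is omitted):

import hashlib

class HLL:
    def __init__(self, k, seed=0):
        self.k, self.seed = k, seed
        self.bins = [0] * k          # each bin: max G_i seen, a Bottom-1-style summary

    def add(self, x):
        d = hashlib.sha256(f"{self.seed}:{x}".encode()).digest()
        h = int.from_bytes(d[:8], "big")
        b = h % self.k               # bin index B_i
        rest = h // self.k
        g = 1                        # G_i ~ Geometric(1/2), starting at 1
        while not (rest & 1) and g < 64:
            g += 1
            rest >>= 1
        self.bins[b] = max(self.bins[b], g)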
Our results uniformly improve upon existing DP results on the HLL sketch and its variants. One variation of the HLL sketch achieves ε-DP but is far slower than HLL, as it requires every item to be independently hashed once for each of the k bins, rather than just one time [22]. In other words, [22] needs O(k) update time compared to O(1) for our algorithms. Another provides an (ε, δ) guarantee for streams of cardinality n ≥ n′_0, for an n′_0 that is larger than our n_0 by a factor of roughly (at least) 8, with δ falling exponentially with n [5]. In contrast, for streams with cardinality n ≥ n_0, we provide a pure ε-DP guarantee using Algorithms 1b-1c. HLL also has the following (ε, δ) guarantee.

Theorem 4.5. If n ≥ n_0, then HLL satisfies an (ε, δ)-DP guarantee where δ ≤ k · exp(−π_0 · n/k).
HLL’s estimator is only a function of π(s) for medium to large cardinalities, as it has the form N̂(s) = Ñ(π(s)) when Ñ(π(s)) > 5k/2. Thus, if π_0 is sufficiently small so that Ñ(π_0) > 5k/2, then Theorem 2.6 can still be applied, and HLL satisfies (ε, δ)-DP with δ = Pr(N̂(S_r(D)) < Ñ(π_0)).
5 Empirical Evaluation
We provide two experiments highlighting the practical benefits of our approach. Of past works, only [5, 22] are comparable, and both differ from our approach in significant ways. We empirically compare only to [22], since [5] is simply an analysis of HLL. Our improvement over [5] for HLL consists of providing significantly tighter privacy bounds in Section 4 and providing a fully ε-DP sketch in the secret-hash setting. We denote our ε-DP version of HLL using Algorithm 1b by PHLL (private HLL) and that of [22] by QLL. Details of the experimental setup are in Appendix D.
Experiment 1: Update Time (Figure 1a). We implemented regular, non-private HLL, our PHLL, and QLL, and recorded the time to populate every sketch over 2^10 updates with k ∈ {2^7, 2^8, . . . , 2^12} buckets. For HLL, these bucket sizes correspond to relative standard errors ranging from ≈ 9% down to ≈ 1.6%. Each marker represents the mean update time over all updates, and the curves are the evaluated mean update time over 10 trials.
As expected from theory, the update time of [22] grows as O(k). In contrast, our method PHLL has a constant update time, similar in magnitude to HLL. Both are roughly 500× faster than [22] when k = 2^12. Thus, Figure 1a shows that [22] is not a scalable solution and that the speedup from achieving O(1) updates is substantial.
Experiment 2: Space Comparison (Figure 1b). In addition to having a worse update time, we also show that QLL has lower utility in the sense that it requires more space than PHLL to achieve the same error. Fixing the input cardinality at n = 2^20 and the privacy budget at ε = ln(2), we vary the number of buckets k ∈ {2^7, 2^8, . . . , 2^12} and simulate the ε-DP methods, PHLL and QLL [22]. The number of buckets controls the error, and we found that both methods obtained very similar mean relative error for a given number of bins,⁴ so we plot the space usage against the expected relative error for a given number of buckets. For QLL, since the error guarantees tie the parameter γ to the number of buckets, we modify γ accordingly as well. We compare the sizes of each sketch as the error varies.
Since the number of bits required for each bin depends on the range of values the bin can take, we record the simulated total sketch size as k · log₂(max_i s_i), using the space required for the largest bin value over the k buckets.
Although QLL achieves similar utility, it does so using a larger sketch: when k = 2^7, where we expect an error of roughly 9%, QLL is roughly 1.1× larger. This increases to about 1.6× larger than our PHLL sketch when k = 2^12, which achieves an error of roughly 1.6%. The average increase in space when using QLL compared to PHLL thus grows exponentially in the desired accuracy of the sketch; when lower relative error is necessary, we obtain a greater space improvement over QLL than at higher relative errors. This supports the behavior expected from comparing the space bounds of [22] with (P)HLL.
6 Conclusion
We have studied the (differential) privacy of a class of cardinality estimation sketches that includes most popular algorithms. Two examples are the HLL and KMV (Bottom-k) sketches that have been deployed in large systems [14, 1]. We have shown that the sketches returned by these algorithms are ε-differentially private when run on streams of cardinality greater than n_0 = k_max/(1 − e^{−ε}) and when combined with a simple downsampling procedure. Moreover, even without downsampling, these algorithms satisfy (ε, δ)-differential privacy, where δ falls exponentially with the stream cardinality n once n is larger than the threshold n_0. Our results are more general and yield better privacy guarantees than prior work for small-space cardinality estimators that preserve differential privacy. Our empirical validation shows that our approach is practical and scalable, being much faster than the previous state of the art while consuming much less space.
Acknowledgments and Disclosure of Funding
We are grateful to Graham Cormode for valuable comments on an earlier version of this manuscript. Justin Thaler was supported by NSF SPX award CCF-1918989 and NSF CAREER award CCF-1845125.
⁴This is shown in Figure 3, Appendix D.
7 Paper Checklist
1. (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes]
(b) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? See answer to next question.
(d) Did you describe the limitations of your work? Our work shows that existing algorithms, or mild variants thereof, preserve privacy. Therefore, there should not be any negative societal impacts that are consequences of positive privacy results unless users/readers incorrectly apply the results to their systems. Any mathematical limitations from the theory are clearly outlined through the formal technical statements.
2. (a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes]
3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? A small repository containing the experimental scripts and figure plotting has been provided.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? For Figure 1a the standard deviations have been plotted in shaded regions but these are too small in magnitude to be seen on the scale of the plot, indicating that there is very little variation. For Figure 1b we have plotted the entire distribution over all trials.
(d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. (a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

1. What is the main contribution of the paper regarding distinct element sketches?
2. What are the strengths of the proposed unified analysis?
3. Do you have any questions regarding the explanation in the paper?
4. How does the reviewer assess the privacy properties of the sketches in terms of their practicality?
5. Are there any limitations in proving privacy for public hash functions?
Summary Of The Paper
A sketching algorithm for the distinct elements problem maps a stream of elements from some universe into a small space data structure (the sketch), such that an approximation to the number of distinct elements in the stream can be recovered from the data structure with high probability. Sketches thus compress the data, and it stands to reason that the task of designing a good sketch is aligned with the goal of preserving privacy with respect to the data in the stream. Recent work has confirmed this intuition, and shown that some specific randomized sketches for the distinct elements problem preserve differential privacy, provided that
the randomness used by the sketch is hidden from the adversary;
the data stream contains sufficiently many elements: this can be achieved by padding.
The analyses in prior work are sketch-specific, and sometimes require modifying the existing sketch in ways that may affect its practicality.
The present paper unifies this line of work, and, in a way, shows that the privacy preserving properties of distinct element sketches are a deeper property, common to a large class of sketches. Namely, all sketches based on random hashing that are invariant under permuting and duplicating elements satisfy differential privacy, and the privacy parameter depends on two quantities associated with the sketch. One of these quantities is automatically bounded when the stream contains enough elements, and the other is usually small enough with high probability, and, in any case, can be made small enough by subsampling the stream. This unified analysis gives improved bounds, in some cases, and allows ensuring privacy for the most efficient and practical variants of the sketches.
Strengths And Weaknesses
The unified analysis in the paper seems to get to the core of why distinct element sketches preserve privacy. The authors have found a good abstraction that allows isolating what makes a sketch private. Thus the proofs are simple, but give strong results for a wide class of algorithms. This makes the paper much more satisfying than papers that analyze just one sketch, even though there are some nice ones in that category as well.
One minor weakness may be that the unity of the results is somewhat undermined by the fact that π(s) is not always bounded, which necessitates either a sketch-specific analysis, or preprocessing the stream by subsampling. The fix is simple, however, and does not affect error much. Also, even boiling down the privacy properties of a sketch to just one or two parameters is already quite interesting.
A weakness of this line of work is that the proof of privacy requires the random hash to be secret, which means that the privacy properties do not hold in a distributed setting. I do not hold this against this paper in particular, however.
Questions
The explanation on lines 177-184 is not very clear. The last sentence of the paragraph is a little puzzling. Looking at the supplementary material mostly clears this up.
There are some typos in the supplementary material: at least one missing reference, one typo that says “n < n_0” rather than “n > n_0” while referencing (23). Please proofread this part of the paper.
Do you think it’s possible to prove privacy for these sketches if the hash function is public but the stream is preprocessed in some random way?
Limitations
I think limitations are addressed adequately. |