Title: Online Meta-Critic Learning for Off-Policy Actor-Critic Methods

Abstract
Off-Policy Actor-Critic (OffP-AC) methods have proven successful in a variety of continuous control tasks. Normally, the critic's action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a flexible meta-critic framework based on observing the learning process and meta-learning an additional loss for the actor that accelerates and improves actor-critic learning. Compared to existing meta-learning algorithms, the meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning benefits a variety of continuous control tasks when combined with contemporary OffP-AC methods DDPG, TD3 and SAC.

1 Introduction
Off-policy Actor-Critic (OffP-AC) methods are currently central in deep reinforcement learning (RL) research due to their greater sample efficiency compared to on-policy alternatives. On-policy learning requires new trajectories to be collected for each update to the policy, and is expensive as the number of gradient steps and samples per step increases with task complexity, even for contemporary TRPO [33], PPO [34] and A3C [27] algorithms. Off-policy methods, such as DDPG [20], TD3 [9] and SAC [13], achieve greater sample efficiency as they can learn from randomly sampled historical transitions without a time-sequence requirement, making better use of past experience. The critic estimates the action-value (Q-value) function using a differentiable function approximator, and the actor updates its policy parameters in the direction of the approximate action-value gradient. Briefly, the critic provides a loss to guide the actor, and is trained in turn to estimate the environmental action-value under the current policy via temporal-difference learning [38]. In all these cases the learning objective function is hand-crafted and fixed.

Recently, meta-learning [14] has become topical as a paradigm to accelerate RL by learning aspects of the learning strategy, for example, learning fast adaptation strategies [7, 30, 31], losses [3, 15, 17, 36], optimisation strategies [6], exploration strategies [11], hyperparameters [40, 42], and intrinsic rewards [44]. However, most of these works perform meta-learning on a family of tasks or environments and amortize this huge cost by deploying the trained strategy for fast learning on a new task.

In this paper we introduce a meta-critic network to enhance OffP-AC learning methods. The meta-critic augments the vanilla critic to provide an additional loss to guide the actor's learning. However, compared to the vanilla critic, the meta-critic is explicitly (meta-)trained to accelerate the learning process rather than merely estimate the action-value function. Overall, the actor is trained by both the critic- and meta-critic-provided losses, the critic is trained by temporal-difference as usual, and crucially the meta-critic is trained to generate maximum learning progress in the actor.
Both the critic and meta-critic use randomly sampled transitions for effective OffP-AC learning, providing superior sample efficiency compared to existing on-policy meta-learners. We emphasize that the meta-critic can be successfully learned online within a single task. This is in contrast to the currently widely used meta-learning paradigm, where entire task families are required to provide enough data for meta-learning, and to provide new tasks to amortize the huge cost of meta-learning.

Our framework augments vanilla AC learning with an additional meta-learned critic, which can be seen as providing intrinsic motivation towards optimum actor learning progress [28]. As analogously observed in recent meta-learning studies [8], our loss-learning can be formalized as bi-level optimisation with the upper level being meta-critic learning and the lower level being conventional learning. We solve this joint optimisation by iteratively updating the meta-critic and base learner online in a single task. Our strategy is related to the meta-loss learning in EPG [15], but learned online rather than offline, and integrated with OffP-AC rather than their on-policy policy-gradient learning. The most related prior work is LIRPG [44], which meta-learns an intrinsic reward online. However, their intrinsic reward just provides a helpful scalar offset to the environmental reward for on-policy trajectory optimisation via policy-gradient [37]. In contrast, our meta-critic provides a loss for direct actor optimisation using sampled transitions, and achieves dramatically better sample efficiency than LIRPG reward learning. We evaluate several continuous control benchmarks and show that online meta-critic learning can improve contemporary OffP-AC algorithms including DDPG, TD3 and SAC.

2 Background and Related Work
Policy-Gradient (PG) RL Methods. Reinforcement learning involves an agent interacting with environment $E$. At each time $t$, the agent receives an observation $s_t$, takes a (possibly stochastic) action $a_t$ based on its policy $\pi : S \to A$, and receives a reward $r_t$ and new state $s_{t+1}$. The tuple $(s_t, a_t, r_t, s_{t+1})$ describes a state transition. The objective of RL is to find the optimal policy $\pi_\phi$, which maximizes the expected cumulative return $J$. In on-policy RL, $J$ is defined as the discounted episodic return based on a sequential trajectory over horizon $H$: $(s_0, a_0, r_0, s_1, \dots, s_H, a_H, r_H, s_{H+1})$:
$$J = \mathbb{E}_{r_t, s_t \sim E,\, a_t \sim \pi}\Big[\textstyle\sum_{t=0}^{H} \gamma^t r_t\Big].$$
In on-policy AC, $r$ is represented by a surrogate state-value $V(s_t)$ from its critic. Since $J$ is only a scalar value that is not differentiable, the gradient of $J$ with respect to the policy parameters $\phi$ has to be estimated via the policy gradient theorem [37]: $\nabla_\phi J(\phi) = \mathbb{E}\left[J\, \nabla_\phi \log \pi_\phi(a_t|s_t)\right]$. However, with respect to sample efficiency, even exploiting tricks like importance sampling and improved application of A2C [44], the use of full trajectories is less effective than the use of individual transitions by off-policy methods.

Off-policy actor-critic architectures provide better sample efficiency by reusing past experience (previously collected transitions). DDPG [20] borrows two main ideas from Deep Q Networks [25, 26]: a replay buffer and a target Q network to give consistent targets during temporal-difference backups. TD3 (Twin Delayed Deep Deterministic policy gradient) [9] develops a variant of Double Q-learning by taking the minimum value between a pair of critics to limit over-estimation, and the computational cost is reduced by using a single actor optimised with respect to $Q_{\theta_1}$.
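For illustration, the following PyTorch-style sketch shows how such a clipped double-Q temporal-difference target could be formed; the network and batch names (actor_targ, q1_targ, q2_targ) are our own assumptions for exposition, not the authors' released code.

```python
import torch

def td3_critic_targets(batch, actor_targ, q1_targ, q2_targ, gamma=0.99):
    """Clipped double-Q target: take the minimum of two target critics to
    limit over-estimation (as in TD3). All networks are assumed to be
    torch.nn.Modules; `batch` holds tensors sampled from a replay buffer."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        a_next = actor_targ(s_next)
        q_next = torch.min(q1_targ(s_next, a_next), q2_targ(s_next, a_next))
        target = r + gamma * (1.0 - done) * q_next
    return target

# Both critics then regress onto the same clipped target, e.g.
#   loss = torch.nn.functional.mse_loss(q1(s, a), target) \
#        + torch.nn.functional.mse_loss(q2(s, a), target)
```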
SAC (Soft Actor-Critic) [12, 13] proposes a maximum entropy RL framework where its stochastic actor aims to simultaneously maximize expected action-value and entropy. The latest version of SAC [13] also includes the "minimum value between both critics" idea in its implementation. Specifically, in these off-policy AC methods, parameterized policies $\pi_\phi$ can be directly updated by defining the actor loss in terms of the expected return $J(\phi)$ and taking its gradient $\nabla_\phi J(\phi)$, where $J(\phi)$ depends on the action-value $Q_\theta(s,a)$. Based on a batch of transitions randomly sampled from the buffer, the loss for the actor provided by the critic is calculated as
$$L^{\text{critic}} = -J(\phi) = -\mathbb{E}_{s\sim p_\pi}\, Q_\theta(s,a)\big|_{a=\pi_\phi(s)}. \quad (1)$$
Specifically, the actor losses $L^{\text{critic}}$ in TD3 and SAC are calculated as Eq. (2) and Eq. (3) respectively:
$$L^{\text{critic}}_{\text{TD3}} = -\mathbb{E}_{s\sim p_\pi}\, Q_{\theta_1}(s,a)\big|_{a=\pi_\phi(s)}; \quad (2)$$
$$L^{\text{critic}}_{\text{SAC}} = \mathbb{E}_{s\sim p_\pi}\big[\alpha \log(\pi_\phi(a|s)) - Q_\theta(s,a)\big|_{a=\pi_\phi(s)}\big]. \quad (3)$$
The actor is then updated as $\Delta\phi = \alpha\nabla_\phi L^{\text{critic}}$, following the critic's gradient to increase the likelihood of actions that achieve a higher Q-value. Meanwhile, the critic $\theta$ uses Q-learning updates to estimate the action-value function:
$$\theta \leftarrow \arg\min_\theta \mathbb{E}\big(Q_\theta(s_t,a_t) - r_t - \gamma Q_\theta(s_{t+1}, \pi(s_{t+1}))\big)^2. \quad (4)$$

Meta-Learning for RL. Meta-learning (a.k.a. learning to learn) [7, 14, 32] has received a resurgence in interest recently due to its potential to improve learning performance and sample efficiency in RL [11]. Several studies learn optimisers that provide policy updates with respect to known loss or reward functions [1, 6, 23]. A few studies learn hyperparameters [40, 42], loss functions [3, 15, 36] or rewards [44] that steer the learning of standard optimisers. Our meta-critic framework is in the category of loss-function meta-learning, but unlike most of these we are able to meta-learn the loss function online, in parallel to learning a single extrinsic task, rather than offline. No costly offline learning on a task family is required as in Houthooft et al. [15] and Sung et al. [36]. Most current meta-RL methods are based on on-policy policy-gradient, limiting their sample efficiency. For example, while LIRPG [44] is one of the few prior works to attempt online meta-learning, it is ineffective in practice due to only providing a scalar reward increment rather than a loss for direct optimisation. A few meta-RL studies have begun to address off-policy RL, for conventional multi-task meta-learning [30] and for optimising transfer vs forgetting in continual learning of multiple tasks [31]. The contribution of our Meta-Critic is to enhance state-of-the-art single-task OffP-AC RL with online meta-learning.

Loss Learning. Loss learning has been exploited in 'learning to teach' [41] and surrogate loss learning [10, 16], where a teacher network predicts the parameters of a manually designed loss in supervised learning. In contrast, our meta-critic is itself a differentiable loss, and is designed for use in RL. Other applications learn losses that improve model robustness to out-of-distribution samples [2, 19]. Some recent loss learning studies in RL focus mainly on multi-task adaptation scenarios [3, 15, 36] or generalization to entirely different environments [17]. Our loss learning architecture is related to Li et al. [19], but designed for accelerating single-task OffP-AC RL rather than improving robustness in multi-domain supervised learning.
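As a concrete reference for Eqs. (2) and (3), a minimal PyTorch-style sketch of the two actor losses is shown below; the function and argument names are illustrative assumptions (for example, the SAC actor is assumed to return a reparameterised action together with its log-probability).

```python
def actor_loss_td3(s, actor, q1):
    """Eq. (2): deterministic-policy actor loss, -E[Q_theta1(s, pi_phi(s))]."""
    return -q1(s, actor(s)).mean()

def actor_loss_sac(s, actor, q, alpha=0.2):
    """Eq. (3): maximum-entropy actor loss, E[alpha * log pi(a|s) - Q(s, a)],
    assuming actor(s) returns (reparameterised action, log-probability)."""
    a, log_pi = actor(s)
    return (alpha * log_pi - q(s, a)).mean()
```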
3 Methodology
We aim to learn a meta-critic that augments the vanilla critic by providing an additional loss $L^{\text{mcritic}}_\omega$ for the actor. The vanilla loss for the policy (actor) is $L^{\text{critic}}$, given by the conventional critic. The actor is trained by $L^{\text{critic}}$ and $L^{\text{mcritic}}_\omega$ via stochastic gradient descent. The meta-critic parameter $\omega$ is optimized by meta-learning to accelerate actor learning progress. Here we follow the notation in TD3 and SAC, where $\phi$ and $\theta$ denote actor and critic parameters respectively.

Algorithm Overview. We train a meta-critic loss $L^{\text{mcritic}}_\omega$ that augments the vanilla critic loss $L^{\text{critic}}$ to enhance actor learning. Specifically, it should lead to the actor $\phi$ having improved performance on the normal task, as measured by $L^{\text{critic}}$ on the validation data, after learning on both the meta-critic and vanilla critic losses. This can be seen as a bi-level optimisation problem [8, 14, 29] (see Franceschi et al. [8] for a discussion on the convergence of bi-level algorithms) of the form:
$$\omega = \arg\min_\omega L^{\text{meta}}(d_{\text{val}};\phi^*) \quad \text{s.t.} \quad \phi^* = \arg\min_\phi \big(L^{\text{critic}}(d_{\text{trn}};\phi) + L^{\text{mcritic}}_\omega(d_{\text{trn}};\phi)\big), \quad (5)$$
where we can assume $L^{\text{meta}}(\cdot) = L^{\text{critic}}(\cdot)$ for now. $d_{\text{trn}}$ and $d_{\text{val}}$ are different transition batches from the replay buffer. Here the lower-level optimisation trains the actor $\phi$ to minimize both the normal loss and the meta-critic-provided loss on training samples. The upper-level optimisation further requires the meta-critic $\omega$ to have produced a learned actor $\phi^*$ that minimizes a meta-loss measuring the actor's normal performance on a set of validation samples, after being trained by the meta-critic. Note that in principle the lower-level optimisation could rely purely on $L^{\text{mcritic}}_\omega$, analogously to the procedure in EPG [15], but we find that optimising their sum greatly increases learning stability and speed. Eq. (5) is satisfied when the meta-critic successfully trains the actor for good performance on the normal task, as measured by the validation meta-loss.

Algorithm 1 Online Meta-Critic Learning for OffP-AC RL
  φ, θ, ω, D ← ∅                          // Initialise actor, critic, meta-critic and replay buffer
  for each iteration do
    for each environment step do
      a_t ∼ π_φ(a_t|s_t)                  // Select action according to the current policy
      s_{t+1} ∼ p(s_{t+1}|s_t, a_t), r_t  // Observe reward r_t and new state s_{t+1}
      D ← D ∪ {(s_t, a_t, r_t, s_{t+1})}  // Store the transition in the replay buffer
    end for
    for each gradient step do
      Sample mini-batch d_trn from D
      Update θ ← Eq. (4)                  // Update the critic parameters
      meta-train:
        L^critic ← Eq. (1), (2) or (3)    // Vanilla-critic-provided loss for the actor
        L^mcritic_ω ← Eq. (10) or (11)    // Meta-critic-provided loss for the actor
        φ_old = φ − η ∇_φ L^critic          // Update the actor according to L^critic only
        φ_new = φ_old − η ∇_φ L^mcritic_ω   // Update the actor according to L^critic and L^mcritic_ω
      meta-test:
        Sample mini-batch d_val from D
        L^meta(d_val; φ_new) or L^meta_clip(d_val; φ_old, φ_new) ← Eq. (8) or (9)  // Meta-loss
      meta-optimisation:
        φ ← φ − η (∇_φ L^critic + ∇_φ L^mcritic_ω)        // Update the actor parameters
        ω ← ω − η ∇_ω L^meta  or  ω − η ∇_ω L^meta_clip   // Update the meta-critic parameters
    end for
  end for

The update of the vanilla critic is also in the lower loop, but as it updates as usual, we focus on the actor and meta-critic optimisation for simplicity of exposition. In this setup the meta-critic is a neural network $h_\omega(d_{\text{trn}};\phi)$ that takes as input some featurisation of the actor $\phi$ and the states and actions in $d_{\text{trn}}$. The meta-critic network must produce a scalar output, which we can then treat as a loss $L^{\text{mcritic}}_\omega := h_\omega$, and must be differentiable with respect to $\phi$.
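To make the bi-level structure of Eq. (5) concrete, here is a self-contained toy in PyTorch with scalar parameters: the quadratic "training", "validation" and "auxiliary" objectives are invented purely for illustration and stand in for $L^{\text{critic}}(d_{\text{trn}})$, $L^{\text{meta}}(d_{\text{val}})$ and $L^{\text{mcritic}}_\omega$.

```python
import torch

phi = torch.tensor(5.0, requires_grad=True)     # stands in for the actor parameters
omega = torch.tensor(0.5, requires_grad=True)   # stands in for the meta-critic parameters
eta, meta_eta = 0.1, 0.05

def l_train(p):  return (p - 1.0) ** 2                               # "L_critic(d_trn; phi)"
def l_val(p):    return (p - 0.0) ** 2                               # "L_meta(d_val; phi)"
def l_aux(p, w): return torch.nn.functional.softplus(w) * p ** 2     # "L_mcritic_omega(phi)"

for step in range(200):
    # Lower level (cf. Eqs. (5) and (6)): one step on L_critic + L_mcritic,
    # kept differentiable with respect to omega via create_graph=True.
    inner = l_train(phi) + l_aux(phi, omega)
    (g_phi,) = torch.autograd.grad(inner, phi, create_graph=True)
    phi_new = phi - eta * g_phi

    # Upper level: adjust omega so the updated parameter does well on validation.
    (g_omega,) = torch.autograd.grad(l_val(phi_new), omega)

    with torch.no_grad():
        omega -= meta_eta * g_omega             # meta-optimisation step
        phi.copy_(phi_new)                      # apply the lower-level update
```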
We next discuss the overall optimisation flow and the specific meta-critic architecture.

Meta-Optimisation Flow. To optimise Eq. (5), we iteratively update the meta-critic parameter $\omega$ (upper level) and the actor and vanilla-critic parameters $\phi$ and $\theta$ (lower level). At each iteration, we perform: (i) Meta-train: sample a mini-batch of transitions and putatively update the policy $\phi$ based on the vanilla-critic-provided loss $L^{\text{critic}}$ and the meta-critic-provided loss $L^{\text{mcritic}}_\omega$. (ii) Meta-test: sample another mini-batch of transitions to evaluate the performance of the updated policy according to $L^{\text{meta}}$. (iii) Meta-optimisation: update the meta-critic $\omega$ to maximize the performance on the validation batch, and perform the real actor update according to both losses. Thus the meta-critic co-evolves with the actor as they are trained online and in parallel. Figure 1 and Alg. 1 summarize the process; the details of each step are explained next. The meta-critic can be flexibly integrated with any OffP-AC algorithm, and further implementation details for DDPG, TD3 and SAC are in the supplementary material.

Updating Actor Parameters ($\phi$). During meta-train, we sample a mini-batch of transitions $d_{\text{trn}} = \{(s_i, a_i, r_i, s_{i+1})\}$ with batch size $N$ from the replay buffer $D$. We update the policy using both losses as:
$$\phi_{\text{new}} = \phi - \eta \frac{\partial L^{\text{critic}}(d_{\text{trn}})}{\partial \phi} - \eta \frac{\partial L^{\text{mcritic}}_\omega(d_{\text{trn}})}{\partial \phi}. \quad (6)$$
We also compute a separate update:
$$\phi_{\text{old}} = \phi - \eta \frac{\partial L^{\text{critic}}(d_{\text{trn}})}{\partial \phi} \quad (7)$$
that only leverages the vanilla-critic-provided loss. If the meta-critic provides a beneficial source of loss, $\phi_{\text{new}}$ should be a better parameter than $\phi$, and in particular a better parameter than $\phi_{\text{old}}$. We will use this comparison in the next meta-test step.

Updating Meta-Critic Parameters ($\omega$). To train the meta-critic, we sample another mini-batch of transitions $d_{\text{val}} = \{(s^{\text{val}}_i, a^{\text{val}}_i, r^{\text{val}}_i, s^{\text{val}}_{i+1})\}$ with batch size $M$. The use of a validation batch for bi-level meta-optimisation [8, 29] ensures the meta-learned component does not overfit. As our framework is off-policy, this does not incur any sample-efficiency cost. The meta-critic is then updated by a meta-loss, $\omega \leftarrow \omega - \eta \nabla_\omega L^{\text{meta}}(\cdot)$, that measures actor performance after learning.

Meta-Loss Definition. The most intuitive meta-loss definition is the validation performance of the updated actor $\phi_{\text{new}}$ as measured by the normal critic:
$$L^{\text{meta}} = L^{\text{critic}}(d_{\text{val}};\phi_{\text{new}}). \quad (8)$$
However, we find it helpful for optimisation efficiency and stability to optimise the clipped difference between the updates with and without the meta-critic's input:
$$L^{\text{meta}}_{\text{clip}} = \tanh\big(L^{\text{critic}}(d_{\text{val}};\phi_{\text{new}}) - L^{\text{critic}}(d_{\text{val}};\phi_{\text{old}})\big). \quad (9)$$
This is simply a monotonic re-centering and re-scaling of $L^{\text{critic}}$. (The parameter $\omega$ that minimizes $L^{\text{meta}}_{\text{clip}}$ of Eq. (9) also minimizes $L^{\text{meta}}$ of Eq. (8), and vice-versa.) Note that in Eq. (9) the updated actor $\phi_{\text{new}}$ depends on the feedback given by the meta-critic $\omega$, while $\phi_{\text{old}}$ does not. Thus only the first term is optimised with respect to $\omega$. In this setup the $L^{\text{critic}}(d_{\text{val}};\phi_{\text{new}})$ term should obtain high reward/low loss on the validation batch, and the latter $L^{\text{critic}}(d_{\text{val}};\phi_{\text{old}})$ term provides a baseline, analogous to the baseline widely used to accelerate and stabilize policy-gradient RL. The tanh ensures the meta-loss is always nicely bounded in $(-1, 1)$ and caps the magnitude of the meta-gradient. In essence, the meta-loss lets the agent ask itself: "Did meta-critic learning improve validation performance compared to vanilla learning?", and adjust the meta-critic $\omega$ accordingly. We will compare the options $L^{\text{meta}}$ and $L^{\text{meta}}_{\text{clip}}$ later.
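The following is a minimal PyTorch (≥ 2.0) sketch of how the update of Eqs. (6)-(9) could be implemented with explicit parameter dictionaries, shown for a deterministic (TD3/DDPG-style) actor. The helper names, and the assumption that the actor's forward pass returns both the action and its penultimate-layer feature, are ours rather than the released code; the real actor update of Alg. 1 is omitted for brevity.

```python
import torch
from torch.func import functional_call   # PyTorch >= 2.0

def meta_critic_step(actor, q_critic, meta_critic, s_trn, s_val, eta, meta_opt):
    """One meta-train / meta-test / meta-optimisation step (Eqs. (6)-(9)).
    Assumptions: actor(s) returns (action, penultimate_feature); q_critic(s, a)
    returns Q-values; meta_critic(feature) returns a non-negative per-sample
    value (Eq. (10)); meta_opt optimises meta_critic.parameters()."""
    phi = dict(actor.named_parameters())

    def l_critic(params, s):                       # Eq. (2)-style actor loss
        a, _ = functional_call(actor, params, (s,))
        return -q_critic(s, a).mean()

    def l_mcritic(params, s):                      # Eq. (10)-style auxiliary loss
        _, feat = functional_call(actor, params, (s,))
        return meta_critic(feat).mean()

    # Eq. (7): putative update with the vanilla critic loss only.
    g = torch.autograd.grad(l_critic(phi, s_trn), list(phi.values()))
    phi_old = {k: p - eta * gi for (k, p), gi in zip(phi.items(), g)}

    # Eq. (6): further update with the meta-critic loss; create_graph=True keeps
    # the dependence on the meta-critic parameters omega.
    g = torch.autograd.grad(l_mcritic(phi_old, s_trn), list(phi_old.values()),
                            create_graph=True)
    phi_new = {k: p - eta * gi for (k, p), gi in zip(phi_old.items(), g)}

    # Eq. (9): clipped meta-loss; the phi_old term is a baseline with no omega gradient.
    meta_loss = torch.tanh(l_critic(phi_new, s_val) - l_critic(phi_old, s_val).detach())

    # Meta-optimisation: only the meta-critic parameters receive this gradient.
    meta_opt.zero_grad()
    meta_loss.backward(inputs=list(meta_critic.parameters()))
    meta_opt.step()
    return meta_loss.detach()
```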
Designing the Meta-Critic ($h_\omega$). The meta-critic $h_\omega$ implements the additional loss for the actor. The design space for $h_\omega$ has several requirements: (i) its input must depend on the policy parameters $\phi$, because the meta-critic-provided loss is also used to update the policy; (ii) it should be permutation invariant to the transitions in $d_{\text{trn}}$, i.e., it should not make a difference whether we feed the randomly sampled transitions indexed [1, 2, 3] or [3, 2, 1]. The most naive way to achieve (i) is given in MetaReg [2], which meta-learns a parameter regularizer: $h_\omega(\phi) = \sum_i \omega_i |\phi_i|$. Although this form of $h_\omega$ acts directly on $\phi$, it does not exploit state information, and it introduces a large number of parameters in $h_\omega$, as $\phi$ may be a high-dimensional neural network. Therefore, we design a more efficient and effective form of $h_\omega$ that also meets both of these requirements. Similar to the feature extractor in supervised learning, the actor needs to analyse and extract information from states for decision-making. We assume the policy network can be represented as $\pi_\phi(s) = \hat\pi(\bar\pi(s))$ and decomposed into a feature-extraction module $\bar\pi_\phi$ and a decision-making module $\hat\pi_\phi$ (i.e., the last layer of the full policy network). Thus the output of the penultimate layer of the full policy network is just the output of the feature extractor $\bar\pi_\phi(s)$, and this feature output jointly encodes $\phi$ and $s$. Given this encoding, we implement $h_\omega(d_{\text{trn}};\phi)$ as a three-layer multi-layer perceptron $f_\omega$ whose input is the extracted feature from $\bar\pi_\phi(s)$. Here we consider two designs for the meta-critic $h_\omega$: using our joint feature alone (Eq. (10)), or augmenting the joint feature with states and actions (Eq. (11)):
$$h_\omega(d_{\text{trn}};\phi) = \frac{1}{N}\sum_{i=1}^{N} f_\omega\big(\bar\pi_\phi(s_i)\big), \quad (10)$$
$$h_\omega(d_{\text{trn}};\phi) = \frac{1}{N}\sum_{i=1}^{N} f_\omega\big(\bar\pi_\phi(s_i), s_i, a_i\big). \quad (11)$$
$h_\omega$ serves as an auxiliary critic whose input is based on the batch-wise set-embedding [43] of our joint actor-state feature. That is, $d_{\text{trn}}$ is a randomly sampled mini-batch of transitions from the replay buffer; the $s$ (and $a$) of these transitions are fed to $h_\omega$, and we finally obtain the meta-critic-provided loss for $d_{\text{trn}}$. Our design in Eq. (11) also includes the cues used in LIRPG and EPG, where $s_i$ and $a_i$ are the inputs of their learned reward and loss respectively. We apply a softplus activation to the final layer of $h_\omega$, following the idea in TD3 that the vanilla critic may over-estimate, so a non-negative additional actor loss can mitigate such over-estimation. Moreover, note that only $s_i$ (and $a_i$) from $d_{\text{trn}}$ are used to calculate $L^{\text{critic}}$ and $L^{\text{mcritic}}_\omega$, while $s_i$, $a_i$, $r_i$ and $s_{i+1}$ are all used for optimising the vanilla critic.
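A sketch of one possible $f_\omega$ consistent with this description (three linear layers, ReLU, softplus output, batch-mean set embedding) is given below; the layer sizes and the optional Eq. (11) inputs follow the text, while anything else is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaCritic(nn.Module):
    """h_omega of Eq. (10)/(11): an MLP over the actor's penultimate-layer
    feature (optionally concatenated with state and action), with a softplus
    output so the auxiliary actor loss is non-negative, averaged over the batch."""
    def __init__(self, feat_dim, extra_dim=0, hidden=100):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim + extra_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 1)

    def forward(self, actor_feat, state=None, action=None):
        x = actor_feat
        if state is not None and action is not None:   # Eq. (11) variant
            x = torch.cat([actor_feat, state, action], dim=-1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.softplus(self.fc3(x)).mean()           # batch-wise set embedding
```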
4 Experiments and Evaluation
We take the algorithms DDPG, TD3 and SAC as our vanilla baselines, and denote their meta-critic enhancements as DDPG-MC, TD3-MC and SAC-MC. All -MC variants augment their built-in vanilla critic with the proposed meta-critic. We take Eq. (10) and $L^{\text{meta}}_{\text{clip}}$ as the default meta-critic setup, and compare alternatives in the ablation study. For our implementation of the meta-critic, we use a three-layer neural network with an input dimension matching $\bar\pi$ (300 in DDPG and TD3, 256 in SAC), two hidden feed-forward layers of 100 hidden nodes each, and ReLU non-linearities between layers.

Implementation Details. We evaluate the methods on a suite of seven MuJoCo tasks [39] in OpenAI Gym [4], two MuJoCo tasks in rllab [5], and the simulated racing car TORCS [22]. For MuJoCo-Gym, we use the latest V2 tasks instead of the V1 tasks used in TD3 and the old SAC [12], without modification to their original environments or rewards. We use the open-source implementations of "OurDDPG" (https://github.com/sfujim/TD3/blob/master/OurDDPG.py), TD3 (https://github.com/sfujim/TD3/blob/master/TD3.py) and SAC (https://github.com/pranz24/pytorch-soft-actor-critic). Here, "OurDDPG" is the re-tuned version of DDPG implemented by Fujimoto et al. [9] with the same hyper-parameters. In the MuJoCo cases we integrate our meta-critic with learning rate 0.001. The details of the TORCS hyper-parameters are in the supplementary material. Our demo code can be viewed at https://github.com/zwfightzw/Meta-Critic.

4.1 Evaluation of Meta-Critic OffP-AC Learning
TD3 and SAC. Figure 3 reports the learning curves for TD3. For some tasks vanilla TD3's performance declines in the long run, while TD3-MC shows improved stability with much higher asymptotic performance. Thus TD3-MC provides comparable or better learning performance in each case, and Table 1 shows the clear improvement in max average return. For SAC in Figure 4, note that we use the most recent update of SAC [13], which is effectively the combination SAC+TD3. Although SAC+TD3 is arguably the strongest existing method, SAC-MC still gives a clear boost to asymptotic performance on many tasks, especially the most challenging one, TORCS.

Comparison vs PPO-LIRPG. Intrinsic Reward Learning for PPO [44] is the method most related to ours in performing online single-task meta-learning of an additional reward/loss. The original PPO-LIRPG was evaluated on a modified environment with hidden rewards; here we apply it to the standard unmodified learning tasks that we aim to improve. Table 1 shows that: (i) in this conventional setting, PPO-LIRPG worsens rather than improves basic PPO performance; (ii) overall, OffP-AC methods generally perform better than on-policy PPO in most environments. This shows the importance of our meta-learning contribution to the off-policy setting. In general, Meta-Critic is preferable to PPO-LIRPG because the latter only provides a scalar reward bonus that helps the policy indirectly via high-variance policy-gradient updates, while ours provides a direct loss.

Summary. Table 1 and Figure 5 summarize all the results by max average return. SAC-MC generally performs best, and the -MC variants are generally comparable or better than their corresponding vanilla alternatives. The -MC variants also usually provide reduced variance in return compared to their baselines.

4.2 Further Analysis
Loss and Optimisation Analysis. We take a tabular MDP [6] ($|S| = 2$, $|A| = 2$) as an example, using DDPG. Figure 6 first reports the actor's normal loss $L^{\text{critic}}$, the introduced $h_\omega$ (i.e., $L^{\text{mcritic}}_\omega$), and $L^{\text{meta}}_{\text{clip}}$ over 5 trials. We also plot model optimisation trajectories (pink dots) via a 2D weight-space slice in the right part of Figure 6, plotted over the average reward surface. Following the network visualization in Li et al. [18], we calculate the subspace to plot as follows: let $\phi_i$ denote the model parameters at episode $i$, with the final estimate $\phi_n$ (here $n = 100$). We apply PCA to the matrix $M = [\phi_0 - \phi_n, \dots, \phi_{n-1} - \phi_n]$ and take the two most explanatory directions of this optimisation path. Parameters are then projected onto the plane defined by these directions for plotting, and models at each point are densely evaluated to obtain average reward.
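A small sketch of this projection step (using scikit-learn's PCA; array shapes and the helper name are assumptions) could look as follows.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_training_path(param_history):
    """Project a recorded training path onto its two most explanatory PCA
    directions, following the visualisation of Li et al. [18]: PCA is fit on
    the differences phi_i - phi_n, and every checkpoint is projected onto the
    resulting plane (the final solution phi_n maps to the origin)."""
    phis = np.stack([p.ravel() for p in param_history])   # (n+1, d) flattened params
    diffs = phis[:-1] - phis[-1]                           # rows phi_i - phi_n, i < n
    pca = PCA(n_components=2).fit(diffs)
    coords = pca.transform(phis - phis[-1])                # 2-D coordinates of the path
    return coords, pca.components_
```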
Figure 6 shows: (i) DDPG-MC converges faster to a lower value of $L^{\text{critic}}$, demonstrating the meta-critic's ability to accelerate learning. (ii) The meta-loss is randomly initialised at the start, but as $\omega$ begins to be trained via meta-test on validation data, the meta-loss drops swiftly below zero, and $\phi_{\text{new}}$ then becomes better than $\phi_{\text{old}}$. In the late stage, the meta-loss goes towards zero, indicating that all of $h_\omega$'s knowledge has been distilled to help the actor. Thus the meta-critic is helpful in defining better update directions in the early stages of learning (but note that it can still impact later-stage learning via changing choices made early). (iii) $L^{\text{mcritic}}_\omega$ converges smoothly under the supervision of the meta-loss. (iv) DDPG-MC takes a very direct and fast optimisation path to the high-reward zone of parameter space, while vanilla DDPG moves slowly through the low-reward space before finally finding the direction to the high-reward zone.

Ablation on $h_\omega$ design. We run Walker2d under SAC-MC with the alternative $h_\omega$ from Eq. (11) or in MetaReg [2] format (inputting actor parameters directly). In Table 2, we record the max average return and sum average return (area under the average reward curve) of evaluations over all time steps. Eq. (11) achieves the highest max average return and our default $h_\omega$ (Eq. (10)) attains the highest mean average return. We also see some improvement for $h_\omega(\phi)$ in MetaReg format, but its huge number (73,484) of parameters is expensive. Overall, all meta-critic designs provide at least a small improvement over vanilla SAC.

Ablation on meta-loss design. We considered two meta-loss designs in Eqs. (8) and (9). For $L^{\text{meta}}_{\text{clip}}$ in Eq. (9), we use $L^{\text{critic}}(d_{\text{val}};\phi_{\text{old}})$ as a baseline to improve the numerical stability of the gradient update. To evaluate this design, we also compare against the vanilla $L^{\text{meta}}$ of Eq. (8). The last column in Table 2 shows that vanilla $L^{\text{meta}}$ barely improves on vanilla SAC, validating our meta-loss design.

Controlling for compute cost and parameter count. We find that the meta-critic increases compute cost by 15-30% and parameter count by 10% over the baselines during training (the latter is negligible, as it is small compared to the replay buffer's memory footprint); this is primarily attributable to the cost of evaluating the meta-loss $L^{\text{meta}}_{\text{clip}}$ and hence $L^{\text{mcritic}}_\omega$. To investigate whether the benefit of the meta-critic can be replicated simply by increasing compute expenditure or model size, we perform control experiments that increase the vanilla baselines' compute budget or parameter count to match the -MC variants. Specifically, if the meta-critic takes K% more compute than the baseline, then we re-run the baseline with K% more update steps per iteration. This '+updates' condition provides the baseline with more mini-batch samples while controlling the number of environment interactions. Note that due to implementation constraints of SAC, increasing updates in 'SAC+updates' requires taking at least 2x gradient updates per environment step compared to SAC and SAC-MC. Thus it takes 100% more updates than SAC and significantly more compute time than SAC-MC. To control for parameter count, if the meta-critic takes N% more parameters than the baseline, then we increase the baseline's network size by N% more parameters by linearly scaling up the size of all hidden layers ('+params'). The max average return results for the seven tasks in these control experiments are shown in Table 3, and the detailed learning curves are in the supplementary material. Overall, there is no consistent benefit in providing the baseline with more compute iterations or parameters, and in many environments they perform worse than the baseline or even fail entirely, especially in the '+updates' condition.
Thus the -MC variants' good performance cannot be replicated simply by a corresponding increase in gradient steps or parameter count for the baseline.

Discussion. We introduce an auxiliary meta-critic that goes beyond the information available to the vanilla critic by leveraging measured actor learning progress (Eq. (9)). This is a generic module that can potentially improve any derivative-based off-policy actor-critic RL method for a minor overhead at training time and no overhead at test time, and it can be applied directly to single tasks without requiring task families, as most other meta-RL methods do [3, 7, 15, 30]. Our method is myopic, in that it uses a single inner (base) step per outer (meta) step. A longer-horizon look-ahead may ultimately lead to superior performance; however, this incurs the cost of additional higher-order gradients and associated memory use, and the risk of unstable high-variance gradients [21, 29]. New meta-optimizers [24] may ultimately enable these issues to be solved, but we leave this to future work.

5 Conclusion
We present Meta-Critic, a derivative-based auxiliary critic module for off-policy actor-critic reinforcement learning methods that can be meta-learned online during single-task learning. The meta-critic is trained to provide an additional loss for the actor to assist actor learning progress, and it leads to long-run performance gains in continuous control. This meta-critic module can be flexibly incorporated into various contemporary OffP-AC methods to boost performance. In future work, we plan to apply the meta-critic to conventional meta-learning with multi-task and multi-domain RL.

Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (No. 61751208), the Advanced Research Program (No. 41412050202), and the Engineering and Physical Sciences Research Council of the UK (EPSRC), Grant number EP/S000631/1.

Broader Impact
We introduced a framework for meta-RL where learning is improved through the addition of an auxiliary meta-critic trained online to maximise learning progress. This technology could benefit all current and potential future downstream applications of reinforcement learning where learning speed and/or asymptotic performance can still be improved, such as game-playing agents and robot control. Faster reinforcement learning algorithms such as meta-critic could help to reduce the energy requirements of training agents, which can add up to a significant environmental cost [35]; and they bring us one step closer to enabling learning-based control of physical robots, which is currently rare due to the sample inefficiency of RL algorithms relative to the limited robustness of real robots to the physical wear and tear of prolonged operation. Returning to our specific algorithmic contribution, introducing learnable reward functions, rather than relying solely on manually specified rewards, introduces a certain additional level of complexity and associated risk above that of conventional reinforcement learning. If the agent participates in defining its own reward, one would like to be able to interpret the learned reward function and validate that it is reasonable and will not lead the robot to learn undesirable behaviours. This suggests that the development of explainable AI techniques suited to reward-function analysis could be a good topic for future research.
1. What is the focus and contribution of the paper regarding off-policy reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its empirical evaluation?
3. What are the weaknesses of the paper, especially in its comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the proposed method, its applications, or its limitations?
Summary and Contributions
In this paper, the authors focus on actor-critic algorithms for off-policy reinforcement learning. Specifically, the authors propose a method for meta-learning an auxiliary critic function to help improve the actor. The auxiliary critic function is learned with a meta-optimization objective such that the actor achieves low loss on the normal critic after being trained with the auxiliary critic. The policy is then trained with both the normal critic loss and the auxiliary critic loss. The authors implement the proposed method on top of 3 commonly used actor-critic algorithms: DDPG, TD3 and SAC. The authors evaluate the proposed method in MuJoCo robot locomotion environments and a simulated racing car environment. The empirical evidence suggests that the proposed method achieves almost uniform improvements compared to the baselines.

Strengths
The empirical evaluation of the proposed method is comprehensive. The authors evaluate the proposed meta-critic method on top of 3 common actor-critic algorithms and in a wide variety of environments. The results show almost uniform improvement over all base algorithms and environments. It can be clearly seen that the proposed method improves the sample efficiency of actor-critic methods. The proposed framework for meta-learning an auxiliary critic is general and is therefore applicable to many variants of actor-critic algorithms. More flexible designs of the auxiliary critic could also be applied to obtain better performance.

Weaknesses
I am not very convinced that it is fair to compare the proposed method to the vanilla actor-critic algorithms (DDPG, TD3 and SAC) in their default configurations. With the proposed method, at every gradient step, two different batches of data are used to train the meta-critic. This means that the meta-training process for the proposed method is implicitly taking more gradient steps compared to the vanilla actor-critic algorithms. The paper does include experiments in the appendix about taking more gradient steps by directly scaling the number up proportionally to the computation. While this is an important comparison, it would also be important to evaluate the baselines with different numbers of gradient steps per environment step (ranging from 2 to 8), because taking more gradient steps per environment step could speed up training, but taking too many steps would result in overfitting. The design of the meta-critic architecture also seems a little arbitrary. While the authors include an ablation study comparing two designs and the meta-regularization, it would be great to include experiments with other types of meta-critic designs, such as one taking policy actions, Q values and next states as input.
NIPS
Title Online Meta-Critic Learning for Off-Policy Actor-Critic Methods Abstract Off-Policy Actor-Critic (OffP-AC) methods have proven successful in a variety of continuous control tasks. Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a flexible meta-critic framework based on observing the learning process and metalearning an additional loss for the actor that accelerates and improves actor-critic learning. Compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic is designed for off-policy based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning benefits to a variety of continuous control tasks when combined with contemporary OffP-AC methods DDPG, TD3 and SAC. 1 Introduction Off-policy Actor-Critic (OffP-AC) methods are currently central in deep reinforcement learning (RL) research due to their greater sample efficiency compared to on-policy alternatives. On-policy learning requires new trajectories to be collected for each update to the policy, and is expensive as the number of gradient steps and samples per step increases with task-complexity even for contemporary TRPO [33], PPO [34] and A3C [27] algorithms. Off-policy methods, such as DDPG [20], TD3 [9] and SAC [13] achieve greater sample efficiency as they can learn from randomly sampled historical transitions without a time sequence requirement, making better use of past experience. The critic estimates action-value (Q-value) function using a differentiable function approximator, and the actor updates its policy parameters in the direction of the approximate action-value gradient. Briefly, the critic provides a loss to guide the actor, and is trained in turn to estimate the environmental action-value under the current policy via temporal-difference learning [38]. In all these cases the learning objective function is hand-crafted and fixed. Recently, meta-learning [14] has become topical as a paradigm to accelerate RL by learning aspects of the learning strategy, for example, learning fast adaptation strategies [7, 30, 31], losses [3, 15, 17, 36], optimisation strategies [6], exploration strategies [11], hyperparameters [40, 42], and intrinsic rewards [44]. However, most of these works perform meta-learning on a family of tasks or environments and amortize this huge cost by deploying the trained strategy for fast learning on a new task. ∗Contributed equally. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper we introduce a meta-critic network to enhance OffP-AC learning methods. The metacritic augments the vanilla critic to provide an additional loss to guide the actor’s learning. However, compared to the vanilla critic, the meta-critic is explicitly (meta)-trained to accelerate the learning process rather than merely estimate the action-value function. Overall, the actor is trained by both critic and meta-critic provided losses, the critic is trained by temporal-difference as usual, and crucially the meta-critic is trained to generate maximum learning progress in the actor. 
Both the critic and meta-critic use randomly sampled transitions for effective OffP-AC learning, providing superior sample efficiency compared to existing on-policy meta-learners. We emphasize that meta-critic can be successfully learned online within a single task. This is in contrast to the currently widely used meta-learning paradigm – where entire task families are required to provide enough data for meta-learning, and to provide new tasks to amortize the huge cost of meta-learning. Our framework augments vanilla AC learning with an additional meta-learned critic, which can be seen as providing intrinsic motivation towards optimum actor learning progress [28]. As analogously observed in recent meta-learning studies [8], our loss-learning can be formalized as bi-level optimisation with the upper level being meta-critic learning, and lower level being conventional learning. We solve this joint optimisation by iteratively updating the meta-critic and base learner online in a single task. Our strategy is related to the meta-loss learning in EPG [15], but learned online rather than offline, and integrated with OffP-AC rather than their on-policy policy-gradient learning. The most related prior work is LIRPG [44], which meta-learns an intrinsic reward online. However, their intrinsic reward just provides a helpful scalar offset to the environmental reward for on-policy trajectory optimisation via policy-gradient [37]. In contrast our meta-critic provides a loss for direct actor optimisation using sampled transitions, and achieves dramatically better sample efficiency than LIRPG reward learning. We evaluate several continuous control benchmarks and show that online meta-critic learning can improve contemporary OffP-AC algorithms including DDPG, TD3 and SAC. 2 Background and Related Work Policy-Gradient (PG) RL Methods. Reinforcement learning involves an agent interacting with environment E. At each time t, the agent receives an observation st, takes a (possibly stochastic) action at based on its policy π : S → A, and receives a reward rt and new state st+1. The tuple (st, at, rt, st+1) describes a state transition. The objective of RL is to find the optimal policy πφ, which maximizes the expected cumulative return J . In on-policy RL, J is defined as the discounted episodic return based on a sequential trajectory over horizon H: (s0, a0, r0, s1 · · · , sH , aH , rH , sH+1). J = Ert,st∼E,at∼π [∑H t=0 γ trt ] . In on-policy AC, r is represented by a surrogate state-value V (st) from its critic. Since J is only a scalar value that is not differentiable, the gradient of J with respect to policy πφ has to be optimised under the policy gradient theorem [37]: ∇φJ(φ) = E [J ∇φ log πφ(at|st)]. However, with respect to sample efficiency, even exploiting tricks like importance sampling and improved application of A2C [44], the use of full trajectories is less effective than the use of individual transitions by off-policy methods. Off-policy actor-critic architectures provide better sample efficiency by reusing past experience (previously collected transitions). DDPG [20] borrows two main ideas from Deep Q Networks [25, 26]: a replay buffer and a target Q network to give consistent targets during temporal-difference backups. TD3 (Twin Delayed Deep Deterministic policy gradient) [9] develops a variant of Double Q-learning by taking the minimum value between a pair of critics to limit over-estimation, and the computational cost is reduced by using a single actor optimised with respect to Qθ1 . 
SAC (Soft Actor-Critic) [12, 13] proposes a maximum entropy RL framework where its stochastic actor aims to simultaneously maximize expected action-value and entropy. The latest version of SAC [13] also includes the “the minimum value between both critics” idea in its implementation. Specifically, in these off-policy AC methods, parameterized policies πφ can be directly updated by defining actor loss in terms of the expected return J(φ) and taking its gradient ∇φJ(φ), where J(φ) depends on the action-value Qθ(s, a). Based on a batch of transitions randomly sampled from the buffer, the loss for actor provided by the critic is basically calculated as: Lcritic = −J(φ) = −Es∼pπQθ(s, a)|a=πφ(s). (1) Specifically, the loss Lcritic for actor in TD3 and SAC is calculated as Eq. (2) and Eq. (3) respectively: LcriticTD3 = −Es∼pπQθ1(s, a)|a=πφ(s); (2) LcriticSAC = Es∼pπ [α log (πφ(a|s))−Qθ(s, a)|a=πφ(s)]. (3) The actor is then updated as ∆φ = α∇φLcritic, following the critic’s gradient to increase the likelihood of actions that achieve a higher Q-value. Meanwhile, the critic θ uses Q-learning updates to estimate the action-value function: θ ← arg min θ E(Qθ(st, at)− rt − γQθ(st+1, π(st+1))2. (4) Meta Learning for RL. Meta-learning (a.k.a. learning to learn) [7, 14, 32] has received a resurgence in interest recently due to its potential to improve learning performance and sample efficiency in RL [11]. Several studies learn optimisers that provide policy updates with respect to known loss or reward functions [1, 6, 23]. A few studies learn hyperparameters [40, 42], loss functions [3, 15, 36] or rewards [44] that steer the learning of standard optimisers. Our meta-critic framework is in the category of loss-function meta-learning, but unlike most of these we are able to meta-learn the loss function online in parallel to learning a single extrinsic task rather. No costly offline learning on a task family is required as in Houthooft et al. [15], Sung et al. [36]. Most current Meta-RL methods are based on on-policy policy-gradient, limiting sample efficiency. For example, while LIRPG [44] is one of the few prior works to attempt online meta-learning, it is ineffective in practice due to only providing a scalar reward increment rather than a loss for direct optimisation. A few meta-RL studies have begun to address off-policy RL, for conventional multi-task meta-learning [30] and for optimising transfer vs forgetting in continual learning of multiple tasks [31]. The contribution of our Meta-Critic is to enhance state-of-the-art single-task OffP-AC RL with online meta-learning. Loss Learning. Loss learning has been exploited in ‘learning to teach’ [41] and surrogate loss learning [10, 16] where a teacher network predicts the parameters of a manually designed loss in the supervised learning. In contrast our meta-critic is itself a differentiable loss, and is designed for use in RL. Other applications learn losses that improve model robustness to out of distribution samples [2, 19]. Some recent loss learning studies in RL focus mainly on the multi-task adaptation scenarios [3, 15, 36] or the generalization to entirely different environments [17]. Our loss learning architecture is related to Li et al. [19], but designed for accelerating single-task OffP-AC RL rather than improving robustness in multi-domain supervised learning. 3 Methodology We aim to learn a meta-critic which augments the vanilla critic by providing an additional loss Lmcriticω for the actor. 
The vanilla loss for the policy (actor) is Lcritic given by the conventional critic. The actor is trained by Lcritic and Lmcriticω via stochastic gradient descent. The meta-critic parameter ω is optimized by meta-learning to accelerate actor learning progress. Here we follow the notation in TD3 and SAC that φ and θ denote actors and critics respectively. Algorithm Overview. We train a meta-critic loss Lmcriticω that augments the vanilla critic Lcritic to enhance actor learning. Specifically, it should lead to the actor φ having improved performance on the normal task, as measured by Lcritic on the validation data, after learning on both meta-critic and vanilla critic losses. This can be seen as a bi-level optimisation problem1 [8, 14, 29] of the form: ω = arg min ω Lmeta(dval;φ ∗) s.t. φ∗ = arg min φ (Lcritic(dtrn;φ) + L mcritic ω (dtrn;φ)), (5) where we can assume Lmeta(·) = Lcritic(·) for now. dtrn and dval are different transition batches from replay buffer. Here the lower-level optimisation trains actor φ to minimize both the normal loss and meta-critic-provided loss on training samples. The upper-level optimisation further requires meta-critic ω to have produced a learned actor φ∗ that minimizes a meta-loss that measures actor’s normal performance on a set of validation samples, after being trained by meta-critic. Note that in principle the lower-level optimisation could purely rely on Lmcriticω analogously to the procedure in EPG [15], but we find optimising their sum greatly increases learning stability and speed. Eq. (5) is satisfied when meta-critic successfully trains the actor for good performance on the normal task 1See Franceschi et al. [8] for a discussion on convergence of bi-level algorithms. Algorithm 1 Online Meta-Critic Learning for OffP-AC RL φ, θ, ω,D ← ∅ // Initialise actor, critic, meta-critic and buffer for each iteration do for each environment step do at ∼ πφ(at|st) // Select action according to the current policy st+1 ∼ p(st+1|st, at), rt // Observe reward rt and new state st+1 D ← D ∪ {(st, at, rt, st+1)} // Store the transition in the replay buffer end for for each gradient step do Sample mini-batch dtrn from D Update θ ← Eq. (4) // Update the critic parameters meta-train: Lcritic ← Eqs. (1), (2) or (3) // Vanilla-critic-provided loss for actor Lmcriticω ← Eqs. (10) or (11) // Meta-critic-provided loss for actor φold = φ− η∇φLcritic // Update actor according to Lcritic only φnew = φold − η∇φLmcriticω // Update actor according to Lcritic and Lmcriticω meta-test: Sample mini-batch dval from D Lmeta(dval;φnew) or L meta clip(dval;φold, φnew)← Eqs. (8) or (9) // Meta-loss meta-optimisation φ← φ− η(∇φLcritic +∇φLmcriticω ) // Update the actor parameters ω ← ω − η∇ωLmeta or ω − η∇ωLmetaclip // Update the meta-critic parameters end for end for=0 as measured by validation meta loss. The update of vanilla-critic is also in the lower loop, but as it updates as usual, we focus on the actor and meta-critic optimisation for simplicity of exposition. In this setup the meta-critic is a neural network hω(dtrn;φ) that takes as input some featurisation of the actor φ and the states and actions in dtrn. The meta-critic network must produce a scalar output, which we can then treat as a loss Lmcriticω := hω , and must be differentiable with respect to φ. We next discuss the overall optimisation flow and the specific meta-critic architecture. Meta-Optimisation Flow. To optimise Eq. 
(5), we iteratively update the meta-critic parameter ω (upper-level) and actor and vanilla-critic parameters φ and θ (lower-level). At each iteration, we perform: (i) Meta-train: Sample a mini-batch of transitions and putatively update policy φ based on the vanilla-critic-provided Lcritic and the meta-critic-provided Lmcriticω losses. (ii) Meta-test: Sample another mini-batch of transitions to evaluate the performance of the updated policy according to Lmeta. (iii) Meta-optimisation: Update meta-critic ω to maximize the performance on the validation batch, and perform the real actor update according to both losses. Thus the meta-critic co-evolves with the actor as they are trained online and in parallel. Figure 1 and Alg. 1 summarize the process and the details of each step are explained next. Meta-critic can be flexibly integrated with any OffP-AC algorithms, and the further implementation details for DDPG, TD3 and SAC are in the supplementary material. Updating Actor Parameters (φ). During metatrain, we sample a mini-batch of transitions dtrn = {(si, ai, ri, si+1)} with batch size N from the replay buffer D. We update the policy using both losses as: φnew = φ− η ∂ Lcritic(dtrn) ∂φ − η ∂ L mcritic ω (dtrn) ∂φ . (6) We also compute a separate update: φold = φ− η ∂Lcritic(dtrn) ∂φ (7) that only leverages the vanilla-critic-provided loss. If meta-critic provided a beneficial source of loss, φnew should be a better parameter than φ, and in particular a better parameter than φold. We will use this comparison in the next meta-test step. Updating Meta-Critic Parameters (ω). To train the meta-critic, we sample another mini-batch of transitions: dval = {(svali , avali , rvali , svali+1)} with batch size M . The use of a validation batch for bi-level meta-optimisation [8, 29] ensures the meta-learned component does not overfit. As our framework is off-policy, this does not incur any sample efficiency cost. The meta-critic is then updated by a meta-loss ω ← ω − ηLmeta(·) that measures actor performance after learning. Meta-Loss Definition. The most intuitive meta-loss definition is the validation performance of updated actor φnew as measured by the normal critic: Lmeta = Lcritic(dval;φnew). (8) However, we find it helpful for optimisation efficiency and stability to optimise the clipped difference between updates with- and without meta-critic’s input as: Lmetaclip = tanh(L critic(dval;φnew)− Lcritic(dval;φold)). (9) This is simply a monotonic re-centering and re-scaling of Lcritic. (The parameter ω that minimizes Lmetaclip as Eq. (9) also minimizes L meta of Eq. (8) and vice-versa.) Note that in Eq. (9) the updated actor φnew depends on the feedback given by meta-critic ω and φold does not. Thus only the first term is optimised for ω. In this setup the Lcritic(dval;φnew) term should obtain high reward/low loss on the validation batch and the latter Lcritic(dval;φold) provides a baseline, analogous to the baseline widely used to accelerate and stabilize the policy-gradient RL. tanh ensures meta-loss range is always nicely distributed in (−1, 1), and caps the magnitude of the meta-gradient. In essence, meta-loss is for the agent to ask itself: “Did meta-critic learning improve validation performance compared to vanilla learning?”, and adjusts meta-critic ω accordingly. We will compare the options Lmeta and Lmetaclip later. Designing Meta-Critic (hω). The meta-critic hω implements the additional loss for actor. 
The design-space for hω has several requirements: (i) Its input must depend on the policy parameters φ, because this meta-critic-provided loss is also used to update the policy. (ii) It should be permutation invariant to transitions in dtrn, i.e., it should not make a difference if we feed the randomly sampled transitions indexed [1,2,3] or [3,2,1]. A naivest way to achieve (i) is given in MetaReg [2] which meta-learns a parameter regularizer: hω(φ) = ∑ i ωi|φi|. Although this form of hω acts directly on φ, it does not exploit state information, and introduces a large number of parameters in hω, as φ may be a high-dimensional neural network. Therefore, we design a more efficient and effective form of hω that also meets both of these requirements. Similar to the feature extractor in supervised learning, the actor needs to analyse and extract information from states for decision-making. We assume the policy network can be represented as πφ(s) = π̂(π̄(s)) and decomposed into the feature extraction π̄φ and decision-making π̂φ (i.e., the last layer of the full policy network) modules. Thus the output of the penultimate layer of full policy network is just the output of feature extraction π̄φ(s), and such output of feature jointly encodes φ and s. Given this encoding, we implement hw(dtrn;φ) as a three-layer multi-layer perceptron fω whose input is the extracted feature from π̄φ(s). Here we consider two designs for meta-critic (hω): using our joint feature alone (Eq. (10)) or augmenting the joint feature with states and actions (Eq. (11)): hw(dtrn;φ) = 1 N N∑ i=1 fω(π̄φ(si)), (10) hw(dtrn;φ) = 1 N N∑ i=1 fω(π̄φ(si), si, ai). (11) hω provides as an auxiliary critic whose input is based on the batch-wise set-embedding [43] of our joint actor-state feature. That is to say, dtrn is a randomly sampled mini-batch transitions from the replay buffer, and then s (and a) of transitions are inputted to hω, and finally we obtain the meta-critic-provided loss for dtrn. Here, our design of Eq. (11) also includes the cues in LIRPG and EPG where si and ai are used as the input of their learned reward and loss respectively. We set a softplus activation to the final layer of hω, following the idea in TD3 that vanilla critic may over-estimate and so the a non-negative additional actor loss can mitigate such over-estimation. Moreover, note that only si (and ai) from dtrn are used to calculate Lcritic and Lmcriticω , while si, ai, ri and si+1 are all used for optimising the vanilla critic. 4 Experiments and Evaluation We take the algorithms DDPG, TD3 and SAC as our vanilla baselines, and denote their enhancements by meta-critic as DDPG-MC, TD3-MC, SAC-MC. All -MCs augment their built-in vanilla critic with the proposed meta-critic. We take Eq. (10) and Lmetaclip as the default meta-critic setup, and compare alternatives in the ablation study. For our implementation of meta-critic, we use a three-layer neural network with an input dimension of π̄ (300 in DDPG and TD3, 256 in SAC), two hidden feed-forward layers of 100 hidden nodes each, and ReLU non-linearity between layers. Implementation Details. We evaluate the methods on a suite of seven MuJoCo tasks [39] in OpenAI Gym [4], two MuJoCo tasks in rllab [5], and a simulated racing car TORCS [22]. For MuJoCo-Gym, we use the latest V2 tasks instead of V1 used in TD3 and the old-SAC [12] without modification to their original environment or reward. We use the open-source implementations “OurDDPG”2, TD33 and SAC4. 
Here, “OurDDPG” is the re-tuned version of DDPG implemented in Fujimoto et al. [9] with the same hyper-parameters. In MuJoCo cases we integrate our meta-critic with learning rate 0.001. The details of TORCS hyper-parameters are in the supplementary material. Our demo code can be viewed on https://github.com/zwfightzw/Meta-Critic. 4.1 Evaluation of Meta-Critic OffP-AC Learning TD3 and SAC. Figure 3 reports the learning curves for TD3. For some tasks the vanilla TD3’s performance declines in the long run, while TD3-MC shows improved stability with much higher asymptotic performance. Thus TD3-MC provides comparable or better learning performance in each case, while Table 1 shows the clear improvement in the max average return. For SAC in Figure 4, note that we use the most recent update of SAC [13], which is actually the combination of SAC+TD3. Although SAC+TD3 is arguably the strongest existing method, SAC-MC still gives a clear boost on the asymptotic performance for many tasks, especially the most challenging TORCS. 2https://github.com/sfujim/TD3/blob/master/OurDDPG.py 3https://github.com/sfujim/TD3/blob/master/TD3.py 4https://github.com/pranz24/pytorch-soft-actor-critic Comparison vs PPO-LIRPG. Intrinsic Reward Learning for PPO [44] is the most related method to our work in performing online single-task meta-learning of an additional reward/loss. Their original PPO-LIRPG evaluated on a modified environment with hidden rewards. Here we apply it to the standard unmodified learning tasks that we aim to improve. Table 1 tells that: (i) In this conventional setting, PPO-LIRPG worsens rather than improves basic PPO performance. (ii) Overall OffP-AC methods generally perform better than on-policy PPO for most environments. This shows the importance of our meta-learning contribution to the off-policy setting. In general Meta-Critic is preferred compared to PPO-LIRPG because the latter only provides a scalar reward bonus that helps the policy indirectly via high-variance policy-gradient updates, while ours provides a direct loss. Summary. Table 1 and Figure 5 summarize all the results by max average return. SAC-MC generally performs best and -MCs are generally comparable or better than their corresponding vanilla alternatives. -MCs usually provide improved variance in return compared to their baselines. 4.2 Further Analysis Loss and Optimisation Analysis. We take tabular MDP [6] (|S| = 2, |A| = 2) as an example using DDPG. Figure 6 first reports the normal Lcritic of actor, and the introduced hω (i.e., Lmcriticω ) and Lmetaclip over 5 trials. We also plot model optimisation trajectories (pink dots) via a 2D weight-space slice in right part of Figure 6. They are plotted over the average reward surface. Following the network visualization in Li et al. [18], we calculate the subspace to plot as: Let φi denote model parameters at episode i and the final estimate as φn (here n = 100). We apply PCA to matrix M = [φ0 − φn, . . . , φn−1 − φn], and take the two most explanatory directions of this optimisation path. Parameters are then projected onto the plane defined by these directions for plotting; and models at each point are densely evaluated to get average reward. Figure 6 shows: (i) DDPG-MC convergences faster to a lower value of Lcritic, demonstrating the meta-critic’s ability to accelerate learning. (ii) Meta-loss is randomly initialised at the start, but as ω begins to be trained via meta-test on validation data, meta-loss drops swiftly below zero and then φnew is better than φold. 
In the late stage, the meta-loss goes towards zero, indicating that all of hω's knowledge has been distilled to help the actor. Thus the meta-critic is most helpful in defining better update directions in the early stages of learning (though it can still affect later-stage learning by changing the choices made early). (iii) Lmcriticω converges smoothly under the supervision of the meta-loss. (iv) DDPG-MC takes a very direct and fast optimisation path to the high-reward zone of parameter space, while vanilla DDPG moves slowly through the low-reward region before finally finding the direction to the high-reward zone.

Ablation on hω design. We run Walker2d under SAC-MC with the alternative hω from Eq. (11), or with hω in the MetaReg [2] format (taking the actor parameters directly as input). In Table 2, we record the max average return and the sum average return (area under the average reward curve) of evaluations over all time steps. Eq. (11) achieves the highest max average return, and our default hω (Eq. (10)) attains the highest sum average return. We also see some improvement for hω(φ) in MetaReg format, but its huge number of parameters (73,484) is expensive. Overall, all meta-critic designs provide at least a small improvement over vanilla SAC.

Ablation on meta-loss design. We considered two meta-loss designs in Eqs. (8 & 9). For Lmetaclip in Eq. (9), we use Lcritic(dval;φold) as a baseline to improve the numerical stability of the gradient update. To evaluate this design, we also compare against the vanilla Lmeta of Eq. (8). The last column of Table 2 shows that vanilla Lmeta barely improves on vanilla SAC, validating our meta-loss design (a schematic sketch of this computation is given at the end of this subsection).

Controlling for compute cost and parameter count. Meta-critic increases compute cost by 15-30% and parameter count by 10% over the baselines during training (the latter is negligible, as it is small compared to the replay buffer's memory footprint); this is primarily attributable to the cost of evaluating the meta-loss Lmetaclip and hence Lmcriticω. To investigate whether the benefit of meta-critic can be replicated simply by increasing compute expenditure or model size, we perform control experiments in which the vanilla baselines' compute budget or parameter count is increased to match the -MCs. Specifically, if meta-critic takes K% more compute than the baseline, then we re-run the baseline with K% more update steps per iteration. This '+updates' condition provides the baseline with more mini-batch samples while controlling the number of environment interactions. Note that due to implementation constraints of SAC, increasing updates in 'SAC+updates' requires taking at least 2x gradient updates per environment step compared to SAC and SAC-MC; it therefore takes 100% more updates than SAC and significantly more compute time than SAC-MC. To control for parameter count, if meta-critic takes N% more parameters than the baseline, then we increase the baseline's network size by N% by linearly scaling up the size of all hidden layers ('+params'). The max average return results for the seven tasks under these control experiments are shown in Table 3, and the detailed learning curves are in the supplementary material. Overall, there is no consistent benefit in providing the baseline with more compute iterations or parameters; in many environments the augmented baselines perform worse than the original baseline or even fail entirely, especially in the '+updates' condition.
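The meta-loss ablation above can be made concrete with a short sketch. Eqs. (8) and (9) are defined earlier in the paper; the sketch below only assumes that Eq. (9) compares the validation critic loss after the hω-assisted actor update against the pre-update loss used as a baseline. The argument names, the hypothetical critic_loss_fn helper, and the tanh bounding are assumptions for illustration, not the paper's exact definition.

```python
import torch

def clipped_meta_loss(critic_loss_fn, d_val, phi_old, phi_new):
    """Eq. (9)-style meta-loss used to train the meta-critic parameters w.

    critic_loss_fn(d_val, phi) evaluates the actor's critic-provided loss L_critic
    on validation transitions d_val under policy parameters phi. phi_new must
    result from phi_old via a differentiable update that used the meta-critic
    loss h_w, so that gradients of this meta-loss flow back to w.
    """
    loss_new = critic_loss_fn(d_val, phi_new)           # after the h_w-assisted update
    loss_old = critic_loss_fn(d_val, phi_old).detach()  # baseline; no gradient needed
    # Negative values mean the h_w-assisted update improved the actor on d_val;
    # tanh is a stand-in for the bounding applied in Eq. (9).
    return torch.tanh(loss_new - loss_old)
```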
Overall, then, the -MCs' good performance cannot simply be replicated by a corresponding increase in the gradient steps or parameter count given to the baseline.

Discussion. We introduce an auxiliary meta-critic that goes beyond the information available to the vanilla critic by leveraging measured actor learning progress (Eq. (9)). This is a generic module that can potentially improve any derivative-based off-policy actor-critic RL method at a minor overhead at training time and no overhead at test time, and it can be applied directly to single tasks, without requiring task families as most other meta-RL methods do [3, 7, 15, 30]. Our method is myopic, in that it uses a single inner (base) step per outer (meta) step. A longer-horizon look-ahead may ultimately lead to superior performance, but it incurs the cost of additional higher-order gradients and associated memory use, and the risk of unstable, high-variance gradients [21, 29]. New meta-optimizers [24] may ultimately enable these issues to be solved, but we leave this to future work.

5 Conclusion

We present Meta-Critic, a derivative-based auxiliary critic module for off-policy actor-critic reinforcement learning methods that can be meta-learned online during single-task learning. The meta-critic is trained to provide an additional loss that assists the actor's learning progress, and it leads to long-run performance gains in continuous control. The meta-critic module can be flexibly incorporated into various contemporary OffP-AC methods to boost performance. In future work, we plan to apply the meta-critic to conventional meta-learning with multi-task and multi-domain RL.

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (No. 61751208), the Advanced Research Program (No. 41412050202), and the Engineering and Physical Sciences Research Council of the UK (EPSRC), Grant number EP/S000631/1.

Broader Impact

We introduced a framework for meta-RL in which learning is improved through the addition of an auxiliary meta-critic trained online to maximise learning progress. This technology could benefit all current and potential future downstream applications of reinforcement learning where learning speed and/or asymptotic performance can still be improved, such as game-playing agents and robot control. Faster reinforcement learning algorithms such as meta-critic could help to reduce the energy requirements of training agents, which can add up to a significant environmental cost [35]; and they bring us one step closer to enabling learning-based control of physical robots, which is currently rare because the sample inefficiency of RL algorithms sits poorly with the limited robustness of real robots to the physical wear and tear of prolonged operation. Returning to our specific algorithmic contribution, introducing learnable reward functions, rather than relying solely on manually specified rewards, adds a certain level of complexity and associated risk above that of conventional reinforcement learning. If the agent participates in defining its own reward, one would like to be able to interpret the learned reward function and validate that it is reasonable and will not lead to the robot learning undesirable behaviours. This suggests that the development of explainable AI techniques suited to reward function analysis could be a good topic for future research.
1. What is the focus and contribution of the paper regarding meta-reinforcement learning?
2. What are the strengths of the proposed approach, particularly its novelty and significance?
3. What are the weaknesses of the paper, especially regarding its theoretical analysis and experimental presentation?
4. Do you have any concerns about the results' accuracy, particularly for specific tasks?
5. How do the results impact the broader area of reinforcement learning?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
In this paper, the authors propose a meta-critic to use in meta-reinforcement learning.

Strengths
I find the use of a validation set of transitions a sound idea. The idea seems simple, novel and significant. The reviewer is also glad to see that the authors have tested their idea on multiple algorithms and have shown that the idea works (or does not degrade performance), regardless of whether it is used on top of DDPG, TD3, SAC or PPO. This is the main reason I would recommend accepting the paper, as it indicates significance and relevance to a broader range of algorithms.

Weaknesses
I find the theoretical grounding light, but it did not bother me. I think the bigger weakness is the evaluation in Section 4.1 and Figure 2. Those curves look odd: e.g., in Figure 2 for Ant rllab, around 2.5e6 steps, all 5 seeds seem to drop simultaneously (the uncertainty bound is very small throughout the run, yet the mean is noisy). I would look for a bug in the setup, or at least say that the results are presented in a misleading manner. Additionally, in Figure 4 for the TORCS task, all seeds for SAC seem to decay in the same way at the same time. I am skeptical of those results, but presume they are due to a bug in the way the figure was made, and not in the setup.
NIPS
Title Is a Modular Architecture Enough? Abstract Inspired from human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out-of-distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data generating system for most real-world settings is considered to consist of sparsely interacting parts, and endowing models with similar inductive biases will be helpful. However, the field has been lacking in a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential.1 1 Introduction Deep learning research has an established history of drawing inspiration from neuroscience and cognitive science. From the way hidden units combine afferent inputs, to how connectivity and network architectures are designed, many breakthroughs have relied on mimicking brain strategies. It is no surprise then that modularity and attention have been leveraged, often together, in artificial networks in recent years (Bahdanau et al., 2015; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021), with impressive results. Indeed, work from cognitive neuroscience (Baars, 1997; Dehaene et al., 2017) suggests that cortex represents knowledge in a modular way, with different such modules communicating through the bottleneck of working memory (where very few items can simultaneously be represented), in which content is selected by attention mechanisms. In recent work from the AI community (Bengio, 2017; Goyal & Bengio, 2020), it was proposed that these characteristics could correspond to meaningful inductive biases for deep networks, i.e., statistical assumptions about the dependencies between concepts manipulated at the higher levels of cognition. Both sparsity of the dependencies between these high-level variables and the decomposition of knowledge into recomposable pieces that are as independent as possible (Peters et al., 2017; Bengio et al., 2019; Goyal & Bengio, 2020; Ke et al., 2021) would make learning more efficient. Out-of-distribution (OoD) generalization would be facilitated by making it possible to sequentially compose the computations performed by these modules where new situations can be explained by novel combinations of existing concepts. 
Although a number of recent results hinge on such modular architectures (Graves et al., 2014; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Santoro et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021; Locatello et al., 2020; Mittal et al., 2020; Madan et al., 2021), †Correspondence authors sarthmit@gmail.com 1Open-sourced implementation is available at https://github.com/sarthmit/Mod_Arch 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the abundance of tricks and proposed architectural modifications makes it challenging to parse real, usable architectural principles. It is also unclear whether the performance gains obtained by such Mixture-of-Experts (MoE) based modular systems are actually due to good specialization, as is often claimed, or due to other potential confounding factors like ease of optimization. In this work, we extend the analysis from Rosenbaum et al. (2019); Maziarz et al. (2019); Cui & Jaech (2020); Csordás et al. (2020) and propose a principled approach to evaluate, quantify, and analyse common ingredients of modular architectures, supported by either standard MLP-like connectivity, recurrent connections or attention (Bahdanau et al., 2015; Vaswani et al., 2017) operations. To do so, we develop a series of benchmarks and metrics aimed at probing the efficacy of a wide range of modular networks, where computation is factorized. This reveals valuable insights and helps identify not only where current approaches succeed but also when and how they fail. Whereas previous work on disentangling (Bengio, 2013; Higgins et al., 2016; Kim & Mnih, 2018) has focused on factoring out the different high-level variables that explain the data, here we focus on disentangling system modules from each other, the structural ingredients of network that can facilitate this factorization, and how such ingredients relate to the data generating distributions being parsed and processed. Given the recent increased interest in sparse modular systems (Rahaman et al., 2021; Fedus et al., 2021; Du et al., 2021; Mittal et al., 2021), we believe that this work will provide a test-bed for investigating the workings of such models and allow for research into inductive biases that can push such models to achieve good specialization. Through detailed experiments and evaluation metrics, we make the following observations and contributions: • We develop benchmark tasks and metrics based on probabilistically selected rules to quantify two important phenomena in modular systems, the extent of collapse and specialization. • We distill commonly used modularity inductive biases and systematically evaluate them through a series of models aimed at extracting commonly used architectural attributes (Monolithic, Modular, Modular-op and GT-Modular models). • We find that specialization in modular systems leads to significant boosts in performance when there are many underlying rules within a task, but not so much with only few rules. • We find standard modular systems to be often sub-optimal in both their capacity on focusing on the right information as well as in their ability to specialize, suggesting the need for additional inductive biases. 2 Notation / Terminology In this paper, we study how a family of modular systems performs on a common set of tasks, prescribed by a synthetic data generating process which we call rule-based data. 
Below, we introduce the notation for the key ingredients: (1) rules and how they form tasks, (2) modules and how they can take different model architectures, (3) specialization and how we evaluate models. We refer the reader to Figure 1 for an illustration of our setup.

Rules. To properly understand modular systems and analyze their benefits and shortcomings, we consider synthetic settings that allow fine-grained control over different aspects of the task requirements. In particular, operations must be learned on the data-generating distribution illustrated in Equations 1-3, which we also refer to as rules. Details about the exact operations used in the experiments are described in Section 3.

c ∼ Categorical(·)    (1)
x ∼ px(·)    (2)
y | x, c ∼ py(· | x, c)    (3)

Given this distribution, we define a rule to be an expert of this distribution; that is, rule r is defined as py(· | x, c = r), where c is a categorical variable representing context and x is an input sequence. For example, consider x = (1, 2) and c selecting between addition and multiplication. Then, depending on c, the correct output should be either y = 3 or y = 2. More details about the specifics of these data distributions are presented in Section 3. Systems will be trained to infer y given c and x. This simple setup is meant to capture context-dependent tasks on variable data distributions, e.g. reasoning according to different features (e.g. shape, color, etc.). However, unlike such complex systems, ground-truth knowledge of the required operations is known for our synthetic task, allowing for deeper quantitative analysis.

Tasks. A task is described by the set of rules (data-generating distribution) illustrated in Equations 1-3. Different sets {py(· | x, c)}c imply different tasks. For a given number of rules, we train models on multiple tasks to remove bias towards any particular task.

Modules. A modular system comprises a set of neural network modules, each of which can contribute to the overall output. One can see this through the functional form y = ∑_{m=1}^{M} pm ym, where ym denotes the output and pm the activation of the m-th module. Details about the different modular systems are outlined in Section 4. From this point onwards, we exclusively use rules to refer to the specialized components in the data-generating process, and modules to refer to the experts that are learned by a modular system. Further, for ease of quantitative assessment, we always set the number of modules equal to the number of rules, except when evaluating monolithic models (with a single module). Modules can be implemented in three different architectures, as described next.

Model Architectures. Model architectures describe the choice of architecture considered for each module of a modular system, or for the single module in a monolithic system. Here we consider the Multi-Layer Perceptron (MLP), Multi-Head Attention (MHA), and Recurrent Neural Network (RNN). Importantly, the rules (or data-generating distributions) are adapted to the model architecture, and we often refer to them as such (e.g. MLP-based rules). Details about the data distributions and models considered in this work are provided in Sections 3 and 4 respectively.

Perfect Specialization. When training modular systems on rule-based data, we would like the modules to specialize according to the rules in the data-generating distribution. Thus, there is an important need to quantify what constitutes perfect specialization of the system to the data.
To allow for easier quantification, we always consider an equal number of modules and rules. However, future work should evaluate the ability of modular systems to automatically infer the required number of modules.

3 Data Generating Process

Since we aim to study modular systems through synthetic data, here we flesh out the data-generating processes operating under the rules scheme described above (see Equations 1-3). We use a simple Mixture-of-Experts (MoE; Yuksel et al. (2012); Masoudnia & Ebrahimpour (2014)) styled data-generating process (Mixture Distribution), where we expect different modules to specialize to the different mixture components (rules). It is important to note that this system is slightly different from the traditional flat MoE, since the experts are more plug-and-play and can be composed to solve a particular problem. As an example, if we consider a mixture of recurrent systems, different tokens (time-points) in the input sequence can undergo computations according to different rules (e.g. a switching linear dynamical system), as opposed to the choice of expert being governed by the whole sequence. We now look at the specific setups of the data-generating systems in consideration, the general template of which was outlined above. To do so, we describe the data-generating processes amenable to our three model architectures: MLP, MHA, and RNN. Additionally, each of the following tasks has two versions, regression and classification; these are included to explore potential differences that the two loss types may induce.

c ∼ U{1, R}    (4)
x1, x2 ∼ N(0, I) i.i.d.    (5)
y = α_c x1 + β_c x2    (6)

MLP. Here, we define the data scheme amenable to learning in modular MLP-based systems. In this synthetic data-generating scheme, a data sample consists of two independent numbers and a choice of rule sampled from some distribution. Different rules lead to different linear combinations of the two numbers to give the output. That is, the choice of linear combination is dynamically instantiated based on the rule drawn. This is mathematically formulated in Equations 4-6, where α_c and β_c are the data parameters, I is the identity matrix, and y denotes the label for the regression tasks while sign(y) is the label for the classification tasks. Hence, the data comes from an MoE distribution where c denotes which linear combination governs the conditional distribution py(· | x1, x2, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule.

c_n ∼ U{1, R} i.i.d.    (7)
q_{nr}, q′_{nr}, v_{nr}, v′_{nr} ∼ N(0, I) i.i.d.    (8)
s_n = arg min_{i ≠ n} d(q_{n c_n}, q_{i c_n})    (9)
s′_n = arg min_{i ≠ n} d(q′_{n c_n}, q′_{i c_n})    (10)
y_n = α_{c_n} v_{s_n c_n} + β_{c_n} v′_{s′_n c_n}    (11)

MHA. Now, we define the data scheme tuned for learning in modular MHA-based systems. Essentially, an MHA module can be understood through a set of searches (query-key interactions), a set of corresponding retrievals (values), and then some computation over the retrieved values, as explained by Mittal et al. (2021). Accordingly, we design the data-generating distribution with the following properties: each rule is composed of a different notion of search, of retrieval, and of the final linear combination of the retrieved information. We mathematically describe the process in Equations 7-11, where n = 1, ..., N and r = 1, ..., R, with N the sequence length and R the number of rules. We denote the tuple (q_{nr}, q′_{nr}, v_{nr}, v′_{nr}) as x_n.
Further, y_n denotes the label for the regression tasks, while for classification we take the categorical label to be sign(y_n). Thus, c_n denotes the rule for the n-th token. This rule governs which two tokens are closest to the n-th token, denoted s_n and s′_n. It also governs which features are retrieved from the searched tokens, namely v_{s_n c_n} and v′_{s′_n c_n}. These retrieved features then undergo a rule-dependent (on c_n) linear combination. Here, too, when training a modular MHA architecture, we want each MHA module in the system to be able to specialize to a unique MHA rule in the data system.

c_n ∼ U{1, R} i.i.d.    (12)
x_n ∼ N(0, I) i.i.d.    (13)
s_n = A_{c_n} s_{n−1} + B_{c_n} x_n    (14)
y_n = wᵀ s_n    (15)

RNN. For recurrent systems, we define a rule as a kind of linear dynamical system, where one of multiple rules can be triggered at any time-point. Mathematically, this process is defined through Equations 12-15, where n = 1, ..., N, with N the sequence length. Each rule thus describes a different procedure for the update of the state s_n as well as a different effect of the input x_n on the state. Thus, c_n denotes the rule to be used at the n-th time-point. Further, y_n denotes the label for the regression tasks, while for classification we take the label to be sign(y_n).

Hence, in all settings the data comes from an MoE distribution where c denotes the rule and governs the conditional py(· | x, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. Our aim is to use these synthetic rule-based data settings to study and analyse modular systems and to understand whether end-to-end trained modular systems concentrate on the right information (i.e., c) to drive specialization, whether they learn perfect specialization, and whether perfect specialization actually helps in these settings. To properly understand this, we detail the different kinds of models considered in Section 4, as well as the different metrics proposed in Section 5 to analyse trained systems. For this work, we limit our analysis to the infinite-data regime, where each training iteration operates on a new data sample. Future work should perform a similar analysis in the limited-data regime.

4 Models

Several works claim that end-to-end trained modular systems outperform their monolithic counterparts, especially in out-of-distribution settings. However, there is a lack of step-by-step analysis of the benefits of such systems and of whether they actually specialize according to the data-generating distribution. To perform an in-depth analysis, we consider four different types of models that allow for varying levels of specialization: Monolithic, Modular, Modular-op, and GT-Modular. We give the formulations for each of these models below and then discuss the different analyses we can perform through them. We also illustrate these models in Table 1; depending on the data-generating procedure described in Section 3, f and fm can be implemented as either MLP, MHA or RNN cells in this work.

Monolithic. A monolithic system is a single big neural network that takes the entire data (x, c) as input and makes predictions ŷ based on it. There is no inductive bias about modularity or sparsity explicitly baked into the system, and it is completely up to back-propagation to learn whatever functional form is needed to solve the task.
An example of such a system is a traditional Multi-Head Attention (MHA) based system, e.g., a Transformer.

Modular. A modular system is composed of a number of modules, each of which is a neural network of a given architectural type (MLP, MHA, or RNN). Each module m takes the data (x, c) as input and computes an output ŷm and a confidence score, normalized across modules into an activation probability pm. The activation probability reflects the contribution of each module's output to the final output ŷ of the system. Thus, there is an explicit baked-in inductive bias of modularity, but it is still up to system-wide back-propagation to figure out the right specialization. An example of such a system is a mixture of MLPs, or of RNNs that are reused across different time-points/positions.

Modular-op. A modular-op (for operation only) system is very similar to the modular system, with one small difference: instead of the activation probability pm of module m being a function of (x, c), we make sure that the activation is decided only by the rule context c. Hence, unlike modular systems, modular-op cannot be distracted by x when figuring out the specialization of the different modules. Even though the required operation is explicitly provided, this model still needs to learn specialization through back-propagation.

GT-Modular. A GT-Modular system (for ground truth) serves as an oracle benchmark, i.e., a modular system that specializes perfectly. In particular, the activation probabilities pm of the modules are set directly according to c, which is the indicator present in the data (x, c). Thus, this is a perfectly specializing system that chooses different modules sparsely and perfectly according to the different data rules.

Given enough capacity, we can see that there is a hierarchy of models based on the functions they can implement, with GT-Modular ⊆ Modular-op ⊆ Modular ⊆ Monolithic. Put differently, models from Monolithic to GT-Modular increasingly incorporate the inductive biases for modularity and sparsity. This is proved in Appendix C by inspecting the function classes implemented by these models. In what follows, we want to analyse the benefits of simple end-to-end trained modular systems as opposed to monolithic ones. This can be understood through a comparison of various performance-based metrics between Monolithic and Modular models, explained in the next section, and it will allow us to answer whether a modular architecture is always better across various distinct rule-based data-generating systems. For instance, a comparison between the Modular and Modular-op models will show whether standard modular systems are able to focus on the right information and ignore the distractors in driving specialization; to study this, we will look at performance as well as collapse and specialization metrics between these classes of models. A comparison between GT-Modular and Modular-op will show the benefits of having a sparse activation pattern with proper resource allocation of modules, as opposed to an end-to-end learned specialization on the right information (without distractors). Finally, we note that GT-Modular is a modular system which obtains perfect specialization; through this model, we aim to analyse whether perfect specialization is in fact important and, if so, how far typical modular systems are from obtaining similar performance and specialization through end-to-end training. We now describe the metrics used for these evaluations.
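Before turning to the metrics, the following PyTorch-style sketch makes the rule-based data (Eqs. 4-6) and the shared functional form ŷ = ∑_m pm ŷm concrete, with the gating varied to recover the Modular, Modular-op and GT-Modular variants. It is an illustrative reconstruction of the descriptions above, not the authors' released code: the class and helper names, the single shared gating head, and the hidden sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp_batch(batch_size, num_rules, dim, alphas, betas):
    """Rule-based MLP data (Eqs. 4-6): c ~ U{1,R}, x1,x2 ~ N(0,I), y = a_c x1 + b_c x2."""
    c = torch.randint(num_rules, (batch_size,))
    x1, x2 = torch.randn(batch_size, dim), torch.randn(batch_size, dim)
    y = alphas[c].unsqueeze(-1) * x1 + betas[c].unsqueeze(-1) * x2
    return torch.cat([x1, x2], dim=-1), F.one_hot(c, num_rules).float(), y

class ModularModel(nn.Module):
    """y_hat = sum_m p_m * y_hat_m (cf. Table 1), with MLP modules.

    mode='modular'    : p_m computed from (x, c)   (Modular)
    mode='modular-op' : p_m computed from c only   (Modular-op)
    mode='gt'         : p_m set to the one-hot c   (GT-Modular)
    A Monolithic baseline would be a single larger MLP on the same input.
    """
    def __init__(self, in_dim, out_dim, num_rules, hidden=64, mode="modular"):
        super().__init__()
        self.mode = mode
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim + num_rules, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(num_rules)])
        gate_in = num_rules if mode == "modular-op" else in_dim + num_rules
        # One linear head scoring all modules at once; a simplification of the
        # per-module confidence scores normalized across modules described above.
        self.gate = nn.Linear(gate_in, num_rules)

    def forward(self, x, c_onehot):
        inp = torch.cat([x, c_onehot], dim=-1)
        outs = torch.stack([m(inp) for m in self.experts], dim=1)   # (B, M, out)
        if self.mode == "gt":
            p = c_onehot                                             # oracle gating
        else:
            gate_inp = c_onehot if self.mode == "modular-op" else inp
            p = F.softmax(self.gate(gate_inp), dim=-1)               # activation probs
        return (p.unsqueeze(-1) * outs).sum(dim=1), p

# Usage sketch (regression version of the MLP task):
# R, D = 8, 16
# alphas, betas = torch.randn(R), torch.randn(R)
# x, c, y = make_mlp_batch(256, R, D, alphas, betas)
# model = ModularModel(in_dim=2 * D, out_dim=D, num_rules=R, mode="modular")
# y_hat, p = model(x, c)
# loss = F.mse_loss(y_hat, y)   # the activation probs p feed the collapse/specialization metrics
```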
5 Metrics

To reliably evaluate modular systems, we propose a suite of metrics that not only gauge the performance benefits of such systems but also evaluate them along two important axes, collapse and specialization, which we use to analyse the extent of resource allocation (in terms of parameters/modules) and of specialization, respectively, in a modular system.

Performance. The first set of evaluation metrics is based on the performance of the models in both in-distribution and out-of-distribution (OoD) settings. These metrics capture how well the different models perform on a wide variety of tasks. For classification settings we report the classification error, while for regression settings we report the loss.

In-Distribution. This refers to in-distribution performance, evaluated by looking at both the final performance and the convergence speeds of the different models.

Out-of-Distribution. This refers to the OoD performance of the different models. We consider very simple forms of OoD generalization: either (a) a change in the distribution of x by increasing its variance, or (b) different sequence lengths, wherever possible (e.g., in MHA and RNN).

Collapse Metrics. We propose a pair of metrics, Collapse-Avg and Collapse-Worst, that quantify the amount of collapse suffered by a modular system. Collapse refers to the degree of under-utilization of the modules. An example is illustrated in Figure 2, where Module 3 is never used. We consider the setting where all the data rules are equi-probable and the number of modules in the model is set equal to the number of data rules, R. High collapse thus refers to under-utilization of the resources (parameters) provided to the model, meaning that certain modules are never used and, concurrently, that other modules are utilized for multiple rules.

C_A = (R / (R − 1)) ∑_{m=1}^{R} max(0, 1/R − p(m))    (16)

Collapse-Avg. Given the data setting with R equi-probable rules, and hence R modules in the model, let p(m) be the marginal probability of activation of module m. We then define the Collapse-Avg metric C_A as in Equation 16, where the factor R/(R − 1) is for normalization. This metric captures the amount of under-utilization across all the modules of the system. A lower number is preferable, as it demonstrates that all the modules are equally utilized.

C_W = 1 − R min_m p(m)    (17)

Collapse-Worst. Given the same data and model setting as above, the Collapse-Worst metric C_W is defined as in Equation 17. This metric captures the amount of under-utilization of the least-used module of the system. Again, a low number is preferable, as it signifies that even the least-used module is decently utilized by the model.

Specialization Metrics. To complement the collapse metrics, we also propose a set of metrics, (1) Alignment, (2) Adaptation and (3) Inverse Mutual Information, to quantify the amount of specialization obtained by the modular systems. We again consider the setting of equi-probable rules and the same number of modules and rules, R. These metrics are aimed at capturing how well the modules specialize to the rules, that is, whether different modules stick to different rules (good specialization) or all modules contribute almost equally to all rules (poor specialization).

s_d = min_{P ∈ S_R} d(A, P)    (18)

Alignment.
Given a modular system trained on rule-based data with R rules and modules, one can obtain the activation matrix A, where A_rm denotes p(module = m | rule = r), that is, the probability of activation of module m conditioned on rule r. Further, given a distance metric d(·, ·) over the space of matrices, perfect specialization can be quantified through Equation 18, where S_R denotes the space of permutation matrices over R objects. We take d(·, ·) to be a normalized L1 distance. The score s_d measures the distance between the activation matrix A and its closest permutation matrix, with distances computed according to the metric d(·, ·). Note that s_d → 0 implies that each module specializes to a unique rule, thereby signifying perfect specialization. Since the space of permutation matrices S_R grows as Θ(R!), computing s_d naively soon becomes intractable; however, we use the Hungarian algorithm (Kuhn, 1955) to compute it in polynomial time. This metric shows how close the learned modular system is to a perfectly specializing one, where a low score implies better specialization.

S_IMI = 1 − (1 / log R) E_{p(m,r)} [ log ( p(m, r) / (p(m) p(r)) ) ]    (19)

Inverse Mutual Information. With R the number of rules and modules, and the joint distribution p(m, r) denoting the activation probability of module m on rule r, the Inverse Mutual Information metric S_IMI is defined as in Equation 19. A low inverse mutual information is preferable, as it denotes that the modules are specialized to individual rules, as opposed to multiple modules contributing to a single rule.

S_A = E_{p ∼ P} [ ∑_{i=1}^{R} | p(r̂_i) − q(m̂_i) | ]    (20)

Adaptation. Let R be the number of rules and modules and P a distribution over the R-dimensional simplex. Further, let p(·) be the distribution over rules (not equi-probable for this metric) and q(·) the corresponding distribution obtained over the modules; note that q(·) depends on p(·). Given these distributions, we define the Adaptation metric S_A in Equation 20, where r̂_i and m̂_i are such that p(r̂_1) ≤ p(r̂_2) ≤ ... ≤ p(r̂_R) and q(m̂_1) ≤ q(m̂_2) ≤ ... ≤ q(m̂_R), and P is a Dirichlet distribution. This metric can be understood as the amount by which the modules adapt (signified through the distribution q(·)) to changes in the rule distributions (the p(·) sampled from P). The matching between rules and modules is obtained through a simple sort, as defined above. A low adaptation score implies that the marginal distribution of the modules adapts well to the distribution of the rules; that is, when a rule is weakly present in the data, there exists a module that contributes weakly to the corresponding output, averaged over multiple different rule distributions.

To understand these metrics, note that uniform random activation patterns for the modules lead to low collapse metrics but high alignment, adaptation and inverse mutual information metrics, implying little collapse but poor specialization, as expected. On the other hand, GT-Modular systems necessarily lead to low collapse metrics as well as low alignment, adaptation and inverse mutual information, denoting little collapse and good specialization, which is expected since specialization is given by an oracle.

6 Experiments

We are now ready to report experiments on the models outlined in Section 4 with the associated data-generating processes described in Section 3. For each level of modularity (i.e.
Monolithic, Modular, Modular-op, GT-Modular), we analyse models learning over five different numbers of rules, ranging from few (2) to many (32), five different model capacities (numbers of parameters), and two different training settings, i.e. regression and binary classification. To remove any bias towards particular task parameters (e.g. α_c, β_c in Equation 6), we randomly select new rules to create five different tasks per setting and train five seeds per task. In essence, we train ∼20,000 models² to properly analyse the benefits of modularity, the level of specialization obtained by end-to-end trained systems, the impact of the number of rules, and the impact of model capacity.

² All models are trained on single V100 GPUs, each taking a few hours.

Performance. We refer the reader to Figure 3 for a compressed overview of the performance of the various models. We see that the GT-Modular system wins most of the time (left), indicating the benefits of perfect specialization. We also see that, between standard end-to-end trained Modular and Monolithic systems, the former outperforms the latter, but not by a huge gap. Together, these two pie charts indicate that current end-to-end trained modular systems do not achieve good specialization and are thus sub-optimal by a substantial margin. We then look at the specific architectural choices (MLP, MHA and RNN cells for the functions f and fm in Table 1) and analyse their performance and trends across an increasing number of rules. Figure 4 shows that while there are concrete benefits to a perfectly specializing system (GT-Modular), or even to models that know what information to drive specialization from (Modular-op), typical end-to-end trained Modular systems are quite sub-optimal and unable to realize these benefits, especially with an increasing number of rules, which is where we see substantial benefits of good specialization (contrast Modular vs GT-Modular and Modular-op). Moreover, while such end-to-end Modular systems do generally outperform the Monolithic ones, it is often only by a small margin. We also show the training curves of the different models, averaged over all other settings (where the average aggregates error for classification and loss for regression), in Figure 7. We can see that good specialization not only leads to better performance but also to faster training.

Collapse. We evaluate all the models on the two collapse metrics outlined in Section 5. Figure 5 shows the two collapse metrics, Collapse-Avg and Collapse-Worst, for the different models against a varying number of rules, averaged over the different model architectures (MLP, MHA and RNN), training settings (classification and regression), model capacities, tasks and seeds. First, we notice that a Random activation baseline and the GT-Modular system do not have any collapse, which is expected. Next, we notice that both Modular and Modular-op suffer from collapse, and this problem becomes worse with an increasing number of rules. Figure 6 further shows similar information averaged over the number of rules too, highlighting that Modular-op has less collapse than Modular in general. However, we still see that the problem of collapse is significant whenever back-propagation is tasked with finding the right activation patterns, especially in the regime of a large number of rules. This clearly indicates the need to investigate different forms of regularization to alleviate some of the collapse problems.

Specialization.
Next, we evaluate through the proposed specialization metrics in Section 5 whether the end-to-end trained modular systems actually specialize according to the data-generating distribution. Figure 5 shows the three specialization metrics, Alignment, Adaptation and Inverse Mutual Information, for different models against varying number of rules, again averaged over different model architectures, training settings, model capacities, tasks and seeds. As expected, we see that the Random activation baseline has poor specialization (high metrics) while the GT-Modular system has very good specialization. We further see that end-to-end trained Modular systems as well as Modularop suffer from sub-optimal specialization, as indicated by the high metrics. As with collapse, we again see that it becomes harder to reach optimal specialization with increasing number of rules. Figure 6 shows that while Modular-op has marginally better specialization than standard Modular systems, they are indeed quite sub-optimal when compared to a perfectly specializing system, i.e. GT-Modular. We refer the readers to Appendix D, E and F for training details as well as additional experiments regarding the effect of model sizes for MLP, MHA and RNN architectures respectively. 7 Conclusion and Discussion We provide a benchmark suitable for the analysis of modular systems and provide metrics that not only evaluate them on in-distribution and out-of-distribution performance, but also on collapse and specialization. Through our large-scale analysis, we uncover many intriguing properties of modular systems and highlight potential issues that could lead to poor scaling properties of such systems. Perfect Specialization. We discover that perfect specialization indeed helps in boosting performance both in-distribution and out-of-distribution, especially in the regime of many rules. On the contrary, monolithic systems often do comparatively or sometimes better when there are only a few rules, but do not rely on specialization to do so. End-to-End Trained Modular systems. While Modular systems outperform Monolithic ones, the margin of improvement is often small. This is because when solely relying on back-propagation of the task-losses, these models do not discover perfect specialization. In fact, the problem of poor specialization and high collapse becomes worse with increasing number of rules. This is slightly mitigated by allowing contextual information from the task to be used explicitly, as is the case for Modular-op, but the problems still persist and get worse over large number of rules. In summary, through systematic and extensive experiments, this work shows that modularity, when supporting good and distributed specialization (i.e. little collapse), can outperform monolithic models both in and out of distribution testing. However, we also find that although perfectly specialized solutions are attainable by modular networks, end-to-end training does not recover them, often even with explicit information about task context (as in Modular-op). Since real-world data distributions are often complex and unknown, we cannot get access to oracle networks like GT-Modular for analysis. An important conclusion is that additional inductive biases are required to learn adequately specialized solutions. These could include other architectural features to facilitate module routing, or regularization schemes (e.g. load-balancing Fedus et al. (2021)) or optimization strategies (e.g. learning rate scheduling) to promote module specialization. 
We refer the reader to Appendices A and B for further discussion on these exciting prospects and extensions to real-world domains. We believe the framework proposed in this work is ideal to drive research into such inductive biases and a necessary stepping stone for applications of these designs at scale. Finally, we highlight that the use of network architectures that promote contextual specialization, such as the use of modules as studied here, could potentially promote unwanted biases when deployed in models use by the public due to collapse or ill-distributed specialization. The framework proposed in this work could help mitigate this potentially problematic impact on society. Acknowledgments and Disclosure of Funding SM would like to acknowledge the support of scholarships from UNIQUE and IVADO as well as compute resources from Alliance and its regional partner organizations (ACENET, Calcul Québec, Compute Ontario, the BC DRI Group and the Prairie DRI Group) towards his research. YB and GL acknowledge the support from Canada CIFAR AI Chair Program, as well as Samsung Electronics Co., Ldt. GL acknowledges NSERC Discovery Grant [RGPIN-2018-04821].
1. What is the focus and contribution of the paper on modular networks?
2. What are the strengths of the proposed approach, particularly in terms of its comprehensive reevaluation and assessment of existing works?
3. Do you have any concerns or questions regarding the implementation details of the modular network setting?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The submission is a comprehensive rethinking and assessment of the research in modular network. It develops a series of benchmarks and metrics to evaluate the benefit of existing works using modular architecture. Specifically, It performs experiments on four kinds of model corresponding to different levels of specialization and obtains some empirical findings about the design of modular network. Strengths And Weaknesses Strengths: The submission is a pioneer in systematically evaluating the performance of modular network in a unified framework. Modular network is a heated topic attracting wide interest, and the work is of great significance to the community. Weaknesses: None Questions It seems that the description is very high-level and the implementation detail is omitted. For example, in modular setting, how is the confidence score computed? Similarly, in modular-op setting, how to decide which module to evoke (we only know it is decided on c )? Limitations limitations are non-applicable / adequately addressed.
NIPS
Title Is a Modular Architecture Enough? Abstract Inspired from human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out-of-distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data generating system for most real-world settings is considered to consist of sparsely interacting parts, and endowing models with similar inductive biases will be helpful. However, the field has been lacking in a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential.1 1 Introduction Deep learning research has an established history of drawing inspiration from neuroscience and cognitive science. From the way hidden units combine afferent inputs, to how connectivity and network architectures are designed, many breakthroughs have relied on mimicking brain strategies. It is no surprise then that modularity and attention have been leveraged, often together, in artificial networks in recent years (Bahdanau et al., 2015; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021), with impressive results. Indeed, work from cognitive neuroscience (Baars, 1997; Dehaene et al., 2017) suggests that cortex represents knowledge in a modular way, with different such modules communicating through the bottleneck of working memory (where very few items can simultaneously be represented), in which content is selected by attention mechanisms. In recent work from the AI community (Bengio, 2017; Goyal & Bengio, 2020), it was proposed that these characteristics could correspond to meaningful inductive biases for deep networks, i.e., statistical assumptions about the dependencies between concepts manipulated at the higher levels of cognition. Both sparsity of the dependencies between these high-level variables and the decomposition of knowledge into recomposable pieces that are as independent as possible (Peters et al., 2017; Bengio et al., 2019; Goyal & Bengio, 2020; Ke et al., 2021) would make learning more efficient. Out-of-distribution (OoD) generalization would be facilitated by making it possible to sequentially compose the computations performed by these modules where new situations can be explained by novel combinations of existing concepts. 
Although a number of recent results hinge on such modular architectures (Graves et al., 2014; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Santoro et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021; Locatello et al., 2020; Mittal et al., 2020; Madan et al., 2021), †Correspondence authors sarthmit@gmail.com 1Open-sourced implementation is available at https://github.com/sarthmit/Mod_Arch 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the abundance of tricks and proposed architectural modifications makes it challenging to parse real, usable architectural principles. It is also unclear whether the performance gains obtained by such Mixture-of-Experts (MoE) based modular systems are actually due to good specialization, as is often claimed, or due to other potential confounding factors like ease of optimization. In this work, we extend the analysis from Rosenbaum et al. (2019); Maziarz et al. (2019); Cui & Jaech (2020); Csordás et al. (2020) and propose a principled approach to evaluate, quantify, and analyse common ingredients of modular architectures, supported by either standard MLP-like connectivity, recurrent connections or attention (Bahdanau et al., 2015; Vaswani et al., 2017) operations. To do so, we develop a series of benchmarks and metrics aimed at probing the efficacy of a wide range of modular networks, where computation is factorized. This reveals valuable insights and helps identify not only where current approaches succeed but also when and how they fail. Whereas previous work on disentangling (Bengio, 2013; Higgins et al., 2016; Kim & Mnih, 2018) has focused on factoring out the different high-level variables that explain the data, here we focus on disentangling system modules from each other, the structural ingredients of network that can facilitate this factorization, and how such ingredients relate to the data generating distributions being parsed and processed. Given the recent increased interest in sparse modular systems (Rahaman et al., 2021; Fedus et al., 2021; Du et al., 2021; Mittal et al., 2021), we believe that this work will provide a test-bed for investigating the workings of such models and allow for research into inductive biases that can push such models to achieve good specialization. Through detailed experiments and evaluation metrics, we make the following observations and contributions: • We develop benchmark tasks and metrics based on probabilistically selected rules to quantify two important phenomena in modular systems, the extent of collapse and specialization. • We distill commonly used modularity inductive biases and systematically evaluate them through a series of models aimed at extracting commonly used architectural attributes (Monolithic, Modular, Modular-op and GT-Modular models). • We find that specialization in modular systems leads to significant boosts in performance when there are many underlying rules within a task, but not so much with only few rules. • We find standard modular systems to be often sub-optimal in both their capacity on focusing on the right information as well as in their ability to specialize, suggesting the need for additional inductive biases. 2 Notation / Terminology In this paper, we study how a family of modular systems performs on a common set of tasks, prescribed by a synthetic data generating process which we call rule-based data. 
Below, we introduce the notation for key ingredients: (1) rules and how they form tasks, (2) modules and how they can take different model architectures, (3) specialization and how we evaluate models. We refer the reader to Figure 1 for an illustration of our setup. Rules. To properly understand modular systems and analyze their benefits and shortcomings, we consider synthetic settings that allow fine-grained control over different aspects of task requirements. In particular, operations must be learned on the data-generating distri- bution illustrated in Equations 1-3, which we also refer to as rules. Details about the exact operations used in experiments are described in Section 3. c ∼ Categorical(·) (1) x ∼ px(·) (2) y |x, c ∼ py(· |x, c). (3) Given this distribution, we define a rule to be an expert of this distribution, that is, rule r is defined as py(· |x, c = r) where c is a categorical variable representing context, and x is an input sequence. For example, consider x = (1, 2) and c to select between addition and multiplication. Then, depending on c, the correct output should be either y = 3 or y = 2. More details about the specifics of these data distributions are presented in Section 3. Systems will be trained to infer y given c and x. This simple setup is meant to capture context-dependent tasks on variable data distributions, e.g. reasoning according to different features (e.g. shape, color, etc.). However, unlike such complex systems, ground truth knowledge of required operations is known for our synthetic task, allowing for deeper quantitative analysis. Tasks. A task is described by the set of rules (data-generating distribution) illustrated in Equations 1-3. Different sets of {py(· |x, c)}c imply different tasks. For a given number of rules, we train models on multiple tasks to remove bias towards any particular task. Modules. A modular system comprises a set of neural network modules, each of which can contribute to the overall output. One can see this through the functional form y = ∑M m=1 pm ym, where ym denotes the output and pm the activation of the mth module. Details about the different modular systems are outlined in Section 4. From this point onwards, we exclusively use rules to refer to the specialized components in the data-generating process, and modules to refer to the experts that are learned by a modular system. Further, for ease of quantitative assessment, we always set the number of modules equal to the number of rules, except when evaluating monolithic models (with a single module). Modules can be implemented in three different architectures, as described next. Model Architectures. Model architectures describe the choice of architecture considered for each module of a modular system, or the single module in a monolithic system. Here we consider Multi-Layer Perceptron (MLP), Multi-Head Attentions (MHA), and Recurrrent Neural Network (RNN). Importantly, the rules (or data generating distributions) are adapted to the model architecture, and we often refer to them as such (e.g. MLP based rules). Details about the data distributions and models considered in this work are provided in Sections 3 and 4 respectively. Perfect Specialization. When training modular systems on rule-based data, we would like the modules to specialize according to the rules in the data-generating distribution. Thus, there is an important need to quantify what constitutes perfect specialization of the system to the data. 
To allow for easier quantification, we always consider an equal number of modules and rules. However, future work should evaluate the ability of modular systems to automatically infer the required number of modules. 3 Data Generating Process Since we aim to study modular systems through synthetic data, here we flesh out the data-generating processes operating based on the rules scheme described above (see Equations 1-3). We use a simple Mixture-of-Experts (MoE) Yuksel et al. (2012); Masoudnia & Ebrahimpour (2014) styled data-generating process (Mixture Distribution), where we expect different modules to specialize to the different mixture components (rules). It is important to note that this system is slightly different from the traditional flat MoE since the experts are more plug-and-play and can be composed to solve a particular problem. As an example, if we consider a mixture of recurrent systems, different tokens (timepoints) in the input sequence can undergo computations according to different rules (e.g. a switching linear dynamical system), as opposed to the choice of expert being governed by the whole sequence. We now look at more specific setups of the data-generating systems in consideration, the general template of which was outlined above. To do so, we explain the data-generating processes amenable to our three model architectures: MLP, MHA, and RNN. Additionally, each of the following tasks have two versions: regression, and classification. These are included to explore potential differences these distinct loss types may induce. c ∼ U{1, R} (4) x1,x2 iid∼ N (0, I) (5) y = αcx1 + βcx2 (6) MLP. Here, we define the data scheme that is amenable for learning of modular MLP-based systems. In this synthetic data-generating scheme, a data sample consists of two independent numbers and a choice of rule being sampled from some distribution. Different rules lead to different linear combinations of the two numbers to give the output. That is, the choice of linear combination is dynamically instantiated based on the rule drawn. This is mathematically formulated in Equations 4-6, where αc and βc are the data parameters, I the identity matrix and y denotes the label for the regression tasks and sign(y) for the classification tasks. Hence, the data comes from a MoE distribution where c denotes which linear combination governs the conditional distribution py(· |x1,x2, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. cn iid∼ U{1, R} (7) qnr,q ′ nr,vnr,v ′ nr iid∼ N (0, I) (8) sn = min i ̸=n d (qncn ,qicn) (9) s′n = min i ̸=n d ( q′ncn ,q ′ icn ) (10) yn = αcnvsncn + βcnv ′ s′ncn (11) MHA. Now, we define the data scheme that is tuned for learning in modular MHA based systems. Essentially, a MHA module can be understood through a set of searches (query-key interactions), a set of corresponding retrievals (values) and then some computation of the retrieved values, as explained by Mittal et al. (2021). Accordingly, we design the data-generating distribution with the following properties: Each rule is composed of a different notion of search, retrieval and the final linear combination of the retrieved information respectively. We mathematically describe the process in Equations 7-11, where n = 1, ..., N and r = 1, ..., R with N as the sequence length and R the number of rules. We denote the tuple (qnr,q′nr,vnr,v ′ nr) as xn. 
\[
\begin{aligned}
c_n &\stackrel{iid}{\sim} \mathcal{U}\{1, R\} && (12)\\
\mathbf{x}_n &\stackrel{iid}{\sim} \mathcal{N}(\mathbf{0}, \mathbf{I}) && (13)\\
\mathbf{s}_n &= \mathbf{A}_{c_n} \mathbf{s}_{n-1} + \mathbf{B}_{c_n} \mathbf{x}_n && (14)\\
y_n &= \mathbf{w}^{T} \mathbf{s}_n && (15)
\end{aligned}
\]

RNN. For recurrent systems, we define a rule as a kind of linear dynamical system, where one of multiple rules can be triggered at any time-point. Mathematically, this process is defined through Equations 12-15, where n = 1, ..., N, with N the sequence length. Each rule thus describes a different procedure for the update of the state s_n as well as for the effect of the input x_n on the state. Thus, we can see that c_n denotes the rule to be used at the n-th time-point. Further, y_n denotes the label for the regression tasks, while for classification we consider the labels to be sign(y_n).

Hence, in all settings, the data comes from a MoE distribution where c denotes the rule and governs the conditional p_y(· | x, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. Our aim is to use these synthetic rule-based data settings to study and analyse modular systems and understand whether end-to-end trained modular systems concentrate on the right information to specialize on, i.e. on c, whether they learn perfect specialization, and whether perfect specialization actually helps in these settings. To properly understand this, we detail the different kinds of models considered in Section 4 as well as the different metrics proposed in Section 5 to analyse trained systems. For this work, we limit our analysis to the infinite-data regime, where each training iteration operates on a new data sample. Future work should perform a similar analysis in the regime of limited data.

4 Models

Several works claim that end-to-end trained modular systems outperform their monolithic counterparts, especially in out-of-distribution settings. However, there is a lack of step-by-step analysis of the benefits of such systems and of whether they actually specialize according to the data-generating distribution or not. To perform an in-depth analysis, we consider four different types of models that allow for varying levels of specialization: Monolithic, Modular, Modular-op, and GT-Modular. We give the formulations for each of these models below and then discuss the different analyses we can perform through them. We also illustrate these models in Table 1; depending on the data-generating procedure described in Section 3, f and f_m can be implemented as either MLP, MHA or RNN cells in this work.

Monolithic. A monolithic system is a big neural network that takes the entire data (x, c) as input and makes predictions ŷ based on it. There is no inductive bias about modularity or sparsity explicitly baked into the system, and it is completely up to back-propagation to learn whatever functional form is needed to solve the task.
An example of such a system is a traditional Multi-Head Attention (MHA) based system, e.g. a Transformer.

Modular. A modular system is composed of a number of modules, each of which is a neural network of a given architectural type (MLP, MHA, or RNN). Each module m takes the data (x, c) as input and computes an output ŷ_m and a confidence score, normalized across modules into an activation probability p_m. The activation probability reflects the contribution of each module's output to the final output ŷ of the system. Thus, there is an explicit baked-in inductive bias of modularity, but it is still up to system-wide back-propagation to figure out the right specialization. An example of such a system is a mixture of MLPs or reusable RNNs, reusable across different time-points/positions.

Modular-op. A modular-op (for operation only) system is very similar to the modular system, with one small difference. Instead of the activation probability p_m of module m being a function of (x, c), we make sure that the activation is decided only by the rule context c. Hence, unlike modular systems, modular-op cannot be distracted by x in figuring out the specialization of the different modules. Even though the required operation is explicitly provided, this model still needs to learn specialization through back-propagation.

GT-Modular. A GT-modular system (for ground truth) serves as an oracle benchmark, i.e., a modular system that specializes perfectly. In particular, the activation probabilities p_m of the modules are set directly according to c, the indicator present in the data (x, c). Thus, this is a perfectly specializing system that chooses different modules sparsely and perfectly according to the different data rules.

Given enough capacity, we can see that there is a hierarchy of models based on the functions they can implement, with GT-Modular ⊆ Modular-op ⊆ Modular ⊆ Monolithic. Put differently, models from Monolithic to GT-Modular increasingly incorporate the inductive biases of modularity and sparsity. This is proved in Appendix C by inspecting the function classes implemented by these models.

In what follows, we want to analyse the benefits of having simple end-to-end trained modular systems as opposed to monolithic ones. This can be understood through a comparison of various performance-based metrics between Monolithic and Modular models, explained in the next section. This will allow us to answer whether a modular architecture is always better for various distinct rule-based data-generating systems. For instance, a comparison between the Modular and Modular-op models will show whether standard modular systems are able to focus on the right information and ignore the distractors in driving specialization. To study this, we will look at performance as well as collapse and specialization metrics between these classes of models. A comparison between GT-Modular and Modular-op will show the benefits of having a sparse activation pattern with proper resource allocation of modules, as opposed to an end-to-end learned specialization on the right information (without distractors). Finally, we note that GT-Modular is a modular system which obtains perfect specialization. Through this model, we aim to analyse whether perfect specialization is in fact important and, if so, how far typical modular systems are from obtaining similar performance and specialization through end-to-end training. A schematic sketch of how the four variants differ in their module activations is given below, after which we describe the metrics used for these evaluations.
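The sketch below is a hedged PyTorch-style illustration of how the module activations p_m could be computed for each variant; the gating networks `gate_xc` and `gate_c` are hypothetical small learned networks, and this is an illustration of the functional forms in Table 1 rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn.functional as F

def module_activations(variant, x, c, R, gate_xc=None, gate_c=None):
    """Return activation probabilities over R modules for one example.

    variant: 'monolithic' | 'modular' | 'modular_op' | 'gt_modular'.
    gate_xc / gate_c are learned gating networks (e.g. small MLPs); c is the rule id.
    """
    if variant == 'monolithic':
        # A single module processes (x, c); trivially p = [1].
        return torch.ones(1)
    c_onehot = F.one_hot(torch.tensor(c), num_classes=R).float()
    if variant == 'modular':
        # Activation is a learned function of both x and c.
        return torch.softmax(gate_xc(torch.cat([x, c_onehot])), dim=-1)
    if variant == 'modular_op':
        # Activation depends only on the rule context c.
        return torch.softmax(gate_c(c_onehot), dim=-1)
    if variant == 'gt_modular':
        # Oracle: module c is selected deterministically.
        return c_onehot
    raise ValueError(variant)
```

The system output is then ŷ = Σ_m p_m ŷ_m, with each per-module prediction ŷ_m produced by an MLP, MHA, or RNN cell depending on the data scheme.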
5 Metrics

To reliably evaluate modular systems, we propose a suite of metrics that not only gauge the performance benefits of such systems but also evaluate them along two further axes, collapse and specialization, which we use to analyse the extent of resource allocation (in terms of parameters/modules) and of specialization, respectively, in a modular system.

Performance. The first set of evaluation metrics is based on the performance of the models in both in-distribution and out-of-distribution (OoD) settings. These metrics capture how well the different models perform on a wide variety of different tasks. For classification settings we report the classification error, while for regression settings we report the loss.

In-Distribution. This refers to the in-distribution performance, evaluated by looking at both the final performance and the convergence speeds of the different models.

Out-of-Distribution. This refers to the OoD performance of the different models. We consider very simple forms of OoD generalization: either (a) a change in the distribution of x by increasing its variance, or (b) different sequence lengths, wherever the possibility presents itself (e.g. in MHA and RNN).

Collapse Metrics. We propose a pair of metrics, Collapse-Avg and Collapse-Worst, that quantify the amount of collapse suffered by a modular system. Collapse refers to the degree of under-utilization of the modules. An example is illustrated in Figure 2, where we can see that Module 3 is never used. We consider the setting where all the data rules are equi-probable and the number of modules in the model is set to the number of data rules, R. High collapse thus refers to under-utilization of the resources (parameters) provided to the model, meaning that certain modules are never used and, concurrently, that certain modules are utilized for multiple rules.

\[
C_A = \frac{R}{R-1} \sum_{m=1}^{R} \max\left(0, \frac{1}{R} - p(m)\right) \tag{16}
\]

Collapse-Avg. Given the data setting with R equi-probable rules, and hence R modules in the model, we let p(m) be the marginal probability of activation of module m. Then we define the Collapse-Avg metric C_A as in Equation 16, where the factor R/(R-1) is for normalization. This metric captures the amount of under-utilization across all the modules of the system. A lower number is preferable for this metric, as it demonstrates that all the modules are equally utilized.

\[
C_W = 1 - R \min_{m} p(m) \tag{17}
\]

Collapse-Worst. Given the same data and model setting as above, the Collapse-Worst metric C_W is defined as in Equation 17. This metric captures the amount of under-utilization of the least used module of the system. Again, a low number is preferable, as it signifies that even the least used module is decently utilized by the model.
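A small numpy sketch of these two collapse metrics, computed from the empirical marginal activations p(m) under the equal-number-of-modules-and-rules assumption (our own illustration, not the paper's code):

```python
import numpy as np

def collapse_metrics(p_m):
    """Collapse-Avg (Eq. 16) and Collapse-Worst (Eq. 17) from marginal activations.

    p_m: array of shape (R,), the marginal probability of activating each module.
    """
    R = p_m.shape[0]
    c_avg = (R / (R - 1)) * np.sum(np.maximum(0.0, 1.0 / R - p_m))
    c_worst = 1.0 - R * np.min(p_m)
    return c_avg, c_worst

# Example: with R = 4 modules, a fully collapsed router that only ever uses
# module 0 gives p_m = [1, 0, 0, 0], i.e. Collapse-Avg = 1 and Collapse-Worst = 1.
```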
Specialization Metrics. To complement the collapse metrics, we also propose a set of metrics, (1) Alignment, (2) Adaptation and (3) Inverse Mutual Information, to quantify the amount of specialization obtained by the modular systems. We again consider the setting of equi-probable rules and the same number of modules and rules, R. These metrics are aimed at capturing how well the modules specialize to the rules, that is, whether different modules stick to different rules (good specialization) or whether all modules contribute almost equally to all rules (poor specialization).

\[
s_d = \min_{\mathbf{P} \in \mathcal{S}_R} d\!\left(\mathbf{A}, \mathbf{P}\right) \tag{18}
\]

Alignment. Given a modular system trained on rule-based data with R rules and modules, one can obtain the activation matrix A, where A_{rm} denotes p(module = m | rule = r), that is, the probability of activation of module m conditioned on rule r. Further, given a distance metric d(·, ·) over the space of matrices, perfect specialization can be quantified through Equation 18, where S_R denotes the space of permutation matrices over R objects. We take d(·, ·) to be a normalized L1 distance. The score s_d is the distance between the activation matrix A and its closest permutation matrix, with distances computed according to the metric d(·, ·). Note that s_d → 0 implies that each module specializes to a unique rule, thereby signifying perfect specialization. Since the space of permutation matrices S_R grows factorially, at the rate Θ(R!), computing s_d naively soon becomes intractable. However, we use the Hungarian algorithm (Kuhn, 1955) to compute it in polynomial time. This metric shows how close the learned modular system is to a perfectly specializing one, where a low score implies better specialization.

\[
S_{\mathrm{IMI}} = 1 - \frac{1}{\log R} \, \mathbb{E}_{p(m, r)}\!\left[\log \frac{p(m, r)}{p(m)\, p(r)}\right] \tag{19}
\]

Inverse Mutual Information. Let R be the number of rules and modules, and let the joint distribution p(m, r) denote the activation probability of module m on rule r. The Inverse Mutual Information metric S_IMI is then defined as in Equation 19. A low inverse mutual information is preferable, as it denotes that the modules are more specialized to the rules, as opposed to multiple modules contributing to a single rule.

\[
S_A = \mathbb{E}_{p \sim \mathcal{P}}\!\left[\sum_{i=1}^{R} \left| p(\hat{r}_i) - q(\hat{m}_i) \right| \right] \tag{20}
\]

Adaptation. Let R be the number of rules and modules, and let P be a distribution over the R-dimensional simplex. Further, let p(·) be the distribution over rules (not equi-probable in this metric) and q(·) the corresponding distribution obtained over the modules. Note that the distribution q(·) is dependent on p(·). Given these distributions, we define the Adaptation metric S_A in Equation 20, where r̂_i and m̂_i are such that p(r̂_1) ≤ p(r̂_2) ≤ ... ≤ p(r̂_R) and q(m̂_1) ≤ q(m̂_2) ≤ ... ≤ q(m̂_R), and P is a Dirichlet distribution. This metric can be understood as the amount by which the modules adapt (signified through the distribution q(·)) to changes in the rule distributions (the p(·) sampled from P). The matching between rules and modules is obtained through a simple sort, as defined above. A low adaptation score implies that the marginal distribution of the modules adapts well to the distribution of the rules. That is, when a rule is weakly present in the data, there exists a module which contributes weakly to the corresponding output, averaged over multiple different rule distributions.

To understand these metrics, note that uniform random activation patterns for the modules lead to low collapse metrics but high alignment, adaptation and inverse mutual information metrics, implying little collapse but poor specialization, as expected. On the other hand, GT-Modular systems necessarily lead to low collapse metrics as well as low alignment, adaptation and inverse mutual information, denoting little collapse and good specialization, which is expected since specialization is given as an oracle.
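The specialization metrics can likewise be computed directly from the activation statistics. Below is an illustrative numpy/scipy sketch of Alignment (via the Hungarian algorithm) and Inverse Mutual Information; the exact normalization of the L1 distance is our assumption, since the paper only states that d(·, ·) is a normalized L1 distance, and Adaptation (which additionally samples rule distributions from a Dirichlet) is omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def alignment(A):
    """Eq. 18: L1 distance between the activation matrix A (R x R, rows = rules,
    columns = modules, rows summing to 1) and its closest permutation matrix."""
    R = A.shape[0]
    # Hungarian algorithm: find the permutation that maximizes the matched mass,
    # which is equivalent to minimizing the L1 distance to A.
    row, col = linear_sum_assignment(A, maximize=True)
    P = np.zeros_like(A)
    P[row, col] = 1.0
    return np.abs(A - P).sum() / (2 * R)   # division by 2R is an assumed normalization

def inverse_mutual_information(joint):
    """Eq. 19: 1 - MI(m, r) / log R, with joint p(m, r) of shape (R, R)."""
    R = joint.shape[0]
    pm = joint.sum(axis=1, keepdims=True)   # marginal p(m)
    pr = joint.sum(axis=0, keepdims=True)   # marginal p(r)
    mask = joint > 0
    mi = np.sum(joint[mask] * np.log(joint[mask] / (pm @ pr)[mask]))
    return 1.0 - mi / np.log(R)
```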
6 Experiments

We are now ready to report experiments on the models outlined in Section 4 with the associated data-generation processes described in Section 3. For each level of modularity (i.e. Monolithic, Modular, Modular-op, GT-Modular), we analyse models learning over five different numbers of rules, ranging from few (2) to many (32), five different model capacities (numbers of parameters) and two different training settings, i.e. regression and binary classification. To remove any biases towards particular task parameters (e.g. α_c, β_c in Equation 6), we randomly select new rules to create five different tasks per setting and train five seeds per task. In essence, we train ∼20,000 models (all trained on single V100 GPUs, each taking a few hours) to properly analyse the benefits of modularity, the level of specialization obtained by end-to-end trained systems, the impact of the number of rules and the impact of model capacity.

Performance. We refer the readers to Figure 3 for a compressed overview of the performance of the various models. We see that the GT-Modular system wins most of the time (left), indicating the benefits of perfect specialization. We also see that between standard end-to-end trained Modular and Monolithic systems, the former outperforms, but not by a huge gap. Together, these two pie charts indicate that current end-to-end trained modular systems do not achieve good specialization and are thus sub-optimal by a substantial margin. We then look at the specific architectural choices (MLP, MHA and RNN cells for the functions f and f_m in Table 1) and analyse their performance and trends across increasing numbers of rules. Figure 4 shows that while there are concrete benefits of a perfectly specializing system (GT-Modular), or even of models that know what information to drive specialization from (Modular-op), typical end-to-end trained Modular systems are quite sub-optimal and not able to realize these benefits, especially with increasing numbers of rules, which is where we see substantial benefits of good specialization (contrast Modular vs. GT-Modular and Modular-op). Moreover, while such end-to-end Modular systems do generally outperform the Monolithic ones, it is often only by a small margin. We also show the training pattern of the different models averaged over all other settings, with the average taken over error for classification and loss for regression, in Figure 7. We can see that good specialization not only leads to better performance but also to faster training.

Collapse. We evaluate all the models on the two collapse metrics outlined in Section 5. Figure 5 shows the two collapse metrics, Collapse-Avg and Collapse-Worst, for different models against a varying number of rules, averaged over the different model architectures (MLP, MHA and RNN), training settings (classification and regression), model capacities, tasks and seeds. First, we notice that a Random activation baseline and the GT-Modular system do not have any collapse, which is expected. Next, we notice that both Modular and Modular-op suffer from the problem of collapse, and this problem becomes worse with an increasing number of rules. Figure 6 further shows similar information averaged over the number of rules too, highlighting that Modular-op has less collapse than Modular in general. However, we still see that the problem of collapse is significant whenever back-propagation is tasked with finding the right activation patterns, especially in the regime of a large number of rules. This clearly indicates the need for investigation into different forms of regularization to alleviate some of the collapse problems.
Specialization. Next, we evaluate through the proposed specialization metrics of Section 5 whether the end-to-end trained modular systems actually specialize according to the data-generating distribution. Figure 5 shows the three specialization metrics, Alignment, Adaptation and Inverse Mutual Information, for different models against a varying number of rules, again averaged over different model architectures, training settings, model capacities, tasks and seeds. As expected, we see that the Random activation baseline has poor specialization (high metrics) while the GT-Modular system has very good specialization. We further see that end-to-end trained Modular systems, as well as Modular-op, suffer from sub-optimal specialization, as indicated by the high metrics. As with collapse, we again see that it becomes harder to reach optimal specialization with an increasing number of rules. Figure 6 shows that while Modular-op has marginally better specialization than standard Modular systems, both are quite sub-optimal when compared to a perfectly specializing system, i.e. GT-Modular. We refer the readers to Appendices D, E and F for training details as well as additional experiments regarding the effect of model size for the MLP, MHA and RNN architectures respectively.

7 Conclusion and Discussion

We provide a benchmark suitable for the analysis of modular systems and provide metrics that evaluate them not only on in-distribution and out-of-distribution performance, but also on collapse and specialization. Through our large-scale analysis, we uncover many intriguing properties of modular systems and highlight potential issues that could lead to poor scaling properties of such systems.

Perfect Specialization. We discover that perfect specialization indeed helps in boosting performance both in-distribution and out-of-distribution, especially in the regime of many rules. On the contrary, monolithic systems often do comparably, or sometimes better, when there are only a few rules, but do not rely on specialization to do so.

End-to-End Trained Modular Systems. While Modular systems outperform Monolithic ones, the margin of improvement is often small. This is because, when relying solely on back-propagation of the task losses, these models do not discover perfect specialization. In fact, the problem of poor specialization and high collapse becomes worse with an increasing number of rules. This is slightly mitigated by allowing contextual information from the task to be used explicitly, as is the case for Modular-op, but the problems still persist and get worse over a large number of rules.

In summary, through systematic and extensive experiments, this work shows that modularity, when supported by good and distributed specialization (i.e. little collapse), can outperform monolithic models in both in-distribution and out-of-distribution testing. However, we also find that although perfectly specialized solutions are attainable by modular networks, end-to-end training does not recover them, often even with explicit information about the task context (as in Modular-op). Since real-world data distributions are often complex and unknown, we cannot get access to oracle networks like GT-Modular for analysis. An important conclusion is that additional inductive biases are required to learn adequately specialized solutions. These could include other architectural features to facilitate module routing, regularization schemes (e.g. load-balancing; Fedus et al., 2021) or optimization strategies (e.g. learning-rate scheduling) to promote module specialization.
We refer the reader to Appendices A and B for further discussion on these exciting prospects and on extensions to real-world domains. We believe the framework proposed in this work is ideal to drive research into such inductive biases and a necessary stepping stone for applications of these designs at scale. Finally, we highlight that the use of network architectures that promote contextual specialization, such as the modules studied here, could potentially introduce unwanted biases when deployed in models used by the public, due to collapse or ill-distributed specialization. The framework proposed in this work could help mitigate this potentially problematic impact on society.

Acknowledgments and Disclosure of Funding

SM would like to acknowledge the support of scholarships from UNIQUE and IVADO as well as compute resources from Alliance and its regional partner organizations (ACENET, Calcul Québec, Compute Ontario, the BC DRI Group and the Prairie DRI Group) towards his research. YB and GL acknowledge the support of the Canada CIFAR AI Chair Program, as well as Samsung Electronics Co., Ltd. GL acknowledges NSERC Discovery Grant [RGPIN-2018-04821].
1. What is the focus of the paper regarding neural architectures and their performance?
2. What are the strengths of the paper, particularly in its methodology and contributions?
3. What are the weaknesses of the paper, especially regarding its scope and limitations?
4. How does the reviewer suggest improving the paper's title to better reflect its content?
5. What are some potential limitations of the paper that the reviewer would like to see addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper carefully and thoroughly examines recent trends around modularity in neural architectures, with a special focus on recent sparse mixture-of-experts (MoE) models, through construction of synthetic "rule-based" tasks. These tasks specifically target both the learning and generalization potential of these architectures, showing how various architectural inductive biases perform in the presence of multiple "rules/tasks" (different pathways in an MoE, for example) and in-distribution/out-of-distribution data. Using the proposed rule-based data generation procedure and evaluating three core architectures (MLPs, self-attention, and RNNs with and without various modular architectural tweaks), the results show the impact of modular-constrained specialization (it helps!), and a small gap between "modular" and "monolithic" systems trained end-to-end (we need to do better at training modular systems!).

Strengths And Weaknesses

The strengths of this paper are in its clarity and simplicity. It sets out to rigorously test the abilities of sparse, modular architectures vs. the "monolithic" architectural equivalents — what can these modular architectures learn that monolithic architectures cannot? In an ideal world, are modular architectures better? Being able to construct a simple process for generating data and evaluating these hypotheses is a strong contribution of this work; going further to test the various types of generalization, collapse modes, and carefully probe the "end-to-end" modular learning vs. an "oracle" learning are just additional strengths that really help contextualize what is happening.

The weakness of this paper is that there's little analysis of the existing sparse-MoE models that are trained on tremendous amounts of natural data (e.g., Switch-Transformers, MoE Language Models). It'd be interesting to see if you can construct synthetic language tasks that capture the same type of modularity and show that even when fine-tuning (or zero/few-shot fine-tuning these existing base models), the existing failure modes still appear!

Questions

Nit: Could the title be a bit more descriptive? I understand the desire for something short and punchy, but this paper does a lot of really cool stuff that should be expressed in the title. Perhaps something like "Evaluating Learning & Generalization of Modular Inductive Biases in Neural Architectures through Rigorous Control Tasks"?

Limitations

I believe this paper could do a better job of stating the limitations with respect to the fully synthetic nature of the proposed control tasks. These are absolutely useful; but there will always be a faction of scientists who want to see how real data (especially at scale) interacts with the story presented in this work!
NIPS
Title Is a Modular Architecture Enough?

Abstract Inspired by human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out-of-distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data-generating system for most real-world settings is considered to consist of sparsely interacting parts, and endowing models with similar inductive biases will be helpful. However, the field has been lacking a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential.1

†Correspondence author: sarthmit@gmail.com
1 Open-sourced implementation is available at https://github.com/sarthmit/Mod_Arch

1 Introduction

Deep learning research has an established history of drawing inspiration from neuroscience and cognitive science. From the way hidden units combine afferent inputs, to how connectivity and network architectures are designed, many breakthroughs have relied on mimicking brain strategies. It is no surprise then that modularity and attention have been leveraged, often together, in artificial networks in recent years (Bahdanau et al., 2015; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021), with impressive results. Indeed, work from cognitive neuroscience (Baars, 1997; Dehaene et al., 2017) suggests that the cortex represents knowledge in a modular way, with different such modules communicating through the bottleneck of working memory (where very few items can simultaneously be represented), in which content is selected by attention mechanisms. In recent work from the AI community (Bengio, 2017; Goyal & Bengio, 2020), it was proposed that these characteristics could correspond to meaningful inductive biases for deep networks, i.e., statistical assumptions about the dependencies between concepts manipulated at the higher levels of cognition. Both sparsity of the dependencies between these high-level variables and the decomposition of knowledge into recomposable pieces that are as independent as possible (Peters et al., 2017; Bengio et al., 2019; Goyal & Bengio, 2020; Ke et al., 2021) would make learning more efficient. Out-of-distribution (OoD) generalization would be facilitated by making it possible to sequentially compose the computations performed by these modules, where new situations can be explained by novel combinations of existing concepts.
Although a number of recent results hinge on such modular architectures (Graves et al., 2014; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Santoro et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021; Locatello et al., 2020; Mittal et al., 2020; Madan et al., 2021), the abundance of tricks and proposed architectural modifications makes it challenging to parse real, usable architectural principles. It is also unclear whether the performance gains obtained by such Mixture-of-Experts (MoE) based modular systems are actually due to good specialization, as is often claimed, or due to other potential confounding factors like ease of optimization.

In this work, we extend the analysis from Rosenbaum et al. (2019); Maziarz et al. (2019); Cui & Jaech (2020); Csordás et al. (2020) and propose a principled approach to evaluate, quantify, and analyse common ingredients of modular architectures, supported by either standard MLP-like connectivity, recurrent connections or attention (Bahdanau et al., 2015; Vaswani et al., 2017) operations. To do so, we develop a series of benchmarks and metrics aimed at probing the efficacy of a wide range of modular networks, where computation is factorized. This reveals valuable insights and helps identify not only where current approaches succeed but also when and how they fail. Whereas previous work on disentangling (Bengio, 2013; Higgins et al., 2016; Kim & Mnih, 2018) has focused on factoring out the different high-level variables that explain the data, here we focus on disentangling system modules from each other, on the structural ingredients of a network that can facilitate this factorization, and on how such ingredients relate to the data-generating distributions being parsed and processed. Given the recent increased interest in sparse modular systems (Rahaman et al., 2021; Fedus et al., 2021; Du et al., 2021; Mittal et al., 2021), we believe that this work will provide a test-bed for investigating the workings of such models and allow for research into inductive biases that can push such models to achieve good specialization.

Through detailed experiments and evaluation metrics, we make the following observations and contributions:
• We develop benchmark tasks and metrics based on probabilistically selected rules to quantify two important phenomena in modular systems, the extent of collapse and of specialization.
• We distill commonly used modularity inductive biases and systematically evaluate them through a series of models aimed at extracting commonly used architectural attributes (Monolithic, Modular, Modular-op and GT-Modular models).
• We find that specialization in modular systems leads to significant boosts in performance when there are many underlying rules within a task, but not so much with only few rules.
• We find standard modular systems to be often sub-optimal both in their capacity to focus on the right information and in their ability to specialize, suggesting the need for additional inductive biases.

2 Notation / Terminology

In this paper, we study how a family of modular systems performs on a common set of tasks, prescribed by a synthetic data-generating process which we call rule-based data.
Below, we introduce the notation for key ingredients: (1) rules and how they form tasks, (2) modules and how they can take different model architectures, (3) specialization and how we evaluate models. We refer the reader to Figure 1 for an illustration of our setup. Rules. To properly understand modular systems and analyze their benefits and shortcomings, we consider synthetic settings that allow fine-grained control over different aspects of task requirements. In particular, operations must be learned on the data-generating distri- bution illustrated in Equations 1-3, which we also refer to as rules. Details about the exact operations used in experiments are described in Section 3. c ∼ Categorical(·) (1) x ∼ px(·) (2) y |x, c ∼ py(· |x, c). (3) Given this distribution, we define a rule to be an expert of this distribution, that is, rule r is defined as py(· |x, c = r) where c is a categorical variable representing context, and x is an input sequence. For example, consider x = (1, 2) and c to select between addition and multiplication. Then, depending on c, the correct output should be either y = 3 or y = 2. More details about the specifics of these data distributions are presented in Section 3. Systems will be trained to infer y given c and x. This simple setup is meant to capture context-dependent tasks on variable data distributions, e.g. reasoning according to different features (e.g. shape, color, etc.). However, unlike such complex systems, ground truth knowledge of required operations is known for our synthetic task, allowing for deeper quantitative analysis. Tasks. A task is described by the set of rules (data-generating distribution) illustrated in Equations 1-3. Different sets of {py(· |x, c)}c imply different tasks. For a given number of rules, we train models on multiple tasks to remove bias towards any particular task. Modules. A modular system comprises a set of neural network modules, each of which can contribute to the overall output. One can see this through the functional form y = ∑M m=1 pm ym, where ym denotes the output and pm the activation of the mth module. Details about the different modular systems are outlined in Section 4. From this point onwards, we exclusively use rules to refer to the specialized components in the data-generating process, and modules to refer to the experts that are learned by a modular system. Further, for ease of quantitative assessment, we always set the number of modules equal to the number of rules, except when evaluating monolithic models (with a single module). Modules can be implemented in three different architectures, as described next. Model Architectures. Model architectures describe the choice of architecture considered for each module of a modular system, or the single module in a monolithic system. Here we consider Multi-Layer Perceptron (MLP), Multi-Head Attentions (MHA), and Recurrrent Neural Network (RNN). Importantly, the rules (or data generating distributions) are adapted to the model architecture, and we often refer to them as such (e.g. MLP based rules). Details about the data distributions and models considered in this work are provided in Sections 3 and 4 respectively. Perfect Specialization. When training modular systems on rule-based data, we would like the modules to specialize according to the rules in the data-generating distribution. Thus, there is an important need to quantify what constitutes perfect specialization of the system to the data. 
To allow for easier quantification, we always consider an equal number of modules and rules. However, future work should evaluate the ability of modular systems to automatically infer the required number of modules. 3 Data Generating Process Since we aim to study modular systems through synthetic data, here we flesh out the data-generating processes operating based on the rules scheme described above (see Equations 1-3). We use a simple Mixture-of-Experts (MoE) Yuksel et al. (2012); Masoudnia & Ebrahimpour (2014) styled data-generating process (Mixture Distribution), where we expect different modules to specialize to the different mixture components (rules). It is important to note that this system is slightly different from the traditional flat MoE since the experts are more plug-and-play and can be composed to solve a particular problem. As an example, if we consider a mixture of recurrent systems, different tokens (timepoints) in the input sequence can undergo computations according to different rules (e.g. a switching linear dynamical system), as opposed to the choice of expert being governed by the whole sequence. We now look at more specific setups of the data-generating systems in consideration, the general template of which was outlined above. To do so, we explain the data-generating processes amenable to our three model architectures: MLP, MHA, and RNN. Additionally, each of the following tasks have two versions: regression, and classification. These are included to explore potential differences these distinct loss types may induce. c ∼ U{1, R} (4) x1,x2 iid∼ N (0, I) (5) y = αcx1 + βcx2 (6) MLP. Here, we define the data scheme that is amenable for learning of modular MLP-based systems. In this synthetic data-generating scheme, a data sample consists of two independent numbers and a choice of rule being sampled from some distribution. Different rules lead to different linear combinations of the two numbers to give the output. That is, the choice of linear combination is dynamically instantiated based on the rule drawn. This is mathematically formulated in Equations 4-6, where αc and βc are the data parameters, I the identity matrix and y denotes the label for the regression tasks and sign(y) for the classification tasks. Hence, the data comes from a MoE distribution where c denotes which linear combination governs the conditional distribution py(· |x1,x2, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. cn iid∼ U{1, R} (7) qnr,q ′ nr,vnr,v ′ nr iid∼ N (0, I) (8) sn = min i ̸=n d (qncn ,qicn) (9) s′n = min i ̸=n d ( q′ncn ,q ′ icn ) (10) yn = αcnvsncn + βcnv ′ s′ncn (11) MHA. Now, we define the data scheme that is tuned for learning in modular MHA based systems. Essentially, a MHA module can be understood through a set of searches (query-key interactions), a set of corresponding retrievals (values) and then some computation of the retrieved values, as explained by Mittal et al. (2021). Accordingly, we design the data-generating distribution with the following properties: Each rule is composed of a different notion of search, retrieval and the final linear combination of the retrieved information respectively. We mathematically describe the process in Equations 7-11, where n = 1, ..., N and r = 1, ..., R with N as the sequence length and R the number of rules. We denote the tuple (qnr,q′nr,vnr,v ′ nr) as xn. 
Further, yn denotes the label for the regression tasks while for classification, we consider the categorical label to be sign(yn). Thus, we can see that cn denotes the rule for the nth token. This rule governs which two tokens are closest to the nth token, demonstrated as sn and s′n. It also governs what features are retrieved from the searched tokens, which are vsncn and vs′ncn . These retrieved features then undergo a rule-dependent linear combination (on cn). Here, too, when training a modular MHA architecture, we want each MHA module in the system to be able to specialize to a unique MHA rule in the data system. cn iid∼ U{1, R} (12) xn iid∼ N (0, I) (13) sn = Acnsn−1 +Bcnxn (14) yn = w T sn (15) RNN. For recurrent systems, we define a rule as a kind of linear dynamical system, where one of multiple rules can be triggered at any time-point. Mathematically, this process can be defined through Equations 12-15, where n = 1, ...N , with N describing the sequence length. Each rule thus describes a different procedure for the update of the state st as well as the effect of the input xt to the state. Thus, we can see that cn denotes the rule to be used at the nth time-point. Further, yn denotes the label for the regression tasks while for classification, we consider the labels as sign(yn). Hence, in all settings, the data comes from a MoE distribution where c denotes the rule and governs the conditional py(· |x, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. Our aim is to use these synthetic rule-based data setting to study and analyse modular systems and understand whether end-to-end trained modular systems concentrate on the right information to specialize based on, i.e. based on c, whether they do learn perfect specialization and whether perfect specialization actually helps in these settings. To properly understand this, we detail the different kinds of models considered in Section 4 as well as the different metrics proposed in Section 5 to analyse trained systems. For this work, we limit our analysis to infinite-data regime where each training iteration operates on a new data sample Future work would perform similar analysis in the regime of limited data. 4 Models Model Functional Form Several works claim that end-to-end trained modular systems outperform their monolithic counterparts, especially in out-of-distribution settings. However, there is a lack of step-by-step analysis on the benefits of such systems and whether they actually specialize according to the data generating distribution or not. To perform an in-depth analysis, we consider four different types of models that allow for varying levels of specialization, which are: Monolithic, Modular, Modular-op, and GT-Modular. We give the formulations for each of these models below and then discuss the different analysis we can perform through them. We also illustrate these models in Table 1 and depending on the data-generating procedure described in Section 3, f and fm can be implemented as either MLP, MHA or RNN cells in this work. Monolithic. A monolithic system is a big neural network that takes the entire data (x, c) as input and makes predictions ŷ based on it. There is no inductive bias about modularity or sparsity explicitly baked in the system and it is completely up to back-propagation to learn whatever functional form is needed to solve the task. 
An example of such a system is a traditional Multi-Head Attention (MHA) based system, eg. a Transformer. Modular. A modular system is composed of a number of modules, each of which is a neural network of a given architectural type (MLP, MHA, or RNN). Each module m takes the data (x, c) as input and computes an output ŷm and a confidence score, normalized across modules into an activation probability pm. The activation probability reflects the contribution of each module’s output to the final output ŷ of the system. Thus, there is an explicit baked-in inductive bias of modularity but it is still up to system-wide back-propagation to figure out the right specialization. An example of such a system is a mixture of MLPs or reusable RNNs, reusable across different time/positions. Modular-op. A modular-op (for operation only) system is very similar to the modular system with just one small difference. Instead of the activation probability pm of module m being a function of (x, c), we instead make sure that the activation is decided only by the rule context c. Hence, unlike modular systems, modular-op cannot be distracted by x in figuring out specialization of different modules. Even though the operation required is explicitly provided, this model still needs to learn specialization through back-propagation. GT-Modular. A GT-modular system (for ground truth) serves as an oracle benchmark, i.e., a modular system that specializes perfectly. In particular, the activation probability p′ms of modules are just set according to c, which is the indicator present in the data (x, c). Thus, this is a perfectly specializing system that chooses different modules sparsely and perfectly according to the different data rules. Given enough capacity, we can see that there is a hierarchy of models based on the functions they can implement, with GT-Modular ⊆ Modular-op ⊆ Modular ⊆ Monolithic. Put differently, models from Monolithic to GT-Modular increasingly incorporate the inductive biases for modularity and sparsity. This is proved in Appendix C by inspecting the function classes implemented by these models. In what follows, we want to analyse the benefits of having simple end-toend trained modular systems as opposed to monolithic ones. This can be understood through a comparison of various performance based metrics between Monolithic and Modular models, explained in the next section. This will allow us to answer if a modular architecture is always better for various distinct rule-based data generating systems. For instance, a comparison between the Modular and Modularop models will show whether the stan- dard modular systems are able to focus on the right information and ignore the distractors in driving specialization. To study this, we will look at performance as well as collapse and specialization metrics between these class of models. A comparison between GT-Modular and Modular-op will show the benefits of having a sparse activation pattern with proper resource allocation of modules as opposed to an end-to-end learned specialization on the right information (without distractors). Finally, we note that GT-Modular is a modular system which obtains perfect specialization. Through this model, we aim to analyse whether perfect specialization is in-fact important and if so, how far are typical modular systems from obtaining similar performance and specialization through end-to-end training. We now describe the metrics used for these evaluations. 
5 Metrics To reliably evaluate modular systems, we propose a suite of metrics that not only gauge the performance benefits of such systems but also evaluate them across two important modalities: collapse and specialization, which we use to analyse the extent of resource allocation (in terms of parameters/modules) and specialization respectively of a modular system. Performance. The first set of evaluation metrics are based on performance of the models in both in-distribution as well as out-of-distribution (OoD) settings. These metrics capture how well the different models perform on a wide variety of different tasks. For classification settings, we report the classification error while for regression settings, we report the loss. In-Distribution. This refers to the in-distribution performance, evaluated by looking at both the final performance as well as convergence speeds of the different models. Out-of-Distribution. This refers to the OoD performance of different models. We consider very simple forms of OoD generalization: either (a) change in distribution of x by increasing variance, or (b) different sequence lengths, wherever the possibility presents (eg. in MHA and RNN). Collapse Metrics. We propose a set of metrics Collapse-Avg and Collapse-Worst that quantify the amount of collapse suffered by a modular system. Collapse refers to the degree of under-utilization of the modules. An example of this is illustrated in Figure 2, where we can see that Module 3 is never used. We consider the setting where all the data rules are equi-probable and the number of modules in the model are set to be the same as the number of data rules, to R. High collapse thus refers to under-utilization of resource (parameters) provided to the model, illustrating that certain modules are never being used and concurrently meaning that certain modules are being utilized for multiple rules. CA = R R− 1 R∑ m=1 max ( 0, 1 R − p(m) ) (16) Collapse-Avg. Given the data-setting with R equi-probable rules, and hence R modules in the model, we let p(m) be the marginal probability distribution of activation of module m. Then, we define the Collapse-Avg metric CA as in Equation 16, where RR−1 is for normalization. This metric captures the amount of under-utilization of all the modules of the system. A lower number is preferable for this metric, as a lower number demonstrates that all the modules are equally utilized. CW = 1−R min m p(m) (17)Collapse-Worst. Given the same data and model setting as above, the Collapse-Worst metric CW is defined as in Equation 17. This metric captures the amount of under-utilization of the least used module of the system. Again, a low number is preferable as it signifies that even the least used module is decently utilized by the model. Specialization Metrics. To complement collapse metrics, we also propose a set of metrics, (1) Alignment, (2) Adaptation and (3) Inverse Mutual Information to quantify the amount of specialization obtained by the modular systems. We again consider the setting of equi-probable rules and the same number of modules and rules R. These metrics are aimed at capturing how well the modules specialize to the rules, that is, whether different modules stick to different rules (good specialization) or whether all modules contribute almost equally to all rules (poor specialization). sd = min P∈SR d (A,P) (18) Alignment. 
Given a modular system trained on rule-based data with R rules and modules, one can obtain the activation matrix A, where Arm denotes p(module = m | rule = r), that is, the probability of activation of module m conditioned on rule r. Further, given a distance metric d(·, ·) over the space of matrices, perfect specialization can be quantified through Equation 18, where SR denotes the space of permutation matrices over R objects. We consider d(·, ·) as a normalized L1 distance. The score sd demonstrates the distance between the activation matrix A and its closest permutation matrix, with distances computed according to the metric d(·, ·). Note that sd → 0 implies that each module specializes to a unique rule, thereby signifying perfect specialization. Since the space of permutation matrices SR grows exponentially at the rate of Θ(R!), computing sd naively soon becomes intractable. However, we use the Hungarian algorithm (Kuhn, 1955) to compute it in polynomial time. This metric shows how close the learned modular system is to a perfectly specializing one, where a low score implies better specialization. SIMI = 1− 1 logR Ep(m,r) [ log p(m, r) p(m)× p(r) ] (19) Inverse Mutual Information. Given R as the number of rules and modules and let the joint distribution p(m, r) denote the activation probability of module m on rule r, the Inverse Mutual Information metric SIMI is defined as in Equation 19. A low inverse mutual information metric is preferable as it denotes that the modules are more specialized to the rules as opposed to multiple modules contributing to a single rule. SA = Ep∼P [ R∑ i=1 ∣∣∣p(r̂i)− q(m̂i)∣∣∣] (20) Adaptation. Let R be the number of rules and modules and P a distribution over the R-dimensional simplex. Further, let p(·) be the distribution over rules (not equi-probable in this metric) and q(·) the corresponding distribution obtained over the modules. Note that the distribution q(·) is dependent on p(·). Given these distributions, we define the Adaptation metric SA in Equation 20, where r̂i and m̂i are such that p(r̂1) ≤ p(r̂2) ≤ ... ≤ p(r̂R) and q(m̂1) ≤ q(m̂2) ≤ ... ≤ q(m̂R) and P is a dirichlet distribution. This metric can be understood as the amount by which the modules adapt (signified through the distribution q(·)) to changes in the rule distributions (which are p(·) sampled from P). The matching between the rule and module is obtained through a simple sort as defined above. A low adaptation score implies that the marginal distribution of the modules adapt well according to the distribution of the rules. That is, when a rule is weakly present in the data, there exists a module which weakly contributes in the corresponding output, averaged over multiple different rule distributions. To understand these metrics, note that uniform random activation patterns for the modules lead to low collapse metrics but high alignment, adaptation and inverse mutual information metrics, implying little collapse but poor specialization, as expected. On the other hand, GT-Modular systems necessarily lead to low collapse metrics as well as low alignment, adaptation and inverse mutual information, denoting little collapse and good specialization, which is expected since specialization is given as oracle. 6 Experiments We are now ready to report experiments on the models outlined in Section 4 with associated data generation processes described in Section 3. For each level of modularity (i.e. 
Monolithic, Modular, Modular-op, GT-Modular), we analyse models learning over five different number of rules, ranging from few (2) to many (32), five different model capacities (number of parameters) and two different training settings, i.e. regression and binary classification. To remove any biases towards particular task parameters (e.g. αc, βc in Equation 6), we randomly select new rules to create five different tasks per setting and, train five seeds per task. In essence, we train ∼20,000 models2 to properly analyse the benefits of modularity, the level of specialization obtained by end-to-end trained systems, the impact of number of rules and the impact of model capacity. Performance. We refer the readers to Figure 3 for a compressed overview on the performance of various models. We see that GT-Modular system wins most of the times (left), indicating the benefits of perfect specialization. We also see that between standard end-to-end trained Modular and Monolithic systems, the former outperforms but not by a huge gap. Together, these two pie charts indicate that current end-to-end trained modular systems do not achieve good specialization and are thus sub-optimal by a substantial margin. We then look at the specific architectural choices (MLP, MHA and RNN cells for functions f and fm in Table 1) and analyse their performance and trends across increasing number of rules. Figure 4 shows that while there are concrete benefits of a perfectly specializing system (GT-Modular) or even models that know what information to drive specialization from (Modular-op), typical end-to-end trained Modular systems are quite sub-optimal and not able to realize these benefits, especially with increasing number of rules which is where we see substantial benefits of good specialization (contrast Modular vs GT-Modular and Modular-op). Moreover, while such end-to- end Modular systems do generally outperform the Monolithic ones, it is often only by a small margin. We also see the training pattern of different models averaged over all other settings, with the average containing error for classification and loss for regression, in Figure 7. We can see that good specialization not only leads to better performances but also faster training. Collapse. We evaluate all the models on the two collapse metrics outlined in Section 5. Figure 5 shows the two collapse metrics, Collapse-Avg and Collapse-Worst, for different models against 2All models are trained on single V100 GPUs, each taking a few hours. varying number of rules, averaged over the different model architectures (MLP, MHA and RNN), training settings (Classification and Regression), model capacities, tasks and seeds. First, we notice that a Random activation baseline and the GT-Modular system do not have any collapse, which is expected. Next, we notice that both Modular and Modular-op suffer from the problems of collapse and this problem becomes worse with increasing number of rules. Figure 6 further shows similar information averaged over the number of rules too, highlighting that Modular-op has less collapse than Modular in general. However, we still see that the problem of collapse is significant whenever back-propagation is tasked with finding the right activation patterns, especially in the regime of large number of rules. This clearly indicates the need for investigation into different forms of regularizations to alleviate some of the collapse problems. Specialization. 
Next, we evaluate through the proposed specialization metrics in Section 5 whether the end-to-end trained modular systems actually specialize according to the data-generating distribution. Figure 5 shows the three specialization metrics, Alignment, Adaptation and Inverse Mutual Information, for different models against varying number of rules, again averaged over different model architectures, training settings, model capacities, tasks and seeds. As expected, we see that the Random activation baseline has poor specialization (high metrics) while the GT-Modular system has very good specialization. We further see that end-to-end trained Modular systems as well as Modularop suffer from sub-optimal specialization, as indicated by the high metrics. As with collapse, we again see that it becomes harder to reach optimal specialization with increasing number of rules. Figure 6 shows that while Modular-op has marginally better specialization than standard Modular systems, they are indeed quite sub-optimal when compared to a perfectly specializing system, i.e. GT-Modular. We refer the readers to Appendix D, E and F for training details as well as additional experiments regarding the effect of model sizes for MLP, MHA and RNN architectures respectively. 7 Conclusion and Discussion We provide a benchmark suitable for the analysis of modular systems and provide metrics that not only evaluate them on in-distribution and out-of-distribution performance, but also on collapse and specialization. Through our large-scale analysis, we uncover many intriguing properties of modular systems and highlight potential issues that could lead to poor scaling properties of such systems. Perfect Specialization. We discover that perfect specialization indeed helps in boosting performance both in-distribution and out-of-distribution, especially in the regime of many rules. On the contrary, monolithic systems often do comparatively or sometimes better when there are only a few rules, but do not rely on specialization to do so. End-to-End Trained Modular systems. While Modular systems outperform Monolithic ones, the margin of improvement is often small. This is because when solely relying on back-propagation of the task-losses, these models do not discover perfect specialization. In fact, the problem of poor specialization and high collapse becomes worse with increasing number of rules. This is slightly mitigated by allowing contextual information from the task to be used explicitly, as is the case for Modular-op, but the problems still persist and get worse over large number of rules. In summary, through systematic and extensive experiments, this work shows that modularity, when supporting good and distributed specialization (i.e. little collapse), can outperform monolithic models both in and out of distribution testing. However, we also find that although perfectly specialized solutions are attainable by modular networks, end-to-end training does not recover them, often even with explicit information about task context (as in Modular-op). Since real-world data distributions are often complex and unknown, we cannot get access to oracle networks like GT-Modular for analysis. An important conclusion is that additional inductive biases are required to learn adequately specialized solutions. These could include other architectural features to facilitate module routing, or regularization schemes (e.g. load-balancing Fedus et al. (2021)) or optimization strategies (e.g. learning rate scheduling) to promote module specialization. 
We refer the reader to Appendices A and B for further discussion on these exciting prospects and on extensions to real-world domains. We believe the framework proposed in this work is well suited to drive research into such inductive biases and is a necessary stepping stone for applying these designs at scale. Finally, we highlight that network architectures that promote contextual specialization, such as the modules studied here, could introduce unwanted biases when deployed in models used by the public, due to collapse or ill-distributed specialization. The framework proposed in this work could help mitigate this potentially problematic impact on society. Acknowledgments and Disclosure of Funding SM would like to acknowledge the support of scholarships from UNIQUE and IVADO as well as compute resources from Alliance and its regional partner organizations (ACENET, Calcul Québec, Compute Ontario, the BC DRI Group and the Prairie DRI Group) towards his research. YB and GL acknowledge the support from the Canada CIFAR AI Chair Program, as well as Samsung Electronics Co., Ltd. GL acknowledges NSERC Discovery Grant [RGPIN-2018-04821].
1. What are the main contributions and findings of the paper regarding modular network architectures? 2. What are the strengths and weaknesses of the paper, particularly in terms of its organization, data process, and proposed metrics? 3. Do you have any concerns or questions about the conclusions drawn from the experiments, especially regarding the generalizability of the results to real-world scenarios? 4. How does the reviewer assess the significance and novelty of the paper's content, especially compared to prior works in the field? 5. Are there any limitations or areas for improvement in the paper that the reviewer identifies?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper studies the benefits of MoE-like modular networks in terms of many metrics, e.g., in/out-of-distribution performance, Collapse-Avg/Worst, alignment, adaptation and inverse mutual information. The authors generate data, rules and tasks via a synthetic data-generating process, and study monolithic, modular and modular-op model architectures against the ground-truth modular structure. They point out that an architecture with a modular prior is not enough to perfectly learn the ground truth. Strengths And Weaknesses + Whether a modular architecture helps multi-task learning is studied from a well-defined perspective. + The data-generating process and the developed metrics sound reasonable. + The paper is well-organized and easy to follow. - It is unclear how well the proposed data-generating process reflects real-world rules and data settings. - A more important and meaningful metric, in addition to the proposed ones, would be transfer ability, or compositional generalization to new tasks, which is thought to be a key advantage of sparse and modular designs. The conclusions of the paper are less informative without this part. - The MoE structure is only one implementation of modular architectures, and the title is somewhat ambiguous. Questions It seems that modular-op usually comes with better performance on all metrics defined by the authors, but how general is this conclusion? This is an important conclusion, since it could guide the design of task-level, task-and-input-level, or input-level (e.g., token-level) gating. My major concern is that the generating process of the synthetic data may not necessarily match any real-world case. The conclusion of the paper seems somewhat straightforward, considering the synthetic data-generation process. It would be more informative if the authors further studied transfer ability of modular structures. In the experiments, is the capacity of the monolithic model the same as that of the modular, modular-op and gt-modular models? Limitations N/A
NIPS
Title Is a Modular Architecture Enough? Abstract Inspired from human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out-of-distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data generating system for most real-world settings is considered to consist of sparsely interacting parts, and endowing models with similar inductive biases will be helpful. However, the field has been lacking in a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential.1 1 Introduction Deep learning research has an established history of drawing inspiration from neuroscience and cognitive science. From the way hidden units combine afferent inputs, to how connectivity and network architectures are designed, many breakthroughs have relied on mimicking brain strategies. It is no surprise then that modularity and attention have been leveraged, often together, in artificial networks in recent years (Bahdanau et al., 2015; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021), with impressive results. Indeed, work from cognitive neuroscience (Baars, 1997; Dehaene et al., 2017) suggests that cortex represents knowledge in a modular way, with different such modules communicating through the bottleneck of working memory (where very few items can simultaneously be represented), in which content is selected by attention mechanisms. In recent work from the AI community (Bengio, 2017; Goyal & Bengio, 2020), it was proposed that these characteristics could correspond to meaningful inductive biases for deep networks, i.e., statistical assumptions about the dependencies between concepts manipulated at the higher levels of cognition. Both sparsity of the dependencies between these high-level variables and the decomposition of knowledge into recomposable pieces that are as independent as possible (Peters et al., 2017; Bengio et al., 2019; Goyal & Bengio, 2020; Ke et al., 2021) would make learning more efficient. Out-of-distribution (OoD) generalization would be facilitated by making it possible to sequentially compose the computations performed by these modules where new situations can be explained by novel combinations of existing concepts. 
Although a number of recent results hinge on such modular architectures (Graves et al., 2014; Andreas et al., 2016; Hu et al., 2017; Vaswani et al., 2017; Kipf et al., 2018; Santoro et al., 2018; Battaglia et al., 2018; Goyal et al., 2019, 2021; Locatello et al., 2020; Mittal et al., 2020; Madan et al., 2021), †Correspondence authors sarthmit@gmail.com 1Open-sourced implementation is available at https://github.com/sarthmit/Mod_Arch 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the abundance of tricks and proposed architectural modifications makes it challenging to parse real, usable architectural principles. It is also unclear whether the performance gains obtained by such Mixture-of-Experts (MoE) based modular systems are actually due to good specialization, as is often claimed, or due to other potential confounding factors like ease of optimization. In this work, we extend the analysis from Rosenbaum et al. (2019); Maziarz et al. (2019); Cui & Jaech (2020); Csordás et al. (2020) and propose a principled approach to evaluate, quantify, and analyse common ingredients of modular architectures, supported by either standard MLP-like connectivity, recurrent connections or attention (Bahdanau et al., 2015; Vaswani et al., 2017) operations. To do so, we develop a series of benchmarks and metrics aimed at probing the efficacy of a wide range of modular networks, where computation is factorized. This reveals valuable insights and helps identify not only where current approaches succeed but also when and how they fail. Whereas previous work on disentangling (Bengio, 2013; Higgins et al., 2016; Kim & Mnih, 2018) has focused on factoring out the different high-level variables that explain the data, here we focus on disentangling system modules from each other, the structural ingredients of network that can facilitate this factorization, and how such ingredients relate to the data generating distributions being parsed and processed. Given the recent increased interest in sparse modular systems (Rahaman et al., 2021; Fedus et al., 2021; Du et al., 2021; Mittal et al., 2021), we believe that this work will provide a test-bed for investigating the workings of such models and allow for research into inductive biases that can push such models to achieve good specialization. Through detailed experiments and evaluation metrics, we make the following observations and contributions: • We develop benchmark tasks and metrics based on probabilistically selected rules to quantify two important phenomena in modular systems, the extent of collapse and specialization. • We distill commonly used modularity inductive biases and systematically evaluate them through a series of models aimed at extracting commonly used architectural attributes (Monolithic, Modular, Modular-op and GT-Modular models). • We find that specialization in modular systems leads to significant boosts in performance when there are many underlying rules within a task, but not so much with only few rules. • We find standard modular systems to be often sub-optimal in both their capacity on focusing on the right information as well as in their ability to specialize, suggesting the need for additional inductive biases. 2 Notation / Terminology In this paper, we study how a family of modular systems performs on a common set of tasks, prescribed by a synthetic data generating process which we call rule-based data. 
Below, we introduce the notation for key ingredients: (1) rules and how they form tasks, (2) modules and how they can take different model architectures, (3) specialization and how we evaluate models. We refer the reader to Figure 1 for an illustration of our setup. Rules. To properly understand modular systems and analyze their benefits and shortcomings, we consider synthetic settings that allow fine-grained control over different aspects of task requirements. In particular, operations must be learned on the data-generating distri- bution illustrated in Equations 1-3, which we also refer to as rules. Details about the exact operations used in experiments are described in Section 3. c ∼ Categorical(·) (1) x ∼ px(·) (2) y |x, c ∼ py(· |x, c). (3) Given this distribution, we define a rule to be an expert of this distribution, that is, rule r is defined as py(· |x, c = r) where c is a categorical variable representing context, and x is an input sequence. For example, consider x = (1, 2) and c to select between addition and multiplication. Then, depending on c, the correct output should be either y = 3 or y = 2. More details about the specifics of these data distributions are presented in Section 3. Systems will be trained to infer y given c and x. This simple setup is meant to capture context-dependent tasks on variable data distributions, e.g. reasoning according to different features (e.g. shape, color, etc.). However, unlike such complex systems, ground truth knowledge of required operations is known for our synthetic task, allowing for deeper quantitative analysis. Tasks. A task is described by the set of rules (data-generating distribution) illustrated in Equations 1-3. Different sets of {py(· |x, c)}c imply different tasks. For a given number of rules, we train models on multiple tasks to remove bias towards any particular task. Modules. A modular system comprises a set of neural network modules, each of which can contribute to the overall output. One can see this through the functional form y = ∑M m=1 pm ym, where ym denotes the output and pm the activation of the mth module. Details about the different modular systems are outlined in Section 4. From this point onwards, we exclusively use rules to refer to the specialized components in the data-generating process, and modules to refer to the experts that are learned by a modular system. Further, for ease of quantitative assessment, we always set the number of modules equal to the number of rules, except when evaluating monolithic models (with a single module). Modules can be implemented in three different architectures, as described next. Model Architectures. Model architectures describe the choice of architecture considered for each module of a modular system, or the single module in a monolithic system. Here we consider Multi-Layer Perceptron (MLP), Multi-Head Attentions (MHA), and Recurrrent Neural Network (RNN). Importantly, the rules (or data generating distributions) are adapted to the model architecture, and we often refer to them as such (e.g. MLP based rules). Details about the data distributions and models considered in this work are provided in Sections 3 and 4 respectively. Perfect Specialization. When training modular systems on rule-based data, we would like the modules to specialize according to the rules in the data-generating distribution. Thus, there is an important need to quantify what constitutes perfect specialization of the system to the data. 
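To make the rule-based setup concrete, the following is a minimal sketch of the generative process in Equations 1-3, instantiated with the addition/multiplication example above. The function names, the uniform choice of context and the two-dimensional input are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative rules: rule 0 adds the two inputs, rule 1 multiplies them.
RULES = [lambda x: x[0] + x[1],      # p_y(. | x, c = 0)
         lambda x: x[0] * x[1]]      # p_y(. | x, c = 1)

def sample(n):
    """Draw n samples from the rule-based process of Equations 1-3."""
    c = rng.integers(0, len(RULES), size=n)                  # c ~ Categorical(.)
    x = rng.normal(size=(n, 2))                              # x ~ p_x(.)
    y = np.array([RULES[ci](xi) for ci, xi in zip(c, x)])    # y | x, c
    return x, c, y

x, c, y = sample(4)
# A model must predict y from (x, c); ideally, each rule c is handled by a
# dedicated, specialized module of a modular system.
```

Returning to the question of quantifying specialization: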
To allow for easier quantification, we always consider an equal number of modules and rules. However, future work should evaluate the ability of modular systems to automatically infer the required number of modules. 3 Data Generating Process Since we aim to study modular systems through synthetic data, here we flesh out the data-generating processes operating based on the rules scheme described above (see Equations 1-3). We use a simple Mixture-of-Experts (MoE) Yuksel et al. (2012); Masoudnia & Ebrahimpour (2014) styled data-generating process (Mixture Distribution), where we expect different modules to specialize to the different mixture components (rules). It is important to note that this system is slightly different from the traditional flat MoE since the experts are more plug-and-play and can be composed to solve a particular problem. As an example, if we consider a mixture of recurrent systems, different tokens (timepoints) in the input sequence can undergo computations according to different rules (e.g. a switching linear dynamical system), as opposed to the choice of expert being governed by the whole sequence. We now look at more specific setups of the data-generating systems in consideration, the general template of which was outlined above. To do so, we explain the data-generating processes amenable to our three model architectures: MLP, MHA, and RNN. Additionally, each of the following tasks have two versions: regression, and classification. These are included to explore potential differences these distinct loss types may induce. c ∼ U{1, R} (4) x1,x2 iid∼ N (0, I) (5) y = αcx1 + βcx2 (6) MLP. Here, we define the data scheme that is amenable for learning of modular MLP-based systems. In this synthetic data-generating scheme, a data sample consists of two independent numbers and a choice of rule being sampled from some distribution. Different rules lead to different linear combinations of the two numbers to give the output. That is, the choice of linear combination is dynamically instantiated based on the rule drawn. This is mathematically formulated in Equations 4-6, where αc and βc are the data parameters, I the identity matrix and y denotes the label for the regression tasks and sign(y) for the classification tasks. Hence, the data comes from a MoE distribution where c denotes which linear combination governs the conditional distribution py(· |x1,x2, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. cn iid∼ U{1, R} (7) qnr,q ′ nr,vnr,v ′ nr iid∼ N (0, I) (8) sn = min i ̸=n d (qncn ,qicn) (9) s′n = min i ̸=n d ( q′ncn ,q ′ icn ) (10) yn = αcnvsncn + βcnv ′ s′ncn (11) MHA. Now, we define the data scheme that is tuned for learning in modular MHA based systems. Essentially, a MHA module can be understood through a set of searches (query-key interactions), a set of corresponding retrievals (values) and then some computation of the retrieved values, as explained by Mittal et al. (2021). Accordingly, we design the data-generating distribution with the following properties: Each rule is composed of a different notion of search, retrieval and the final linear combination of the retrieved information respectively. We mathematically describe the process in Equations 7-11, where n = 1, ..., N and r = 1, ..., R with N as the sequence length and R the number of rules. We denote the tuple (qnr,q′nr,vnr,v ′ nr) as xn. 
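Before continuing with the MHA scheme, here is a minimal sketch of the simpler MLP rule scheme of Equations 4-6 above. All names, the input dimension and the per-rule scalar coefficients drawn from a standard normal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp_rule_data(n, num_rules, dim=2):
    """Sketch of the MLP rule scheme (Equations 4-6): per-rule linear combinations."""
    alpha = rng.normal(size=num_rules)       # data parameters alpha_c
    beta = rng.normal(size=num_rules)        # data parameters beta_c
    c = rng.integers(0, num_rules, size=n)   # c ~ U{1, R}
    x1 = rng.normal(size=(n, dim))           # x1 ~ N(0, I)
    x2 = rng.normal(size=(n, dim))           # x2 ~ N(0, I)
    y = alpha[c][:, None] * x1 + beta[c][:, None] * x2   # y = alpha_c x1 + beta_c x2
    return (x1, x2, c), y, np.sign(y)        # regression target and sign(y) labels
```

Returning to the MHA scheme of Equations 7-11: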
Further, yn denotes the label for the regression tasks while for classification, we consider the categorical label to be sign(yn). Thus, we can see that cn denotes the rule for the nth token. This rule governs which two tokens are closest to the nth token, demonstrated as sn and s′n. It also governs what features are retrieved from the searched tokens, which are vsncn and vs′ncn . These retrieved features then undergo a rule-dependent linear combination (on cn). Here, too, when training a modular MHA architecture, we want each MHA module in the system to be able to specialize to a unique MHA rule in the data system. cn iid∼ U{1, R} (12) xn iid∼ N (0, I) (13) sn = Acnsn−1 +Bcnxn (14) yn = w T sn (15) RNN. For recurrent systems, we define a rule as a kind of linear dynamical system, where one of multiple rules can be triggered at any time-point. Mathematically, this process can be defined through Equations 12-15, where n = 1, ...N , with N describing the sequence length. Each rule thus describes a different procedure for the update of the state st as well as the effect of the input xt to the state. Thus, we can see that cn denotes the rule to be used at the nth time-point. Further, yn denotes the label for the regression tasks while for classification, we consider the labels as sign(yn). Hence, in all settings, the data comes from a MoE distribution where c denotes the rule and governs the conditional py(· |x, c). When training modular architectures on such data, one expects each module in the trained system to specialize according to a unique rule. Our aim is to use these synthetic rule-based data setting to study and analyse modular systems and understand whether end-to-end trained modular systems concentrate on the right information to specialize based on, i.e. based on c, whether they do learn perfect specialization and whether perfect specialization actually helps in these settings. To properly understand this, we detail the different kinds of models considered in Section 4 as well as the different metrics proposed in Section 5 to analyse trained systems. For this work, we limit our analysis to infinite-data regime where each training iteration operates on a new data sample Future work would perform similar analysis in the regime of limited data. 4 Models Model Functional Form Several works claim that end-to-end trained modular systems outperform their monolithic counterparts, especially in out-of-distribution settings. However, there is a lack of step-by-step analysis on the benefits of such systems and whether they actually specialize according to the data generating distribution or not. To perform an in-depth analysis, we consider four different types of models that allow for varying levels of specialization, which are: Monolithic, Modular, Modular-op, and GT-Modular. We give the formulations for each of these models below and then discuss the different analysis we can perform through them. We also illustrate these models in Table 1 and depending on the data-generating procedure described in Section 3, f and fm can be implemented as either MLP, MHA or RNN cells in this work. Monolithic. A monolithic system is a big neural network that takes the entire data (x, c) as input and makes predictions ŷ based on it. There is no inductive bias about modularity or sparsity explicitly baked in the system and it is completely up to back-propagation to learn whatever functional form is needed to solve the task. 
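As a companion to the MLP sketch above, the following is a corresponding sketch of the RNN rule scheme of Equations 12-15, i.e. a switching linear dynamical system in which the active rule can change at every time step. Names, the state dimension and the scaling of the random dynamics matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rnn_rule_data(seq_len, num_rules, dim=4):
    """Sketch of the switching linear dynamical system of Equations 12-15."""
    A = rng.normal(size=(num_rules, dim, dim)) / np.sqrt(dim)  # per-rule dynamics A_c
    B = rng.normal(size=(num_rules, dim, dim)) / np.sqrt(dim)  # per-rule input maps B_c
    w = rng.normal(size=dim)                                   # read-out vector
    s = np.zeros(dim)                                          # initial state s_0
    c = rng.integers(0, num_rules, size=seq_len)               # c_n ~ U{1, R}
    x = rng.normal(size=(seq_len, dim))                        # x_n ~ N(0, I)
    y = np.zeros(seq_len)
    for n in range(seq_len):
        s = A[c[n]] @ s + B[c[n]] @ x[n]                       # s_n = A_c s_{n-1} + B_c x_n
        y[n] = w @ s                                           # y_n = w^T s_n
    return x, c, y, np.sign(y)                                 # regression and classification targets
```

Returning to the description of the monolithic baseline: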
An example of such a system is a traditional Multi-Head Attention (MHA) based system, eg. a Transformer. Modular. A modular system is composed of a number of modules, each of which is a neural network of a given architectural type (MLP, MHA, or RNN). Each module m takes the data (x, c) as input and computes an output ŷm and a confidence score, normalized across modules into an activation probability pm. The activation probability reflects the contribution of each module’s output to the final output ŷ of the system. Thus, there is an explicit baked-in inductive bias of modularity but it is still up to system-wide back-propagation to figure out the right specialization. An example of such a system is a mixture of MLPs or reusable RNNs, reusable across different time/positions. Modular-op. A modular-op (for operation only) system is very similar to the modular system with just one small difference. Instead of the activation probability pm of module m being a function of (x, c), we instead make sure that the activation is decided only by the rule context c. Hence, unlike modular systems, modular-op cannot be distracted by x in figuring out specialization of different modules. Even though the operation required is explicitly provided, this model still needs to learn specialization through back-propagation. GT-Modular. A GT-modular system (for ground truth) serves as an oracle benchmark, i.e., a modular system that specializes perfectly. In particular, the activation probability p′ms of modules are just set according to c, which is the indicator present in the data (x, c). Thus, this is a perfectly specializing system that chooses different modules sparsely and perfectly according to the different data rules. Given enough capacity, we can see that there is a hierarchy of models based on the functions they can implement, with GT-Modular ⊆ Modular-op ⊆ Modular ⊆ Monolithic. Put differently, models from Monolithic to GT-Modular increasingly incorporate the inductive biases for modularity and sparsity. This is proved in Appendix C by inspecting the function classes implemented by these models. In what follows, we want to analyse the benefits of having simple end-toend trained modular systems as opposed to monolithic ones. This can be understood through a comparison of various performance based metrics between Monolithic and Modular models, explained in the next section. This will allow us to answer if a modular architecture is always better for various distinct rule-based data generating systems. For instance, a comparison between the Modular and Modularop models will show whether the stan- dard modular systems are able to focus on the right information and ignore the distractors in driving specialization. To study this, we will look at performance as well as collapse and specialization metrics between these class of models. A comparison between GT-Modular and Modular-op will show the benefits of having a sparse activation pattern with proper resource allocation of modules as opposed to an end-to-end learned specialization on the right information (without distractors). Finally, we note that GT-Modular is a modular system which obtains perfect specialization. Through this model, we aim to analyse whether perfect specialization is in-fact important and if so, how far are typical modular systems from obtaining similar performance and specialization through end-to-end training. We now describe the metrics used for these evaluations. 
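The difference between the four model families lies almost entirely in how the module activations p_m are computed. The schematic sketch below illustrates this routing only; the dictionary interface, the callables and the softmax scoring are illustrative assumptions rather than the exact architecture (the module bodies would be MLP, MHA or RNN cells, as in Table 1).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x, c, model, num_rules):
    """Schematic forward pass for the four model families (illustrative interface)."""
    if model["kind"] == "monolithic":
        return model["net"](x, c)            # one big network, no explicit routing

    # Candidate output y_m of every module (an MLP / MHA / RNN cell in the paper).
    y_m = np.stack([module(x, c) for module in model["modules"]])

    if model["kind"] == "modular":           # p_m is a learned function of (x, c)
        p = softmax(model["score"](x, c))
    elif model["kind"] == "modular_op":      # p_m depends only on the rule context c
        p = softmax(model["score_op"](c))
    elif model["kind"] == "gt_modular":      # oracle: one-hot activation given by c
        p = np.eye(num_rules)[c]

    return np.einsum("m,m...->...", p, y_m)  # y = sum_m p_m * y_m
```

The sketch also makes the function-class hierarchy plausible: GT-Modular fixes p to a one-hot vector, Modular-op restricts the scoring input to c, Modular allows scoring on (x, c), and Monolithic places no structural restriction at all.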
5 Metrics To reliably evaluate modular systems, we propose a suite of metrics that not only gauge the performance benefits of such systems but also evaluate them across two important modalities: collapse and specialization, which we use to analyse the extent of resource allocation (in terms of parameters/modules) and specialization respectively of a modular system. Performance. The first set of evaluation metrics are based on performance of the models in both in-distribution as well as out-of-distribution (OoD) settings. These metrics capture how well the different models perform on a wide variety of different tasks. For classification settings, we report the classification error while for regression settings, we report the loss. In-Distribution. This refers to the in-distribution performance, evaluated by looking at both the final performance as well as convergence speeds of the different models. Out-of-Distribution. This refers to the OoD performance of different models. We consider very simple forms of OoD generalization: either (a) change in distribution of x by increasing variance, or (b) different sequence lengths, wherever the possibility presents (eg. in MHA and RNN). Collapse Metrics. We propose a set of metrics Collapse-Avg and Collapse-Worst that quantify the amount of collapse suffered by a modular system. Collapse refers to the degree of under-utilization of the modules. An example of this is illustrated in Figure 2, where we can see that Module 3 is never used. We consider the setting where all the data rules are equi-probable and the number of modules in the model are set to be the same as the number of data rules, to R. High collapse thus refers to under-utilization of resource (parameters) provided to the model, illustrating that certain modules are never being used and concurrently meaning that certain modules are being utilized for multiple rules. CA = R R− 1 R∑ m=1 max ( 0, 1 R − p(m) ) (16) Collapse-Avg. Given the data-setting with R equi-probable rules, and hence R modules in the model, we let p(m) be the marginal probability distribution of activation of module m. Then, we define the Collapse-Avg metric CA as in Equation 16, where RR−1 is for normalization. This metric captures the amount of under-utilization of all the modules of the system. A lower number is preferable for this metric, as a lower number demonstrates that all the modules are equally utilized. CW = 1−R min m p(m) (17)Collapse-Worst. Given the same data and model setting as above, the Collapse-Worst metric CW is defined as in Equation 17. This metric captures the amount of under-utilization of the least used module of the system. Again, a low number is preferable as it signifies that even the least used module is decently utilized by the model. Specialization Metrics. To complement collapse metrics, we also propose a set of metrics, (1) Alignment, (2) Adaptation and (3) Inverse Mutual Information to quantify the amount of specialization obtained by the modular systems. We again consider the setting of equi-probable rules and the same number of modules and rules R. These metrics are aimed at capturing how well the modules specialize to the rules, that is, whether different modules stick to different rules (good specialization) or whether all modules contribute almost equally to all rules (poor specialization). sd = min P∈SR d (A,P) (18) Alignment. 
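The two collapse metrics above (Equations 16-17), together with the alignment score of Equation 18, can be computed from simple activation statistics, as in the sketch below. The normalization of the L1 distance and the toy example are assumptions for illustration; the reduction of the permutation search to a linear assignment problem follows from the fact that minimizing the L1 distance to a permutation matrix is equivalent to maximizing the activation mass placed on that permutation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def collapse_metrics(p_m):
    """Collapse-Avg (Eq. 16) and Collapse-Worst (Eq. 17) from the marginal
    activation probabilities p_m of the R modules (equi-probable rules)."""
    R = len(p_m)
    c_avg = R / (R - 1) * np.maximum(0.0, 1.0 / R - p_m).sum()
    c_worst = 1.0 - R * p_m.min()
    return c_avg, c_worst

def alignment(A):
    """Alignment score (Eq. 18): distance between the activation matrix A
    (A[r, m] = p(module m | rule r)) and its closest permutation matrix,
    found in polynomial time with the Hungarian algorithm."""
    R = A.shape[0]
    row, col = linear_sum_assignment(-A)     # maximize mass on a permutation
    P = np.zeros_like(A)
    P[row, col] = 1.0
    return np.abs(A - P).sum() / (2.0 * R)   # normalization constant is an assumption

# Toy example: a system that collapses onto a single module.
A = np.array([[1.0, 0.0], [1.0, 0.0]])
print(collapse_metrics(A.mean(axis=0)), alignment(A))
```

The alignment metric, followed by inverse mutual information and adaptation, is described in detail next.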
Given a modular system trained on rule-based data with R rules and modules, one can obtain the activation matrix A, where Arm denotes p(module = m | rule = r), that is, the probability of activation of module m conditioned on rule r. Further, given a distance metric d(·, ·) over the space of matrices, perfect specialization can be quantified through Equation 18, where SR denotes the space of permutation matrices over R objects. We consider d(·, ·) as a normalized L1 distance. The score sd demonstrates the distance between the activation matrix A and its closest permutation matrix, with distances computed according to the metric d(·, ·). Note that sd → 0 implies that each module specializes to a unique rule, thereby signifying perfect specialization. Since the space of permutation matrices SR grows exponentially at the rate of Θ(R!), computing sd naively soon becomes intractable. However, we use the Hungarian algorithm (Kuhn, 1955) to compute it in polynomial time. This metric shows how close the learned modular system is to a perfectly specializing one, where a low score implies better specialization. SIMI = 1− 1 logR Ep(m,r) [ log p(m, r) p(m)× p(r) ] (19) Inverse Mutual Information. Given R as the number of rules and modules and let the joint distribution p(m, r) denote the activation probability of module m on rule r, the Inverse Mutual Information metric SIMI is defined as in Equation 19. A low inverse mutual information metric is preferable as it denotes that the modules are more specialized to the rules as opposed to multiple modules contributing to a single rule. SA = Ep∼P [ R∑ i=1 ∣∣∣p(r̂i)− q(m̂i)∣∣∣] (20) Adaptation. Let R be the number of rules and modules and P a distribution over the R-dimensional simplex. Further, let p(·) be the distribution over rules (not equi-probable in this metric) and q(·) the corresponding distribution obtained over the modules. Note that the distribution q(·) is dependent on p(·). Given these distributions, we define the Adaptation metric SA in Equation 20, where r̂i and m̂i are such that p(r̂1) ≤ p(r̂2) ≤ ... ≤ p(r̂R) and q(m̂1) ≤ q(m̂2) ≤ ... ≤ q(m̂R) and P is a dirichlet distribution. This metric can be understood as the amount by which the modules adapt (signified through the distribution q(·)) to changes in the rule distributions (which are p(·) sampled from P). The matching between the rule and module is obtained through a simple sort as defined above. A low adaptation score implies that the marginal distribution of the modules adapt well according to the distribution of the rules. That is, when a rule is weakly present in the data, there exists a module which weakly contributes in the corresponding output, averaged over multiple different rule distributions. To understand these metrics, note that uniform random activation patterns for the modules lead to low collapse metrics but high alignment, adaptation and inverse mutual information metrics, implying little collapse but poor specialization, as expected. On the other hand, GT-Modular systems necessarily lead to low collapse metrics as well as low alignment, adaptation and inverse mutual information, denoting little collapse and good specialization, which is expected since specialization is given as oracle. 6 Experiments We are now ready to report experiments on the models outlined in Section 4 with associated data generation processes described in Section 3. For each level of modularity (i.e. 
Monolithic, Modular, Modular-op, GT-Modular), we analyse models learning over five different number of rules, ranging from few (2) to many (32), five different model capacities (number of parameters) and two different training settings, i.e. regression and binary classification. To remove any biases towards particular task parameters (e.g. αc, βc in Equation 6), we randomly select new rules to create five different tasks per setting and, train five seeds per task. In essence, we train ∼20,000 models2 to properly analyse the benefits of modularity, the level of specialization obtained by end-to-end trained systems, the impact of number of rules and the impact of model capacity. Performance. We refer the readers to Figure 3 for a compressed overview on the performance of various models. We see that GT-Modular system wins most of the times (left), indicating the benefits of perfect specialization. We also see that between standard end-to-end trained Modular and Monolithic systems, the former outperforms but not by a huge gap. Together, these two pie charts indicate that current end-to-end trained modular systems do not achieve good specialization and are thus sub-optimal by a substantial margin. We then look at the specific architectural choices (MLP, MHA and RNN cells for functions f and fm in Table 1) and analyse their performance and trends across increasing number of rules. Figure 4 shows that while there are concrete benefits of a perfectly specializing system (GT-Modular) or even models that know what information to drive specialization from (Modular-op), typical end-to-end trained Modular systems are quite sub-optimal and not able to realize these benefits, especially with increasing number of rules which is where we see substantial benefits of good specialization (contrast Modular vs GT-Modular and Modular-op). Moreover, while such end-to- end Modular systems do generally outperform the Monolithic ones, it is often only by a small margin. We also see the training pattern of different models averaged over all other settings, with the average containing error for classification and loss for regression, in Figure 7. We can see that good specialization not only leads to better performances but also faster training. Collapse. We evaluate all the models on the two collapse metrics outlined in Section 5. Figure 5 shows the two collapse metrics, Collapse-Avg and Collapse-Worst, for different models against 2All models are trained on single V100 GPUs, each taking a few hours. varying number of rules, averaged over the different model architectures (MLP, MHA and RNN), training settings (Classification and Regression), model capacities, tasks and seeds. First, we notice that a Random activation baseline and the GT-Modular system do not have any collapse, which is expected. Next, we notice that both Modular and Modular-op suffer from the problems of collapse and this problem becomes worse with increasing number of rules. Figure 6 further shows similar information averaged over the number of rules too, highlighting that Modular-op has less collapse than Modular in general. However, we still see that the problem of collapse is significant whenever back-propagation is tasked with finding the right activation patterns, especially in the regime of large number of rules. This clearly indicates the need for investigation into different forms of regularizations to alleviate some of the collapse problems. Specialization. 
Next, we evaluate, through the specialization metrics proposed in Section 5, whether the end-to-end trained modular systems actually specialize according to the data-generating distribution. Figure 5 shows the three specialization metrics, Alignment, Adaptation and Inverse Mutual Information, for different models against a varying number of rules, again averaged over different model architectures, training settings, model capacities, tasks and seeds. As expected, we see that the Random activation baseline has poor specialization (high metrics) while the GT-Modular system has very good specialization. We further see that end-to-end trained Modular systems as well as Modular-op suffer from sub-optimal specialization, as indicated by the high metrics. As with collapse, we again see that it becomes harder to reach optimal specialization with an increasing number of rules. Figure 6 shows that while Modular-op has marginally better specialization than standard Modular systems, both are still quite sub-optimal when compared to a perfectly specializing system, i.e. GT-Modular. We refer the readers to Appendices D, E and F for training details as well as additional experiments regarding the effect of model size for the MLP, MHA and RNN architectures respectively. 7 Conclusion and Discussion We provide a benchmark suitable for the analysis of modular systems, together with metrics that evaluate them not only on in-distribution and out-of-distribution performance, but also on collapse and specialization. Through our large-scale analysis, we uncover many intriguing properties of modular systems and highlight potential issues that could lead to poor scaling properties of such systems. Perfect Specialization. We discover that perfect specialization indeed helps in boosting performance both in-distribution and out-of-distribution, especially in the regime of many rules. On the contrary, monolithic systems often perform comparably, or sometimes better, when there are only a few rules, but do not rely on specialization to do so. End-to-End Trained Modular systems. While Modular systems outperform Monolithic ones, the margin of improvement is often small. This is because, when relying solely on back-propagation of the task losses, these models do not discover perfect specialization. In fact, the problem of poor specialization and high collapse becomes worse with an increasing number of rules. This is slightly mitigated by allowing contextual information from the task to be used explicitly, as is the case for Modular-op, but the problems persist and worsen as the number of rules grows. In summary, through systematic and extensive experiments, this work shows that modularity, when accompanied by good and distributed specialization (i.e. little collapse), can outperform monolithic models in both in-distribution and out-of-distribution testing. However, we also find that although perfectly specialized solutions are attainable by modular networks, end-to-end training does not recover them, often even with explicit information about task context (as in Modular-op). Since real-world data distributions are often complex and unknown, we cannot get access to oracle networks like GT-Modular for analysis. An important conclusion is that additional inductive biases are required to learn adequately specialized solutions. These could include other architectural features to facilitate module routing, regularization schemes (e.g. load-balancing; Fedus et al. (2021)), or optimization strategies (e.g. learning rate scheduling) that promote module specialization. 
We refer the reader to Appendices A and B for further discussion on these exciting prospects and on extensions to real-world domains. We believe the framework proposed in this work is well suited to drive research into such inductive biases and is a necessary stepping stone for applying these designs at scale. Finally, we highlight that network architectures that promote contextual specialization, such as the modules studied here, could introduce unwanted biases when deployed in models used by the public, due to collapse or ill-distributed specialization. The framework proposed in this work could help mitigate this potentially problematic impact on society. Acknowledgments and Disclosure of Funding SM would like to acknowledge the support of scholarships from UNIQUE and IVADO as well as compute resources from Alliance and its regional partner organizations (ACENET, Calcul Québec, Compute Ontario, the BC DRI Group and the Prairie DRI Group) towards his research. YB and GL acknowledge the support from the Canada CIFAR AI Chair Program, as well as Samsung Electronics Co., Ltd. GL acknowledges NSERC Discovery Grant [RGPIN-2018-04821].
1. What is the focus and contribution of the paper regarding modular architectures in machine learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to learn module specialization? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any concerns or questions regarding the experiment design, data complexity, and model architectures used in the study?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents an elegantly designed experiment to evaluate the effectiveness of a number of modular architectures in various settings. The topic is an important one, in my opinion, since modular architectures have the potential to address a number of key issues in ML, including compositionality and continual learning. The question the authors address is whether backpropagation can discover the structure in data that is inherently suited to modular architectures (because it is synthetic and designed to be) and learn to specialise its modules accordingly. By comparing with monolithic architectures at one end of the spectrum and with modular architectures forced to specialise (thanks to oracular knowledge of the data) at the other, they are able to assess both a) the extent to which perfect modularisation improves performance, and b) how well modularity can be learned by backpropagation. The results suggest that, within the narrow setting of the experiment, a) modularity does improve performance on sufficiently complex data, but b) backpropagation struggles to discover the underlying structure in the data and to learn to specialise accordingly. Strengths And Weaknesses The paper concerns the important topic of modularity and presents a well-thought-out experiment addressing an interesting question. The results are informative and interesting. Overall, this is good science, and the sort of thing we should see more of at the big ML venues. The main weakness of the paper, I feel, is that the synthetic data is very simple: just two real-valued variables, plus an integer context variable, parameterised by just two real values. Do we expect the results to apply with more complex datasets and large architectures? I'm not sure. Perhaps it's easier for backpropagation to discover structure in the data at scale than in a small, simple dataset. The paper isn't very explicit about the model architectures used in the experiment, either in the main paper or in the appendix. I assumed they would be very small, given the low dimensionality of the synthetic data. Delving into the code (thanks for providing this), I see the model architectures are a bit more nuanced than I expected. I suspect this is to help ensure the modular and monolithic versions had the same number of parameters? But I also see that the modular architecture has a softmax in there. Does this realise some form of competition between the modules? If so, this isn't mentioned in the main paper. Questions What do the authors mean by a “mixture of experts distribution”? I am only familiar with MoE in the context of architectures, not distributions (and a quick search on Google backs this up). What is “I” in equations 5, 8, and 13? Why not just make it 1? See also questions above. Limitations See above
NIPS
Title Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections Abstract Optimal transport maps between two probability distributions μ and ν on R have found extensive applications in both machine learning and statistics. In practice, these maps need to be estimated from data sampled according to μ and ν. Plugin estimators are perhaps most popular in estimating transport maps in the field of computational optimal transport. In this paper, we provide a comprehensive analysis of the rates of convergences for general plug-in estimators defined via barycentric projections. Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions and can be used to analyze general plug-in estimators. We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discretediscrete and semi-discrete estimators of optimal transport maps. We then use the same stability estimate to show that, under additional smoothness assumptions of Sobolev type or Besov type, kernel smoothed or wavelet based plug-in estimators respectively speed up the rates of convergence and significantly mitigate the curse of dimensionality suffered by the natural discrete-discrete/semi-discrete estimators. As a by-product of our analysis, we also obtain faster rates of convergence for plug-in estimators of W2(μ, ν), the Wasserstein distance between μ and ν, under the aforementioned smoothness assumptions, thereby complementing recent results in Chizat et al. (2020). Finally, we illustrate the applicability of our results in obtaining rates of convergence for Wasserstein barycenter between two probability distributions and obtaining asymptotic detection thresholds for some recent optimaltransport based tests of independence. 1 Introduction Given two random variables X ∼ µ and Y ∼ ν, where µ, ν are probability measures on Rd, d ≥ 1, the problem of finding a “nice" map T0(·) such that T0(X) ∼ ν has numerous applications in machine learning such as domain adaptation and data integration [34, 35, 38, 48, 61, 112], dimension reduction [12, 66, 90], generative models [60, 81, 88, 110], to name a few. Of particular interest is the case when T0(·) is obtained by minimizing a cost function, a line of work initiated by Gaspard Monge [97] in 1781 (see (1.1) below), in which case T0(·) is termed an optimal transport (OT) map and has applications in shape matching/transfer problems [29, 47, 107, 121], Bayesian statistics [46, 75, 80, 108], econometrics [15, 28, 45, 50, 54], nonparametric statistical inference [39– 41, 113, 114]; also see [111, 128, 129] for book-length treatments on the subject. In this paper, we will focus on the OT map obtained using the standard squared Euclidean cost function, i.e., T0 := argmin T :T#µ=ν E‖X − T (X)‖2, (1.1) where T#µ = ν means T (X) ∼ ν for X ∼ µ. The estimation of T0 has attracted a lot of interest in recent years due to its myriad applications (as stated above) and interesting geometrical properties (see [19, 56, 91] and Definition 1.1 below). In practice, the main hurdle in constructing estimators for T0 is that the explicit forms of the measures µ, ν are unknown; instead only random samples X1, . . . , Xm ∼ µ and Y1, . . . , Yn ∼ ν 35th Conference on Neural Information Processing Systems (NeurIPS 2021). are available. 
A natural strategy in this scenario is to estimate T0 using T̃m,n, where T̃m,n is computed as in (1.1) with µ and ν replaced by µ̃m and ν̃n which are empirical approximations of µ and ν based on X1, . . . , Xm and Y1, . . . , Yn respectively (see Definition 1.2). Such estimators are often called plug-in estimators and have been used extensively; see [7, 30, 67, 93, 94, 102, 116]. The main goal of this paper is to study the rates of convergence of general plug-in estimators of T0 under a unified framework. We show that when µ̃m and ν̃n are chosen as µ̂m and ν̂n respectively, where µ̂m and ν̂n are the standard empirical distributions supported on m and n atoms, i.e., µ̂m := 1 m m∑ i=1 δXi and ν̂n := 1 n n∑ j=1 δYj , (1.2) T̃m,n (appropriately defined using Definition 1.2) converges at a rate of m−2/d + n−2/d for d ≥ 4 in the sense of (1.8). This rate happens to be minimax optimal under minimal smoothness assumptions (see [72, Theorem 6]) but suffers from the curse of dimensionality. We next show that, if µ and ν are known to admit sufficiently smooth densities, it is possible to apply kernel or wavelet based smoothing techniques on µ̂m and ν̂n to obtain plug-in estimators that mitigate the aforementioned curse of dimensionality. Our next contribution pertains to the estimation of W 22 (µ, ν) (the squared Wasserstein distance), see (1.3) below, a quantity of independent interest in statistics and machine learning with applications in structured prediction [51, 89], image analysis [18, 59], nonparametric testing [16, 106], generative modeling [10, 96], etc. In this paper, we also obtain rates of convergence for plug-in estimators W 22 (µ̃m, ν̃n) of W 2 2 (µ, ν). We show that kernel smoothing µ̂m and ν̂n can be used to obtain plug-in estimators of W 22 (µ, ν) that mitigate the curse of dimensionality as opposed to a direct plug-in approach using µ̂m and ν̂n (as used in [30, Theorem 2]). This provides an answer to the open question of estimating W 22 (µ, ν) when µ, ν admit smooth densities laid out in [30]. 1.1 Background on optimal transport In this section, we present some basic concepts and results associated with the OT problem that will play a crucial role in the sequel. Let Pac(Rd) denote the set of all Lebesgue absolutely continuous probability measures on Rd andP2(Rd) be the set of probability measures with finite second moments. Then the 2-Wasserstein distance (squared) between µ, ν ∈ P2(Rd) is defined as: W 22 (µ, ν) := min π∈Π(µ,ν) ∫ ‖x− y‖2 dπ(x, y), (1.3) where Π(µ, ν) is the set of probability measures on Rd×Rd with marginals µ and ν. The optimization problem in (1.3) is often called the Kantorovich relaxation (see [76, 77]) of the optimization problem in (1.1). The existence of a minimizer in (1.3) follows from [129, Theorem 4.1]. Proposition 1.1 (Brenier-McCann polar factorization theorem, see [91, 128]). Suppose µ ∈ Pac(Rd). Then there exists a µ-a.e. (almost everywhere) unique function T0(·) : Rd → Rd, which is the gradient of a real-valued d-variate convex function, say ϕ0(·) : Rd → R, such that T0#µ = ν. Further, the distribution defined as π(A×B) = µ(A ∩ (T0)−1(B)) for all Borel sets A,B ⊆ Rd is the unique minimizer in (1.3) provided µ, ν ∈ P2(Rd). Definition 1.1 (OT map and potential function). The function T0 : Rd → Rd in Proposition 1.1 which satisfies T0#µ = ν will be called the OT map from µ to ν. A convex function ϕ0(·) in Proposition 1.1 satisfying∇ϕ0 = T0 will be termed an OT potential. 
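As a concrete illustration of the discrete-discrete plug-in approach, the sketch below forms the empirical measures of (1.2), solves the Kantorovich problem (1.3) between them as a linear program, and reads off the plug-in estimate of W_2^2 together with the barycentric projection of the optimal coupling (formalized in Definition 1.2 below). The generic linear-program solver is purely illustrative; dedicated OT solvers would be used in practice, and the sample sizes and shift in the example are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def plugin_w2_squared(X, Y):
    """Plug-in estimate of W_2^2(mu, nu) from samples X (m x d) and Y (n x d),
    using the empirical measures of (1.2) and the Kantorovich LP (1.3)."""
    m, n = len(X), len(Y)
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)          # uniform weights
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)       # squared Euclidean cost
    # Marginal constraints on the coupling pi (flattened row-major):
    # row sums equal a, column sums equal b.
    A_eq = np.vstack([np.kron(np.eye(m), np.ones((1, n))),
                      np.kron(np.ones((1, m)), np.eye(n))])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    pi = res.x.reshape(m, n)                                  # optimal coupling gamma
    return (C * pi).sum(), pi

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))            # X_i ~ mu
Y = rng.normal(size=(40, 2)) + 1.0      # Y_j ~ nu (a shifted Gaussian, for illustration)
w2_sq, pi = plugin_w2_squared(X, Y)

# Barycentric projection of the coupling: T_hat[i] estimates T_0(X_i).
T_hat = (pi @ Y) / pi.sum(axis=1, keepdims=True)
```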
The next and final important ingredient is the alternate dual representation of (1.3) which gives: 1 2 W 22 (µ, ν) = 1 2 ∫ ‖x‖2 dµ(x) + 1 2 ∫ ‖y‖2 dν(y)−min f∈F Sµ,ν(f), where (1.4) Sµ,ν(f) = ∫ f dµ+ ∫ f∗ dν. (1.5) Here F denotes the space of convex functions on Rd which are also elements of L1(µ) and f∗(·) is the standard Legendre-Fenchel dual defined as: f∗(x) := sup y∈Rd [y>x− f(y)], for x ∈ dom(f). (1.6) 1.2 Estimating OT map via barycentric projection Recall the setting from the Introduction. Let µ̃m, ν̃n ∈ P2(Rd). Here µ̃m, ν̃n need not be absolutely continuous and can be very general. Intuitively, µ̃m and ν̃n can be viewed as some empirical approximation of µ and ν respectively. Example 1.2 (Simple choices of µ̃m and ν̃n). Let X1, . . . , Xm i.i.d.∼ µ and Y1, . . . , Yn i.i.d.∼ ν; in which case a natural choice would be to set µ̃m = µ̂m and ν̃n = ν̂n where µ̂m and ν̂n are the empirical distributions of X1, . . . , Xm and Y1, . . . , Yn respectively, as defined in (1.2). This is the standard choice adopted in the discrete-discrete Kantorovich relaxation; see [104, Section 2.3]. Another popular choice is µ̃m = µ̂m, ν̃n = ν or µ̃m = µ, ν̃n = ν̂n. This is the semi-discrete Kantorovich problem and is popular when one of the measures is fully specified; see [26, 55]. A natural way to estimate T0(·), as defined in (1.1), would be to approximate it using the OT map from µ̃m to ν̃n. However as µ̃m and ν̃n may not be elements of Pac(Rd), Proposition 1.1 does not apply and an OT map may not exist from µ̃m to ν̃n. Such is the case in Example 1.2 in the discrete-discrete case when m 6= n. To circumvent this issue, we leverage the notion of barycentric projections (see [3, Definition 5.4.2]) defined below: Definition 1.2 (Barycentric projection). Define the set Γ̃min := argmin π∈Π(µ̃m,ν̃n) ∫ ‖x− y‖2 dπ(x, y). The optimization problem above is the plug-in analog of the optimization problem on the right hand side of (1.3). Given any γ ∈ Γ̃min, define the barycentric projection of γ as the conditional mean of y given x under γ, i.e., T̃m,n(x) ≡ T̃ γm,n(x) := ∫ y y dγ(x, y)∫ y dγ(x, y) , for x ∈ supp (µ̃m) . (1.7) In general, Γ̃min need not be a singleton which is why we index the barycentric projection T̃ γm,n(·) by γ ∈ Γ̃min. Note that T̃ γm,n(·) need not be a transport map; however, if an OT map exists then it must be equal to T̃ γm,n(·) (µ̃m-a.e.). Our goal is to obtain stochastic upper bounds for sup γ∈Γ̃min ∫ ∥∥T̃ γm,n(x)− T0(x)∥∥2 dµ̃m(x). (1.8) In addition, our proof techniques also yield rates of convergence for∣∣W 22 (µ̃m, ν̃n)−W 22 (µ, ν)∣∣. (1.9) In this paper, we will focus on d ≥ 2. Due to the canonical ordering of R, the case d = 1 can be handled easily using the classical Hungarian embedding theorem [82]. 1.3 Contributions 1. We provide a new and flexible stability estimate Theorem 2.1 which yields a unified approach to obtaining rates of convergence for general plug-in estimators of the OT map T0(·). Unlike existing stability estimates, Theorem 2.1 holds for the barycentric projection (which is the same as the OT map when it exists) and does not require any smoothness assumptions on µ̃m, ν̃n or T̃ γm,n(·); also see Remark 2.1 for a comparison with the existing literature. 2. 
In Sections 2.1 and 2.2, we use Theorem 2.1 to bound (1.8) and (1.9): • In Section 2.1, we show that in both the discrete-discrete and semi-discrete Kantorovich relaxation problems (see Example 1.2), the rate of convergence of (1.8) is m−2/d + n−2/d for d ≥ 4 when T0 is assumed to be Lipschitz (see Theorem 2.2), which is the minimax rate (see [72, Theorem 6]). To the best of our knowledge, rates of convergence for these natural estimators weren’t previously established in the literature. • In Section 2.2 and Appendix A, we show that the curse of dimensionality in the above rates can be mitigated provided µ and ν admit (uniform) Sobolev smooth densities (see Section 2.2) or Besov smooth densities (see Appendix A). In Section 2.2, our plug-in estimator is obtained by choosing µ̃m (and ν̃n) as the convolution of µ̂m (and ν̂n) and a smooth kernel with an appropriate bandwidth. Under this choice, the rate of convergence in (1.8) is m−( s+2 d ∧ 1 2 ) + n−( s+2 d ∧ 1 2 ), where s denotes the degree of Sobolev smoothness (see Theorem 2.5). Clearly, if 2(s + 2) ≥ d, the rate of convergence becomes dimension-free and mitigates the curse of dimensionality. We also show the same rates of convergence mentioned above hold for (1.9) (see e.g., Proposition 2.6) which makes a strong case in favor of incorporating smoothness in the construction of plug-in estimators as was conjectured in [30]. In Appendix A, our plug-in estimator is obtained using natural wavelet based density estimators. The rate of convergence in (1.8) turns out to be n− 1+s d+2s where s denotes the degree of Besov smoothness (see Theorem A.1). Note that by choosing s large enough, the exponent in the rate can be made arbitrarily close to 1/2, thereby reducing the curse of dimensionality. 3. In Section 2.3, we use a discretization technique from [131] to construct discrete approximations to the smoothed µ̃m and ν̃n from the previous paragraph that in turn yield computable plug-in estimators for T0 (provided one can sample from µ̃m and ν̃n) that also achieve the same statistical guarantees as the smoothed plug-in estimator from Section 2.2 (see Theorem 2.7). However the number of atoms required in the discretizations and correspondingly the computational complexity increases with the degree of smoothness; this highlights a statistical and computational trade-off. 4. We provide implications of our results in popular applications of OT such as estimating the barycenter of two multivariate probability distributions (see Theorem B.1 in Appendix B.1) and in nonparametric independence testing (see Theorem B.3 in Appendix B.2). 1.4 Related work Many recent works have focused on obtaining consistent estimators of T0 using the plug-in principle, see [26, 55] (in the semi-discrete problem) and [41, 68, 132] (in the discrete-discrete problem). In [55], the authors studied the rate of convergence of the semi-discrete optimal transport map from ν (absolutely continuous) to µ̂m. This paper complements the aforementioned papers by studying the rates of convergence for general plug-in estimators in a unified fashion. In two other papers [9, Theorem 1.1] and [87, Section 4], the authors use a “Voronoi tessellation" approach to estimate T0, however the rates obtained in this paper, even in the absence of smoothness, are strictly better than those in [9, 87]. Perhaps the most closely related paper to ours would be [67]. 
In [67], the author uses variational techniques to arrive at stability estimates while we exploit the Lipschitz nature of the OT map (see Definition 1.1). Further the rates in this paper have exponents s+2d ∧ 1 2 which are strictly better than the exponents s+22(s+2)+d obtained in [67, Proposition 1] under the same smoothness assumptions (Sobolev type of order s, see Definition 2.4). In another line of work [72], the authors use theoretical wavelet based estimators (not of the plug-in type) of T0 to obtain nearly minimax optimal rates of convergence. However these estimators, by themselves, are not transport maps between two probability measures, which makes them harder to interpret. In contrast, our focus is on obtaining rates of convergence for plug-in estimators, which are transport maps between natural approximations of µ and ν. Such plug-in type strategies are a lot more popular in computational OT [7, 30, 67, 93, 94, 102, 116]. In terms of obtaining rates of convergence for (1.9), some attempts include [109, 116] where parametric rates are obtained when µ, ν are known to be finitely supported or are both Gaussian. In a related problem, bounds for W 22 (µ̂m, µ) were obtained in [6, 42, 49, 100, 123, 131]. Using these bounds, for m = n, it is easy to get a n−1/d rate of convergence for (1.9). This rate was recently improved to n−2/d in [30] under no smoothness assumptions. Our rates coincide with the n−2/d rate from [30] under no smoothness assumptions. But further, we show in this paper that the curse of dimensionality in the above rate can be mitigated by incorporating smoothness into the plug-in procedure. 2 Main results Recall ϕ0(·) from Definition 1.1. The following is our main result. Theorem 2.1 (Stability estimate). Suppose that µ, ν ∈ Pac(Rd) ∩ P2(Rd) and µ̃m, ν̃n ∈ P2(Rd). Assume that T0(·) (as defined in (1.1)) is L-Lipschitz (L > 0). Then, sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ≤ Lmax {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} + 2L ∫ ϕ∗0(y) d(ν̃n − ν†m)(y), (2.1) where ν†m := T0#µ̃m, ϕ ∗ 0(·) is defined as in (1.6), and with S·,·(·) defined as in (1.5), Ψµ̃m,ν̃n(·) := argminf∈F Sµ̃m,ν̃n(f), Ψµ̃m,ν†m(·) := argminf∈F Sµ̃m,ν†m(f), and D denotes the space of realvalued convex functions on Rd. The proof of Theorem 2.1 (see Appendix C.1) starts along the same lines as the proof of the curvature estimate in [56, Proposition 3.3]. This is followed by some careful manipulations of W 22 (·, ·) (as in (1.3)) and an application of the conditional version of Jensen’s inequality, see (C.3). The final step of the proof uses the dual representation in (1.4) with techniques similar to some intermediate steps in the proof of [92, Proposition 2] and [30, Lemma 3]. Remark 2.1 (Comparison with other stability estimates). Theorem 2.1 provides some important advantages to existing stability estimates in the literature. One of the earliest results in this direction can be found in [56, Proposition 3.3] but their bound involves a push-forward constraint which makes it hard to use for rate of convergence analysis. A bound similar to Theorem 2.1 is presented in [55, Lemma 5.1] but there the authors assume the existence of an OT map from µ̃m to ν̃n. Therefore, it does not apply to the discrete-discrete problem where µ̃m = µ̂m and ν̃n = ν̂n with m 6= n. Overcoming all these limitations is an important contribution of Theorem 2.1 and allows us to deal with popular plug-in estimators all in one go. 
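Because the displayed bound (2.1) above is difficult to read in plain text, it can be restated more legibly as follows (same quantities, nothing new):

```latex
\sup_{\gamma \in \widetilde{\Gamma}_{\min}}
  \int \bigl\| \widetilde{T}^{\gamma}_{m,n}(x) - T_0(x) \bigr\|^2 \, d\tilde{\mu}_m(x)
\;\le\;
L \max\!\left\{
  \Bigl| \int \Psi^{*}_{\tilde{\mu}_m,\tilde{\nu}_n} \, d(\tilde{\nu}_n - \nu^{\dagger}_m) \Bigr|,\;
  \Bigl| \int \Psi^{*}_{\tilde{\mu}_m,\nu^{\dagger}_m} \, d(\tilde{\nu}_n - \nu^{\dagger}_m) \Bigr|
\right\}
\;+\; 2L \int \varphi^{*}_0(y) \, d(\tilde{\nu}_n - \nu^{\dagger}_m)(y),
\qquad \nu^{\dagger}_m := T_0 \# \tilde{\mu}_m .
```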
The stability estimate in [72, Proposition 10] on the other hand requires µ̃m, ν̃n to be sufficiently smooth and hence it does not hold for discrete-discrete or semi-discrete plug-in estimators (see Example 1.2). Further their result requires all the measures involved to be compactly supported unlike the much milder requirements of Theorem 2.1. However, a shortcoming of Theorem 2.1 is that it is hard to obtain rates faster than n−1/2 using it directly, whereas [72] can obtain rates arbitrarily close to n−1. This is a price we pay for analyzing natural and popular plug-in estimators as opposed to the (more intractable) wavelet based estimators in [72]. Remark 2.2 (How to use Theorem 2.1 to obtain rates of convergence?). Note that the second term on the right hand side of (2.1), under appropriate moment assumptions, is Op(m−1/2 +n−1/2) (free of dimension) by a direct application of Markov’s inequality. We therefore focus on the first term. By (1.5), Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F . Further, by Caffarelli’s regularity theory [20–22], depending on the “smoothness" of µ̃m, ν̃n, it can be shown that there exists a further class of functions Fs (see Remarks 2.3 and 2.6) such that Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F ∩ Fs. Thus, we can bound the first term on the right hand side of (2.1) as: max {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} ≤ sup f∈F∩Fs ∣∣∣∣ ∫ f d(ν̃n − ν†m)∣∣∣∣. (2.2) The right hand side of (2.2) can now be bounded using the corresponding Dudley’s entropy integral bounds using empirical process techniques; see [126, Lemmas 19.35-19.37]. To conclude, the two main steps in our strategy are identifying the family of functions Fs and computing Dudley’s entropy integral. Further, the more the smoothness of µ̃m, ν̃n, the smaller is the class of functions Fs and smaller the supremum on the right hand side of (2.2). This shows why better rates can be expected under smoothness assumptions. 2.1 Natural non-smooth plug-in estimators In this case, we discuss the rates of convergence for the discrete-discrete problem and the semi-discrete problem, where no smoothness is available on µ̃m and ν̃n. Theorem 2.2. Suppose that T0(·) is L-Lipschitz, ν is compactly supported and E exp(t‖X1‖α) <∞ for some t > 0, α > 0. (Discrete-discrete): Set µ̃m = µ̂m and ν̃n = ν̂n. Then the following holds: sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) = Op ( r (m,n) d × (log (1 + max{m,n})) td,α ) , (2.3) where r(m,n)d := m−1/2 + n−1/2 for d = 2, 3, m−1/2 log (1 +m) + n−1/2 log (1 + n) for d = 4, m−2/d + n−2/d for d ≥ 5, (2.4) and td,α := (4α)−1(4 + ((2α+ 2dα− d) ∨ 0)) for d < 4, (α−1 ∨ 7/2)− 1 for d = 4, 2(1 + d−1) for d > 4. The same bound holds for |W 22 (µ̃m, ν̃n)−W 22 (µ, ν)| without assuming T0(·) is Lipschitz. (Semi-discrete): Set µ̃m = µ, ν̃n = ν̂n or µ̃m = µ̂m, ν̃n = ν. Then the left hand side of (2.3) is Op(r (n,n) d × (log (1 + n))td,α) or Op(r (m,m) d × (log (1 +m))td,α) respectively. A stronger result can be proved if both µ and ν are compactly supported. Corollary 2.3. Consider the setting from Theorem 2.2 and assume further that µ is compactly supported. Then, with r(m,n)d defined as in (2.4), we have: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d , for some constant C > 0, in both the discrete-discrete and semi-discrete settings from Theorem 2.2. A brief description of the proof technique of Theorem 2.2 using Theorem 2.1 is provided in Remark 2.3 below, and the actual proof is presented in Appendix C.1. Remark 2.3 (Proof technique). 
The proof of Theorem 2.2 proceeds via the strategy outlined in Remark 2.2. We first show that Fs (see Remark 2.2) can be chosen as a certain sub-class of convex functions which are in L2(ν). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of Fs, recently proved in [83, Equation 26]. This strategy is slightly different from that used in the proof of [30, Theorem 2], where the authors assume that µ is compactly supported whereas we only assume the finiteness of E exp(t‖X1‖α) for some t > 0, α > 0. The compactness assumption on µ allows one to further restrict Fs to the class of Lipschitz functions. This additional restriction does not seem to be immediate without the compactness assumption. As discussed in Section 1.3, the exponents obtained in Theorem 2.2 are minimax optimal, up to multiplicative logarithmic factors, under bare minimal smoothness assumptions (see [72, Theorem 6]). To the best of our knowledge, rates for the discrete-discrete case for m 6= n and those for the semi-discrete case were not known previously in the literature. Our rates are also strictly better than those (for different estimators, based on space tessellations) obtained in [9, 87] and require less stringent assumptions than those in [30]. In the next section, we show how smoothness assumptions can be leveraged to mitigate the curse of dimensionality in Theorem 2.2. 2.2 Smooth kernel based plug-in estimator: mitigating the curse of dimensionality In this section, we focus on kernel based density estimators for the probability densities associated with µ and ν (see [57, 58, 99, 103, 115]). We will show, using Theorem 2.1, that the corresponding estimators of T0(·) achieve (near) dimension-free rates under sufficient smoothness assumptions. We first introduce the Sobolev class of functions which we will exploit in this subsection to construct estimators that achieve rates of convergence which mitigate the curse of dimensionality under sufficient smoothness. Definition 2.4 (Uniform Sobolev class of functions). Let Ω ⊆ Rd and f(·) be uniformly continuous on Ω and admits uniformly continuous derivatives up to order s on Ω for some s ∈ N. For any m := (m1, . . . ,md) ∈ Nd, let ∂mf := ∂ ∂m1x1 . . . ∂ ∂mdxd f, |m| := d∑ i=1 mi. For any k ≤ s, we further define, ‖f‖Ck(Ω) := ∑ |m|≤k ‖∂mf‖L∞(Ω). The space Cs(Ω) is defined as the set of functions f(·) for which ‖f‖Ck(Ω) <∞ for all k ≤ s. For this subsection, assume that µ and ν admit Sobolev smooth densities fµ(·) and fν(·) in the uniform norm (see Definition 2.4 above). Given Ω ⊆ Rd and s ∈ N, let Cs(Ω) denote the set of Sobolev smooth functions on Ω of order s. Assumption (A1) (Regularity of the densities). Suppose that 1. fµ and fν are supported on compact and convex subsets of Rd, say X and Y respectively. 2. There exists s,M > 0 such that fµ(·) ∈ Cs(X ;M) and fν(·) ∈ Cs(Y;M) where Cs(X ;M) is the space of real valued functions supported on X such that for all f(·) ∈ Cs(X ;M), we have M−1 ≤ f(x) ≤ M for all x ∈ X and ‖f‖Cs(X ) ≤ M . Here ‖·‖Cs(X ) is the standard uniform Sobolev norm as defined in Definition 2.4. The space Cs(Y;M) is defined analogously. We now define our estimators for fµ(·) and fν(·) using the standard kernel density estimation technique (see [125, Section 1.2]). Set f̂µ(x) := 1 mhdm m∑ i=1 Kd ( Xi − x hm ) , (2.5) for some bandwidth parameter hm > 0 and d-variate kernel Kd(·). 
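To make the estimator in (2.5) concrete, here is a minimal numerical sketch (an illustration only, not the paper's construction: it uses a plain Gaussian product kernel and an arbitrary bandwidth, whereas the kernel conditions required by the theory, Assumption (A2), and the bandwidth choice of Theorem 2.5 are stated next):

```python
import numpy as np

def product_kernel_kde(x, X, h):
    """Evaluate (2.5): f_hat(x) = (1/(m h^d)) * sum_i K_d((X_i - x)/h),
    with K_d(u) = prod_j K(u_j).  Illustrative choice: K is the N(0,1)
    density; Assumption (A2) requires a higher-order kernel instead."""
    m, d = X.shape
    U = (X - x) / h                                    # (m, d) scaled differences
    K = np.exp(-0.5 * U ** 2) / np.sqrt(2.0 * np.pi)   # univariate kernel values
    Kd = np.prod(K, axis=1)                            # d-fold product kernel K_d
    return Kd.sum() / (m * h ** d)

# Toy usage: evaluate the estimator at the origin for a 2-d standard normal sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
print(product_kernel_kde(np.zeros(2), X, h=0.5))
```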
We assume that Kd(·) is the dfold product of univariate kernels, i.e., there exists a kernelK(·) such that for u = (u1, . . . , ud) ∈ Rd, Kd(u) = ∏d i=1K(ui). We define f̂ν(·) similarly with the same univariate kernel and bandwidth. Assumption (A2) (Regularity of the kernel). Assume that K(·) is a symmetric, bounded, s+ 1 times differentiable kernel on Rd with all s+ 1 derivatives bounded and integrable. Further, suppose that K(·) is of order 2s+ 2, i.e.,∫ ujK(u) du = 1(j = 0), for j = {0, 1, 2, . . . , 2s+1}, and ∫ |u|2s+2|K(u)| du <∞. The above assumptions on K(·) are standard for estimating smooth densities and their derivatives of different orders in the kernel density estimation literature; see e.g. [4, 57, 58, 69, 125]. There are several natural ways to construct kernels satisfying Assumption (A2), see [125, Section 1.2.2]; an example is also provided in Example 2.4 below. Example 2.4 (Example of a kernel satisfying Assumption (A2)). Let ψm(·) be the m-th Hermite polynomial on R (see [84]). Then the kernel function defined as K(u) := 2s+2∑ m=0 ψm(0)ψm(u) exp(−u2/2) satisfies Assumption (A2). It is evident from Assumption (A2) that K(·) may take some negative values, in which case, f̂µ(·) (respectively f̂ν(·)) may not be a probability density. Consequently the barycentric projection (see Definition 1.2) between f̂µ(·) and f̂ν(·) is not well-defined. We get around this by projecting f̂µ(·) and f̂ν(·) on an appropriate space of “smooth" probability densities (see (2.6)), via an integral probability metric (see Definition 2.5 below; also see [98, 105, 117] for examples, computational procedures and applications of such metrics). Definition 2.5 (Integral probability metric). Given a classH of bounded functions on Rd and two probability densities g1(·) and g2(·) on Rd, the integral probability metric/distance between g1(·) and g2(·) with respect toH is defined as dIP(g1, g2;H) := sup ψ(·)∈H ∣∣∣∣ ∫ ψ(x)(g1(x)− g2(x)) dx∣∣∣∣. Sufficient conditions onH for dIP(·, ·;H) to be a metric on the space of probability measures (not on the space of probability densities as they can be altered on set of Lebesgue measure 0 without altering the underlying probability measures) on Rd have been discussed in [98]. Observe that the measure dIP(g1, g2;H) is well defined even when g1(·) and g2(·) are not probability densities. In Theorem 2.5 below, we useH = Cs+2(X ,M ′). Note that any function in Cs+2(X ,M ′) can be extended to a function in Cs+2(Rd;M ′) (see [72, Theorem 23] and [124, Theorem 1.105]). The fact that this choice of F results in a metric follows from the argument in [98, Page 8]. We are now in a position to describe the projection estimators for fµ(·) and fν(·), and the rates achieved by the corresponding plug-in estimator. Theorem 2.5. Assume that T0(·) is L-Lipschitz and fµ, fν are Lebesgue densities satisfying Assumption (A1). Also suppose that K(·) satisfies Assumption (A2). Define hm := m− 1 d+2s logm , hn := n − 1d+2s log n and T := ∫ |Kd(u)| du+ 1. Fix any M ′ > 0. Consider any probability density f̃M ′ µ (·) ∈ Cs(X ;TM) (where M is defined as in Assumption (A1)) which satisfies dIP ( f̃M ′ µ , f̂µ;C s+2(X ;M ′) ) ≤ inf f(·)∈Cs(X ;TM) f≥0, ∫ f=1 dIP ( f̂µ, f ;C s+2(X ;M ′) ) + r (m,n) d,s (2.6) where r(m,n)d,s is defined as in (2.7) and dIP(·, ·;Cs+2(X ;M ′)) is the integral probability metric defined in Definition 2.5. We define f̃M ′ ν (·) analogously as in (2.6) with X , f̂µ(·) replaced by Y , f̂ν(·). Then the following conclusions hold. 1. SetM ′ := 8(1+TM). 
If µ̃m and ν̃n are the probability measures corresponding to the probability densities f̃M ′ µ (·) and f̃M ′ ν (·), then the following holds for some constant C > 0: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d,s , where r(m,n)d,s := m−1/2 + n−1/2 for d < 2(s+ 2), m−1/2 (log (1 +m)) d + n−1/2 (log (1 + n)) d for d = 2(s+ 2), m− s+2 d + n− s+2 d for d ≥ 2(s+ 2). (2.7) The same bound also holds for E|W 22 (µ̃m, ν̃n)−W 22 (µ, ν)|. 2. f̂µ(·) satisfies lim n→∞ max { P ( ‖f̂µ‖Cs(X̃ ) ≥ TM ) ,P ( sup x∈X̃ |f̂µ(x)− fµ(x)| ≥ ε )} = 0 (2.8) for any ε > 0, where X̃ is any compact subset of X o. The same conclusion holds for f̂ν(·) with X replaced by Y . In Theorem 2.5, we have shown that the plug-in estimator for T0(·) using f̃M ′ µ (·) and f̃M ′ ν (·) (with M ′ = 8(1 + TM)) achieves rates that mitigate the curse of dimensionality under sufficient smoothness. In fact, f̃M ′ µ (·) can be viewed as an approximate minimizer of dIP(f̂µ, ·;Cs+2(X ,M ′)) over an appropriate class of Sobolev smooth probability densities. This is carried out because f̂µ(·) by itself may not be a probability density. Further note that µ̃m, ν̃n as specified in Theorem 2.5 are both smooth, and consequently Γ̃min is a singleton and the supremum in Theorem 2.5 can be dropped. A brief description of the proof technique for Theorem 2.5 is presented in Remark 2.6 below and the actual proof is given in Appendix C.1. Remark 2.6 (Proof technique). The proof of Theorem 2.5 proceeds along the same lines as Remark 2.3. We first show that Fs (see Remark 2.2) can be chosen as a certain subset of Cs+2(Y◦). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of the class of compactly supported Sobolev smooth functions which can be found in [127, Corollary 2.7.2]. We now explain the implications of both the parts of Theorem 2.5 in the following two remarks. Remark 2.7 (Mitigating the curse of dimensionality). Theorem 2.5 shows that, under enough smoothness, i.e., when 2(s+ 2) > d, both the upper bounds for (1.8) and (1.9) are Op(n−1/2). This shows that, for large dimensions, provided µ and ν admit smooth enough densities, it is possible to construct plug-in estimators that mitigate the curse of dimensionality. Note that a similar estimator was analyzed in [67, Proposition 1] when m = n. However, the rates obtained in Theorem 2.5 are strictly better than those in [67, Proposition 1]. For m = n, when d < 2(s + 2), [67] obtained a rate of n− s+2 2(s+2)+d which is worse than n−1/2 obtained in Theorem 2.5. For the other regimes, [67] obtains rates (up to log factors) of n−1/4 and n− 1 (s+2)(d+2(s+2)) which are both worse than the respective rates of n−1/2 and n− s+2 d in Theorem 2.5. Remark 2.8 (Computational aspects of Theorem 2.5). Note that f̃M ′ µ (·) (with M ′ = 8(1 + TM)) is hard to compute whereas f̂µ(·) is computable easily in linear time. Note that if f̂µ(·) itself were a probability density in Cs(X ;TM), then we would have f̂µ = f̃M ′ µ . While Theorem 2.5 does not establish that, it does come close in part 2, from which we can easily derive the following: lim n→∞ P(f̂µ(·) /∈ Cs(X̃ ;TM)) = 0. The above shows that f̂µ(·) is indeed bounded below by (TM)−1 on X̃ (any compact subset of the interior of X ), and additionally belongs to Cs(X̃ ;TM) with probability converging to 1. 
This leads us to conjecture that the natural density version of f̂µ(·), i.e., max{f̂µ(·), 0}∫ max{f̂µ(x), 0} dx should serve as a good proxy for f̃M ′ µ (·) and lead to rates of convergence that mitigate the curse of dimensionality. From a computational perspective, the density specified above is easy to simulate from using an accept-reject algorithm without computing the integral in the denominator (see [101, Algorithm 4.3]). However, our current proof technique does not provide rates of convergence for the above density estimator based on f̂µ(·). Another important implication of Theorem 2.5 is the bound obtained on |W2(µ̃m, ν̃n)−W2(µ, ν)| when µ 6= ν. We first present the result and then describe the implication. Proposition 2.6. Consider the setting in Theorem 2.5. Then, provided µ 6= ν, the following holds: |W2(µ̃m, ν̃n)−W2(µ, ν)| = Op(r(m,n)d,s ). Proposition 2.6 (see Appendix C.1 for a proof) shows an interesting distinction between the µ 6= ν case and the µ = ν case. For µ = ν, the best possible exponent is n− 1+s 2s+d for d ≥ 3 (see [131, Theorem 3] where the result was established under more general Besov smoothness assumptions). On the contrary, when µ 6= ν, Proposition 2.6 establishes a rate of n− s+2d for the Wasserstein distance which is strictly better than the minimax achievable rate mentioned above when µ = ν. This observation complements [30, Corollary 1] where the authors make a similar remark for the special case of s = 0. 2.3 Discretized plug-in estimator under smoothness assumptions In Section 2.1, we discussed how smoothness can be incorporated into the plug-in procedure to get faster rates of convergence. Such plug-in estimators are popular in the computational OT literature (see [7, 8, 25, 36]). However, even after f̃µ(·) ≡ f̃M ′ µ (·), f̃ν(·) ≡ f̃M ′ ν (·) are calculated, T̃ γm,n as in Theorem 2.5 cannot be computed explicitly from data if f̃µ(·) and f̃ν(·) are continuous densities. This is in contrast to T̃ γm,n from Theorem 2.2 in the discrete-discrete case which is explicitly computable using a standard linear program, but achieves worse rates of convergence. This is not unexpected. Thanks to the no free lunch principle, better statistical accuracy is naturally accompanied by heavier computational challenges. Therefore, our goal here is to construct estimators, under smoothness assumptions as in Section 2.2, which are computable in polynomial time (with complexity increasing with smoothness) provided f̃µ(·) and f̃ν(·) can be sampled from, and also attain rates that mitigate the curse of dimensionality. Construction: We will illustrate the discretized estimator using the kernel based estimator from Section 2.2. Similar results also hold for the wavelet based estimator from Appendix A. Recall the kernel density estimators f̃µ(·) and f̃ν(·) (see (2.6)). Sample M ≥ 1 random points from both f̃µ(·) and f̃ν(·). Let µ̂m,M and ν̂n,M denote the standard empirical measures on the M points sampled from f̃µ(·) and f̃ν(·) respectively. Finally construct T̃m,n ≡ T̃ γm,n as in Definition 1.2 with µ̃m = µ̂m,M and ν̃n = ν̂n,M . It should be pointed out that a similar construction was also used in [131, Section 6] for estimating probability densities under the Wasserstein loss. Based on this construction, the main result of this section is as follows: Theorem 2.7. Consider the setting in Theorem 2.5 and the same construction of T̃ γm,n as above. For simplicity, let’s also assume m = n. Accordingly set M = n s+2 2 . 
Then Γ̃_min is a singleton and consequently the following conclusion holds for some constant C > 0: E[ ∫ ‖T̃_{m,n}(x) − T_0(x)‖^2 dµ̃_m(x) ] ≤ C r^{(n,n)}_{d,s}. The same rates also hold for E|W_2^2(µ̃_m, ν̃_n) − W_2^2(µ, ν)|. The proof of Theorem 2.7 is given in Appendix C.1. Once the empirical measures µ̂_{m,M} and ν̂_{n,M} have been obtained, an explicit computation of T̃_{m,n} as described above requires O(M^3) = O(n^{3(s+2)/2}) steps using the Hungarian algorithm, see [73]. This highlights the statistical versus computational trade-off: in order to mitigate the curse of dimensionality in convergence rates by exploiting smoothness, the computational complexity gets progressively worse by polynomial factors in n. It should be mentioned that (approximate) algorithms faster than the Hungarian algorithm stated above can be found in [1, 36, 53], to name a few. Due to space constraints, we avoid a detailed discussion on this. In the above construction, sampling from the smoothed kernel densities f̃_µ(·) and f̃_ν(·) is crucial. If we simply drew M bootstrap samples from the empirical distributions µ̂_m and ν̂_n, the rates of convergence would not improve on those observed in Theorem 2.2, no matter how large M is.
Acknowledgments and Disclosure of Funding
We would like to thank the reviewers for their constructive suggestions that greatly helped improve the quality of the paper. The third author is supported by NSF Grant DMS-2015376.
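To make the construction of Section 2.3 concrete, the following is a hedged sketch of the discretized plug-in estimator (the function names and the plain Gaussian-kernel smoothing are illustrative choices, not the projected higher-order-kernel densities f̃_µ, f̃_ν required by Theorem 2.7): sample M points from smoothed versions of the two empirical measures, solve the resulting M-by-M assignment problem with a Hungarian-type solver, and read off the plug-in transport map and Wasserstein cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sample_from_smoothed(data, M, h, rng):
    """Draw M points from the Gaussian-kernel smoothing of the empirical
    measure on `data` (a bootstrap draw plus N(0, h^2 I) noise); this is an
    illustrative stand-in for sampling from the smoothed densities of Sec. 2.2."""
    idx = rng.integers(0, len(data), size=M)
    return data[idx] + h * rng.standard_normal((M, data.shape[1]))

def discretized_plugin_estimator(X, Y, M, h, rng):
    Xs = sample_from_smoothed(X, M, h, rng)   # discretization of smoothed mu
    Ys = sample_from_smoothed(Y, M, h, rng)   # discretization of smoothed nu
    C = ((Xs[:, None, :] - Ys[None, :, :]) ** 2).sum(axis=-1)   # squared costs
    row, col = linear_sum_assignment(C)       # optimal assignment, O(M^3) time
    T_hat = {tuple(x): Ys[j] for x, j in zip(Xs[row], col)}     # x -> T~(x)
    W2_sq_hat = C[row, col].mean()            # plug-in estimate of W_2^2
    return T_hat, W2_sq_hat

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 3))             # samples from mu
Y = rng.standard_normal((400, 3)) + 1.0       # samples from nu (mu shifted by 1)
_, w2sq = discretized_plugin_estimator(X, Y, M=200, h=0.3, rng=rng)
print(w2sq)   # roughly ||(1,1,1)||^2 = 3, up to sampling error
```

Since the optimal plan between two sets of M distinct points is a permutation, the barycentric projection at each sampled point is simply its assigned partner; the cubic cost of the assignment step is exactly the statistical versus computational trade-off discussed above.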
1. What is the main contribution of the paper regarding the estimation of empirical Monge maps?
2. What are the strengths of the proposed approach, particularly in terms of rates of convergence?
3. Are there any concerns or questions regarding the assumptions made in the paper, such as compact support or L-lipschitzness?
4. How does the paper handle the issue of statistical accuracy versus computational cost in sampling from smoothed distributions?
5. Can the authors provide further clarification or explanation regarding Proposition 2.5 and its implications for shifting the mean of a measure?
Summary Of The Paper Review
Summary Of The Paper
Summary
Context: This paper tackles the problem of estimating the rate of convergence of empirical Monge maps (through their Kantorovich barycentric-map approximators) to the true Monge map for absolutely continuous Lebesgue measures.
Contribution:
- The authors provide new rates of convergence for the barycentric map using standard empirical plug-in estimators (O(n^{-2/d})).
- If the measures are regular enough (Hölder smooth), the dependence on the dimension completely vanishes (which was established for Gaussian distributions, https://arxiv.org/pdf/1905.10155.pdf).
- The authors propose to take advantage of this smoothness result by sampling from smoothed distributions (using a kernel density estimator) and provide sufficient conditions to obtain a desired rate. Higher smoothness, however, requires more samples: a trade-off between statistical accuracy and computational cost cannot be avoided.

Review
General assessment
For a purely theoretical paper, this paper is very well structured and easy to read. Transitions are smooth, and theorems are fairly well motivated and interpreted. Even though one may wonder whether a journal would have been more appropriate for this paper, the theoretical contributions are novel and certainly valuable for the OT community at NeurIPS. My only main concern is the lack of clarity about how Proposition 2.5 can hold, since it suggests that the rate of convergence of W2 is significantly different when mu = nu (see point (5) below). This is the only point -- pre-rebuttal -- keeping me from increasing my score. I would appreciate it if the authors could shed some light on this: is there any assumption mismatch between this setting and the one from [119] that would explain this disparity? Otherwise, shifting the samples of a distribution slightly would lead to better convergence rates.

Specific comments
1. In Remark 2.1, the authors mention that Theorem 2.1 doesn't require compactly supported measures, unlike previous results (of [69]). Still, for all the derived convergence rates (subsequent theorems), at least one of the measures needs to have compact support for a uniform bound on the potential function. While requiring fewer assumptions on the measures to obtain similar stability results is certainly an improvement over the literature, I'm wondering whether Theorem 2.1 can be used to obtain convergence rates (or any other simpler and interpretable upper bounds) without assuming compactness of any of the measures. Still on compactness: how would an assumption on the tails of the measures compare with the compactness assumption? Could the uniform norm be substituted with a less stringent norm in the technical derivations so as to generalize the known convergence rates for Gaussians, for example?
2. Throughout the paper, T0 is assumed L-Lipschitz. However, the regularity of T0 can also be seen as a consequence of the regularity of the measures themselves, which raises the question: are there any sufficient conditions on the measures that guarantee an L-Lipschitz T0 without assuming "too much" smoothness on the measures?
3. The constant C in the corollary seems to be obtained through existence theorems for arbitrary constants C1 and C2. Do we have any bounds on these constants and their dependence on the parameters of the problem? Perhaps the diameter of the bounded supports?
4. In L248, the authors mention that T_m,n of Theorem 2.4 cannot be computed from the data. Is this because T_m,n is a continuous barycentric map derived from the continuous measures given by the KDE? If so, I think the dependence on m, n should be denoted slightly differently to distinguish it from the empirical map.
5. I find the result of Proposition 2.5 quite intriguing. [119] provided a lower bound on the convergence rate for the case mu = nu, i.e., a rate that cannot be beaten by any estimator. Prop 2.5 shows that this rate can be improved (significantly) if mu != nu. This suggests that if I take a measure mu and shift its mean by a certain value, I would need fewer samples to approximate the W2 value. Am I missing something here?

A minor remark on the appendix: please use a different notation for \bar{nu}; distinguishing it from \tilde{nu} is quite challenging, especially when verifying the proofs.
NIPS
Title Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections Abstract Optimal transport maps between two probability distributions μ and ν on R have found extensive applications in both machine learning and statistics. In practice, these maps need to be estimated from data sampled according to μ and ν. Plugin estimators are perhaps most popular in estimating transport maps in the field of computational optimal transport. In this paper, we provide a comprehensive analysis of the rates of convergences for general plug-in estimators defined via barycentric projections. Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions and can be used to analyze general plug-in estimators. We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discretediscrete and semi-discrete estimators of optimal transport maps. We then use the same stability estimate to show that, under additional smoothness assumptions of Sobolev type or Besov type, kernel smoothed or wavelet based plug-in estimators respectively speed up the rates of convergence and significantly mitigate the curse of dimensionality suffered by the natural discrete-discrete/semi-discrete estimators. As a by-product of our analysis, we also obtain faster rates of convergence for plug-in estimators of W2(μ, ν), the Wasserstein distance between μ and ν, under the aforementioned smoothness assumptions, thereby complementing recent results in Chizat et al. (2020). Finally, we illustrate the applicability of our results in obtaining rates of convergence for Wasserstein barycenter between two probability distributions and obtaining asymptotic detection thresholds for some recent optimaltransport based tests of independence. 1 Introduction Given two random variables X ∼ µ and Y ∼ ν, where µ, ν are probability measures on Rd, d ≥ 1, the problem of finding a “nice" map T0(·) such that T0(X) ∼ ν has numerous applications in machine learning such as domain adaptation and data integration [34, 35, 38, 48, 61, 112], dimension reduction [12, 66, 90], generative models [60, 81, 88, 110], to name a few. Of particular interest is the case when T0(·) is obtained by minimizing a cost function, a line of work initiated by Gaspard Monge [97] in 1781 (see (1.1) below), in which case T0(·) is termed an optimal transport (OT) map and has applications in shape matching/transfer problems [29, 47, 107, 121], Bayesian statistics [46, 75, 80, 108], econometrics [15, 28, 45, 50, 54], nonparametric statistical inference [39– 41, 113, 114]; also see [111, 128, 129] for book-length treatments on the subject. In this paper, we will focus on the OT map obtained using the standard squared Euclidean cost function, i.e., T0 := argmin T :T#µ=ν E‖X − T (X)‖2, (1.1) where T#µ = ν means T (X) ∼ ν for X ∼ µ. The estimation of T0 has attracted a lot of interest in recent years due to its myriad applications (as stated above) and interesting geometrical properties (see [19, 56, 91] and Definition 1.1 below). In practice, the main hurdle in constructing estimators for T0 is that the explicit forms of the measures µ, ν are unknown; instead only random samples X1, . . . , Xm ∼ µ and Y1, . . . , Yn ∼ ν 35th Conference on Neural Information Processing Systems (NeurIPS 2021). are available. 
A natural strategy in this scenario is to estimate T0 using T̃m,n, where T̃m,n is computed as in (1.1) with µ and ν replaced by µ̃m and ν̃n which are empirical approximations of µ and ν based on X1, . . . , Xm and Y1, . . . , Yn respectively (see Definition 1.2). Such estimators are often called plug-in estimators and have been used extensively; see [7, 30, 67, 93, 94, 102, 116]. The main goal of this paper is to study the rates of convergence of general plug-in estimators of T0 under a unified framework. We show that when µ̃m and ν̃n are chosen as µ̂m and ν̂n respectively, where µ̂m and ν̂n are the standard empirical distributions supported on m and n atoms, i.e., µ̂m := 1 m m∑ i=1 δXi and ν̂n := 1 n n∑ j=1 δYj , (1.2) T̃m,n (appropriately defined using Definition 1.2) converges at a rate of m−2/d + n−2/d for d ≥ 4 in the sense of (1.8). This rate happens to be minimax optimal under minimal smoothness assumptions (see [72, Theorem 6]) but suffers from the curse of dimensionality. We next show that, if µ and ν are known to admit sufficiently smooth densities, it is possible to apply kernel or wavelet based smoothing techniques on µ̂m and ν̂n to obtain plug-in estimators that mitigate the aforementioned curse of dimensionality. Our next contribution pertains to the estimation of W 22 (µ, ν) (the squared Wasserstein distance), see (1.3) below, a quantity of independent interest in statistics and machine learning with applications in structured prediction [51, 89], image analysis [18, 59], nonparametric testing [16, 106], generative modeling [10, 96], etc. In this paper, we also obtain rates of convergence for plug-in estimators W 22 (µ̃m, ν̃n) of W 2 2 (µ, ν). We show that kernel smoothing µ̂m and ν̂n can be used to obtain plug-in estimators of W 22 (µ, ν) that mitigate the curse of dimensionality as opposed to a direct plug-in approach using µ̂m and ν̂n (as used in [30, Theorem 2]). This provides an answer to the open question of estimating W 22 (µ, ν) when µ, ν admit smooth densities laid out in [30]. 1.1 Background on optimal transport In this section, we present some basic concepts and results associated with the OT problem that will play a crucial role in the sequel. Let Pac(Rd) denote the set of all Lebesgue absolutely continuous probability measures on Rd andP2(Rd) be the set of probability measures with finite second moments. Then the 2-Wasserstein distance (squared) between µ, ν ∈ P2(Rd) is defined as: W 22 (µ, ν) := min π∈Π(µ,ν) ∫ ‖x− y‖2 dπ(x, y), (1.3) where Π(µ, ν) is the set of probability measures on Rd×Rd with marginals µ and ν. The optimization problem in (1.3) is often called the Kantorovich relaxation (see [76, 77]) of the optimization problem in (1.1). The existence of a minimizer in (1.3) follows from [129, Theorem 4.1]. Proposition 1.1 (Brenier-McCann polar factorization theorem, see [91, 128]). Suppose µ ∈ Pac(Rd). Then there exists a µ-a.e. (almost everywhere) unique function T0(·) : Rd → Rd, which is the gradient of a real-valued d-variate convex function, say ϕ0(·) : Rd → R, such that T0#µ = ν. Further, the distribution defined as π(A×B) = µ(A ∩ (T0)−1(B)) for all Borel sets A,B ⊆ Rd is the unique minimizer in (1.3) provided µ, ν ∈ P2(Rd). Definition 1.1 (OT map and potential function). The function T0 : Rd → Rd in Proposition 1.1 which satisfies T0#µ = ν will be called the OT map from µ to ν. A convex function ϕ0(·) in Proposition 1.1 satisfying∇ϕ0 = T0 will be termed an OT potential. 
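As a concrete closed-form instance of Proposition 1.1 and Definition 1.1 (a standard Gaussian example from the OT literature, included here only for illustration and not taken from the paper), one can take µ and ν to be nondegenerate Gaussians:

```latex
% Gaussian example (illustrative): for $\mu = N(m_1,\Sigma_1)$ and
% $\nu = N(m_2,\Sigma_2)$ with $\Sigma_1 \succ 0$,
\[
  T_0(x) = m_2 + A\,(x - m_1), \qquad
  A := \Sigma_1^{-1/2}\bigl(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\bigr)^{1/2}\Sigma_1^{-1/2},
\]
\[
  T_0 = \nabla \varphi_0, \qquad
  \varphi_0(x) = m_2^{\top}x + \tfrac{1}{2}\,(x - m_1)^{\top} A\,(x - m_1),
\]
\[
  W_2^2(\mu,\nu) = \|m_1 - m_2\|^2
    + \operatorname{tr}\!\Bigl(\Sigma_1 + \Sigma_2
    - 2\bigl(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\bigr)^{1/2}\Bigr).
\]
```

Here $A$ is symmetric positive semi-definite, so $\varphi_0$ is convex and $T_0 = \nabla\varphi_0$ pushes $\mu$ forward to $\nu$, exactly as guaranteed by Proposition 1.1.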
The next and final important ingredient is the alternate dual representation of (1.3) which gives: 1 2 W 22 (µ, ν) = 1 2 ∫ ‖x‖2 dµ(x) + 1 2 ∫ ‖y‖2 dν(y)−min f∈F Sµ,ν(f), where (1.4) Sµ,ν(f) = ∫ f dµ+ ∫ f∗ dν. (1.5) Here F denotes the space of convex functions on Rd which are also elements of L1(µ) and f∗(·) is the standard Legendre-Fenchel dual defined as: f∗(x) := sup y∈Rd [y>x− f(y)], for x ∈ dom(f). (1.6) 1.2 Estimating OT map via barycentric projection Recall the setting from the Introduction. Let µ̃m, ν̃n ∈ P2(Rd). Here µ̃m, ν̃n need not be absolutely continuous and can be very general. Intuitively, µ̃m and ν̃n can be viewed as some empirical approximation of µ and ν respectively. Example 1.2 (Simple choices of µ̃m and ν̃n). Let X1, . . . , Xm i.i.d.∼ µ and Y1, . . . , Yn i.i.d.∼ ν; in which case a natural choice would be to set µ̃m = µ̂m and ν̃n = ν̂n where µ̂m and ν̂n are the empirical distributions of X1, . . . , Xm and Y1, . . . , Yn respectively, as defined in (1.2). This is the standard choice adopted in the discrete-discrete Kantorovich relaxation; see [104, Section 2.3]. Another popular choice is µ̃m = µ̂m, ν̃n = ν or µ̃m = µ, ν̃n = ν̂n. This is the semi-discrete Kantorovich problem and is popular when one of the measures is fully specified; see [26, 55]. A natural way to estimate T0(·), as defined in (1.1), would be to approximate it using the OT map from µ̃m to ν̃n. However as µ̃m and ν̃n may not be elements of Pac(Rd), Proposition 1.1 does not apply and an OT map may not exist from µ̃m to ν̃n. Such is the case in Example 1.2 in the discrete-discrete case when m 6= n. To circumvent this issue, we leverage the notion of barycentric projections (see [3, Definition 5.4.2]) defined below: Definition 1.2 (Barycentric projection). Define the set Γ̃min := argmin π∈Π(µ̃m,ν̃n) ∫ ‖x− y‖2 dπ(x, y). The optimization problem above is the plug-in analog of the optimization problem on the right hand side of (1.3). Given any γ ∈ Γ̃min, define the barycentric projection of γ as the conditional mean of y given x under γ, i.e., T̃m,n(x) ≡ T̃ γm,n(x) := ∫ y y dγ(x, y)∫ y dγ(x, y) , for x ∈ supp (µ̃m) . (1.7) In general, Γ̃min need not be a singleton which is why we index the barycentric projection T̃ γm,n(·) by γ ∈ Γ̃min. Note that T̃ γm,n(·) need not be a transport map; however, if an OT map exists then it must be equal to T̃ γm,n(·) (µ̃m-a.e.). Our goal is to obtain stochastic upper bounds for sup γ∈Γ̃min ∫ ∥∥T̃ γm,n(x)− T0(x)∥∥2 dµ̃m(x). (1.8) In addition, our proof techniques also yield rates of convergence for∣∣W 22 (µ̃m, ν̃n)−W 22 (µ, ν)∣∣. (1.9) In this paper, we will focus on d ≥ 2. Due to the canonical ordering of R, the case d = 1 can be handled easily using the classical Hungarian embedding theorem [82]. 1.3 Contributions 1. We provide a new and flexible stability estimate Theorem 2.1 which yields a unified approach to obtaining rates of convergence for general plug-in estimators of the OT map T0(·). Unlike existing stability estimates, Theorem 2.1 holds for the barycentric projection (which is the same as the OT map when it exists) and does not require any smoothness assumptions on µ̃m, ν̃n or T̃ γm,n(·); also see Remark 2.1 for a comparison with the existing literature. 2. 
In Sections 2.1 and 2.2, we use Theorem 2.1 to bound (1.8) and (1.9): • In Section 2.1, we show that in both the discrete-discrete and semi-discrete Kantorovich relaxation problems (see Example 1.2), the rate of convergence of (1.8) is m−2/d + n−2/d for d ≥ 4 when T0 is assumed to be Lipschitz (see Theorem 2.2), which is the minimax rate (see [72, Theorem 6]). To the best of our knowledge, rates of convergence for these natural estimators weren’t previously established in the literature. • In Section 2.2 and Appendix A, we show that the curse of dimensionality in the above rates can be mitigated provided µ and ν admit (uniform) Sobolev smooth densities (see Section 2.2) or Besov smooth densities (see Appendix A). In Section 2.2, our plug-in estimator is obtained by choosing µ̃m (and ν̃n) as the convolution of µ̂m (and ν̂n) and a smooth kernel with an appropriate bandwidth. Under this choice, the rate of convergence in (1.8) is m−( s+2 d ∧ 1 2 ) + n−( s+2 d ∧ 1 2 ), where s denotes the degree of Sobolev smoothness (see Theorem 2.5). Clearly, if 2(s + 2) ≥ d, the rate of convergence becomes dimension-free and mitigates the curse of dimensionality. We also show the same rates of convergence mentioned above hold for (1.9) (see e.g., Proposition 2.6) which makes a strong case in favor of incorporating smoothness in the construction of plug-in estimators as was conjectured in [30]. In Appendix A, our plug-in estimator is obtained using natural wavelet based density estimators. The rate of convergence in (1.8) turns out to be n− 1+s d+2s where s denotes the degree of Besov smoothness (see Theorem A.1). Note that by choosing s large enough, the exponent in the rate can be made arbitrarily close to 1/2, thereby reducing the curse of dimensionality. 3. In Section 2.3, we use a discretization technique from [131] to construct discrete approximations to the smoothed µ̃m and ν̃n from the previous paragraph that in turn yield computable plug-in estimators for T0 (provided one can sample from µ̃m and ν̃n) that also achieve the same statistical guarantees as the smoothed plug-in estimator from Section 2.2 (see Theorem 2.7). However the number of atoms required in the discretizations and correspondingly the computational complexity increases with the degree of smoothness; this highlights a statistical and computational trade-off. 4. We provide implications of our results in popular applications of OT such as estimating the barycenter of two multivariate probability distributions (see Theorem B.1 in Appendix B.1) and in nonparametric independence testing (see Theorem B.3 in Appendix B.2). 1.4 Related work Many recent works have focused on obtaining consistent estimators of T0 using the plug-in principle, see [26, 55] (in the semi-discrete problem) and [41, 68, 132] (in the discrete-discrete problem). In [55], the authors studied the rate of convergence of the semi-discrete optimal transport map from ν (absolutely continuous) to µ̂m. This paper complements the aforementioned papers by studying the rates of convergence for general plug-in estimators in a unified fashion. In two other papers [9, Theorem 1.1] and [87, Section 4], the authors use a “Voronoi tessellation" approach to estimate T0, however the rates obtained in this paper, even in the absence of smoothness, are strictly better than those in [9, 87]. Perhaps the most closely related paper to ours would be [67]. 
In [67], the author uses variational techniques to arrive at stability estimates while we exploit the Lipschitz nature of the OT map (see Definition 1.1). Further the rates in this paper have exponents s+2d ∧ 1 2 which are strictly better than the exponents s+22(s+2)+d obtained in [67, Proposition 1] under the same smoothness assumptions (Sobolev type of order s, see Definition 2.4). In another line of work [72], the authors use theoretical wavelet based estimators (not of the plug-in type) of T0 to obtain nearly minimax optimal rates of convergence. However these estimators, by themselves, are not transport maps between two probability measures, which makes them harder to interpret. In contrast, our focus is on obtaining rates of convergence for plug-in estimators, which are transport maps between natural approximations of µ and ν. Such plug-in type strategies are a lot more popular in computational OT [7, 30, 67, 93, 94, 102, 116]. In terms of obtaining rates of convergence for (1.9), some attempts include [109, 116] where parametric rates are obtained when µ, ν are known to be finitely supported or are both Gaussian. In a related problem, bounds for W 22 (µ̂m, µ) were obtained in [6, 42, 49, 100, 123, 131]. Using these bounds, for m = n, it is easy to get a n−1/d rate of convergence for (1.9). This rate was recently improved to n−2/d in [30] under no smoothness assumptions. Our rates coincide with the n−2/d rate from [30] under no smoothness assumptions. But further, we show in this paper that the curse of dimensionality in the above rate can be mitigated by incorporating smoothness into the plug-in procedure. 2 Main results Recall ϕ0(·) from Definition 1.1. The following is our main result. Theorem 2.1 (Stability estimate). Suppose that µ, ν ∈ Pac(Rd) ∩ P2(Rd) and µ̃m, ν̃n ∈ P2(Rd). Assume that T0(·) (as defined in (1.1)) is L-Lipschitz (L > 0). Then, sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ≤ Lmax {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} + 2L ∫ ϕ∗0(y) d(ν̃n − ν†m)(y), (2.1) where ν†m := T0#µ̃m, ϕ ∗ 0(·) is defined as in (1.6), and with S·,·(·) defined as in (1.5), Ψµ̃m,ν̃n(·) := argminf∈F Sµ̃m,ν̃n(f), Ψµ̃m,ν†m(·) := argminf∈F Sµ̃m,ν†m(f), and D denotes the space of realvalued convex functions on Rd. The proof of Theorem 2.1 (see Appendix C.1) starts along the same lines as the proof of the curvature estimate in [56, Proposition 3.3]. This is followed by some careful manipulations of W 22 (·, ·) (as in (1.3)) and an application of the conditional version of Jensen’s inequality, see (C.3). The final step of the proof uses the dual representation in (1.4) with techniques similar to some intermediate steps in the proof of [92, Proposition 2] and [30, Lemma 3]. Remark 2.1 (Comparison with other stability estimates). Theorem 2.1 provides some important advantages to existing stability estimates in the literature. One of the earliest results in this direction can be found in [56, Proposition 3.3] but their bound involves a push-forward constraint which makes it hard to use for rate of convergence analysis. A bound similar to Theorem 2.1 is presented in [55, Lemma 5.1] but there the authors assume the existence of an OT map from µ̃m to ν̃n. Therefore, it does not apply to the discrete-discrete problem where µ̃m = µ̂m and ν̃n = ν̂n with m 6= n. Overcoming all these limitations is an important contribution of Theorem 2.1 and allows us to deal with popular plug-in estimators all in one go. 
The stability estimate in [72, Proposition 10] on the other hand requires µ̃m, ν̃n to be sufficiently smooth and hence it does not hold for discrete-discrete or semi-discrete plug-in estimators (see Example 1.2). Further their result requires all the measures involved to be compactly supported unlike the much milder requirements of Theorem 2.1. However, a shortcoming of Theorem 2.1 is that it is hard to obtain rates faster than n−1/2 using it directly, whereas [72] can obtain rates arbitrarily close to n−1. This is a price we pay for analyzing natural and popular plug-in estimators as opposed to the (more intractable) wavelet based estimators in [72]. Remark 2.2 (How to use Theorem 2.1 to obtain rates of convergence?). Note that the second term on the right hand side of (2.1), under appropriate moment assumptions, is Op(m−1/2 +n−1/2) (free of dimension) by a direct application of Markov’s inequality. We therefore focus on the first term. By (1.5), Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F . Further, by Caffarelli’s regularity theory [20–22], depending on the “smoothness" of µ̃m, ν̃n, it can be shown that there exists a further class of functions Fs (see Remarks 2.3 and 2.6) such that Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F ∩ Fs. Thus, we can bound the first term on the right hand side of (2.1) as: max {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} ≤ sup f∈F∩Fs ∣∣∣∣ ∫ f d(ν̃n − ν†m)∣∣∣∣. (2.2) The right hand side of (2.2) can now be bounded using the corresponding Dudley’s entropy integral bounds using empirical process techniques; see [126, Lemmas 19.35-19.37]. To conclude, the two main steps in our strategy are identifying the family of functions Fs and computing Dudley’s entropy integral. Further, the more the smoothness of µ̃m, ν̃n, the smaller is the class of functions Fs and smaller the supremum on the right hand side of (2.2). This shows why better rates can be expected under smoothness assumptions. 2.1 Natural non-smooth plug-in estimators In this case, we discuss the rates of convergence for the discrete-discrete problem and the semi-discrete problem, where no smoothness is available on µ̃m and ν̃n. Theorem 2.2. Suppose that T0(·) is L-Lipschitz, ν is compactly supported and E exp(t‖X1‖α) <∞ for some t > 0, α > 0. (Discrete-discrete): Set µ̃m = µ̂m and ν̃n = ν̂n. Then the following holds: sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) = Op ( r (m,n) d × (log (1 + max{m,n})) td,α ) , (2.3) where r(m,n)d := m−1/2 + n−1/2 for d = 2, 3, m−1/2 log (1 +m) + n−1/2 log (1 + n) for d = 4, m−2/d + n−2/d for d ≥ 5, (2.4) and td,α := (4α)−1(4 + ((2α+ 2dα− d) ∨ 0)) for d < 4, (α−1 ∨ 7/2)− 1 for d = 4, 2(1 + d−1) for d > 4. The same bound holds for |W 22 (µ̃m, ν̃n)−W 22 (µ, ν)| without assuming T0(·) is Lipschitz. (Semi-discrete): Set µ̃m = µ, ν̃n = ν̂n or µ̃m = µ̂m, ν̃n = ν. Then the left hand side of (2.3) is Op(r (n,n) d × (log (1 + n))td,α) or Op(r (m,m) d × (log (1 +m))td,α) respectively. A stronger result can be proved if both µ and ν are compactly supported. Corollary 2.3. Consider the setting from Theorem 2.2 and assume further that µ is compactly supported. Then, with r(m,n)d defined as in (2.4), we have: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d , for some constant C > 0, in both the discrete-discrete and semi-discrete settings from Theorem 2.2. A brief description of the proof technique of Theorem 2.2 using Theorem 2.1 is provided in Remark 2.3 below, and the actual proof is presented in Appendix C.1. Remark 2.3 (Proof technique). 
The proof of Theorem 2.2 proceeds via the strategy outlined in Remark 2.2. We first show that Fs (see Remark 2.2) can be chosen as a certain sub-class of convex functions which are in L2(ν). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of Fs, recently proved in [83, Equation 26]. This strategy is slightly different from that used in the proof of [30, Theorem 2], where the authors assume that µ is compactly supported whereas we only assume the finiteness of E exp(t‖X1‖α) for some t > 0, α > 0. The compactness assumption on µ allows one to further restrict Fs to the class of Lipschitz functions. This additional restriction does not seem to be immediate without the compactness assumption. As discussed in Section 1.3, the exponents obtained in Theorem 2.2 are minimax optimal, up to multiplicative logarithmic factors, under bare minimal smoothness assumptions (see [72, Theorem 6]). To the best of our knowledge, rates for the discrete-discrete case for m 6= n and those for the semi-discrete case were not known previously in the literature. Our rates are also strictly better than those (for different estimators, based on space tessellations) obtained in [9, 87] and require less stringent assumptions than those in [30]. In the next section, we show how smoothness assumptions can be leveraged to mitigate the curse of dimensionality in Theorem 2.2. 2.2 Smooth kernel based plug-in estimator: mitigating the curse of dimensionality In this section, we focus on kernel based density estimators for the probability densities associated with µ and ν (see [57, 58, 99, 103, 115]). We will show, using Theorem 2.1, that the corresponding estimators of T0(·) achieve (near) dimension-free rates under sufficient smoothness assumptions. We first introduce the Sobolev class of functions which we will exploit in this subsection to construct estimators that achieve rates of convergence which mitigate the curse of dimensionality under sufficient smoothness. Definition 2.4 (Uniform Sobolev class of functions). Let Ω ⊆ Rd and f(·) be uniformly continuous on Ω and admits uniformly continuous derivatives up to order s on Ω for some s ∈ N. For any m := (m1, . . . ,md) ∈ Nd, let ∂mf := ∂ ∂m1x1 . . . ∂ ∂mdxd f, |m| := d∑ i=1 mi. For any k ≤ s, we further define, ‖f‖Ck(Ω) := ∑ |m|≤k ‖∂mf‖L∞(Ω). The space Cs(Ω) is defined as the set of functions f(·) for which ‖f‖Ck(Ω) <∞ for all k ≤ s. For this subsection, assume that µ and ν admit Sobolev smooth densities fµ(·) and fν(·) in the uniform norm (see Definition 2.4 above). Given Ω ⊆ Rd and s ∈ N, let Cs(Ω) denote the set of Sobolev smooth functions on Ω of order s. Assumption (A1) (Regularity of the densities). Suppose that 1. fµ and fν are supported on compact and convex subsets of Rd, say X and Y respectively. 2. There exists s,M > 0 such that fµ(·) ∈ Cs(X ;M) and fν(·) ∈ Cs(Y;M) where Cs(X ;M) is the space of real valued functions supported on X such that for all f(·) ∈ Cs(X ;M), we have M−1 ≤ f(x) ≤ M for all x ∈ X and ‖f‖Cs(X ) ≤ M . Here ‖·‖Cs(X ) is the standard uniform Sobolev norm as defined in Definition 2.4. The space Cs(Y;M) is defined analogously. We now define our estimators for fµ(·) and fν(·) using the standard kernel density estimation technique (see [125, Section 1.2]). Set f̂µ(x) := 1 mhdm m∑ i=1 Kd ( Xi − x hm ) , (2.5) for some bandwidth parameter hm > 0 and d-variate kernel Kd(·). 
We assume that Kd(·) is the dfold product of univariate kernels, i.e., there exists a kernelK(·) such that for u = (u1, . . . , ud) ∈ Rd, Kd(u) = ∏d i=1K(ui). We define f̂ν(·) similarly with the same univariate kernel and bandwidth. Assumption (A2) (Regularity of the kernel). Assume that K(·) is a symmetric, bounded, s+ 1 times differentiable kernel on Rd with all s+ 1 derivatives bounded and integrable. Further, suppose that K(·) is of order 2s+ 2, i.e.,∫ ujK(u) du = 1(j = 0), for j = {0, 1, 2, . . . , 2s+1}, and ∫ |u|2s+2|K(u)| du <∞. The above assumptions on K(·) are standard for estimating smooth densities and their derivatives of different orders in the kernel density estimation literature; see e.g. [4, 57, 58, 69, 125]. There are several natural ways to construct kernels satisfying Assumption (A2), see [125, Section 1.2.2]; an example is also provided in Example 2.4 below. Example 2.4 (Example of a kernel satisfying Assumption (A2)). Let ψm(·) be the m-th Hermite polynomial on R (see [84]). Then the kernel function defined as K(u) := 2s+2∑ m=0 ψm(0)ψm(u) exp(−u2/2) satisfies Assumption (A2). It is evident from Assumption (A2) that K(·) may take some negative values, in which case, f̂µ(·) (respectively f̂ν(·)) may not be a probability density. Consequently the barycentric projection (see Definition 1.2) between f̂µ(·) and f̂ν(·) is not well-defined. We get around this by projecting f̂µ(·) and f̂ν(·) on an appropriate space of “smooth" probability densities (see (2.6)), via an integral probability metric (see Definition 2.5 below; also see [98, 105, 117] for examples, computational procedures and applications of such metrics). Definition 2.5 (Integral probability metric). Given a classH of bounded functions on Rd and two probability densities g1(·) and g2(·) on Rd, the integral probability metric/distance between g1(·) and g2(·) with respect toH is defined as dIP(g1, g2;H) := sup ψ(·)∈H ∣∣∣∣ ∫ ψ(x)(g1(x)− g2(x)) dx∣∣∣∣. Sufficient conditions onH for dIP(·, ·;H) to be a metric on the space of probability measures (not on the space of probability densities as they can be altered on set of Lebesgue measure 0 without altering the underlying probability measures) on Rd have been discussed in [98]. Observe that the measure dIP(g1, g2;H) is well defined even when g1(·) and g2(·) are not probability densities. In Theorem 2.5 below, we useH = Cs+2(X ,M ′). Note that any function in Cs+2(X ,M ′) can be extended to a function in Cs+2(Rd;M ′) (see [72, Theorem 23] and [124, Theorem 1.105]). The fact that this choice of F results in a metric follows from the argument in [98, Page 8]. We are now in a position to describe the projection estimators for fµ(·) and fν(·), and the rates achieved by the corresponding plug-in estimator. Theorem 2.5. Assume that T0(·) is L-Lipschitz and fµ, fν are Lebesgue densities satisfying Assumption (A1). Also suppose that K(·) satisfies Assumption (A2). Define hm := m− 1 d+2s logm , hn := n − 1d+2s log n and T := ∫ |Kd(u)| du+ 1. Fix any M ′ > 0. Consider any probability density f̃M ′ µ (·) ∈ Cs(X ;TM) (where M is defined as in Assumption (A1)) which satisfies dIP ( f̃M ′ µ , f̂µ;C s+2(X ;M ′) ) ≤ inf f(·)∈Cs(X ;TM) f≥0, ∫ f=1 dIP ( f̂µ, f ;C s+2(X ;M ′) ) + r (m,n) d,s (2.6) where r(m,n)d,s is defined as in (2.7) and dIP(·, ·;Cs+2(X ;M ′)) is the integral probability metric defined in Definition 2.5. We define f̃M ′ ν (·) analogously as in (2.6) with X , f̂µ(·) replaced by Y , f̂ν(·). Then the following conclusions hold. 1. SetM ′ := 8(1+TM). 
If µ̃m and ν̃n are the probability measures corresponding to the probability densities f̃M ′ µ (·) and f̃M ′ ν (·), then the following holds for some constant C > 0: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d,s , where r(m,n)d,s := m−1/2 + n−1/2 for d < 2(s+ 2), m−1/2 (log (1 +m)) d + n−1/2 (log (1 + n)) d for d = 2(s+ 2), m− s+2 d + n− s+2 d for d ≥ 2(s+ 2). (2.7) The same bound also holds for E|W 22 (µ̃m, ν̃n)−W 22 (µ, ν)|. 2. f̂µ(·) satisfies lim n→∞ max { P ( ‖f̂µ‖Cs(X̃ ) ≥ TM ) ,P ( sup x∈X̃ |f̂µ(x)− fµ(x)| ≥ ε )} = 0 (2.8) for any ε > 0, where X̃ is any compact subset of X o. The same conclusion holds for f̂ν(·) with X replaced by Y . In Theorem 2.5, we have shown that the plug-in estimator for T0(·) using f̃M ′ µ (·) and f̃M ′ ν (·) (with M ′ = 8(1 + TM)) achieves rates that mitigate the curse of dimensionality under sufficient smoothness. In fact, f̃M ′ µ (·) can be viewed as an approximate minimizer of dIP(f̂µ, ·;Cs+2(X ,M ′)) over an appropriate class of Sobolev smooth probability densities. This is carried out because f̂µ(·) by itself may not be a probability density. Further note that µ̃m, ν̃n as specified in Theorem 2.5 are both smooth, and consequently Γ̃min is a singleton and the supremum in Theorem 2.5 can be dropped. A brief description of the proof technique for Theorem 2.5 is presented in Remark 2.6 below and the actual proof is given in Appendix C.1. Remark 2.6 (Proof technique). The proof of Theorem 2.5 proceeds along the same lines as Remark 2.3. We first show that Fs (see Remark 2.2) can be chosen as a certain subset of Cs+2(Y◦). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of the class of compactly supported Sobolev smooth functions which can be found in [127, Corollary 2.7.2]. We now explain the implications of both the parts of Theorem 2.5 in the following two remarks. Remark 2.7 (Mitigating the curse of dimensionality). Theorem 2.5 shows that, under enough smoothness, i.e., when 2(s+ 2) > d, both the upper bounds for (1.8) and (1.9) are Op(n−1/2). This shows that, for large dimensions, provided µ and ν admit smooth enough densities, it is possible to construct plug-in estimators that mitigate the curse of dimensionality. Note that a similar estimator was analyzed in [67, Proposition 1] when m = n. However, the rates obtained in Theorem 2.5 are strictly better than those in [67, Proposition 1]. For m = n, when d < 2(s + 2), [67] obtained a rate of n− s+2 2(s+2)+d which is worse than n−1/2 obtained in Theorem 2.5. For the other regimes, [67] obtains rates (up to log factors) of n−1/4 and n− 1 (s+2)(d+2(s+2)) which are both worse than the respective rates of n−1/2 and n− s+2 d in Theorem 2.5. Remark 2.8 (Computational aspects of Theorem 2.5). Note that f̃M ′ µ (·) (with M ′ = 8(1 + TM)) is hard to compute whereas f̂µ(·) is computable easily in linear time. Note that if f̂µ(·) itself were a probability density in Cs(X ;TM), then we would have f̂µ = f̃M ′ µ . While Theorem 2.5 does not establish that, it does come close in part 2, from which we can easily derive the following: lim n→∞ P(f̂µ(·) /∈ Cs(X̃ ;TM)) = 0. The above shows that f̂µ(·) is indeed bounded below by (TM)−1 on X̃ (any compact subset of the interior of X ), and additionally belongs to Cs(X̃ ;TM) with probability converging to 1. 
This leads us to conjecture that the natural density version of f̂µ(·), i.e., max{f̂µ(·), 0}∫ max{f̂µ(x), 0} dx should serve as a good proxy for f̃M ′ µ (·) and lead to rates of convergence that mitigate the curse of dimensionality. From a computational perspective, the density specified above is easy to simulate from using an accept-reject algorithm without computing the integral in the denominator (see [101, Algorithm 4.3]). However, our current proof technique does not provide rates of convergence for the above density estimator based on f̂µ(·). Another important implication of Theorem 2.5 is the bound obtained on |W2(µ̃m, ν̃n)−W2(µ, ν)| when µ 6= ν. We first present the result and then describe the implication. Proposition 2.6. Consider the setting in Theorem 2.5. Then, provided µ 6= ν, the following holds: |W2(µ̃m, ν̃n)−W2(µ, ν)| = Op(r(m,n)d,s ). Proposition 2.6 (see Appendix C.1 for a proof) shows an interesting distinction between the µ 6= ν case and the µ = ν case. For µ = ν, the best possible exponent is n− 1+s 2s+d for d ≥ 3 (see [131, Theorem 3] where the result was established under more general Besov smoothness assumptions). On the contrary, when µ 6= ν, Proposition 2.6 establishes a rate of n− s+2d for the Wasserstein distance which is strictly better than the minimax achievable rate mentioned above when µ = ν. This observation complements [30, Corollary 1] where the authors make a similar remark for the special case of s = 0. 2.3 Discretized plug-in estimator under smoothness assumptions In Section 2.1, we discussed how smoothness can be incorporated into the plug-in procedure to get faster rates of convergence. Such plug-in estimators are popular in the computational OT literature (see [7, 8, 25, 36]). However, even after f̃µ(·) ≡ f̃M ′ µ (·), f̃ν(·) ≡ f̃M ′ ν (·) are calculated, T̃ γm,n as in Theorem 2.5 cannot be computed explicitly from data if f̃µ(·) and f̃ν(·) are continuous densities. This is in contrast to T̃ γm,n from Theorem 2.2 in the discrete-discrete case which is explicitly computable using a standard linear program, but achieves worse rates of convergence. This is not unexpected. Thanks to the no free lunch principle, better statistical accuracy is naturally accompanied by heavier computational challenges. Therefore, our goal here is to construct estimators, under smoothness assumptions as in Section 2.2, which are computable in polynomial time (with complexity increasing with smoothness) provided f̃µ(·) and f̃ν(·) can be sampled from, and also attain rates that mitigate the curse of dimensionality. Construction: We will illustrate the discretized estimator using the kernel based estimator from Section 2.2. Similar results also hold for the wavelet based estimator from Appendix A. Recall the kernel density estimators f̃µ(·) and f̃ν(·) (see (2.6)). Sample M ≥ 1 random points from both f̃µ(·) and f̃ν(·). Let µ̂m,M and ν̂n,M denote the standard empirical measures on the M points sampled from f̃µ(·) and f̃ν(·) respectively. Finally construct T̃m,n ≡ T̃ γm,n as in Definition 1.2 with µ̃m = µ̂m,M and ν̃n = ν̂n,M . It should be pointed out that a similar construction was also used in [131, Section 6] for estimating probability densities under the Wasserstein loss. Based on this construction, the main result of this section is as follows: Theorem 2.7. Consider the setting in Theorem 2.5 and the same construction of T̃ γm,n as above. For simplicity, let’s also assume m = n. Accordingly set M = n s+2 2 . 
Then Γ̃_min is a singleton, and consequently the following conclusion holds for some constant C > 0:

E[∫ ‖T̃_{m,n}(x) − T_0(x)‖² dµ̃_m(x)] ≤ C r^{(n,n)}_{d,s}.

The same rates also hold for E|W_2²(µ̃_m, ν̃_n) − W_2²(µ, ν)|.

The proof of Theorem 2.7 is given in Appendix C.1. Once the empirical measures µ̂_{m,M} and ν̂_{n,M} have been obtained, an explicit computation of T̃_{m,n} as described above requires O(M³) = O(n^{3(s+2)/2}) steps using the Hungarian algorithm; see [73]. This highlights the statistical versus computational trade-off: in order to mitigate the curse of dimensionality in convergence rates by exploiting smoothness, the computational complexity gets progressively worse by polynomial factors in n. It should be mentioned that (approximate) algorithms faster than the Hungarian algorithm stated above can be found in [1, 36, 53], to name a few. Due to space constraints, we avoid a detailed discussion on this.

In the above construction, sampling from the smoothed kernel densities f̃µ(·) and f̃ν(·) is crucial. If we simply drew M bootstrap samples from the empirical distributions µ̂_m and ν̂_n, the rates of convergence would not improve over those observed in Theorem 2.2, no matter how large M is.

Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their constructive suggestions that greatly helped improve the quality of the paper. The third author is supported by NSF Grant DMS-2015376.
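The accept-reject step mentioned in the remark preceding Proposition 2.6 can be sketched as follows. This is a minimal Python sketch, not from the paper: f_hat is the (possibly sign-changing) kernel estimate, propose and q_density describe an arbitrary proposal density q, and env_const is an assumed envelope constant with max{f_hat(x), 0} ≤ env_const · q(x) for all x; the normalising integral in the denominator is never computed.

import numpy as np

def accept_reject(f_hat, propose, q_density, env_const, n_samples, rng=None):
    # Draws from the density proportional to max{f_hat(x), 0} without
    # normalising it: propose x ~ q, then accept with probability
    # max{f_hat(x), 0} / (env_const * q(x)).
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n_samples:
        x = propose(rng)                      # candidate drawn from q
        target = max(f_hat(x), 0.0)           # unnormalised target density
        if rng.uniform() * env_const * q_density(x) <= target:
            out.append(x)
    return np.array(out)

For instance, since |f̂_µ(x)| ≤ (m h_m^d)^{-1} Σ_i |K_d((X_i − x)/h_m)|, one valid proposal is the mixture of normalised |K_d| components centred at the data points, with env_const = ∫ |K_d(u)| du; this is our observation, not a prescription from the paper.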
1. What is the focus of the paper regarding statistical behavior? 2. What are the strengths of the proposed approach, particularly in terms of rate of convergence and plug-in estimators? 3. Do you have any concerns or suggestions for improving the paper, such as discussing sample procurement, selecting practical values for smoothness, or providing experimental results for the discretized plug-in estimator? 4. Are there any minor comments or typos that could be improved, such as explaining the intuition behind rHSIC or addressing the dropping of the squared 2-Wasserstein distance?
Summary Of The Paper Review
Summary Of The Paper This paper analyzes the statistical behavior of general plug-in estimators of OT maps defined by barycentric projections. The authors provide a thorough analysis of the rates of convergence for the transport cost and the transport map. They also consider kernel smoothed plug-in estimators and relate their rates of convergence to the smoothness of the densities, which alleviates the curse of dimensionality suffered by the plug-in estimator.

Review Strengths: The upper bound on the stability of the transport map defined via barycentric projections is new. The rate of convergence of the smoothed plug-in estimators is quite interesting since it reflects the smoothness of the densities. The paper has an extensive related work section clearly discussing the relationship of their work to previous ones. The paper is technically solid. The theoretical results are well explained and the intuitions are nicely conveyed. The authors also design a discretized version of the smoothed plug-in estimator which enjoys the same rate of convergence and can be computed in practice.

Major comments: In the construction of the discretized plug-in estimator on line 255, the procedure to obtain samples from the kernel density estimators is not discussed. This should also appear in the complexity analysis of computing T̃_{m,n}, especially when the ambient dimension is high. In Theorem 2.6, the number of samples M is set to be n^{(s+2)/2}. In practice, the smoothness s is unknown; it would be better to discuss how to select s in practice. An important selling point of this paper is, in my opinion, a discretized plug-in estimator which enjoys a good rate of convergence and can be computed in practice. It would be good to see some experimental results supporting this claim.

Minor comments: On line 227 the squared 2-Wasserstein distance is considered, while the square is dropped in Proposition 2.5. Section 3.2 feels rushed; it would be better to explain more of the intuition behind rHSIC.

Updates: The authors addressed most of my concerns in the response. While I do believe adding the experiments on the discretized plug-in estimator would be a good addition to the paper, I also think the current version is already a good paper. Hence, I am keeping my score.
NIPS
Title Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections Abstract Optimal transport maps between two probability distributions μ and ν on R have found extensive applications in both machine learning and statistics. In practice, these maps need to be estimated from data sampled according to μ and ν. Plugin estimators are perhaps most popular in estimating transport maps in the field of computational optimal transport. In this paper, we provide a comprehensive analysis of the rates of convergences for general plug-in estimators defined via barycentric projections. Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions and can be used to analyze general plug-in estimators. We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discretediscrete and semi-discrete estimators of optimal transport maps. We then use the same stability estimate to show that, under additional smoothness assumptions of Sobolev type or Besov type, kernel smoothed or wavelet based plug-in estimators respectively speed up the rates of convergence and significantly mitigate the curse of dimensionality suffered by the natural discrete-discrete/semi-discrete estimators. As a by-product of our analysis, we also obtain faster rates of convergence for plug-in estimators of W2(μ, ν), the Wasserstein distance between μ and ν, under the aforementioned smoothness assumptions, thereby complementing recent results in Chizat et al. (2020). Finally, we illustrate the applicability of our results in obtaining rates of convergence for Wasserstein barycenter between two probability distributions and obtaining asymptotic detection thresholds for some recent optimaltransport based tests of independence. 1 Introduction Given two random variables X ∼ µ and Y ∼ ν, where µ, ν are probability measures on Rd, d ≥ 1, the problem of finding a “nice" map T0(·) such that T0(X) ∼ ν has numerous applications in machine learning such as domain adaptation and data integration [34, 35, 38, 48, 61, 112], dimension reduction [12, 66, 90], generative models [60, 81, 88, 110], to name a few. Of particular interest is the case when T0(·) is obtained by minimizing a cost function, a line of work initiated by Gaspard Monge [97] in 1781 (see (1.1) below), in which case T0(·) is termed an optimal transport (OT) map and has applications in shape matching/transfer problems [29, 47, 107, 121], Bayesian statistics [46, 75, 80, 108], econometrics [15, 28, 45, 50, 54], nonparametric statistical inference [39– 41, 113, 114]; also see [111, 128, 129] for book-length treatments on the subject. In this paper, we will focus on the OT map obtained using the standard squared Euclidean cost function, i.e., T0 := argmin T :T#µ=ν E‖X − T (X)‖2, (1.1) where T#µ = ν means T (X) ∼ ν for X ∼ µ. The estimation of T0 has attracted a lot of interest in recent years due to its myriad applications (as stated above) and interesting geometrical properties (see [19, 56, 91] and Definition 1.1 below). In practice, the main hurdle in constructing estimators for T0 is that the explicit forms of the measures µ, ν are unknown; instead only random samples X1, . . . , Xm ∼ µ and Y1, . . . , Yn ∼ ν 35th Conference on Neural Information Processing Systems (NeurIPS 2021). are available. 
A natural strategy in this scenario is to estimate T0 using T̃m,n, where T̃m,n is computed as in (1.1) with µ and ν replaced by µ̃m and ν̃n which are empirical approximations of µ and ν based on X1, . . . , Xm and Y1, . . . , Yn respectively (see Definition 1.2). Such estimators are often called plug-in estimators and have been used extensively; see [7, 30, 67, 93, 94, 102, 116]. The main goal of this paper is to study the rates of convergence of general plug-in estimators of T0 under a unified framework. We show that when µ̃m and ν̃n are chosen as µ̂m and ν̂n respectively, where µ̂m and ν̂n are the standard empirical distributions supported on m and n atoms, i.e., µ̂m := 1 m m∑ i=1 δXi and ν̂n := 1 n n∑ j=1 δYj , (1.2) T̃m,n (appropriately defined using Definition 1.2) converges at a rate of m−2/d + n−2/d for d ≥ 4 in the sense of (1.8). This rate happens to be minimax optimal under minimal smoothness assumptions (see [72, Theorem 6]) but suffers from the curse of dimensionality. We next show that, if µ and ν are known to admit sufficiently smooth densities, it is possible to apply kernel or wavelet based smoothing techniques on µ̂m and ν̂n to obtain plug-in estimators that mitigate the aforementioned curse of dimensionality. Our next contribution pertains to the estimation of W 22 (µ, ν) (the squared Wasserstein distance), see (1.3) below, a quantity of independent interest in statistics and machine learning with applications in structured prediction [51, 89], image analysis [18, 59], nonparametric testing [16, 106], generative modeling [10, 96], etc. In this paper, we also obtain rates of convergence for plug-in estimators W 22 (µ̃m, ν̃n) of W 2 2 (µ, ν). We show that kernel smoothing µ̂m and ν̂n can be used to obtain plug-in estimators of W 22 (µ, ν) that mitigate the curse of dimensionality as opposed to a direct plug-in approach using µ̂m and ν̂n (as used in [30, Theorem 2]). This provides an answer to the open question of estimating W 22 (µ, ν) when µ, ν admit smooth densities laid out in [30]. 1.1 Background on optimal transport In this section, we present some basic concepts and results associated with the OT problem that will play a crucial role in the sequel. Let Pac(Rd) denote the set of all Lebesgue absolutely continuous probability measures on Rd andP2(Rd) be the set of probability measures with finite second moments. Then the 2-Wasserstein distance (squared) between µ, ν ∈ P2(Rd) is defined as: W 22 (µ, ν) := min π∈Π(µ,ν) ∫ ‖x− y‖2 dπ(x, y), (1.3) where Π(µ, ν) is the set of probability measures on Rd×Rd with marginals µ and ν. The optimization problem in (1.3) is often called the Kantorovich relaxation (see [76, 77]) of the optimization problem in (1.1). The existence of a minimizer in (1.3) follows from [129, Theorem 4.1]. Proposition 1.1 (Brenier-McCann polar factorization theorem, see [91, 128]). Suppose µ ∈ Pac(Rd). Then there exists a µ-a.e. (almost everywhere) unique function T0(·) : Rd → Rd, which is the gradient of a real-valued d-variate convex function, say ϕ0(·) : Rd → R, such that T0#µ = ν. Further, the distribution defined as π(A×B) = µ(A ∩ (T0)−1(B)) for all Borel sets A,B ⊆ Rd is the unique minimizer in (1.3) provided µ, ν ∈ P2(Rd). Definition 1.1 (OT map and potential function). The function T0 : Rd → Rd in Proposition 1.1 which satisfies T0#µ = ν will be called the OT map from µ to ν. A convex function ϕ0(·) in Proposition 1.1 satisfying∇ϕ0 = T0 will be termed an OT potential. 
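To fix ideas, the discrete Kantorovich problem (1.3) between two empirical measures is a finite linear program, and when m = n with uniform weights an optimal coupling can be taken to be an assignment. The following is a minimal Python sketch (not from the paper; the Gaussian samples are purely a toy choice) that computes such a coupling and the resulting plug-in estimate of W_2² with scipy.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))          # toy samples standing in for draws from mu
Y = rng.normal(size=(n, d)) + 1.0    # toy samples standing in for draws from nu

# Squared-Euclidean cost matrix between the atoms of the two empirical measures.
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

# With m = n and uniform weights, an optimal coupling in (1.3) is induced by
# an optimal assignment, so the problem reduces to linear assignment.
row, col = linear_sum_assignment(C)
W2_sq_plugin = C[row, col].mean()    # plug-in estimate of W_2^2(mu, nu)

For m ≠ n, or for non-uniform weights, the same coupling can instead be obtained from a standard linear-programming or network-flow solver.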
The next and final important ingredient is the alternate dual representation of (1.3) which gives: 1 2 W 22 (µ, ν) = 1 2 ∫ ‖x‖2 dµ(x) + 1 2 ∫ ‖y‖2 dν(y)−min f∈F Sµ,ν(f), where (1.4) Sµ,ν(f) = ∫ f dµ+ ∫ f∗ dν. (1.5) Here F denotes the space of convex functions on Rd which are also elements of L1(µ) and f∗(·) is the standard Legendre-Fenchel dual defined as: f∗(x) := sup y∈Rd [y>x− f(y)], for x ∈ dom(f). (1.6) 1.2 Estimating OT map via barycentric projection Recall the setting from the Introduction. Let µ̃m, ν̃n ∈ P2(Rd). Here µ̃m, ν̃n need not be absolutely continuous and can be very general. Intuitively, µ̃m and ν̃n can be viewed as some empirical approximation of µ and ν respectively. Example 1.2 (Simple choices of µ̃m and ν̃n). Let X1, . . . , Xm i.i.d.∼ µ and Y1, . . . , Yn i.i.d.∼ ν; in which case a natural choice would be to set µ̃m = µ̂m and ν̃n = ν̂n where µ̂m and ν̂n are the empirical distributions of X1, . . . , Xm and Y1, . . . , Yn respectively, as defined in (1.2). This is the standard choice adopted in the discrete-discrete Kantorovich relaxation; see [104, Section 2.3]. Another popular choice is µ̃m = µ̂m, ν̃n = ν or µ̃m = µ, ν̃n = ν̂n. This is the semi-discrete Kantorovich problem and is popular when one of the measures is fully specified; see [26, 55]. A natural way to estimate T0(·), as defined in (1.1), would be to approximate it using the OT map from µ̃m to ν̃n. However as µ̃m and ν̃n may not be elements of Pac(Rd), Proposition 1.1 does not apply and an OT map may not exist from µ̃m to ν̃n. Such is the case in Example 1.2 in the discrete-discrete case when m 6= n. To circumvent this issue, we leverage the notion of barycentric projections (see [3, Definition 5.4.2]) defined below: Definition 1.2 (Barycentric projection). Define the set Γ̃min := argmin π∈Π(µ̃m,ν̃n) ∫ ‖x− y‖2 dπ(x, y). The optimization problem above is the plug-in analog of the optimization problem on the right hand side of (1.3). Given any γ ∈ Γ̃min, define the barycentric projection of γ as the conditional mean of y given x under γ, i.e., T̃m,n(x) ≡ T̃ γm,n(x) := ∫ y y dγ(x, y)∫ y dγ(x, y) , for x ∈ supp (µ̃m) . (1.7) In general, Γ̃min need not be a singleton which is why we index the barycentric projection T̃ γm,n(·) by γ ∈ Γ̃min. Note that T̃ γm,n(·) need not be a transport map; however, if an OT map exists then it must be equal to T̃ γm,n(·) (µ̃m-a.e.). Our goal is to obtain stochastic upper bounds for sup γ∈Γ̃min ∫ ∥∥T̃ γm,n(x)− T0(x)∥∥2 dµ̃m(x). (1.8) In addition, our proof techniques also yield rates of convergence for∣∣W 22 (µ̃m, ν̃n)−W 22 (µ, ν)∣∣. (1.9) In this paper, we will focus on d ≥ 2. Due to the canonical ordering of R, the case d = 1 can be handled easily using the classical Hungarian embedding theorem [82]. 1.3 Contributions 1. We provide a new and flexible stability estimate Theorem 2.1 which yields a unified approach to obtaining rates of convergence for general plug-in estimators of the OT map T0(·). Unlike existing stability estimates, Theorem 2.1 holds for the barycentric projection (which is the same as the OT map when it exists) and does not require any smoothness assumptions on µ̃m, ν̃n or T̃ γm,n(·); also see Remark 2.1 for a comparison with the existing literature. 2. 
In Sections 2.1 and 2.2, we use Theorem 2.1 to bound (1.8) and (1.9): • In Section 2.1, we show that in both the discrete-discrete and semi-discrete Kantorovich relaxation problems (see Example 1.2), the rate of convergence of (1.8) is m−2/d + n−2/d for d ≥ 4 when T0 is assumed to be Lipschitz (see Theorem 2.2), which is the minimax rate (see [72, Theorem 6]). To the best of our knowledge, rates of convergence for these natural estimators weren’t previously established in the literature. • In Section 2.2 and Appendix A, we show that the curse of dimensionality in the above rates can be mitigated provided µ and ν admit (uniform) Sobolev smooth densities (see Section 2.2) or Besov smooth densities (see Appendix A). In Section 2.2, our plug-in estimator is obtained by choosing µ̃m (and ν̃n) as the convolution of µ̂m (and ν̂n) and a smooth kernel with an appropriate bandwidth. Under this choice, the rate of convergence in (1.8) is m−( s+2 d ∧ 1 2 ) + n−( s+2 d ∧ 1 2 ), where s denotes the degree of Sobolev smoothness (see Theorem 2.5). Clearly, if 2(s + 2) ≥ d, the rate of convergence becomes dimension-free and mitigates the curse of dimensionality. We also show the same rates of convergence mentioned above hold for (1.9) (see e.g., Proposition 2.6) which makes a strong case in favor of incorporating smoothness in the construction of plug-in estimators as was conjectured in [30]. In Appendix A, our plug-in estimator is obtained using natural wavelet based density estimators. The rate of convergence in (1.8) turns out to be n− 1+s d+2s where s denotes the degree of Besov smoothness (see Theorem A.1). Note that by choosing s large enough, the exponent in the rate can be made arbitrarily close to 1/2, thereby reducing the curse of dimensionality. 3. In Section 2.3, we use a discretization technique from [131] to construct discrete approximations to the smoothed µ̃m and ν̃n from the previous paragraph that in turn yield computable plug-in estimators for T0 (provided one can sample from µ̃m and ν̃n) that also achieve the same statistical guarantees as the smoothed plug-in estimator from Section 2.2 (see Theorem 2.7). However the number of atoms required in the discretizations and correspondingly the computational complexity increases with the degree of smoothness; this highlights a statistical and computational trade-off. 4. We provide implications of our results in popular applications of OT such as estimating the barycenter of two multivariate probability distributions (see Theorem B.1 in Appendix B.1) and in nonparametric independence testing (see Theorem B.3 in Appendix B.2). 1.4 Related work Many recent works have focused on obtaining consistent estimators of T0 using the plug-in principle, see [26, 55] (in the semi-discrete problem) and [41, 68, 132] (in the discrete-discrete problem). In [55], the authors studied the rate of convergence of the semi-discrete optimal transport map from ν (absolutely continuous) to µ̂m. This paper complements the aforementioned papers by studying the rates of convergence for general plug-in estimators in a unified fashion. In two other papers [9, Theorem 1.1] and [87, Section 4], the authors use a “Voronoi tessellation" approach to estimate T0, however the rates obtained in this paper, even in the absence of smoothness, are strictly better than those in [9, 87]. Perhaps the most closely related paper to ours would be [67]. 
In [67], the author uses variational techniques to arrive at stability estimates while we exploit the Lipschitz nature of the OT map (see Definition 1.1). Further the rates in this paper have exponents s+2d ∧ 1 2 which are strictly better than the exponents s+22(s+2)+d obtained in [67, Proposition 1] under the same smoothness assumptions (Sobolev type of order s, see Definition 2.4). In another line of work [72], the authors use theoretical wavelet based estimators (not of the plug-in type) of T0 to obtain nearly minimax optimal rates of convergence. However these estimators, by themselves, are not transport maps between two probability measures, which makes them harder to interpret. In contrast, our focus is on obtaining rates of convergence for plug-in estimators, which are transport maps between natural approximations of µ and ν. Such plug-in type strategies are a lot more popular in computational OT [7, 30, 67, 93, 94, 102, 116]. In terms of obtaining rates of convergence for (1.9), some attempts include [109, 116] where parametric rates are obtained when µ, ν are known to be finitely supported or are both Gaussian. In a related problem, bounds for W 22 (µ̂m, µ) were obtained in [6, 42, 49, 100, 123, 131]. Using these bounds, for m = n, it is easy to get a n−1/d rate of convergence for (1.9). This rate was recently improved to n−2/d in [30] under no smoothness assumptions. Our rates coincide with the n−2/d rate from [30] under no smoothness assumptions. But further, we show in this paper that the curse of dimensionality in the above rate can be mitigated by incorporating smoothness into the plug-in procedure. 2 Main results Recall ϕ0(·) from Definition 1.1. The following is our main result. Theorem 2.1 (Stability estimate). Suppose that µ, ν ∈ Pac(Rd) ∩ P2(Rd) and µ̃m, ν̃n ∈ P2(Rd). Assume that T0(·) (as defined in (1.1)) is L-Lipschitz (L > 0). Then, sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ≤ Lmax {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} + 2L ∫ ϕ∗0(y) d(ν̃n − ν†m)(y), (2.1) where ν†m := T0#µ̃m, ϕ ∗ 0(·) is defined as in (1.6), and with S·,·(·) defined as in (1.5), Ψµ̃m,ν̃n(·) := argminf∈F Sµ̃m,ν̃n(f), Ψµ̃m,ν†m(·) := argminf∈F Sµ̃m,ν†m(f), and D denotes the space of realvalued convex functions on Rd. The proof of Theorem 2.1 (see Appendix C.1) starts along the same lines as the proof of the curvature estimate in [56, Proposition 3.3]. This is followed by some careful manipulations of W 22 (·, ·) (as in (1.3)) and an application of the conditional version of Jensen’s inequality, see (C.3). The final step of the proof uses the dual representation in (1.4) with techniques similar to some intermediate steps in the proof of [92, Proposition 2] and [30, Lemma 3]. Remark 2.1 (Comparison with other stability estimates). Theorem 2.1 provides some important advantages to existing stability estimates in the literature. One of the earliest results in this direction can be found in [56, Proposition 3.3] but their bound involves a push-forward constraint which makes it hard to use for rate of convergence analysis. A bound similar to Theorem 2.1 is presented in [55, Lemma 5.1] but there the authors assume the existence of an OT map from µ̃m to ν̃n. Therefore, it does not apply to the discrete-discrete problem where µ̃m = µ̂m and ν̃n = ν̂n with m 6= n. Overcoming all these limitations is an important contribution of Theorem 2.1 and allows us to deal with popular plug-in estimators all in one go. 
The stability estimate in [72, Proposition 10] on the other hand requires µ̃m, ν̃n to be sufficiently smooth and hence it does not hold for discrete-discrete or semi-discrete plug-in estimators (see Example 1.2). Further their result requires all the measures involved to be compactly supported unlike the much milder requirements of Theorem 2.1. However, a shortcoming of Theorem 2.1 is that it is hard to obtain rates faster than n−1/2 using it directly, whereas [72] can obtain rates arbitrarily close to n−1. This is a price we pay for analyzing natural and popular plug-in estimators as opposed to the (more intractable) wavelet based estimators in [72]. Remark 2.2 (How to use Theorem 2.1 to obtain rates of convergence?). Note that the second term on the right hand side of (2.1), under appropriate moment assumptions, is Op(m−1/2 +n−1/2) (free of dimension) by a direct application of Markov’s inequality. We therefore focus on the first term. By (1.5), Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F . Further, by Caffarelli’s regularity theory [20–22], depending on the “smoothness" of µ̃m, ν̃n, it can be shown that there exists a further class of functions Fs (see Remarks 2.3 and 2.6) such that Ψ∗µ̃m,ν̃n(·), Ψ ∗ µ̃m,ν † m (·) ∈ F ∩ Fs. Thus, we can bound the first term on the right hand side of (2.1) as: max {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} ≤ sup f∈F∩Fs ∣∣∣∣ ∫ f d(ν̃n − ν†m)∣∣∣∣. (2.2) The right hand side of (2.2) can now be bounded using the corresponding Dudley’s entropy integral bounds using empirical process techniques; see [126, Lemmas 19.35-19.37]. To conclude, the two main steps in our strategy are identifying the family of functions Fs and computing Dudley’s entropy integral. Further, the more the smoothness of µ̃m, ν̃n, the smaller is the class of functions Fs and smaller the supremum on the right hand side of (2.2). This shows why better rates can be expected under smoothness assumptions. 2.1 Natural non-smooth plug-in estimators In this case, we discuss the rates of convergence for the discrete-discrete problem and the semi-discrete problem, where no smoothness is available on µ̃m and ν̃n. Theorem 2.2. Suppose that T0(·) is L-Lipschitz, ν is compactly supported and E exp(t‖X1‖α) <∞ for some t > 0, α > 0. (Discrete-discrete): Set µ̃m = µ̂m and ν̃n = ν̂n. Then the following holds: sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) = Op ( r (m,n) d × (log (1 + max{m,n})) td,α ) , (2.3) where r(m,n)d := m−1/2 + n−1/2 for d = 2, 3, m−1/2 log (1 +m) + n−1/2 log (1 + n) for d = 4, m−2/d + n−2/d for d ≥ 5, (2.4) and td,α := (4α)−1(4 + ((2α+ 2dα− d) ∨ 0)) for d < 4, (α−1 ∨ 7/2)− 1 for d = 4, 2(1 + d−1) for d > 4. The same bound holds for |W 22 (µ̃m, ν̃n)−W 22 (µ, ν)| without assuming T0(·) is Lipschitz. (Semi-discrete): Set µ̃m = µ, ν̃n = ν̂n or µ̃m = µ̂m, ν̃n = ν. Then the left hand side of (2.3) is Op(r (n,n) d × (log (1 + n))td,α) or Op(r (m,m) d × (log (1 +m))td,α) respectively. A stronger result can be proved if both µ and ν are compactly supported. Corollary 2.3. Consider the setting from Theorem 2.2 and assume further that µ is compactly supported. Then, with r(m,n)d defined as in (2.4), we have: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d , for some constant C > 0, in both the discrete-discrete and semi-discrete settings from Theorem 2.2. A brief description of the proof technique of Theorem 2.2 using Theorem 2.1 is provided in Remark 2.3 below, and the actual proof is presented in Appendix C.1. Remark 2.3 (Proof technique). 
The proof of Theorem 2.2 proceeds via the strategy outlined in Remark 2.2. We first show that Fs (see Remark 2.2) can be chosen as a certain sub-class of convex functions which are in L2(ν). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of Fs, recently proved in [83, Equation 26]. This strategy is slightly different from that used in the proof of [30, Theorem 2], where the authors assume that µ is compactly supported whereas we only assume the finiteness of E exp(t‖X1‖α) for some t > 0, α > 0. The compactness assumption on µ allows one to further restrict Fs to the class of Lipschitz functions. This additional restriction does not seem to be immediate without the compactness assumption. As discussed in Section 1.3, the exponents obtained in Theorem 2.2 are minimax optimal, up to multiplicative logarithmic factors, under bare minimal smoothness assumptions (see [72, Theorem 6]). To the best of our knowledge, rates for the discrete-discrete case for m 6= n and those for the semi-discrete case were not known previously in the literature. Our rates are also strictly better than those (for different estimators, based on space tessellations) obtained in [9, 87] and require less stringent assumptions than those in [30]. In the next section, we show how smoothness assumptions can be leveraged to mitigate the curse of dimensionality in Theorem 2.2. 2.2 Smooth kernel based plug-in estimator: mitigating the curse of dimensionality In this section, we focus on kernel based density estimators for the probability densities associated with µ and ν (see [57, 58, 99, 103, 115]). We will show, using Theorem 2.1, that the corresponding estimators of T0(·) achieve (near) dimension-free rates under sufficient smoothness assumptions. We first introduce the Sobolev class of functions which we will exploit in this subsection to construct estimators that achieve rates of convergence which mitigate the curse of dimensionality under sufficient smoothness. Definition 2.4 (Uniform Sobolev class of functions). Let Ω ⊆ Rd and f(·) be uniformly continuous on Ω and admits uniformly continuous derivatives up to order s on Ω for some s ∈ N. For any m := (m1, . . . ,md) ∈ Nd, let ∂mf := ∂ ∂m1x1 . . . ∂ ∂mdxd f, |m| := d∑ i=1 mi. For any k ≤ s, we further define, ‖f‖Ck(Ω) := ∑ |m|≤k ‖∂mf‖L∞(Ω). The space Cs(Ω) is defined as the set of functions f(·) for which ‖f‖Ck(Ω) <∞ for all k ≤ s. For this subsection, assume that µ and ν admit Sobolev smooth densities fµ(·) and fν(·) in the uniform norm (see Definition 2.4 above). Given Ω ⊆ Rd and s ∈ N, let Cs(Ω) denote the set of Sobolev smooth functions on Ω of order s. Assumption (A1) (Regularity of the densities). Suppose that 1. fµ and fν are supported on compact and convex subsets of Rd, say X and Y respectively. 2. There exists s,M > 0 such that fµ(·) ∈ Cs(X ;M) and fν(·) ∈ Cs(Y;M) where Cs(X ;M) is the space of real valued functions supported on X such that for all f(·) ∈ Cs(X ;M), we have M−1 ≤ f(x) ≤ M for all x ∈ X and ‖f‖Cs(X ) ≤ M . Here ‖·‖Cs(X ) is the standard uniform Sobolev norm as defined in Definition 2.4. The space Cs(Y;M) is defined analogously. We now define our estimators for fµ(·) and fν(·) using the standard kernel density estimation technique (see [125, Section 1.2]). Set f̂µ(x) := 1 mhdm m∑ i=1 Kd ( Xi − x hm ) , (2.5) for some bandwidth parameter hm > 0 and d-variate kernel Kd(·). 
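As a concrete illustration of (2.5), the following Python sketch (ours, not from the paper) evaluates a product-kernel density estimate at a few query points. A Gaussian univariate kernel is used only for readability; it is a second-order kernel, whereas the theory below requires a higher-order kernel satisfying the assumptions spelled out next.

import numpy as np

def kde_product(X, queries, h, k1):
    # Evaluates (2.5): f_hat(x) = (1 / (m h^d)) * sum_i K_d((X_i - x) / h),
    # where K_d is the d-fold product of the univariate kernel k1.
    m, d = X.shape
    U = (X[:, None, :] - queries[None, :, :]) / h   # shape (m, n_queries, d)
    Kd = np.prod(k1(U), axis=-1)                    # product kernel, shape (m, n_queries)
    return Kd.sum(axis=0) / (m * h ** d)

gauss = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # order-2 kernel, illustration only

rng = np.random.default_rng(0)
m, d, s = 500, 2, 1
X = rng.normal(size=(m, d))
h = m ** (-1.0 / (d + 2 * s)) / np.log(m)   # bandwidth h_m = m^{-1/(d+2s)} / log m
queries = rng.uniform(-1.0, 1.0, size=(10, d))
f_hat = kde_product(X, queries, h, gauss)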
We assume that Kd(·) is the dfold product of univariate kernels, i.e., there exists a kernelK(·) such that for u = (u1, . . . , ud) ∈ Rd, Kd(u) = ∏d i=1K(ui). We define f̂ν(·) similarly with the same univariate kernel and bandwidth. Assumption (A2) (Regularity of the kernel). Assume that K(·) is a symmetric, bounded, s+ 1 times differentiable kernel on Rd with all s+ 1 derivatives bounded and integrable. Further, suppose that K(·) is of order 2s+ 2, i.e.,∫ ujK(u) du = 1(j = 0), for j = {0, 1, 2, . . . , 2s+1}, and ∫ |u|2s+2|K(u)| du <∞. The above assumptions on K(·) are standard for estimating smooth densities and their derivatives of different orders in the kernel density estimation literature; see e.g. [4, 57, 58, 69, 125]. There are several natural ways to construct kernels satisfying Assumption (A2), see [125, Section 1.2.2]; an example is also provided in Example 2.4 below. Example 2.4 (Example of a kernel satisfying Assumption (A2)). Let ψm(·) be the m-th Hermite polynomial on R (see [84]). Then the kernel function defined as K(u) := 2s+2∑ m=0 ψm(0)ψm(u) exp(−u2/2) satisfies Assumption (A2). It is evident from Assumption (A2) that K(·) may take some negative values, in which case, f̂µ(·) (respectively f̂ν(·)) may not be a probability density. Consequently the barycentric projection (see Definition 1.2) between f̂µ(·) and f̂ν(·) is not well-defined. We get around this by projecting f̂µ(·) and f̂ν(·) on an appropriate space of “smooth" probability densities (see (2.6)), via an integral probability metric (see Definition 2.5 below; also see [98, 105, 117] for examples, computational procedures and applications of such metrics). Definition 2.5 (Integral probability metric). Given a classH of bounded functions on Rd and two probability densities g1(·) and g2(·) on Rd, the integral probability metric/distance between g1(·) and g2(·) with respect toH is defined as dIP(g1, g2;H) := sup ψ(·)∈H ∣∣∣∣ ∫ ψ(x)(g1(x)− g2(x)) dx∣∣∣∣. Sufficient conditions onH for dIP(·, ·;H) to be a metric on the space of probability measures (not on the space of probability densities as they can be altered on set of Lebesgue measure 0 without altering the underlying probability measures) on Rd have been discussed in [98]. Observe that the measure dIP(g1, g2;H) is well defined even when g1(·) and g2(·) are not probability densities. In Theorem 2.5 below, we useH = Cs+2(X ,M ′). Note that any function in Cs+2(X ,M ′) can be extended to a function in Cs+2(Rd;M ′) (see [72, Theorem 23] and [124, Theorem 1.105]). The fact that this choice of F results in a metric follows from the argument in [98, Page 8]. We are now in a position to describe the projection estimators for fµ(·) and fν(·), and the rates achieved by the corresponding plug-in estimator. Theorem 2.5. Assume that T0(·) is L-Lipschitz and fµ, fν are Lebesgue densities satisfying Assumption (A1). Also suppose that K(·) satisfies Assumption (A2). Define hm := m− 1 d+2s logm , hn := n − 1d+2s log n and T := ∫ |Kd(u)| du+ 1. Fix any M ′ > 0. Consider any probability density f̃M ′ µ (·) ∈ Cs(X ;TM) (where M is defined as in Assumption (A1)) which satisfies dIP ( f̃M ′ µ , f̂µ;C s+2(X ;M ′) ) ≤ inf f(·)∈Cs(X ;TM) f≥0, ∫ f=1 dIP ( f̂µ, f ;C s+2(X ;M ′) ) + r (m,n) d,s (2.6) where r(m,n)d,s is defined as in (2.7) and dIP(·, ·;Cs+2(X ;M ′)) is the integral probability metric defined in Definition 2.5. We define f̃M ′ ν (·) analogously as in (2.6) with X , f̂µ(·) replaced by Y , f̂ν(·). Then the following conclusions hold. 1. SetM ′ := 8(1+TM). 
If µ̃m and ν̃n are the probability measures corresponding to the probability densities f̃M ′ µ (·) and f̃M ′ ν (·), then the following holds for some constant C > 0: E [ sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ] ≤ Cr(m,n)d,s , where r(m,n)d,s := m−1/2 + n−1/2 for d < 2(s+ 2), m−1/2 (log (1 +m)) d + n−1/2 (log (1 + n)) d for d = 2(s+ 2), m− s+2 d + n− s+2 d for d ≥ 2(s+ 2). (2.7) The same bound also holds for E|W 22 (µ̃m, ν̃n)−W 22 (µ, ν)|. 2. f̂µ(·) satisfies lim n→∞ max { P ( ‖f̂µ‖Cs(X̃ ) ≥ TM ) ,P ( sup x∈X̃ |f̂µ(x)− fµ(x)| ≥ ε )} = 0 (2.8) for any ε > 0, where X̃ is any compact subset of X o. The same conclusion holds for f̂ν(·) with X replaced by Y . In Theorem 2.5, we have shown that the plug-in estimator for T0(·) using f̃M ′ µ (·) and f̃M ′ ν (·) (with M ′ = 8(1 + TM)) achieves rates that mitigate the curse of dimensionality under sufficient smoothness. In fact, f̃M ′ µ (·) can be viewed as an approximate minimizer of dIP(f̂µ, ·;Cs+2(X ,M ′)) over an appropriate class of Sobolev smooth probability densities. This is carried out because f̂µ(·) by itself may not be a probability density. Further note that µ̃m, ν̃n as specified in Theorem 2.5 are both smooth, and consequently Γ̃min is a singleton and the supremum in Theorem 2.5 can be dropped. A brief description of the proof technique for Theorem 2.5 is presented in Remark 2.6 below and the actual proof is given in Appendix C.1. Remark 2.6 (Proof technique). The proof of Theorem 2.5 proceeds along the same lines as Remark 2.3. We first show that Fs (see Remark 2.2) can be chosen as a certain subset of Cs+2(Y◦). We then use Dudley’s entropy integral type bounds which in turn requires the bracketing entropy [126, Page 270] of the class of compactly supported Sobolev smooth functions which can be found in [127, Corollary 2.7.2]. We now explain the implications of both the parts of Theorem 2.5 in the following two remarks. Remark 2.7 (Mitigating the curse of dimensionality). Theorem 2.5 shows that, under enough smoothness, i.e., when 2(s+ 2) > d, both the upper bounds for (1.8) and (1.9) are Op(n−1/2). This shows that, for large dimensions, provided µ and ν admit smooth enough densities, it is possible to construct plug-in estimators that mitigate the curse of dimensionality. Note that a similar estimator was analyzed in [67, Proposition 1] when m = n. However, the rates obtained in Theorem 2.5 are strictly better than those in [67, Proposition 1]. For m = n, when d < 2(s + 2), [67] obtained a rate of n− s+2 2(s+2)+d which is worse than n−1/2 obtained in Theorem 2.5. For the other regimes, [67] obtains rates (up to log factors) of n−1/4 and n− 1 (s+2)(d+2(s+2)) which are both worse than the respective rates of n−1/2 and n− s+2 d in Theorem 2.5. Remark 2.8 (Computational aspects of Theorem 2.5). Note that f̃M ′ µ (·) (with M ′ = 8(1 + TM)) is hard to compute whereas f̂µ(·) is computable easily in linear time. Note that if f̂µ(·) itself were a probability density in Cs(X ;TM), then we would have f̂µ = f̃M ′ µ . While Theorem 2.5 does not establish that, it does come close in part 2, from which we can easily derive the following: lim n→∞ P(f̂µ(·) /∈ Cs(X̃ ;TM)) = 0. The above shows that f̂µ(·) is indeed bounded below by (TM)−1 on X̃ (any compact subset of the interior of X ), and additionally belongs to Cs(X̃ ;TM) with probability converging to 1. 
This leads us to conjecture that the natural density version of f̂µ(·), i.e., max{f̂µ(·), 0} / ∫ max{f̂µ(x), 0} dx, should serve as a good proxy for f̃^{M'}_µ(·) and lead to rates of convergence that mitigate the curse of dimensionality. From a computational perspective, the density specified above is easy to simulate from using an accept-reject algorithm without computing the integral in the denominator (see [101, Algorithm 4.3]). However, our current proof technique does not provide rates of convergence for the above density estimator based on f̂µ(·).

Another important implication of Theorem 2.5 is the bound obtained on |W_2(µ̃_m, ν̃_n) − W_2(µ, ν)| when µ ≠ ν. We first present the result and then describe the implication.

Proposition 2.6. Consider the setting in Theorem 2.5. Then, provided µ ≠ ν, the following holds: |W_2(µ̃_m, ν̃_n) − W_2(µ, ν)| = O_p(r^{(m,n)}_{d,s}).

Proposition 2.6 (see Appendix C.1 for a proof) shows an interesting distinction between the µ ≠ ν case and the µ = ν case. For µ = ν, the best possible exponent is n^{-(1+s)/(2s+d)} for d ≥ 3 (see [131, Theorem 3], where the result was established under more general Besov smoothness assumptions). On the contrary, when µ ≠ ν, Proposition 2.6 establishes a rate of n^{-(s+2)/d} for the Wasserstein distance, which is strictly better than the minimax achievable rate mentioned above when µ = ν. This observation complements [30, Corollary 1], where the authors make a similar remark for the special case of s = 0.

2.3 Discretized plug-in estimator under smoothness assumptions

In Section 2.2, we discussed how smoothness can be incorporated into the plug-in procedure to get faster rates of convergence. Such plug-in estimators are popular in the computational OT literature (see [7, 8, 25, 36]). However, even after f̃µ(·) ≡ f̃^{M'}_µ(·) and f̃ν(·) ≡ f̃^{M'}_ν(·) are calculated, T̃^γ_{m,n} as in Theorem 2.5 cannot be computed explicitly from data if f̃µ(·) and f̃ν(·) are continuous densities. This is in contrast to T̃^γ_{m,n} from Theorem 2.2 in the discrete-discrete case, which is explicitly computable using a standard linear program but achieves worse rates of convergence. This is not unexpected: by the no free lunch principle, better statistical accuracy is naturally accompanied by heavier computational challenges. Therefore, our goal here is to construct estimators, under smoothness assumptions as in Section 2.2, which are computable in polynomial time (with complexity increasing with smoothness) provided f̃µ(·) and f̃ν(·) can be sampled from, and which also attain rates that mitigate the curse of dimensionality.

Construction: We illustrate the discretized estimator using the kernel based estimator from Section 2.2. Similar results also hold for the wavelet based estimator from Appendix A. Recall the kernel density estimators f̃µ(·) and f̃ν(·) (see (2.6)). Sample M ≥ 1 random points from both f̃µ(·) and f̃ν(·). Let µ̂_{m,M} and ν̂_{n,M} denote the standard empirical measures on the M points sampled from f̃µ(·) and f̃ν(·) respectively. Finally, construct T̃_{m,n} ≡ T̃^γ_{m,n} as in Definition 1.2 with µ̃_m = µ̂_{m,M} and ν̃_n = ν̂_{n,M}. It should be pointed out that a similar construction was also used in [131, Section 6] for estimating probability densities under the Wasserstein loss. Based on this construction, the main result of this section is as follows:

Theorem 2.7. Consider the setting in Theorem 2.5 and the same construction of T̃^γ_{m,n} as above. For simplicity, let us also assume m = n; accordingly, set M = n^{(s+2)/2}.
Then Γ̃_min is a singleton, and consequently the following conclusion holds for some constant C > 0:

E[∫ ‖T̃_{m,n}(x) − T_0(x)‖² dµ̃_m(x)] ≤ C r^{(n,n)}_{d,s}.

The same rates also hold for E|W_2²(µ̃_m, ν̃_n) − W_2²(µ, ν)|.

The proof of Theorem 2.7 is given in Appendix C.1. Once the empirical measures µ̂_{m,M} and ν̂_{n,M} have been obtained, an explicit computation of T̃_{m,n} as described above requires O(M³) = O(n^{3(s+2)/2}) steps using the Hungarian algorithm; see [73]. This highlights the statistical versus computational trade-off: in order to mitigate the curse of dimensionality in convergence rates by exploiting smoothness, the computational complexity gets progressively worse by polynomial factors in n. It should be mentioned that (approximate) algorithms faster than the Hungarian algorithm stated above can be found in [1, 36, 53], to name a few. Due to space constraints, we avoid a detailed discussion on this.

In the above construction, sampling from the smoothed kernel densities f̃µ(·) and f̃ν(·) is crucial. If we simply drew M bootstrap samples from the empirical distributions µ̂_m and ν̂_n, the rates of convergence would not improve over those observed in Theorem 2.2, no matter how large M is.

Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their constructive suggestions that greatly helped improve the quality of the paper. The third author is supported by NSF Grant DMS-2015376.
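Returning to the construction in Section 2.3, the following is a minimal Python sketch (ours, not from the paper) of the discretized plug-in estimator; sample_mu_tilde and sample_nu_tilde are hypothetical callables standing in for whatever routine draws from the smoothed densities f̃µ and f̃ν and returns an (M, d) array.

import numpy as np
from scipy.optimize import linear_sum_assignment

def discretized_plugin(sample_mu_tilde, sample_nu_tilde, M, rng=None):
    # Sketch of the Section 2.3 construction: draw M points from each smoothed
    # density, solve the resulting M x M assignment problem (Hungarian
    # algorithm, O(M^3)), and read off the estimator.  For an assignment
    # coupling, the barycentric projection (1.7) at each sampled source point
    # is simply its matched target point.
    rng = np.random.default_rng(rng)
    Xs = sample_mu_tilde(M, rng)               # hypothetical sampler for f_tilde_mu
    Ys = sample_nu_tilde(M, rng)               # hypothetical sampler for f_tilde_nu
    C = ((Xs[:, None, :] - Ys[None, :, :]) ** 2).sum(axis=-1)
    row, col = linear_sum_assignment(C)
    T_hat = Ys[col]                            # estimated transport map at each Xs[i]
    W2_sq_hat = C[row, col].mean()             # plug-in estimate of W_2^2
    return Xs, T_hat, W2_sq_hat

With m = n and M = n^{(s+2)/2} as in Theorem 2.7, the assignment step dominates the cost, matching the O(n^{3(s+2)/2}) complexity discussed above.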
1. What is the focus of the paper in terms of optical flow dataset generation? 2. What are the strengths of the proposed approach in differentiable data generation? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 5. What is the contribution of the paper and the significance of the proposed modules?
Summary Of The Paper Review
Summary Of The Paper In the standard OT problem, the marginal distributions of the data are unknown. The current work considers plug-in estimation using barycentric projections and derives rates of estimation. The idea is to plug estimates of the marginal distributions of the data into the OT problem. The conditional mean of y given x is estimated and the rate of estimation is derived. Various plug-in estimators of the densities are considered, such as the empirical CDF and the kernel density estimator.

Review The work is new and the results are strong and important. The paper is clearly written and of good quality.
NIPS
Title Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections Abstract Optimal transport maps between two probability distributions μ and ν on R have found extensive applications in both machine learning and statistics. In practice, these maps need to be estimated from data sampled according to μ and ν. Plugin estimators are perhaps most popular in estimating transport maps in the field of computational optimal transport. In this paper, we provide a comprehensive analysis of the rates of convergences for general plug-in estimators defined via barycentric projections. Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions and can be used to analyze general plug-in estimators. We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discretediscrete and semi-discrete estimators of optimal transport maps. We then use the same stability estimate to show that, under additional smoothness assumptions of Sobolev type or Besov type, kernel smoothed or wavelet based plug-in estimators respectively speed up the rates of convergence and significantly mitigate the curse of dimensionality suffered by the natural discrete-discrete/semi-discrete estimators. As a by-product of our analysis, we also obtain faster rates of convergence for plug-in estimators of W2(μ, ν), the Wasserstein distance between μ and ν, under the aforementioned smoothness assumptions, thereby complementing recent results in Chizat et al. (2020). Finally, we illustrate the applicability of our results in obtaining rates of convergence for Wasserstein barycenter between two probability distributions and obtaining asymptotic detection thresholds for some recent optimaltransport based tests of independence. 1 Introduction Given two random variables X ∼ µ and Y ∼ ν, where µ, ν are probability measures on Rd, d ≥ 1, the problem of finding a “nice" map T0(·) such that T0(X) ∼ ν has numerous applications in machine learning such as domain adaptation and data integration [34, 35, 38, 48, 61, 112], dimension reduction [12, 66, 90], generative models [60, 81, 88, 110], to name a few. Of particular interest is the case when T0(·) is obtained by minimizing a cost function, a line of work initiated by Gaspard Monge [97] in 1781 (see (1.1) below), in which case T0(·) is termed an optimal transport (OT) map and has applications in shape matching/transfer problems [29, 47, 107, 121], Bayesian statistics [46, 75, 80, 108], econometrics [15, 28, 45, 50, 54], nonparametric statistical inference [39– 41, 113, 114]; also see [111, 128, 129] for book-length treatments on the subject. In this paper, we will focus on the OT map obtained using the standard squared Euclidean cost function, i.e., T0 := argmin T :T#µ=ν E‖X − T (X)‖2, (1.1) where T#µ = ν means T (X) ∼ ν for X ∼ µ. The estimation of T0 has attracted a lot of interest in recent years due to its myriad applications (as stated above) and interesting geometrical properties (see [19, 56, 91] and Definition 1.1 below). In practice, the main hurdle in constructing estimators for T0 is that the explicit forms of the measures µ, ν are unknown; instead only random samples X1, . . . , Xm ∼ µ and Y1, . . . , Yn ∼ ν 35th Conference on Neural Information Processing Systems (NeurIPS 2021). are available. 
A natural strategy in this scenario is to estimate T0 using T̃m,n, where T̃m,n is computed as in (1.1) with µ and ν replaced by µ̃m and ν̃n which are empirical approximations of µ and ν based on X1, . . . , Xm and Y1, . . . , Yn respectively (see Definition 1.2). Such estimators are often called plug-in estimators and have been used extensively; see [7, 30, 67, 93, 94, 102, 116]. The main goal of this paper is to study the rates of convergence of general plug-in estimators of T0 under a unified framework. We show that when µ̃m and ν̃n are chosen as µ̂m and ν̂n respectively, where µ̂m and ν̂n are the standard empirical distributions supported on m and n atoms, i.e., µ̂m := 1 m m∑ i=1 δXi and ν̂n := 1 n n∑ j=1 δYj , (1.2) T̃m,n (appropriately defined using Definition 1.2) converges at a rate of m−2/d + n−2/d for d ≥ 4 in the sense of (1.8). This rate happens to be minimax optimal under minimal smoothness assumptions (see [72, Theorem 6]) but suffers from the curse of dimensionality. We next show that, if µ and ν are known to admit sufficiently smooth densities, it is possible to apply kernel or wavelet based smoothing techniques on µ̂m and ν̂n to obtain plug-in estimators that mitigate the aforementioned curse of dimensionality. Our next contribution pertains to the estimation of W 22 (µ, ν) (the squared Wasserstein distance), see (1.3) below, a quantity of independent interest in statistics and machine learning with applications in structured prediction [51, 89], image analysis [18, 59], nonparametric testing [16, 106], generative modeling [10, 96], etc. In this paper, we also obtain rates of convergence for plug-in estimators W 22 (µ̃m, ν̃n) of W 2 2 (µ, ν). We show that kernel smoothing µ̂m and ν̂n can be used to obtain plug-in estimators of W 22 (µ, ν) that mitigate the curse of dimensionality as opposed to a direct plug-in approach using µ̂m and ν̂n (as used in [30, Theorem 2]). This provides an answer to the open question of estimating W 22 (µ, ν) when µ, ν admit smooth densities laid out in [30]. 1.1 Background on optimal transport In this section, we present some basic concepts and results associated with the OT problem that will play a crucial role in the sequel. Let Pac(Rd) denote the set of all Lebesgue absolutely continuous probability measures on Rd andP2(Rd) be the set of probability measures with finite second moments. Then the 2-Wasserstein distance (squared) between µ, ν ∈ P2(Rd) is defined as: W 22 (µ, ν) := min π∈Π(µ,ν) ∫ ‖x− y‖2 dπ(x, y), (1.3) where Π(µ, ν) is the set of probability measures on Rd×Rd with marginals µ and ν. The optimization problem in (1.3) is often called the Kantorovich relaxation (see [76, 77]) of the optimization problem in (1.1). The existence of a minimizer in (1.3) follows from [129, Theorem 4.1]. Proposition 1.1 (Brenier-McCann polar factorization theorem, see [91, 128]). Suppose µ ∈ Pac(Rd). Then there exists a µ-a.e. (almost everywhere) unique function T0(·) : Rd → Rd, which is the gradient of a real-valued d-variate convex function, say ϕ0(·) : Rd → R, such that T0#µ = ν. Further, the distribution defined as π(A×B) = µ(A ∩ (T0)−1(B)) for all Borel sets A,B ⊆ Rd is the unique minimizer in (1.3) provided µ, ν ∈ P2(Rd). Definition 1.1 (OT map and potential function). The function T0 : Rd → Rd in Proposition 1.1 which satisfies T0#µ = ν will be called the OT map from µ to ν. A convex function ϕ0(·) in Proposition 1.1 satisfying∇ϕ0 = T0 will be termed an OT potential. 
The next and final important ingredient is the alternate dual representation of (1.3) which gives: 1 2 W 22 (µ, ν) = 1 2 ∫ ‖x‖2 dµ(x) + 1 2 ∫ ‖y‖2 dν(y)−min f∈F Sµ,ν(f), where (1.4) Sµ,ν(f) = ∫ f dµ+ ∫ f∗ dν. (1.5) Here F denotes the space of convex functions on Rd which are also elements of L1(µ) and f∗(·) is the standard Legendre-Fenchel dual defined as: f∗(x) := sup y∈Rd [y>x− f(y)], for x ∈ dom(f). (1.6) 1.2 Estimating OT map via barycentric projection Recall the setting from the Introduction. Let µ̃m, ν̃n ∈ P2(Rd). Here µ̃m, ν̃n need not be absolutely continuous and can be very general. Intuitively, µ̃m and ν̃n can be viewed as some empirical approximation of µ and ν respectively. Example 1.2 (Simple choices of µ̃m and ν̃n). Let X1, . . . , Xm i.i.d.∼ µ and Y1, . . . , Yn i.i.d.∼ ν; in which case a natural choice would be to set µ̃m = µ̂m and ν̃n = ν̂n where µ̂m and ν̂n are the empirical distributions of X1, . . . , Xm and Y1, . . . , Yn respectively, as defined in (1.2). This is the standard choice adopted in the discrete-discrete Kantorovich relaxation; see [104, Section 2.3]. Another popular choice is µ̃m = µ̂m, ν̃n = ν or µ̃m = µ, ν̃n = ν̂n. This is the semi-discrete Kantorovich problem and is popular when one of the measures is fully specified; see [26, 55]. A natural way to estimate T0(·), as defined in (1.1), would be to approximate it using the OT map from µ̃m to ν̃n. However as µ̃m and ν̃n may not be elements of Pac(Rd), Proposition 1.1 does not apply and an OT map may not exist from µ̃m to ν̃n. Such is the case in Example 1.2 in the discrete-discrete case when m 6= n. To circumvent this issue, we leverage the notion of barycentric projections (see [3, Definition 5.4.2]) defined below: Definition 1.2 (Barycentric projection). Define the set Γ̃min := argmin π∈Π(µ̃m,ν̃n) ∫ ‖x− y‖2 dπ(x, y). The optimization problem above is the plug-in analog of the optimization problem on the right hand side of (1.3). Given any γ ∈ Γ̃min, define the barycentric projection of γ as the conditional mean of y given x under γ, i.e., T̃m,n(x) ≡ T̃ γm,n(x) := ∫ y y dγ(x, y)∫ y dγ(x, y) , for x ∈ supp (µ̃m) . (1.7) In general, Γ̃min need not be a singleton which is why we index the barycentric projection T̃ γm,n(·) by γ ∈ Γ̃min. Note that T̃ γm,n(·) need not be a transport map; however, if an OT map exists then it must be equal to T̃ γm,n(·) (µ̃m-a.e.). Our goal is to obtain stochastic upper bounds for sup γ∈Γ̃min ∫ ∥∥T̃ γm,n(x)− T0(x)∥∥2 dµ̃m(x). (1.8) In addition, our proof techniques also yield rates of convergence for∣∣W 22 (µ̃m, ν̃n)−W 22 (µ, ν)∣∣. (1.9) In this paper, we will focus on d ≥ 2. Due to the canonical ordering of R, the case d = 1 can be handled easily using the classical Hungarian embedding theorem [82]. 1.3 Contributions 1. We provide a new and flexible stability estimate Theorem 2.1 which yields a unified approach to obtaining rates of convergence for general plug-in estimators of the OT map T0(·). Unlike existing stability estimates, Theorem 2.1 holds for the barycentric projection (which is the same as the OT map when it exists) and does not require any smoothness assumptions on µ̃m, ν̃n or T̃ γm,n(·); also see Remark 2.1 for a comparison with the existing literature. 2. 
In Sections 2.1 and 2.2, we use Theorem 2.1 to bound (1.8) and (1.9): • In Section 2.1, we show that in both the discrete-discrete and semi-discrete Kantorovich relaxation problems (see Example 1.2), the rate of convergence of (1.8) is m−2/d + n−2/d for d ≥ 4 when T0 is assumed to be Lipschitz (see Theorem 2.2), which is the minimax rate (see [72, Theorem 6]). To the best of our knowledge, rates of convergence for these natural estimators weren’t previously established in the literature. • In Section 2.2 and Appendix A, we show that the curse of dimensionality in the above rates can be mitigated provided µ and ν admit (uniform) Sobolev smooth densities (see Section 2.2) or Besov smooth densities (see Appendix A). In Section 2.2, our plug-in estimator is obtained by choosing µ̃m (and ν̃n) as the convolution of µ̂m (and ν̂n) and a smooth kernel with an appropriate bandwidth. Under this choice, the rate of convergence in (1.8) is m−( s+2 d ∧ 1 2 ) + n−( s+2 d ∧ 1 2 ), where s denotes the degree of Sobolev smoothness (see Theorem 2.5). Clearly, if 2(s + 2) ≥ d, the rate of convergence becomes dimension-free and mitigates the curse of dimensionality. We also show the same rates of convergence mentioned above hold for (1.9) (see e.g., Proposition 2.6) which makes a strong case in favor of incorporating smoothness in the construction of plug-in estimators as was conjectured in [30]. In Appendix A, our plug-in estimator is obtained using natural wavelet based density estimators. The rate of convergence in (1.8) turns out to be n− 1+s d+2s where s denotes the degree of Besov smoothness (see Theorem A.1). Note that by choosing s large enough, the exponent in the rate can be made arbitrarily close to 1/2, thereby reducing the curse of dimensionality. 3. In Section 2.3, we use a discretization technique from [131] to construct discrete approximations to the smoothed µ̃m and ν̃n from the previous paragraph that in turn yield computable plug-in estimators for T0 (provided one can sample from µ̃m and ν̃n) that also achieve the same statistical guarantees as the smoothed plug-in estimator from Section 2.2 (see Theorem 2.7). However the number of atoms required in the discretizations and correspondingly the computational complexity increases with the degree of smoothness; this highlights a statistical and computational trade-off. 4. We provide implications of our results in popular applications of OT such as estimating the barycenter of two multivariate probability distributions (see Theorem B.1 in Appendix B.1) and in nonparametric independence testing (see Theorem B.3 in Appendix B.2). 1.4 Related work Many recent works have focused on obtaining consistent estimators of T0 using the plug-in principle, see [26, 55] (in the semi-discrete problem) and [41, 68, 132] (in the discrete-discrete problem). In [55], the authors studied the rate of convergence of the semi-discrete optimal transport map from ν (absolutely continuous) to µ̂m. This paper complements the aforementioned papers by studying the rates of convergence for general plug-in estimators in a unified fashion. In two other papers [9, Theorem 1.1] and [87, Section 4], the authors use a “Voronoi tessellation" approach to estimate T0, however the rates obtained in this paper, even in the absence of smoothness, are strictly better than those in [9, 87]. Perhaps the most closely related paper to ours would be [67]. 
In [67], the author uses variational techniques to arrive at stability estimates while we exploit the Lipschitz nature of the OT map (see Definition 1.1). Further the rates in this paper have exponents s+2d ∧ 1 2 which are strictly better than the exponents s+22(s+2)+d obtained in [67, Proposition 1] under the same smoothness assumptions (Sobolev type of order s, see Definition 2.4). In another line of work [72], the authors use theoretical wavelet based estimators (not of the plug-in type) of T0 to obtain nearly minimax optimal rates of convergence. However these estimators, by themselves, are not transport maps between two probability measures, which makes them harder to interpret. In contrast, our focus is on obtaining rates of convergence for plug-in estimators, which are transport maps between natural approximations of µ and ν. Such plug-in type strategies are a lot more popular in computational OT [7, 30, 67, 93, 94, 102, 116]. In terms of obtaining rates of convergence for (1.9), some attempts include [109, 116] where parametric rates are obtained when µ, ν are known to be finitely supported or are both Gaussian. In a related problem, bounds for W 22 (µ̂m, µ) were obtained in [6, 42, 49, 100, 123, 131]. Using these bounds, for m = n, it is easy to get a n−1/d rate of convergence for (1.9). This rate was recently improved to n−2/d in [30] under no smoothness assumptions. Our rates coincide with the n−2/d rate from [30] under no smoothness assumptions. But further, we show in this paper that the curse of dimensionality in the above rate can be mitigated by incorporating smoothness into the plug-in procedure. 2 Main results Recall ϕ0(·) from Definition 1.1. The following is our main result. Theorem 2.1 (Stability estimate). Suppose that µ, ν ∈ Pac(Rd) ∩ P2(Rd) and µ̃m, ν̃n ∈ P2(Rd). Assume that T0(·) (as defined in (1.1)) is L-Lipschitz (L > 0). Then, sup γ∈Γ̃min ∫ ‖T̃ γm,n(x)− T0(x)‖2 dµ̃m(x) ≤ Lmax {∣∣∣∣ ∫ Ψ∗µ̃m,ν̃n d(ν̃n − ν†m)∣∣∣∣, ∣∣∣∣ ∫ Ψ∗µ̃m,ν†m d(ν̃n − ν†m) ∣∣∣∣} + 2L ∫ ϕ∗0(y) d(ν̃n − ν†m)(y), (2.1) where ν†m := T0#µ̃m, ϕ ∗ 0(·) is defined as in (1.6), and with S·,·(·) defined as in (1.5), Ψµ̃m,ν̃n(·) := argminf∈F Sµ̃m,ν̃n(f), Ψµ̃m,ν†m(·) := argminf∈F Sµ̃m,ν†m(f), and D denotes the space of realvalued convex functions on Rd. The proof of Theorem 2.1 (see Appendix C.1) starts along the same lines as the proof of the curvature estimate in [56, Proposition 3.3]. This is followed by some careful manipulations of W 22 (·, ·) (as in (1.3)) and an application of the conditional version of Jensen’s inequality, see (C.3). The final step of the proof uses the dual representation in (1.4) with techniques similar to some intermediate steps in the proof of [92, Proposition 2] and [30, Lemma 3]. Remark 2.1 (Comparison with other stability estimates). Theorem 2.1 provides some important advantages to existing stability estimates in the literature. One of the earliest results in this direction can be found in [56, Proposition 3.3] but their bound involves a push-forward constraint which makes it hard to use for rate of convergence analysis. A bound similar to Theorem 2.1 is presented in [55, Lemma 5.1] but there the authors assume the existence of an OT map from µ̃m to ν̃n. Therefore, it does not apply to the discrete-discrete problem where µ̃m = µ̂m and ν̃n = ν̂n with m 6= n. Overcoming all these limitations is an important contribution of Theorem 2.1 and allows us to deal with popular plug-in estimators all in one go. 
The stability estimate in [72, Proposition 10], on the other hand, requires µ̃_m, ν̃_n to be sufficiently smooth and hence does not hold for discrete-discrete or semi-discrete plug-in estimators (see Example 1.2). Further, their result requires all the measures involved to be compactly supported, unlike the much milder requirements of Theorem 2.1. However, a shortcoming of Theorem 2.1 is that it is hard to obtain rates faster than n^{-1/2} using it directly, whereas [72] can obtain rates arbitrarily close to n^{-1}. This is a price we pay for analyzing natural and popular plug-in estimators as opposed to the (more intractable) wavelet based estimators in [72].

Remark 2.2 (How to use Theorem 2.1 to obtain rates of convergence?). Note that the second term on the right hand side of (2.1), under appropriate moment assumptions, is O_p(m^{-1/2} + n^{-1/2}) (free of dimension) by a direct application of Markov's inequality. We therefore focus on the first term. By (1.5), Ψ*_{µ̃_m,ν̃_n}(·), Ψ*_{µ̃_m,ν†_m}(·) ∈ F. Further, by Caffarelli's regularity theory [20–22], depending on the "smoothness" of µ̃_m, ν̃_n, it can be shown that there exists a further class of functions F_s (see Remarks 2.3 and 2.6) such that Ψ*_{µ̃_m,ν̃_n}(·), Ψ*_{µ̃_m,ν†_m}(·) ∈ F ∩ F_s. Thus, we can bound the first term on the right hand side of (2.1) as:

max{ | ∫ Ψ*_{µ̃_m,ν̃_n} d(ν̃_n − ν†_m) | , | ∫ Ψ*_{µ̃_m,ν†_m} d(ν̃_n − ν†_m) | } ≤ sup_{f ∈ F ∩ F_s} | ∫ f d(ν̃_n − ν†_m) |.   (2.2)

The right hand side of (2.2) can now be bounded using the corresponding Dudley's entropy integral bounds via empirical process techniques; see [126, Lemmas 19.35–19.37]. To conclude, the two main steps in our strategy are identifying the family of functions F_s and computing Dudley's entropy integral. Further, the more smoothness µ̃_m, ν̃_n possess, the smaller the class of functions F_s and the smaller the supremum on the right hand side of (2.2). This shows why better rates can be expected under smoothness assumptions.

2.1 Natural non-smooth plug-in estimators

In this section, we discuss the rates of convergence for the discrete-discrete problem and the semi-discrete problem, where no smoothness is available on µ̃_m and ν̃_n.

Theorem 2.2. Suppose that T_0(·) is L-Lipschitz, ν is compactly supported, and E[exp(t‖X_1‖^α)] < ∞ for some t > 0, α > 0.

(Discrete-discrete): Set µ̃_m = µ̂_m and ν̃_n = ν̂_n. Then the following holds:

sup_{γ ∈ Γ̃_min} ∫ ‖T̃^γ_{m,n}(x) − T_0(x)‖² dµ̃_m(x) = O_p( r^{(m,n)}_d × (log(1 + max{m, n}))^{t_{d,α}} ),   (2.3)

where

r^{(m,n)}_d :=  m^{-1/2} + n^{-1/2}                               for d = 2, 3,
                m^{-1/2} log(1 + m) + n^{-1/2} log(1 + n)         for d = 4,
                m^{-2/d} + n^{-2/d}                               for d ≥ 5,      (2.4)

and

t_{d,α} :=  (4α)^{-1}(4 + ((2α + 2dα − d) ∨ 0))   for d < 4,
            (α^{-1} ∨ 7/2) − 1                     for d = 4,
            2(1 + d^{-1})                          for d > 4.

The same bound holds for |W_2^2(µ̃_m, ν̃_n) − W_2^2(µ, ν)| without assuming T_0(·) is Lipschitz.

(Semi-discrete): Set µ̃_m = µ, ν̃_n = ν̂_n, or µ̃_m = µ̂_m, ν̃_n = ν. Then the left hand side of (2.3) is O_p(r^{(n,n)}_d × (log(1 + n))^{t_{d,α}}) or O_p(r^{(m,m)}_d × (log(1 + m))^{t_{d,α}}) respectively.

A stronger result can be proved if both µ and ν are compactly supported.

Corollary 2.3. Consider the setting from Theorem 2.2 and assume further that µ is compactly supported. Then, with r^{(m,n)}_d defined as in (2.4), we have:

E[ sup_{γ ∈ Γ̃_min} ∫ ‖T̃^γ_{m,n}(x) − T_0(x)‖² dµ̃_m(x) ] ≤ C r^{(m,n)}_d,

for some constant C > 0, in both the discrete-discrete and semi-discrete settings from Theorem 2.2.

A brief description of the proof technique of Theorem 2.2 using Theorem 2.1 is provided in Remark 2.3 below, and the actual proof is presented in Appendix C.1.
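To make the discrete-discrete estimator of Theorem 2.2 concrete, the following is a minimal numerical sketch (not part of the original analysis). It assumes the POT library ("pip install pot") as an exact linear-program solver; the linear ground-truth map, the Gaussian source measure, and all variable names are illustrative choices of ours rather than anything prescribed by the paper.

```python
# Sketch of the discrete-discrete plug-in estimator: solve the Kantorovich LP
# between the empirical measures (1.2) and take the barycentric projection (1.7).
import numpy as np
import ot  # Python Optimal Transport (assumed external dependency)

rng = np.random.default_rng(0)
d, m, n = 3, 400, 300                       # m != n is allowed

# Hypothetical ground truth: T0(x) = A x + b with A symmetric PSD, so T0 is the
# gradient of a convex quadratic and hence the OT map from mu to nu = T0 # mu.
A = np.diag([0.5, 1.0, 2.0])
b = np.array([1.0, -1.0, 0.0])
T0 = lambda x: x @ A + b                    # rows are points

X = rng.standard_normal((m, d))             # X_1, ..., X_m ~ mu = N(0, I_d)
Y = T0(rng.standard_normal((n, d)))         # Y_1, ..., Y_n ~ nu

a_w = np.full(m, 1.0 / m)                   # weights of the empirical measures
b_w = np.full(n, 1.0 / n)
C = ot.dist(X, Y)                           # squared Euclidean cost (default)
G = ot.emd(a_w, b_w, C)                     # one optimal coupling gamma (m x n)

# Barycentric projection (1.7): conditional mean of y given x under gamma.
T_hat = (G @ Y) / G.sum(axis=1, keepdims=True)

# Monte Carlo version of the loss (1.8); the sup over couplings is dropped
# since ot.emd returns a single optimal vertex of the transport polytope.
risk = np.mean(np.sum((T_hat - T0(X)) ** 2, axis=1))
print(f"plug-in risk (1.8) ~= {risk:.3f}")
```

Repeating this for growing m = n gives an empirical handle on the m^{-2/d} + n^{-2/d} behaviour described in Theorem 2.2, up to the logarithmic factors.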
Remark 2.3 (Proof technique). The proof of Theorem 2.2 proceeds via the strategy outlined in Remark 2.2. We first show that F_s (see Remark 2.2) can be chosen as a certain sub-class of convex functions which are in L²(ν). We then use Dudley's entropy integral type bounds, which in turn require the bracketing entropy [126, Page 270] of F_s, recently proved in [83, Equation 26]. This strategy is slightly different from that used in the proof of [30, Theorem 2], where the authors assume that µ is compactly supported, whereas we only assume the finiteness of E[exp(t‖X_1‖^α)] for some t > 0, α > 0. The compactness assumption on µ allows one to further restrict F_s to the class of Lipschitz functions. This additional restriction does not seem to be immediate without the compactness assumption.

As discussed in Section 1.3, the exponents obtained in Theorem 2.2 are minimax optimal, up to multiplicative logarithmic factors, under bare minimal smoothness assumptions (see [72, Theorem 6]). To the best of our knowledge, rates for the discrete-discrete case with m ≠ n and those for the semi-discrete case were not known previously in the literature. Our rates are also strictly better than those (for different estimators, based on space tessellations) obtained in [9, 87] and require less stringent assumptions than those in [30]. In the next section, we show how smoothness assumptions can be leveraged to mitigate the curse of dimensionality in Theorem 2.2.

2.2 Smooth kernel based plug-in estimator: mitigating the curse of dimensionality

In this section, we focus on kernel based density estimators for the probability densities associated with µ and ν (see [57, 58, 99, 103, 115]). We will show, using Theorem 2.1, that the corresponding estimators of T_0(·) achieve (near) dimension-free rates under sufficient smoothness assumptions. We first introduce the Sobolev class of functions which we will exploit in this subsection to construct estimators that achieve rates of convergence which mitigate the curse of dimensionality under sufficient smoothness.

Definition 2.4 (Uniform Sobolev class of functions). Let Ω ⊆ R^d and let f(·) be uniformly continuous on Ω, admitting uniformly continuous derivatives up to order s on Ω for some s ∈ N. For any m := (m_1, . . . , m_d) ∈ N^d, let

∂^m f := (∂^{m_1}/∂x_1^{m_1}) · · · (∂^{m_d}/∂x_d^{m_d}) f,    |m| := ∑_{i=1}^d m_i.

For any k ≤ s, we further define

‖f‖_{C^k(Ω)} := ∑_{|m| ≤ k} ‖∂^m f‖_{L^∞(Ω)}.

The space C^s(Ω) is defined as the set of functions f(·) for which ‖f‖_{C^k(Ω)} < ∞ for all k ≤ s.

For this subsection, assume that µ and ν admit Sobolev smooth densities f_µ(·) and f_ν(·) in the uniform norm (see Definition 2.4 above). Given Ω ⊆ R^d and s ∈ N, let C^s(Ω) denote the set of Sobolev smooth functions on Ω of order s.

Assumption (A1) (Regularity of the densities). Suppose that
1. f_µ and f_ν are supported on compact and convex subsets of R^d, say X and Y respectively.
2. There exist s, M > 0 such that f_µ(·) ∈ C^s(X; M) and f_ν(·) ∈ C^s(Y; M), where C^s(X; M) is the space of real valued functions supported on X such that for all f(·) ∈ C^s(X; M) we have M^{-1} ≤ f(x) ≤ M for all x ∈ X and ‖f‖_{C^s(X)} ≤ M. Here ‖·‖_{C^s(X)} is the standard uniform Sobolev norm as defined in Definition 2.4. The space C^s(Y; M) is defined analogously.

We now define our estimators for f_µ(·) and f_ν(·) using the standard kernel density estimation technique (see [125, Section 1.2]). Set

f̂_µ(x) := (1 / (m h_m^d)) ∑_{i=1}^m K_d( (X_i − x) / h_m ),   (2.5)

for some bandwidth parameter h_m > 0 and d-variate kernel K_d(·). We assume that K_d(·) is the d-fold product of univariate kernels, i.e., there exists a kernel K(·) such that for u = (u_1, . . . , u_d) ∈ R^d, K_d(u) = ∏_{i=1}^d K(u_i). We define f̂_ν(·) similarly with the same univariate kernel and bandwidth.
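The following is a minimal sketch of the estimator (2.5) with a d-fold product kernel (not part of the original analysis). For simplicity it plugs in a Gaussian univariate kernel, which satisfies the moment conditions of Assumption (A2) below only for s = 0; for s > 0 one would substitute a higher-order kernel such as the one in Example 2.4. The bandwidth choice is of the order used in Theorem 2.5, up to logarithmic factors.

```python
# Product-kernel density estimator f_hat_mu of (2.5), vectorised over
# evaluation points.  The Gaussian kernel here is an illustrative choice.
import numpy as np

def product_kernel_kde(X, x_eval, h,
                       K=lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)):
    """f_hat(x) = (1/(m h^d)) * sum_i prod_k K((X[i,k] - x[k]) / h)."""
    m, d = X.shape
    U = (X[None, :, :] - x_eval[:, None, :]) / h   # shape (n_eval, m, d)
    Kd = np.prod(K(U), axis=2)                     # product kernel, (n_eval, m)
    return Kd.sum(axis=1) / (m * h ** d)

rng = np.random.default_rng(1)
d, m, s = 2, 2000, 0
X = rng.standard_normal((m, d))
h_m = m ** (-1.0 / (d + 2 * s))                    # bandwidth, up to log factors
grid = rng.uniform(-2, 2, size=(5, d))
print(product_kernel_kde(X, grid, h_m))
```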
Assumption (A2) (Regularity of the kernel). Assume that K(·) is a symmetric, bounded, (s+1)-times differentiable kernel on R with all s+1 derivatives bounded and integrable. Further, suppose that K(·) is of order 2s+2, i.e.,

∫ u^j K(u) du = 1(j = 0)  for j ∈ {0, 1, 2, . . . , 2s+1},   and   ∫ |u|^{2s+2} |K(u)| du < ∞.

The above assumptions on K(·) are standard for estimating smooth densities and their derivatives of different orders in the kernel density estimation literature; see e.g. [4, 57, 58, 69, 125]. There are several natural ways to construct kernels satisfying Assumption (A2), see [125, Section 1.2.2]; an example is also provided in Example 2.4 below.

Example 2.4 (Example of a kernel satisfying Assumption (A2)). Let ψ_m(·) be the m-th Hermite polynomial on R (see [84]). Then the kernel function defined as

K(u) := ∑_{m=0}^{2s+2} ψ_m(0) ψ_m(u) exp(−u²/2)

satisfies Assumption (A2).
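The short sketch below (not part of the original analysis) numerically checks the moment conditions of Assumption (A2) for the kernel of Example 2.4. The normalisation of the Hermite polynomials — probabilists' polynomials He_k divided by sqrt(k!·sqrt(2π)), so that they are orthonormal with respect to the weight exp(−u²/2) — is our reading of the convention in [84] and is an assumption; the quadrature check is the point of the sketch, since it verifies the conditions directly rather than taking the construction on faith.

```python
# Numerical check of the order-(2s+2) moment conditions for the Hermite-type
# kernel of Example 2.4.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import quad

def psi(k, u):
    """He_k(u), normalised to be orthonormal w.r.t. the weight exp(-u^2/2)."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return hermeval(u, coef) / math.sqrt(math.factorial(k) * math.sqrt(2 * math.pi))

def K(u, s):
    """Kernel of Example 2.4: sum_{k=0}^{2s+2} psi_k(0) psi_k(u) exp(-u^2/2)."""
    return sum(psi(k, 0.0) * psi(k, u) for k in range(2 * s + 3)) * np.exp(-u ** 2 / 2)

s = 1
for j in range(2 * s + 2):                       # j = 0, 1, ..., 2s+1
    val, _ = quad(lambda u: u ** j * K(u, s), -12, 12)
    print(f"int u^{j} K(u) du = {val:+.4f}   (target {1.0 if j == 0 else 0.0})")
```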
It is evident from Assumption (A2) that K(·) may take some negative values, in which case f̂_µ(·) (respectively f̂_ν(·)) may not be a probability density. Consequently, the barycentric projection (see Definition 1.2) between f̂_µ(·) and f̂_ν(·) is not well-defined. We get around this by projecting f̂_µ(·) and f̂_ν(·) onto an appropriate space of "smooth" probability densities (see (2.6)), via an integral probability metric (see Definition 2.5 below; also see [98, 105, 117] for examples, computational procedures, and applications of such metrics).

Definition 2.5 (Integral probability metric). Given a class H of bounded functions on R^d and two probability densities g_1(·) and g_2(·) on R^d, the integral probability metric/distance between g_1(·) and g_2(·) with respect to H is defined as

d_IP(g_1, g_2; H) := sup_{ψ(·) ∈ H} | ∫ ψ(x)(g_1(x) − g_2(x)) dx |.

Sufficient conditions on H for d_IP(·, ·; H) to be a metric on the space of probability measures (not on the space of probability densities, as they can be altered on a set of Lebesgue measure 0 without altering the underlying probability measures) on R^d have been discussed in [98]. Observe that the quantity d_IP(g_1, g_2; H) is well defined even when g_1(·) and g_2(·) are not probability densities. In Theorem 2.5 below, we use H = C^{s+2}(X; M′). Note that any function in C^{s+2}(X; M′) can be extended to a function in C^{s+2}(R^d; M′) (see [72, Theorem 23] and [124, Theorem 1.105]). The fact that this choice of H results in a metric follows from the argument in [98, Page 8]. We are now in a position to describe the projection estimators for f_µ(·) and f_ν(·), and the rates achieved by the corresponding plug-in estimator.

Theorem 2.5. Assume that T_0(·) is L-Lipschitz and f_µ, f_ν are Lebesgue densities satisfying Assumption (A1). Also suppose that K(·) satisfies Assumption (A2). Define h_m := m^{-1/(d+2s)} log m, h_n := n^{-1/(d+2s)} log n, and T := ∫ |K_d(u)| du + 1. Fix any M′ > 0. Consider any probability density f̃^{M′}_µ(·) ∈ C^s(X; TM) (where M is defined as in Assumption (A1)) which satisfies

d_IP( f̃^{M′}_µ, f̂_µ; C^{s+2}(X; M′) ) ≤ inf_{f(·) ∈ C^s(X;TM), f ≥ 0, ∫ f = 1} d_IP( f̂_µ, f; C^{s+2}(X; M′) ) + r^{(m,n)}_{d,s},   (2.6)

where r^{(m,n)}_{d,s} is defined as in (2.7) and d_IP(·, ·; C^{s+2}(X; M′)) is the integral probability metric defined in Definition 2.5. We define f̃^{M′}_ν(·) analogously as in (2.6) with X, f̂_µ(·) replaced by Y, f̂_ν(·). Then the following conclusions hold.

1. Set M′ := 8(1 + TM). If µ̃_m and ν̃_n are the probability measures corresponding to the probability densities f̃^{M′}_µ(·) and f̃^{M′}_ν(·), then the following holds for some constant C > 0:

E[ sup_{γ ∈ Γ̃_min} ∫ ‖T̃^γ_{m,n}(x) − T_0(x)‖² dµ̃_m(x) ] ≤ C r^{(m,n)}_{d,s},

where

r^{(m,n)}_{d,s} :=  m^{-1/2} + n^{-1/2}                                       for d < 2(s+2),
                    m^{-1/2}(log(1 + m))^d + n^{-1/2}(log(1 + n))^d           for d = 2(s+2),
                    m^{-(s+2)/d} + n^{-(s+2)/d}                               for d ≥ 2(s+2).   (2.7)

The same bound also holds for E|W_2^2(µ̃_m, ν̃_n) − W_2^2(µ, ν)|.

2. f̂_µ(·) satisfies

lim_{n→∞} max{ P( ‖f̂_µ‖_{C^s(X̃)} ≥ TM ), P( sup_{x ∈ X̃} |f̂_µ(x) − f_µ(x)| ≥ ε ) } = 0   (2.8)

for any ε > 0, where X̃ is any compact subset of X°. The same conclusion holds for f̂_ν(·) with X replaced by Y.

In Theorem 2.5, we have shown that the plug-in estimator for T_0(·) using f̃^{M′}_µ(·) and f̃^{M′}_ν(·) (with M′ = 8(1 + TM)) achieves rates that mitigate the curse of dimensionality under sufficient smoothness. In fact, f̃^{M′}_µ(·) can be viewed as an approximate minimizer of d_IP(f̂_µ, ·; C^{s+2}(X; M′)) over an appropriate class of Sobolev smooth probability densities. This is carried out because f̂_µ(·) by itself may not be a probability density. Further, note that µ̃_m, ν̃_n as specified in Theorem 2.5 are both smooth, and consequently Γ̃_min is a singleton and the supremum in Theorem 2.5 can be dropped. A brief description of the proof technique for Theorem 2.5 is presented in Remark 2.6 below and the actual proof is given in Appendix C.1.

Remark 2.6 (Proof technique). The proof of Theorem 2.5 proceeds along the same lines as Remark 2.3. We first show that F_s (see Remark 2.2) can be chosen as a certain subset of C^{s+2}(Y°). We then use Dudley's entropy integral type bounds, which in turn require the bracketing entropy [126, Page 270] of the class of compactly supported Sobolev smooth functions, which can be found in [127, Corollary 2.7.2].

We now explain the implications of both parts of Theorem 2.5 in the following two remarks.

Remark 2.7 (Mitigating the curse of dimensionality). Theorem 2.5 shows that, under enough smoothness, i.e., when 2(s+2) > d, both the upper bounds for (1.8) and (1.9) are O_p(n^{-1/2}). This shows that, for large dimensions, provided µ and ν admit smooth enough densities, it is possible to construct plug-in estimators that mitigate the curse of dimensionality. Note that a similar estimator was analyzed in [67, Proposition 1] when m = n. However, the rates obtained in Theorem 2.5 are strictly better than those in [67, Proposition 1]. For m = n, when d < 2(s+2), [67] obtained a rate of n^{-(s+2)/(2(s+2)+d)}, which is worse than the n^{-1/2} obtained in Theorem 2.5. For the other regimes, [67] obtains rates (up to log factors) of n^{-1/4} and n^{-1/((s+2)(d+2(s+2)))}, which are both worse than the respective rates of n^{-1/2} and n^{-(s+2)/d} in Theorem 2.5.

Remark 2.8 (Computational aspects of Theorem 2.5). Note that f̃^{M′}_µ(·) (with M′ = 8(1 + TM)) is hard to compute, whereas f̂_µ(·) is easily computable in linear time. Note that if f̂_µ(·) itself were a probability density in C^s(X; TM), then we would have f̂_µ = f̃^{M′}_µ. While Theorem 2.5 does not establish that, it does come close in part 2, from which we can easily derive the following:

lim_{n→∞} P( f̂_µ(·) ∉ C^s(X̃; TM) ) = 0.

The above shows that f̂_µ(·) is indeed bounded below by (TM)^{-1} on X̃ (any compact subset of the interior of X), and additionally belongs to C^s(X̃; TM), with probability converging to 1.
This leads us to conjecture that the natural density version of f̂_µ(·), i.e.,

max{f̂_µ(·), 0} / ∫ max{f̂_µ(x), 0} dx,

should serve as a good proxy for f̃^{M′}_µ(·) and lead to rates of convergence that mitigate the curse of dimensionality. From a computational perspective, the density specified above is easy to simulate from using an accept-reject algorithm, without computing the integral in the denominator (see [101, Algorithm 4.3]). However, our current proof technique does not provide rates of convergence for the above density estimator based on f̂_µ(·).
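For concreteness, here is a minimal accept-reject sketch for simulating from the unnormalised density max{f̂_µ(·), 0} without ever computing the normalising integral (not part of the original analysis). The bounding box used as the uniform proposal, the grid-based envelope constant, and the safety margin are all illustrative assumptions of ours; f_hat can be any vectorised density estimate, e.g. a wrapper around the kernel estimator sketched after (2.5).

```python
# Accept-reject sampler for a target proportional to max(f_hat, 0), assuming
# f_hat is supported (and positive somewhere) inside the box [lo, hi]^d.
import numpy as np

def accept_reject(f_hat, lo, hi, d, n_samples, rng, n_envelope=4096):
    # Envelope: uniform proposal on the box; c >= max_x max(f_hat(x), 0) is
    # estimated by a crude grid search, with a heuristic 10% safety margin.
    grid = rng.uniform(lo, hi, size=(n_envelope, d))
    c = 1.1 * np.maximum(f_hat(grid), 0.0).max()
    out = []
    while len(out) < n_samples:
        x = rng.uniform(lo, hi, size=(n_samples, d))   # proposals
        u = rng.uniform(0.0, 1.0, size=n_samples)
        accept = u * c <= np.maximum(f_hat(x), 0.0)    # accept w.p. max(f_hat,0)/c
        out.extend(x[accept])
    return np.asarray(out[:n_samples])
```

Accepted draws are distributed proportionally to max{f̂_µ, 0}, i.e. exactly according to the renormalised density above, which is all that is needed for the discretisation step of the next subsection.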
Another important implication of Theorem 2.5 is the bound obtained on |W_2(µ̃_m, ν̃_n) − W_2(µ, ν)| when µ ≠ ν. We first present the result and then describe the implication.

Proposition 2.6. Consider the setting in Theorem 2.5. Then, provided µ ≠ ν, the following holds: |W_2(µ̃_m, ν̃_n) − W_2(µ, ν)| = O_p(r^{(m,n)}_{d,s}).

Proposition 2.6 (see Appendix C.1 for a proof) shows an interesting distinction between the µ ≠ ν case and the µ = ν case. For µ = ν, the best possible exponent is n^{-(1+s)/(2s+d)} for d ≥ 3 (see [131, Theorem 3], where the result was established under more general Besov smoothness assumptions). On the contrary, when µ ≠ ν, Proposition 2.6 establishes a rate of n^{-(s+2)/d} for the Wasserstein distance, which is strictly better than the minimax achievable rate mentioned above when µ = ν. This observation complements [30, Corollary 1], where the authors make a similar remark for the special case of s = 0.

2.3 Discretized plug-in estimator under smoothness assumptions

In Section 2.2, we discussed how smoothness can be incorporated into the plug-in procedure to get faster rates of convergence. Such plug-in estimators are popular in the computational OT literature (see [7, 8, 25, 36]). However, even after f̃_µ(·) ≡ f̃^{M′}_µ(·) and f̃_ν(·) ≡ f̃^{M′}_ν(·) are calculated, T̃^γ_{m,n} as in Theorem 2.5 cannot be computed explicitly from data if f̃_µ(·) and f̃_ν(·) are continuous densities. This is in contrast to T̃^γ_{m,n} from Theorem 2.2 in the discrete-discrete case, which is explicitly computable using a standard linear program but achieves worse rates of convergence. This is not unexpected. Thanks to the no free lunch principle, better statistical accuracy is naturally accompanied by heavier computational challenges. Therefore, our goal here is to construct estimators, under smoothness assumptions as in Section 2.2, which are computable in polynomial time (with complexity increasing with smoothness) provided f̃_µ(·) and f̃_ν(·) can be sampled from, and which also attain rates that mitigate the curse of dimensionality.

Construction: We will illustrate the discretized estimator using the kernel based estimator from Section 2.2. Similar results also hold for the wavelet based estimator from Appendix A. Recall the kernel density estimators f̃_µ(·) and f̃_ν(·) (see (2.6)). Sample M ≥ 1 random points from both f̃_µ(·) and f̃_ν(·). Let µ̂_{m,M} and ν̂_{n,M} denote the standard empirical measures on the M points sampled from f̃_µ(·) and f̃_ν(·) respectively. Finally, construct T̃_{m,n} ≡ T̃^γ_{m,n} as in Definition 1.2 with µ̃_m = µ̂_{m,M} and ν̃_n = ν̂_{n,M}. It should be pointed out that a similar construction was also used in [131, Section 6] for estimating probability densities under the Wasserstein loss. Based on this construction, the main result of this section is as follows:

Theorem 2.7. Consider the setting in Theorem 2.5 and the same construction of T̃^γ_{m,n} as above. For simplicity, let us also assume m = n, and accordingly set M = n^{(s+2)/2}. Then Γ̃_min is a singleton and consequently the following conclusion holds for some constant C > 0:

E[ ∫ ‖T̃_{m,n}(x) − T_0(x)‖² dµ̃_m(x) ] ≤ C r^{(n,n)}_{d,s}.

The same rates also hold for E|W_2^2(µ̃_m, ν̃_n) − W_2^2(µ, ν)|.

The proof of Theorem 2.7 is given in Appendix C.1. Once the empirical measures µ̂_{m,M} and ν̂_{n,M} have been obtained, an explicit computation of T̃_{m,n} as described above requires O(M³) = O(n^{3(s+2)/2}) steps using the Hungarian algorithm, see [73]. This highlights the statistical versus computational trade-off, i.e., in order to mitigate the curse of dimensionality in convergence rates by exploiting smoothness, the computational complexity gets progressively worse by polynomial factors in n. It should be mentioned that (approximate) algorithms faster than the Hungarian algorithm stated above can be found in [1, 36, 53], to name a few. Due to space constraints, we avoid a detailed discussion on this.

In the above construction, sampling from the smoothed kernel densities f̃_µ(·) and f̃_ν(·) is crucial. If we were to simply draw M bootstrap samples from the empirical distributions µ̂_m and ν̂_n, the rates of convergence would not improve from those observed in Theorem 2.2, no matter how large M is.
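The following is a minimal sketch of the discretised estimator of Theorem 2.7 (not part of the original analysis). The samplers sample_mu_tilde and sample_nu_tilde are hypothetical stand-ins for samplers from the smoothed densities of Section 2.2 (for instance, the accept-reject sketch above), and the matching is computed with scipy's Hungarian-type assignment solver, whose worst-case cost is of the O(M³) order discussed above.

```python
# Discretised plug-in estimator: sample M atoms from each smoothed density,
# match them optimally under squared Euclidean cost, and read off the map.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def discretized_plugin_map(sample_mu_tilde, sample_nu_tilde, M, rng):
    XM = sample_mu_tilde(M, rng)                  # M points ~ mu_tilde_m
    YM = sample_nu_tilde(M, rng)                  # M points ~ nu_tilde_n
    cost = cdist(XM, YM, metric="sqeuclidean")    # squared Euclidean cost
    row, col = linear_sum_assignment(cost)        # optimal one-to-one matching
    # With equally many equally weighted atoms the optimal coupling is a
    # permutation, so the barycentric projection (1.7) sends each atom of
    # mu_hat_{m,M} to its matched atom of nu_hat_{n,M}.
    return XM[row], YM[col]                       # pairs (x_i, T_tilde(x_i))

# Hypothetical usage for n = 100 and s = 2: M = int(100 ** ((2 + 2) / 2)),
# then X_atoms, T_atoms = discretized_plugin_map(sampler_mu, sampler_nu, M, rng).
```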
Acknowledgments and Disclosure of Funding

We would like to thank the reviewers for their constructive suggestions that greatly helped improve the quality of the paper. The third author is supported by NSF Grant DMS-2015376.

1. What is the main contribution of the paper regarding optimal transport map estimation?
2. What are the strengths and weaknesses of the paper in terms of its notation density and readability?
3. Do you have any questions about the paper's definitions, loss function, and main result?
4. How does the reviewer assess the paper's relevance and novelty?
5. Are there any typos or visual distinctions that need correction in the paper?
6. Can the authors provide intuition for the phenomenon in Theorem 2.4 regarding large bandwidths?
7. How do the results in this paper apply specifically to 2-Wasserstein distances compared to general p-Wasserstein distances?
Summary Of The Paper Review
Summary Of The Paper
This paper studies the problem of estimating an optimal transport map between two probability distributions on Euclidean space, given IID samples from each of the distributions. The main result (Theorem 2.1) gives a general upper bound, in terms of the dual representation of the 2-Wasserstein distance. This is used to give upper bounds, in probability, on the rate of decay of the estimation risk both for plugging in empirical distributions in the absence of smoothness assumptions and for kernel estimates under smoothness assumptions. Finally, implications for two applications (Wasserstein barycenter estimation and nonparametric independence testing) are discussed.

Review
Overall, I think the paper's contributions are clearly sufficiently novel and relevant for acceptance. However, I think the paper would be rather difficult to read for all but a very small subset of the NeurIPS community.

Main Comments:
I found the paper rather notationally dense, and, in my opinion, the paper would benefit from providing more intuition in several places, especially around the definition (Definition 1.2) of the barycentric projection, the loss function (Eq. (1.8)), and the main result (Theorem 2.1). It would be helpful, for example, if there was some prose describing why Eq. (1.8) is a good loss function to upper bound, and what the roles of the terms appearing in Inequality (2.1) are.
In Theorem 2.4, it seems that the exact rate at which the bandwidth h_n vanishes does not affect the ultimate convergence rate, as long as it is slower than (log(n)/n)^{-1/(2s+d)}. It seems counterintuitive to me that a very large bandwidth, e.g. h_n = 1/log(n), would not slow the convergence rate. Could the authors provide any intuition for this phenomenon?

Minor Comments:
Line 40: The sentence "T_{m,n} converges at a rate of m^{-2/d} + n^{-2/d}" isn't really meaningful unless a loss or error metric is specified. Especially for nonparametric problems such as this, different metrics likely lead to different convergence rates. Hence, I suggest adding a brief description of the error metric here, or at least pointing the reader to Eq. (1.8).
In several places, such as Eqs. (2.1) and (2.2), it is impossible to visually distinguish the \overline's and \tilde's over \mu and \nu without zooming in quite a bit. This left me confused for a while trying to figure out the difference. Is it possible to make these notations more visually distinct?
Line 78: Typo: "distributions on X_1, . . . , X_m" -> "distributions of X_1, . . . , X_m"
Could the authors comment on how specific the results in this paper are to 2-Wasserstein, as opposed to general p-Wasserstein, distances?
NIPS
Title
Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections

Abstract
Optimal transport maps between two probability distributions µ and ν on R^d have found extensive applications in both machine learning and statistics. In practice, these maps need to be estimated from data sampled according to µ and ν. Plug-in estimators are perhaps most popular in estimating transport maps in the field of computational optimal transport. In this paper, we provide a comprehensive analysis of the rates of convergence for general plug-in estimators defined via barycentric projections. Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions and can be used to analyze general plug-in estimators. We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discrete-discrete and semi-discrete estimators of optimal transport maps. We then use the same stability estimate to show that, under additional smoothness assumptions of Sobolev type or Besov type, kernel smoothed or wavelet based plug-in estimators respectively speed up the rates of convergence and significantly mitigate the curse of dimensionality suffered by the natural discrete-discrete/semi-discrete estimators. As a by-product of our analysis, we also obtain faster rates of convergence for plug-in estimators of W_2(µ, ν), the Wasserstein distance between µ and ν, under the aforementioned smoothness assumptions, thereby complementing recent results in Chizat et al. (2020). Finally, we illustrate the applicability of our results in obtaining rates of convergence for the Wasserstein barycenter between two probability distributions and in obtaining asymptotic detection thresholds for some recent optimal-transport based tests of independence.

1 Introduction

Given two random variables X ∼ µ and Y ∼ ν, where µ, ν are probability measures on R^d, d ≥ 1, the problem of finding a "nice" map T_0(·) such that T_0(X) ∼ ν has numerous applications in machine learning such as domain adaptation and data integration [34, 35, 38, 48, 61, 112], dimension reduction [12, 66, 90], and generative models [60, 81, 88, 110], to name a few. Of particular interest is the case when T_0(·) is obtained by minimizing a cost function, a line of work initiated by Gaspard Monge [97] in 1781 (see (1.1) below), in which case T_0(·) is termed an optimal transport (OT) map; it has applications in shape matching/transfer problems [29, 47, 107, 121], Bayesian statistics [46, 75, 80, 108], econometrics [15, 28, 45, 50, 54], and nonparametric statistical inference [39–41, 113, 114]; also see [111, 128, 129] for book-length treatments of the subject. In this paper, we will focus on the OT map obtained using the standard squared Euclidean cost function, i.e.,

T_0 := argmin_{T : T#µ = ν} E‖X − T(X)‖²,   (1.1)

where T#µ = ν means T(X) ∼ ν for X ∼ µ. The estimation of T_0 has attracted a lot of interest in recent years due to its myriad applications (as stated above) and interesting geometrical properties (see [19, 56, 91] and Definition 1.1 below). In practice, the main hurdle in constructing estimators for T_0 is that the explicit forms of the measures µ, ν are unknown; instead, only random samples X_1, . . . , X_m ∼ µ and Y_1, . . . , Y_n ∼ ν are available.
A natural strategy in this scenario is to estimate T_0 using T̃_{m,n}, where T̃_{m,n} is computed as in (1.1) with µ and ν replaced by µ̃_m and ν̃_n, which are empirical approximations of µ and ν based on X_1, . . . , X_m and Y_1, . . . , Y_n respectively (see Definition 1.2). Such estimators are often called plug-in estimators and have been used extensively; see [7, 30, 67, 93, 94, 102, 116]. The main goal of this paper is to study the rates of convergence of general plug-in estimators of T_0 under a unified framework. We show that when µ̃_m and ν̃_n are chosen as µ̂_m and ν̂_n respectively, where µ̂_m and ν̂_n are the standard empirical distributions supported on m and n atoms, i.e.,

µ̂_m := (1/m) ∑_{i=1}^m δ_{X_i}   and   ν̂_n := (1/n) ∑_{j=1}^n δ_{Y_j},   (1.2)

T̃_{m,n} (appropriately defined using Definition 1.2) converges at a rate of m^{-2/d} + n^{-2/d} for d ≥ 4 in the sense of (1.8). This rate happens to be minimax optimal under minimal smoothness assumptions (see [72, Theorem 6]) but suffers from the curse of dimensionality. We next show that, if µ and ν are known to admit sufficiently smooth densities, it is possible to apply kernel or wavelet based smoothing techniques to µ̂_m and ν̂_n to obtain plug-in estimators that mitigate the aforementioned curse of dimensionality.

Our next contribution pertains to the estimation of W_2^2(µ, ν) (the squared Wasserstein distance), see (1.3) below, a quantity of independent interest in statistics and machine learning with applications in structured prediction [51, 89], image analysis [18, 59], nonparametric testing [16, 106], generative modeling [10, 96], etc. In this paper, we also obtain rates of convergence for plug-in estimators W_2^2(µ̃_m, ν̃_n) of W_2^2(µ, ν). We show that kernel smoothing µ̂_m and ν̂_n can be used to obtain plug-in estimators of W_2^2(µ, ν) that mitigate the curse of dimensionality, as opposed to a direct plug-in approach using µ̂_m and ν̂_n (as used in [30, Theorem 2]). This provides an answer to the open question of estimating W_2^2(µ, ν) when µ, ν admit smooth densities, laid out in [30].

1.1 Background on optimal transport

In this section, we present some basic concepts and results associated with the OT problem that will play a crucial role in the sequel. Let P_ac(R^d) denote the set of all Lebesgue absolutely continuous probability measures on R^d, and let P_2(R^d) be the set of probability measures with finite second moments. Then the 2-Wasserstein distance (squared) between µ, ν ∈ P_2(R^d) is defined as:

W_2^2(µ, ν) := min_{π ∈ Π(µ,ν)} ∫ ‖x − y‖² dπ(x, y),   (1.3)

where Π(µ, ν) is the set of probability measures on R^d × R^d with marginals µ and ν. The optimization problem in (1.3) is often called the Kantorovich relaxation (see [76, 77]) of the optimization problem in (1.1). The existence of a minimizer in (1.3) follows from [129, Theorem 4.1].

Proposition 1.1 (Brenier–McCann polar factorization theorem, see [91, 128]). Suppose µ ∈ P_ac(R^d). Then there exists a µ-a.e. (almost everywhere) unique function T_0(·) : R^d → R^d, which is the gradient of a real-valued d-variate convex function, say ϕ_0(·) : R^d → R, such that T_0#µ = ν. Further, the distribution defined as π(A × B) = µ(A ∩ T_0^{-1}(B)) for all Borel sets A, B ⊆ R^d is the unique minimizer in (1.3) provided µ, ν ∈ P_2(R^d).

Definition 1.1 (OT map and potential function). The function T_0 : R^d → R^d in Proposition 1.1 which satisfies T_0#µ = ν will be called the OT map from µ to ν. A convex function ϕ_0(·) in Proposition 1.1 satisfying ∇ϕ_0 = T_0 will be termed an OT potential.
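A classical closed-form instance of Definition 1.1, sketched below, is useful as a ground truth when experimenting with the estimators discussed in this paper: between two non-degenerate Gaussians, the OT map for squared Euclidean cost is affine and explicit. This is a standard fact, not a result of the present paper, and the numerical check is purely illustrative.

```python
# Brenier map between Gaussians mu = N(m1, S1) and nu = N(m2, S2):
# T0(x) = m2 + A (x - m1) with A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2},
# a symmetric PSD matrix, hence the gradient of a convex quadratic potential.
import numpy as np
from scipy.linalg import sqrtm, inv

def gaussian_ot_map(m1, S1, m2, S2):
    S1_half = sqrtm(S1)
    A = inv(S1_half) @ sqrtm(S1_half @ S2 @ S1_half) @ inv(S1_half)
    A = np.real(A)                          # discard numerical imaginary residue from sqrtm
    return lambda x: m2 + (x - m1) @ A.T

rng = np.random.default_rng(2)
m1, m2 = np.zeros(2), np.array([1.0, -1.0])
S1, S2 = np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])
T0 = gaussian_ot_map(m1, S1, m2, S2)
X = rng.multivariate_normal(m1, S1, size=5000)
print(np.cov(T0(X), rowvar=False))          # should be close to S2, confirming T0 # mu = nu
```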
The next and final important ingredient is the alternate dual representation of (1.3), which gives:

(1/2) W_2^2(µ, ν) = (1/2) ∫ ‖x‖² dµ(x) + (1/2) ∫ ‖y‖² dν(y) − min_{f ∈ F} S_{µ,ν}(f),   where   (1.4)

S_{µ,ν}(f) = ∫ f dµ + ∫ f* dν.   (1.5)

Here F denotes the space of convex functions on R^d which are also elements of L¹(µ), and f*(·) is the standard Legendre–Fenchel dual defined as:

f*(x) := sup_{y ∈ R^d} [ y^⊤x − f(y) ],   for x ∈ dom(f).   (1.6)

1.2 Estimating the OT map via barycentric projection

Recall the setting from the Introduction. Let µ̃_m, ν̃_n ∈ P_2(R^d). Here µ̃_m, ν̃_n need not be absolutely continuous and can be very general. Intuitively, µ̃_m and ν̃_n can be viewed as empirical approximations of µ and ν respectively.

Example 1.2 (Simple choices of µ̃_m and ν̃_n). Let X_1, . . . , X_m be i.i.d. from µ and Y_1, . . . , Y_n be i.i.d. from ν; in this case a natural choice is to set µ̃_m = µ̂_m and ν̃_n = ν̂_n, where µ̂_m and ν̂_n are the empirical distributions of X_1, . . . , X_m and Y_1, . . . , Y_n respectively, as defined in (1.2). This is the standard choice adopted in the discrete-discrete Kantorovich relaxation; see [104, Section 2.3]. Another popular choice is µ̃_m = µ̂_m, ν̃_n = ν, or µ̃_m = µ, ν̃_n = ν̂_n. This is the semi-discrete Kantorovich problem and is popular when one of the measures is fully specified; see [26, 55].

A natural way to estimate T_0(·), as defined in (1.1), would be to approximate it using the OT map from µ̃_m to ν̃_n. However, as µ̃_m and ν̃_n may not be elements of P_ac(R^d), Proposition 1.1 does not apply and an OT map may not exist from µ̃_m to ν̃_n. Such is the case in Example 1.2 in the discrete-discrete setting when m ≠ n. To circumvent this issue, we leverage the notion of barycentric projections (see [3, Definition 5.4.2]) defined below.

Definition 1.2 (Barycentric projection). Define the set

Γ̃_min := argmin_{π ∈ Π(µ̃_m, ν̃_n)} ∫ ‖x − y‖² dπ(x, y).

The optimization problem above is the plug-in analog of the optimization problem on the right hand side of (1.3). Given any γ ∈ Γ̃_min, define the barycentric projection of γ as the conditional mean of y given x under γ, i.e.,

T̃_{m,n}(x) ≡ T̃^γ_{m,n}(x) := ( ∫_y y dγ(x, y) ) / ( ∫_y dγ(x, y) ),   for x ∈ supp(µ̃_m).   (1.7)

In general, Γ̃_min need not be a singleton, which is why we index the barycentric projection T̃^γ_{m,n}(·) by γ ∈ Γ̃_min. Note that T̃^γ_{m,n}(·) need not be a transport map; however, if an OT map exists then it must be equal to T̃^γ_{m,n}(·) (µ̃_m-a.e.). Our goal is to obtain stochastic upper bounds for

sup_{γ ∈ Γ̃_min} ∫ ‖T̃^γ_{m,n}(x) − T_0(x)‖² dµ̃_m(x).   (1.8)

In addition, our proof techniques also yield rates of convergence for

|W_2^2(µ̃_m, ν̃_n) − W_2^2(µ, ν)|.   (1.9)

In this paper, we will focus on d ≥ 2. Due to the canonical ordering of R, the case d = 1 can be handled easily using the classical Hungarian embedding theorem [82].

1.3 Contributions

1. We provide a new and flexible stability estimate, Theorem 2.1, which yields a unified approach to obtaining rates of convergence for general plug-in estimators of the OT map T_0(·). Unlike existing stability estimates, Theorem 2.1 holds for the barycentric projection (which coincides with the OT map when the latter exists) and does not require any smoothness assumptions on µ̃_m, ν̃_n or T̃^γ_{m,n}(·); also see Remark 2.1 for a comparison with the existing literature.
1. What is the focus of the paper regarding Brenier maps and plug-in "barycentric mapping"?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the novelty and practicality of the paper's contributions?
4. What are the limitations of the provided proof and experimental validation?
5. Can you provide specific examples or comparisons to illustrate the practical significance of the main result?
Summary Of The Paper Review
Summary Of The Paper This article proposes a study of the rates of estimation of Brenier maps using the plug-in "barycentric mapping" estimator. As a consequence of this study, the authors obtain rates of convergence for $W_2^2$ in various contexts (depending on the regularity of the measures and on the setting: discrete-discrete or semi-discrete). They apply their results to Wasserstein barycenter estimation and nonparametric independence testing.

Review On the one hand, from a theoretical point of view, I find the different results very interesting; on the other hand, I find that this article does not really fit in the scope of a machine learning conference. Moreover, I think that the article could benefit from an improvement in its writing (see below). Overall the introduction and the problem are well presented. The main result is also quite interesting: it justifies the use of the barycentric mapping as an "approximation" of the optimal Monge map. This result has many practical consequences, especially when the barycentric mapping is used as a proxy for the Monge map (as in [1,2,3], or in domain adaptation, where the barycentric mapping is at the heart of many methods, e.g. [4]). This opens, in my opinion, many practical perspectives. It also offers new justifications for the recent results concerning the estimation of $W_2^2$.

My main criticism concerning this article is that it does not really fit, in my opinion, in the scope of a machine learning conference but is more adapted to a "mathematical journal" format such as the Annals of Statistics. First, no experiments are given in the paper. I understand the purpose of purely theoretical papers, but I think that an ML paper should contain at least one or two small experiments validating/illustrating the different approaches when possible (see my last comment). More importantly, the majority of the paper's contributions are in fact in the supplementary material, which is exclusively made up of proofs that are quite specific and complex. So the paper is not really 8 pages long, but more like 20 pages of quite extensive mathematics. Consequently, given the time constraints inherent to NeurIPS, it seems to me quite reasonable to say that most ML reviewers cannot certify that all the proofs in such a paper are correct. Since these proofs are the main core of the paper, it is reasonable to think that it is difficult to judge the quality of the paper in such a setting. This argument would be invalid, however, if the above-mentioned proofs were written with care and with a bit of hand-holding. However, the paper contains a lot of proofs without context or discussion, and some "proofs by omission" based on other results. Indeed, the proofs are not self-contained, making the task of reviewing them very difficult, even for someone used to handling the tools of optimal transport. There are many calls in the proofs to results/equations of other papers without discussion or context. In the different proofs of the supplementary, there are no fewer than 12 calls to other results, which is, I think, a lot ("see Proposition 2 in [16]", "see Theorem 2.10 in [7]", "Note that by [15, Equation 26]"). Although it is common to write a proof by appealing to other results, this is usually done when the result is "somewhat known" or by explaining a bit of the outline of these results. In many cases here, these results are very specific and are not discussed.
It is also difficult to say, without reading the cited articles in full, whether the assumptions of the different theorems referred to are really applicable in these specific cases. For example, [13, Theorem 17] for Step I of the supplementary (line 95) invokes a theorem which itself has many assumptions, and it is difficult to see whether these assumptions are satisfied here, because no explanation or context is given. Some other proofs are not elaborated in much detail (lines 79-80): "The general strategy to bound the term in (A.20) is derived from some intermediate steps in the proofs of [5, Lemmas 3 and 4]. We still present a sketch here for completeness." Finally, the paper ends on a theorem without any conclusion, which I find rather clumsy. No discussion or perspective is given about the "Application" part (which is actually another theoretical part). For example, it is rather difficult to understand the interest of the "Nonparametric independence testing" section: what is "the permutation principle as is necessary for the usual HSIC"? Why is understanding "the local power of $\varphi_{n,\alpha}$ under a "changing sequence of alternatives converging to the null"" a practical, interesting problem? In this section, for example, it would have been valuable to include a small experiment showing the interest of the main result for this problem. Another question that could be interesting is the practical comparison between the barycentric mapping estimator and the "Sinkhorn divergence" one of (Chizat, 2020). It seems to me that it is somewhat harder to compute the barycentric mapping, since this is a standard OT problem, while the Sinkhorn divergence is easier due to the regularization and the complexity of Sinkhorn's algorithm. On the other hand, for real data, maybe the estimator based on the barycentric mapping would behave better? For all these reasons I rather recommend rejecting this paper. I find the results interesting but I don't think it is, as it stands, a suitable paper for a machine learning conference. I admit that some of my comments are quite subjective ("what an ML paper should be"), so I am willing to change my opinion.

[1] Michaël Perrot, Nicolas Courty, Rémi Flamary, Amaury Habrard. Mapping estimation for discrete optimal transport
[2] Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel. Large-Scale Optimal Transport and Mapping Estimation
[3] Elsa Cazelles, Felipe Tobar, Joaquin Fontbona. Streaming computation of optimal weak transport barycenters
[4] Nicolas Courty, Rémi Flamary, Amaury Habrard, Alain Rakotomamonjy. Joint Distribution Optimal Transportation for Domain Adaptation

----- After Rebuttal -----

I thank the authors for their response. After reading the rebuttal I decided to increase my score to 5. I agree that this paper proposes interesting new theoretical results, and I agree with the other reviewers on this point. However, I think it would benefit from a rewrite of the proofs (to be as self-contained as possible, since they are the core of the paper) and of the "Application" part, which does not seem to me to be detailed enough as it is. I also think that this paper deserves a conclusion in order to give some perspective on the different results. That is why I do not increase my final grade further. However, I do not object to the acceptance of this paper.
NIPS
Title Adaptive Gradient Quantization for Data-Parallel SGD

Abstract Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.

∗Equal contributions.

1 Introduction Stochastic gradient descent (SGD) and its variants are currently the method of choice for training deep models. Yet, models cannot always be trained on large datasets using a single computational node due to memory and scalability limitations. Data-parallel SGD is a remarkably scalable variant, in particular on multi-GPU systems [1–10]. However, despite its many advantages, distribution introduces new challenges for optimization algorithms. In particular, data-parallel SGD has a large communication cost due to the need to transmit potentially huge gradient vectors. Ideally, we want distributed optimization methods that match the performance of SGD on a single hypothetical super machine, while paying a negligible communication cost. A common approach to reducing the communication cost in data-parallel SGD is gradient compression and quantization [4, 11–16]. In full-precision data-parallel SGD, each processor broadcasts its locally computed stochastic gradient vector at every iteration, whereas in quantized data-parallel SGD, each processor compresses its stochastic gradient before broadcasting. Current quantization methods are either designed heuristically or fixed prior to training. Convergence rates in a stochastic optimization problem are controlled by the trace of the gradient covariance matrix, which is referred to as the gradient variance in this paper [17]. As Fig. 1 shows, no fixed method can be optimal throughout the entire training because the distribution of gradients changes. A quantization method that is optimal at the first iteration will not be optimal after only a single epoch. In this paper, we propose two adaptive methods for quantizing the gradients in data-parallel SGD. We study methods that are defined by a norm and a set of quantization levels. In Adaptive Level Quantization (ALQ), we minimize the excess variance of quantization given an estimate of the distribution of the gradients. In Adaptive Multiplier Quantization (AMQ), we minimize the same objective as ALQ by modelling quantization levels as exponentially spaced levels. AMQ solves for the optimal value of a single multiplier parametrizing the exponentially spaced levels.

1.1 Summary of contributions
• We propose two adaptive gradient quantization methods, ALQ and AMQ, in which processors update their compression methods in parallel.
• We establish an upper bound on the excess variance for any arbitrary sequence of quantization levels under general normalization that is tight in dimension, an upper bound on the expected number of communication bits per iteration, and strong convergence guarantees on a number of problems under standard assumptions. Our bounds hold for any adaptive method, including ALQ and AMQ.
• We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are significantly more robust to the choice of hyperparameters.²

1.2 Related work Adaptive quantization has been used for speech communication and storage [18]. In machine learning, several biased and unbiased schemes have been proposed to compress networks and gradients. Recently, lattice-based quantization has been studied for distributed mean estimation and variance reduction [19]. In this work, we focus on unbiased and coordinate-wise schemes to compress gradients. Alistarh et al. [20] proposed Quantized SGD (QSGD), focusing on the uniform quantization of stochastic gradients normalized to have unit Euclidean norm. Their experiments illustrate that a similar quantization method, where gradients are normalized to have unit L∞ norm, achieves better performance. We refer to this method as QSGDinf, or Qinf in short. Wen et al. [15] proposed TernGrad, which can be viewed as a special case of QSGDinf with three quantization levels. Ramezani-Kebrya et al. [21] proposed nonuniform quantization levels (NUQSGD) and demonstrated superior empirical results compared to QSGDinf. Horváth et al. [22] proposed natural compression and dithering schemes, where the latter is a special case of logarithmic quantization. There have been prior attempts at adaptive quantization methods. Zhang et al. [23] proposed ZipML, which is an optimal quantization method if all points to be quantized are known a priori. To find the optimal sequence of quantization levels, a dynamic program is solved whose computational and memory cost is quadratic in the number of points to be quantized, which in the case of gradients would correspond to their dimension. For this reason, ZipML is impractical for quantizing on the fly, and is in fact used for (offline) dataset compression. They also proposed an approximation where a subsampled set of points is used and proposed to scan the data once to find the subset. However, as we show in this paper, this one-time scan is not enough, as the distribution of stochastic gradients changes during the training. Zhang et al. [24] proposed LQ-Net, where weights and activations are quantized such that the inner products can be computed efficiently with bitwise operations. Compared to LQ-Net, our methods do not need additional memory for encoding vectors. Concurrent with our work, Fu et al. [25] proposed to quantize activations and gradients by modelling them with Weibull distributions. In comparison, our proposed methods accommodate general distributions. Further, our approach does not require any assumptions on the upper bound of the gradients.
²Open source code: http://github.com/tabrizian/learning-to-quantize

Algorithm 1: Adaptive data-parallel SGD. Loops are executed in parallel on each machine. At certain steps, each processor computes sufficient statistics of a parametric distribution to estimate the distribution of normalized coordinates.
Input: local data, parameter vector (local copy) $w_t$, learning rate $\alpha$, and set of update steps $U$
  for $t = 1$ to $T$ do
    if $t \in U$ then
      for $i = 1$ to $M$ do
        Compute sufficient statistics and update quantization levels $\ell$
    for $i = 1$ to $M$ do
      Compute $g_i(w_t)$, encode $c_{i,t} \leftarrow \mathrm{ENCODE}_\ell(g_i(w_t))$, and broadcast $c_{i,t}$
    for $j = 1$ to $M$ do
      Receive $c_{i,t}$ from each processor $i$ and decode $\hat{g}_i(w_t) \leftarrow \mathrm{DECODE}_\ell(c_{i,t})$
    Aggregate $w_{t+1} \leftarrow P_\Omega\big(w_t - \frac{\alpha}{M}\sum_{i=1}^M \hat{g}_i(w_t)\big)$

2 Preliminaries: data-parallel SGD
Consider the problem of training a model parametrized by a high-dimensional vector $w \in \mathbb{R}^d$. Let $\Omega \subseteq \mathbb{R}^d$ denote a closed and compact set. Our goal is to minimize $f : \Omega \to \mathbb{R}$. Assume we have access to unbiased stochastic gradients of $f$, denoted $g$, such that $\mathbb{E}[g(w)] = \nabla f(w)$ for all $w \in \Omega$. The update rule for full-precision SGD is given by $w_{t+1} = P_\Omega\big(w_t - \alpha\, g(w_t)\big)$, where $w_t$ is the current parameter vector, $\alpha$ is the learning rate, and $P_\Omega$ is the Euclidean projection onto $\Omega$. We consider data-parallel SGD, which is a synchronous and distributed framework consisting of $M$ processors. Each processor receives gradients from all other processors and aggregates them. In data-parallel SGD with compression, gradients are compressed by each processor before transmission and decompressed before aggregation [20–23]. A stochastic compression method is unbiased if the vector after decompression is in expectation the same as the original vector.

3 Adaptive quantization
In this section, we introduce novel adaptive compression methods that adapt during the training (Algorithm 1). Let $v \in \mathbb{R}^d$ be a vector we seek to quantize and $r_i = |v_i|/\|v\|$ be its normalized coordinates for $i = 1, \ldots, d$.³ Let $q_\ell(r) : [0, 1] \to [0, 1]$ denote a random quantization function applied to the normalized coordinate $r$ using adaptable quantization levels $\ell = [\ell_0, \ldots, \ell_{s+1}]^\top$, where $0 = \ell_0 < \ell_1 < \cdots < \ell_s < \ell_{s+1} = 1$. For $r \in [0, 1]$, let $\tau(r)$ denote the index of a level such that $\ell_{\tau(r)} \le r < \ell_{\tau(r)+1}$. Let $\rho(r) = (r - \ell_{\tau(r)})/(\ell_{\tau(r)+1} - \ell_{\tau(r)})$ be the relative distance of $r$ to level $\tau(r) + 1$. We define the random variable $h(r)$ such that $h(r) = \ell_{\tau(r)}$ with probability $1 - \rho(r)$ and $h(r) = \ell_{\tau(r)+1}$ with probability $\rho(r)$. We define the quantization of $v$ as $Q_\ell(v) \triangleq [q_\ell(v_1), \ldots, q_\ell(v_d)]^\top$, where $q_\ell(v_i) = \|v\| \cdot \mathrm{sign}(v_i) \cdot h(r_i)$ and $h = \{h(r_i)\}_{i=1,\ldots,d}$ are independent random variables. The encoding, ENCODE(v), of a stochastic gradient is the combined encoding of $\|v\|$ using a standard floating point encoding along with an optimal encoding of $h(r_i)$ and a binary encoding of $\mathrm{sign}(v_i)$ for each coordinate $i$. The decoding, DECODE, recovers the norm, $h(r_i)$, and the sign. Additional details of the encoding method are described in Appendix D. We define the variance of vector quantization to be the trace of the covariance matrix,
$\mathbb{E}_h\big[\|Q_\ell(v) - v\|_2^2\big] = \|v\|^2 \sum_{i=1}^{d} \sigma^2(r_i)$,   (1)
where $\sigma^2(r) = \mathbb{E}[(q_\ell(r) - r)^2]$ is the variance of quantization for a single coordinate, which is given by
$\sigma^2(r) = (\ell_{\tau(r)+1} - r)(r - \ell_{\tau(r)})$.   (2)

³In this section, we use $\|\cdot\|$ to denote a general $L_q$ norm with $q \ge 1$ for simplicity.
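As a concrete illustration of the quantizer $Q_\ell$ and the rounding variable $h(r)$ just defined, the following is a minimal NumPy sketch of our own; the authors' released code linked above may differ in details such as bucketing and the bit-level encoding.

```python
import numpy as np

def quantize(v, levels, q=2, rng=np.random.default_rng(0)):
    """Unbiased stochastic quantizer Q_l(v): normalize by the L_q norm,
    randomly round each normalized coordinate to one of the adaptable
    levels 0 = l_0 < ... < l_{s+1} = 1, and re-attach the norm and signs."""
    levels = np.asarray(levels, dtype=float)
    norm = np.linalg.norm(v, ord=q)
    if norm == 0.0:
        return np.zeros_like(v)
    r = np.abs(v) / norm                                  # normalized coordinates in [0, 1]
    # tau(r): index of the interval [l_tau, l_{tau+1}) containing r.
    tau = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[tau], levels[tau + 1]
    rho = (r - lo) / (hi - lo)                            # relative distance to the upper level
    h = np.where(rng.random(r.shape) < rho, hi, lo)       # h(r): rounds up w.p. rho(r)
    return norm * np.sign(v) * h

# Unbiasedness check: averaging many independent quantizations recovers v.
rng = np.random.default_rng(1)
v = rng.standard_normal(5_000)
levels = np.linspace(0.0, 1.0, 8)                         # 8 uniform levels (3 bits)
avg = np.mean([quantize(v, levels, rng=rng) for _ in range(200)], axis=0)
print("max deviation of the average from v:", float(np.max(np.abs(avg - v))))
```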
Let $v$ be a random vector corresponding to a stochastic gradient and let $h$ capture the randomness of quantization for this random vector as defined above. We define two minimization problems, expected variance and expected normalized variance minimization:
$\min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\big[\|Q_\ell(v) - v\|_2^2\big]$ and $\min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\big[\|Q_\ell(v) - v\|_2^2/\|v\|^2\big]$,
where $\mathcal{L} = \{\ell : \ell_j \le \ell_{j+1}\ \forall j,\ \ell_0 = 0,\ \ell_{s+1} = 1\}$ denotes the set of feasible solutions. We first focus on the problem of minimizing the expected normalized variance and then extend our methods to minimize the expected variance in Section 3.4. Let $F(r)$ denote the marginal cumulative distribution function (CDF) of a normalized coordinate $r$. Assuming the normalized coordinates $r_i$ are i.i.d. given $\|v\|$, the expected normalized variance minimization can be written as
$\min_{\ell \in \mathcal{L}} \Psi(\ell)$, where $\Psi(\ell) \triangleq \sum_{j=0}^{s} \int_{\ell_j}^{\ell_{j+1}} \sigma^2(r)\, dF(r)$.   (3)
The following theorem suggests that solving (3) is challenging in general; however, the sub-problem of optimizing a single level given the other levels can be solved efficiently in closed form. Proofs are provided in Appendix B.

Theorem 1 (Expected normalized variance minimization). Problem (3) is nonconvex in general. However, the optimal solution that minimizes one level given the other levels, $\min_{\ell_i} \Psi(\ell)$, is given by $\ell_i^* = \phi(\ell_{i-1}, \ell_{i+1})$, where
$\phi(a, c) = F^{-1}\!\left( F(c) - \int_a^c \frac{r - a}{c - a}\, dF(r) \right)$.   (4)

3.1 ALQ: Adapting individual levels using coordinate descent
Using the single-level update rule in Eq. (4), we iteratively adapt individual levels to minimize the expected normalized variance in (3). We denote the quantization levels at iteration $t$ by $\ell(t)$, starting from $t = 0$. The update rule is
$\ell_j(t+1) = \phi(\ell_{j-1}(t), \ell_{j+1}(t)) \quad \forall j = 1, \ldots, s$.   (5)
Performing the update rule above sequentially over coordinates $j$ is a form of coordinate descent (CD) that is guaranteed to converge to a local minimum. CD is particularly interesting because it does not involve any projection step onto the feasible set $\mathcal{L}$. In practice, we initialize the levels with either uniform levels [20] or the exponentially spaced levels proposed in [21]. We observe that starting from either initialization, CD converges in a small number of steps (fewer than 10).

3.2 Gradient descent
Computing $\nabla\Psi$ using Leibniz's rule [26], the gradient descent (GD) algorithm to solve (3) is based on the following update rule:
$\ell_j(t+1) = P_{\mathcal{L}}\!\left( \ell_j(t) - \eta(t)\, \frac{\partial \Psi(\ell(t))}{\partial \ell_j} \right)$, with
$\frac{\partial \Psi(\ell(t))}{\partial \ell_j} = \int_{\ell_{j-1}(t)}^{\ell_j(t)} (r - \ell_{j-1}(t))\, dF(r) - \int_{\ell_j(t)}^{\ell_{j+1}(t)} (\ell_{j+1}(t) - r)\, dF(r)$   (6)
for $t = 0, 1, \ldots$ and $j = 1, \ldots, s$. Note that the projection step in Eq. (6) is itself a convex optimization problem. We propose a projection-free modification of the GD update rule to systematically ensure $\ell \in \mathcal{L}$. Let $\delta_j(t) = \min\{\ell_j(t) - \ell_{j-1}(t),\ \ell_{j+1}(t) - \ell_j(t)\}$ denote the minimum distance between two neighbouring levels at iteration $t$ for $j = 1, \ldots, s$. If the change in level $j$ is bounded by $\delta_j(t)/2$, it is guaranteed that $\ell \in \mathcal{L}$. We propose to replace Eq. (6) with the following update rule:
$\ell_j(t+1) = \ell_j(t) - \mathrm{sign}\!\left(\frac{\partial \Psi(\ell(t))}{\partial \ell_j}\right) \min\!\left\{ \eta(t)\left|\frac{\partial \Psi(\ell(t))}{\partial \ell_j}\right|,\ \frac{\delta_j(t)}{2} \right\}$.   (7)

3.3 AMQ: Exponentially spaced levels
We now focus on $\ell = [-1, -p, \ldots, -p^s, p^s, \ldots, p, 1]^\top$, i.e., exponentially spaced levels with symmetry. We can update $p$ efficiently by gradient descent using the first-order derivative
$\frac{1}{2}\frac{d\Psi(p)}{dp} = \int_0^{p^s} 2s\, p^{2s-1}\, dF(r) + \sum_{j=0}^{s-1} \int_{p^{j+1}}^{p^j} \big[(j p^{j-1} + (j+1) p^j)\, r - (2j+1) p^{2j}\big]\, dF(r)$.   (8)
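To make the single-level update (4) and the ALQ coordinate-descent sweep (5) concrete, below is a small sketch of our own in which the unknown CDF $F$ is replaced by the empirical CDF of observed normalized coordinates; the paper instead fits a parametric model (e.g., a truncated normal) via sufficient statistics, so this is only an illustration of the update itself, not the authors' procedure.

```python
import numpy as np

def single_level_update(a, c, r_sorted):
    """Closed-form minimizer of one level given its neighbours a < c:
    l* = F^{-1}( F(c) - E[(r - a)/(c - a); a <= r <= c] ), with F replaced
    by the empirical CDF of the sorted normalized coordinates r_sorted."""
    n = len(r_sorted)
    F_c = np.searchsorted(r_sorted, c, side="right") / n
    in_cell = r_sorted[(r_sorted >= a) & (r_sorted <= c)]
    mass = np.sum((in_cell - a) / (c - a)) / n
    target = np.clip(F_c - mass, 0.0, 1.0 - 1e-12)
    lev = r_sorted[int(target * n)]              # empirical quantile F^{-1}(target)
    return float(np.clip(lev, a, c))             # the exact minimizer lies in [a, c]

def alq_coordinate_descent(levels, r_samples, n_sweeps=10):
    """ALQ as coordinate descent: sweep over the interior levels and update
    each one in closed form while the others are kept fixed."""
    levels = np.array(levels, dtype=float)
    r_sorted = np.sort(r_samples)
    for _ in range(n_sweeps):
        for j in range(1, len(levels) - 1):
            levels[j] = single_level_update(levels[j - 1], levels[j + 1], r_sorted)
    return levels

# Example: adapt 8 levels to the normalized coordinates of a Gaussian gradient.
rng = np.random.default_rng(0)
v = rng.standard_normal(10_000)
r = np.abs(v) / np.linalg.norm(v)
print(np.round(alq_coordinate_descent(np.linspace(0.0, 1.0, 8), r), 4))
```

In this toy example the adapted levels end up concentrated near zero, qualitatively in line with the behaviour reported for Fig. 6 later in the paper.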
3.4 Expected variance minimization
In this section, we consider the problem of minimizing the expected variance of quantization:
$\min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\big[\|Q_\ell(v) - v\|_2^2\big]$.   (9)
To solve the expected variance minimization problem, suppose that we observe $N$ stochastic gradients $\{v_1, \ldots, v_N\}$. Let $F_n(r)$ and $p_n(r)$ denote the CDF and PDF of a normalized coordinate conditioned on observing $\|v_n\|$, respectively. By taking into account the randomness in $\|v\|$ and using the law of total expectation, an approximation of the expected variance in (9) is given by
$\mathbb{E}\big[\|Q_\ell(v) - v\|_2^2\big] \approx \frac{1}{N}\sum_{n=1}^{N} \|v_n\|^2 \sum_{j=0}^{s} \int_{\ell_j}^{\ell_{j+1}} \sigma^2(r)\, dF_n(r)$.   (10)
The optimal levels to minimize Eq. (10) are a solution to the following problem:
$\ell^* = \operatorname*{argmin}_{\ell \in \mathcal{L}} \sum_{n=1}^{N} \|v_n\|^2 \sum_{j=0}^{s} \int_{\ell_j}^{\ell_{j+1}} \sigma^2(r)\, dF_n(r) = \operatorname*{argmin}_{\ell \in \mathcal{L}} \sum_{j=0}^{s} \int_{\ell_j}^{\ell_{j+1}} \sigma^2(r)\, dF(r)$,
where $\ell^* = [\ell_1^*, \ldots, \ell_s^*]^\top$ and $F(r) = \sum_{n=1}^{N} \lambda_n F_n(r)$ is the weighted sum of the conditional CDFs with weights $\lambda_n = \|v_n\|^2 / \sum_{n'=1}^{N} \|v_{n'}\|^2$. Note that we can accommodate both normal and truncated normal distributions by substituting the associated expressions into $p_n(r)$ and $F_n(r)$. Exact update rules and an analysis of the computational complexity of ALQ, GD, and AMQ are discussed in Appendix C.

4 Theoretical guarantees
One can alternatively design quantization levels to minimize the worst-case variance. However, compared to an optimal scheme, this worst-case scheme increases the expected variance by $\Omega(d)$, which is prohibitive in deep networks. We quantify the gap in Appendix E. Proofs are in the appendices. A stochastic gradient has a second-moment upper bound $B$ when $\mathbb{E}[\|g(w)\|_2^2] \le B$ for all $w \in \Omega$. Similarly, it has a variance upper bound $\sigma^2$ when $\mathbb{E}[\|g(w) - \nabla f(w)\|_2^2] \le \sigma^2$ for all $w \in \Omega$. We consider a general adaptively quantized SGD (AQSGD) algorithm, described in Algorithm 1, where compression schemes are updated over the course of training.⁴ Many convergence results in stochastic optimization rely on a variance bound. We establish such a variance bound for our adaptive methods. Further, we verify that these optimization results can be made to rely only on the average variance. In the following, we provide theoretical guarantees for the AQSGD algorithm, obtain variance and code-length bounds, and give convergence guarantees for convex, nonconvex, and momentum-based variants of AQSGD. The analysis of the nonadaptive methods in [20–23] can be considered as special cases of our theorems with fixed levels over the course of training. A naive adoption of available convergence guarantees results in having worst-case variance bounds over the course of training. In this paper, we show that an average variance bound can be applied on a number of problems. Under general normalization, we first obtain a variance upper bound for arbitrary levels, in particular for those obtained adaptively.

⁴Our results hold for any adaptive method, including ALQ and AMQ.

Theorem 2 (Variance bound). Let $v \in \mathbb{R}^d$ and $q \ge 1$. The quantization of $v$ under $L_q$ normalization satisfies $\mathbb{E}[Q_\ell(v)] = v$. Furthermore, we have
$\mathbb{E}\big[\|Q_\ell(v) - v\|_2^2\big] \le \epsilon_Q \|v\|_2^2$,   (11)
where $\epsilon_Q = \frac{(\ell_{j^*+1}/\ell_{j^*} - 1)^2}{4\,(\ell_{j^*+1}/\ell_{j^*})} + \inf_{0<p<1} K_p\, \ell_1^{\,2-p}\, d^{\frac{2-p}{\min\{q,2\}}}$, with $j^* = \operatorname{argmax}_{1 \le j \le s} \ell_{j+1}/\ell_j$ and $K_p$ a constant that depends only on $p$.

Theorem 2 implies that if $g(w)$ is a stochastic gradient with a second-moment bound $\eta$, then $Q_\ell(g(w))$ is a stochastic gradient with a variance upper bound $\epsilon_Q \eta$. Note that, as long as the maximum ratio of two consecutive levels does not change, the variance upper bound decreases with the number of quantization levels. In addition, our bound matches the known $\Omega(\sqrt{d})$ lower bound in [27].
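As a quick numerical illustration of the role of the level design in the variance bound of Theorem 2 (our own check, not an experiment from the paper), the sketch below compares the Monte Carlo normalized variance $\mathbb{E}\|Q_\ell(v) - v\|_2^2/\|v\|_2^2$ of uniform and exponential level sets, alongside the level-ratio term of $\epsilon_Q$; the quantizer is re-implemented inline so the snippet runs on its own. For large $d$, coarse uniform levels leave most normalized coordinates below the smallest nonzero level, and the dimension-dependent part of the variance dominates.

```python
import numpy as np

def normalized_variance(levels, d=10_000, trials=20, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of E[||Q_l(v) - v||_2^2 / ||v||_2^2] under L2 normalization."""
    levels = np.asarray(levels, float)
    out = []
    for _ in range(trials):
        v = rng.standard_normal(d)
        r = np.abs(v) / np.linalg.norm(v)
        tau = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
        lo, hi = levels[tau], levels[tau + 1]
        h = np.where(rng.random(d) < (r - lo) / (hi - lo), hi, lo)
        out.append(np.sum((h - r) ** 2))          # equals ||Q(v) - v||^2 / ||v||^2
    return float(np.mean(out))

def ratio_term(levels):
    """Level-ratio term of epsilon_Q: (l_{j*+1}/l_{j*} - 1)^2 / (4 l_{j*+1}/l_{j*})."""
    levels = np.asarray(levels, float)
    rho = np.max(levels[2:] / levels[1:-1])       # skip l_0 = 0
    return (rho - 1) ** 2 / (4 * rho)

uniform = np.linspace(0.0, 1.0, 9)                                 # 8 uniform bins
expo = np.array([0.0] + [0.5 ** k for k in range(7, -1, -1)])      # 0, p^7, ..., p, 1 with p = 0.5
for name, lev in [("uniform", uniform), ("exponential", expo)]:
    print(name, "MC normalized variance:", round(normalized_variance(lev), 3),
          "| ratio term:", round(ratio_term(lev), 3))
```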
Theorem 3 (Code-length bound). Let $v \in \mathbb{R}^d$ and $q \ge 1$. The expectation $\mathbb{E}[|\mathrm{ENCODE}(v)|]$ of the number of communication bits needed to transmit $Q_\ell(v)$ under $L_q$ normalization is bounded by
$\mathbb{E}[|\mathrm{ENCODE}(v)|] \le b + n_{\ell_1,d} + d\,(H(L) + 1) \le b + n_{\ell_1,d} + d\,(\log_2(s+2) + 1)$,   (12)
where $b$ is a constant, $n_{\ell_1,d} = \min\{\ell_1^{-q} + d^{1-1/q}\,\ell_1^{-1},\ d\}$, $H(L)$ is the entropy of $L$ in bits, and $L$ is a random variable with the probability mass function given by
$\Pr(\ell_j) = \int_{\ell_{j-1}}^{\ell_j} \frac{r - \ell_{j-1}}{\ell_j - \ell_{j-1}}\, dF(r) + \int_{\ell_j}^{\ell_{j+1}} \frac{\ell_{j+1} - r}{\ell_{j+1} - \ell_j}\, dF(r)$ for $j = 1, \ldots, s$.
In addition, we have
$\Pr(\ell_0 = 0) = \int_0^{\ell_1} \Big(1 - \frac{r}{\ell_1}\Big)\, dF(r)$ and $\Pr(\ell_{s+1} = 1) = \int_{\ell_s}^{1} \frac{r - \ell_s}{1 - \ell_s}\, dF(r)$.

Theorem 3 provides a bound on the expected number of communication bits to encode the quantized stochastic gradients. As expected, the upper bound in (12) increases monotonically with $d$ and $s$. We can combine the variance and code-length upper bounds and obtain convergence guarantees for AQSGD when applied to various learning problems for which we have convergence guarantees for full-precision SGD under standard assumptions. Let $\{\ell^{(1)}, \ldots, \ell^{(K)}\}$ denote the set of quantization levels that AQSGD experiences on the optimization trajectory. Suppose that $\ell^{(k)}$ is used for $T_k$ iterations with $\sum_{k=1}^K T_k = T$. For each particular $\ell^{(k)}$, we can obtain the corresponding variance bound $\epsilon_{Q,k}$ by substituting $\ell^{(k)}$ into (11). Then the average variance upper bound is given by $\epsilon_Q = \sum_{k=1}^K T_k\, \epsilon_{Q,k}/T$. For each particular $\ell^{(k)}$, we can obtain the corresponding expected code-length bound $N_{Q,k}$ by substituting the random variable $L_k$ into (12). The average expected code-length bound is given by $N_Q = \sum_{k=1}^K T_k\, N_{Q,k}/T$. On convex problems, convergence guarantees can be established along the lines of [17, Theorem 6.1].

Theorem 4 (AQSGD for nonsmooth convex optimization). Let $f : \Omega \to \mathbb{R}$ denote a convex function and let $R^2 \triangleq \sup_{w \in \Omega} \|w - w_0\|_2^2$. Let $\hat{B} = (1 + \epsilon_Q)B$ and $f^* = \inf_{w \in \Omega} f(w)$. Suppose that AQSGD is executed for $T$ iterations with a learning rate $\alpha = RM/(\hat{B}\sqrt{T})$ on $M$ processors, each with access to independent stochastic gradients of $f$ with a second-moment bound $B$, such that the quantization levels are updated $K$ times, where $\ell^{(k)}$, with variance bound $\epsilon_{Q,k}$ and code-length bound $N_{Q,k}$, is used for $T_k$ iterations. Then AQSGD satisfies
$\mathbb{E}\Big[f\Big(\frac{1}{T}\sum_{t=0}^{T} w_t\Big)\Big] - f^* \le R\hat{B}/(M\sqrt{T})$.
In addition, AQSGD requires at most $N_Q$ communication bits per iteration in expectation.

In Appendix H and Appendix I, we obtain convergence guarantees on nonconvex problems and for momentum-based variants of AQSGD under standard assumptions, respectively. Theoretical guarantees for levels with symmetry are established in Appendix J.

5 Experimental evaluation
In this section, we showcase the effectiveness of our adaptive quantization methods in speeding up the training of deep models. We compare our methods to the following baselines: single-GPU SGD (SGD), full-precision multi-GPU SGD (SuperSGD), uniform levels under L∞ normalization (QSGDinf) [20], ternary levels under L∞ normalization (TRN) [15], and exponential levels under L2 normalization with exponential factor p = 0.5 (NUQSGD) [21, 22]. We present results for the following variations of our proposed methods: ALQ and AMQ (with norm adjustments as in Section 3.4), and their normalized variations ALQ-N and AMQ-N (Sections 3.1 and 3.3). We present full training results on ImageNet in Appendix K, along with additional experimental details. We compare methods in terms of the number of training iterations, which is independent of a particular distributed setup. In Table 1, we present results for training ResNet-32 and ResNet-110 [28] on CIFAR-10 [29], and ResNet-18 on ImageNet [30].
We simulate training with 4 GPUs on a single GPU by quantizing and dequantizing the gradient from 4 mini-batches in each training iteration. These simulations allow us to compare the performance of quantization methods to the hypothetical full-precision SuperSGD. All quantization methods studied in this section share two hyperparameters: the number of bits (log2 of the number of quantization levels) and a bucket size. A common trick used in normalized quantization is to encode and decode a high-dimensional vector in buckets, such that each coordinate is normalized by the norm of its corresponding bucket instead of the norm of the entire vector [20]. The bucket size controls the trade-off between extra communication cost and loss of precision. With a small bucket size, there are more bucket norms to be communicated, while with a large bucket size, we lose numerical precision as a result of dividing each coordinate by a large number. In Section 5.1, we provide an empirical study of the hyperparameters; a minimal sketch of this bucketing trick is given below.

Matching the accuracy of SuperSGD. Using only 3 bits (8 levels), our adaptive methods match the performance of SuperSGD on CIFAR-10 and close the gap on ImageNet (bold in Table 1). Our most flexible method, ALQ, achieves the best overall performance on ImageNet, and the gap on CIFAR-10 with ALQ-N is less than 0.3%. There is at least a 1.4% gap between our best performing method and previous work in training each model. To the best of our knowledge, matching the validation loss of SuperSGD has not been achieved in any previous work using only 3 bits. Fig. 3 shows the test loss and Fig. 4 shows the average gradient variance, where the average is taken over gradient coordinates. Our adaptive methods successfully achieve lower variance during training.

Comparison on the trajectory of SGD. Fig. 5 shows the average variance on the optimization trajectory of a single GPU without quantization. This graph provides a fairer comparison of the quantization error of different methods, decoupled from their impact on the optimization trajectory. ALQ effectively finds an improved set of levels that reduces the variance in quantization. ALQ matches the variance of SuperSGD on ResNet-110 (Fig. 5b). In Figs. 5b and 5c, the variance of QSGDinf is as high as that of TRN in the first half of training. This shows that extra levels (8 uniform levels) do not perform better unless designed carefully. As expected, the variance of SuperSGD is always smaller than the variance of SGD by a constant factor equal to the number of GPUs.

Negligible computational overhead. Our adaptive methods have similar per-step computation and communication cost compared to previous methods. On ImageNet, we save at least 60 hours from 95 hours of training and add only an additional cost of at most 10 minutes in total to adapt the quantization. For bucket sizes 8192 and 16384 and the 3–8 bits used in our experiments, the per-step cost relative to SuperSGD (32 bits) is 21–25% for ResNet-18 on ImageNet and 32–36% for ResNet-50. That is the same as the cost of NUQSGD and QSGDinf without additional coding or pruning with the same number of bits and bucket sizes. The cost of the additional update specific to ALQ is 0.4–0.5% of the total training time. In Appendix K.3, we provide tables with detailed timing results for varying bucket sizes and bits.
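To illustrate the per-bucket normalization trick and the simulated multi-GPU setup described above, here is a small self-contained sketch of our own (not the released implementation): the gradient is split into fixed-size buckets, each bucket is quantized and dequantized using its own norm, and the four decoded mini-batch gradients are averaged as in the simulation. The bucket size, level set, and error metric are illustrative choices.

```python
import numpy as np

def bucketed_quantize_dequantize(g, levels, bucket_size=8192, rng=np.random.default_rng()):
    """Quantize a gradient bucket by bucket: each bucket is normalized by its own
    L2 norm, so one extra float (the bucket norm) is communicated per bucket."""
    levels = np.asarray(levels, dtype=float)
    out = np.empty_like(g)
    for start in range(0, len(g), bucket_size):
        b = g[start:start + bucket_size]
        norm = np.linalg.norm(b)
        if norm == 0.0:
            out[start:start + bucket_size] = 0.0
            continue
        r = np.abs(b) / norm
        tau = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
        lo, hi = levels[tau], levels[tau + 1]
        h = np.where(rng.random(b.shape) < (r - lo) / (hi - lo), hi, lo)
        out[start:start + bucket_size] = norm * np.sign(b) * h
    return out

# Simulated 4-GPU step: average the quantize-dequantize of 4 mini-batch gradients.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(100_000) * 1e-2 for _ in range(4)]
levels = np.linspace(0.0, 1.0, 8)                 # 3 bits (8 levels)
agg = np.mean([bucketed_quantize_dequantize(g, levels, rng=rng) for g in grads], axis=0)
true = np.mean(grads, axis=0)
print("relative aggregation error:", np.linalg.norm(agg - true) / np.linalg.norm(true))
```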
5.1 Hyperparameter studies
Fig. 6 shows the quantization levels for each method at the end of training ResNet-32 on CIFAR-10. The quantization levels for our adaptive methods are more concentrated near zero. In Figs. 7a and 7b, we study the impact of the bucket size and the number of bits on the best validation accuracy achieved by the quantization methods. Adaptive levels are the best quantization methods across all values of bucket size and number of bits; in particular, ALQ and ALQ-N are the best performing methods throughout. The good performance of ALQ-N is unexpected, as it suggests that quantization for vectors with different norms can be shared. In practice, ALQ-N is easier to implement and faster to update compared to ALQ. We observe a similar relation between the AMQ and AMQ-N methods. Adaptive multiplier methods show inferior performance to adaptive level methods when the bucket size grows significantly (above 10⁴) or shrinks (below 100), as well as for very few bits (2). Note that there exists a known generalization gap between SGD and SuperSGD on ResNet-110 that can be closed by extensive hyperparameter tuning [31]. Our adaptive methods reduce this gap with standard hyperparameters. The bucket size significantly impacts non-adaptive methods. For bucket size 100 and 3 bits, NUQSGD performs nearly as well as the adaptive methods but quickly loses accuracy as the bucket size grows or shrinks. QSGDinf stays competitive for a wider range of bucket sizes but still loses accuracy faster than other methods. This shows the impact of bucketing as an understudied trick in evaluating quantization methods. Adaptive methods successfully scale to a large number of GPUs. Table 2 shows the results of training ResNet-32 on CIFAR-10 using 16 and 32 GPUs. Note that with 32 GPUs, TRN achieves almost the accuracy of SuperSGD with only 3 quantization levels, which is expected because TRN is unbiased and the variance of the aggregated gradient decreases linearly with the number of GPUs.

6 Conclusions
To reduce the communication costs of data-parallel SGD, we introduce two adaptively quantized methods, ALQ and AMQ, that learn and adapt the gradient quantization method on the fly. In addition to the quantization levels, in both methods processors learn and adapt their coding methods in parallel by efficiently computing sufficient statistics of a parametric distribution. We establish tight upper bounds on the excess variance for any arbitrary sequence of quantization levels under general normalization and on the expected number of communication bits per iteration. Under standard assumptions, we establish a number of convergence guarantees for our adaptive methods. We demonstrate the superiority of ALQ and AMQ over nonadaptive methods empirically on deep models and large datasets.

Broader impact
This work provides additional understanding of the statistical behaviour of deep machine learning models. We aim to train deep models using the popular SGD algorithm as fast as possible without compromising the learning outcome. As the amount of data gathered through the web and a plethora of sensors deployed everywhere (e.g., IoT applications) is drastically increasing, the design of efficient machine learning algorithms that are capable of processing large-scale data in a reasonable time can improve everyone's quality of life. Our compression schemes can be used in Federated Learning settings, where a deep model is trained on data distributed among multiple owners without exposing that data. Developing privacy-preserving learning algorithms is an integral part of responsible and ethical AI. However, the long-term impacts of our schemes may depend on how machine learning is used in society.
Acknowledgement The authors would like to thank Blair Bilodeau, David Fleet, Mufan Li, and Jeffrey Negrea for helpful discussions. FF was supported by an OGS Scholarship. DA and IM were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). DMR was supported by an NSERC Discovery Grant. ARK was supported by an NSERC Postdoctoral Fellowship. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
1. What is the main contribution of the paper, and how does it differ from previous works on distributed neural network training?
2. What are the strengths of the proposed method, particularly in terms of its optimization and dynamic update capabilities?
3. What are the weaknesses of the paper, especially regarding its experimental design and results?
4. How would you address the concerns raised about the experiment section, such as the large gap between SGD and SuperSGD, the controversial behavior of the validation accuracy in Figure 7(a), and the misleading representation of TRN in Figures 3, 4, and 7(b)?
5. Are there any questions or concerns you have regarding the theoretical study and analysis presented in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes adaptive quantization of gradients on the fly to reduce communication in distributed neural network training. Unlike quantization with fixed discrete values in previous work, the paper optimizes those discrete values on the fly based on the dynamic statistics of the gradient distribution. The problem formulation is sound and the method is theoretically well studied. However, the experiments and comparisons should be improved to solidify this paper.

Strengths 1. the method is well motivated (optimized quantization values and dynamic update of those values when the gradient distribution shifts); 2. the theoretical study is sound.

Weaknesses The most significant limitation of this work is its experimental part. The following aspects should be fixed to support the claims: 1. why is there a large gap between SGD and SuperSGD? Generally, the accuracy should be close whether the model is trained on a single GPU or on multiple GPUs. Was it because different total mini-batch sizes were used? 2. clarify if/how weights are communicated through the network, and how gradients are exchanged. 3. In Figure 7(a), the validation accuracy drops when the bucket size is very small. This is counterintuitive: when the bucket size is smaller, the variance is smaller, and thus the accuracy should be closer to full-precision SGD. In the extreme case when the bucket size is 1, quantized SGD is floating-point SGD. In Figure 7(b), why is the TRN line flat when increasing the bit-width? 4. putting TRN in the context of "3 quantization bits" is misleading. TRN only uses "ternary levels" (three levels), as stated in Line 209. Please clarify this in Table 1, Figure 3, Figure 4 and Figure 7(b), e.g., "All methods use 3 quantization bits". Also please clarify whether the "gradient clipping" in TRN was used in the implementation. "Gradient clipping" is also an adaptive processing step that reduces the variance of gradients by limiting the maximum norm.
NIPS
Title Adaptive Gradient Quantization for Data-Parallel SGD Abstract Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters. N/A 1 Introduction Stochastic gradient descent (SGD) and its variants are currently the method of choice for training deep models. Yet, large datasets cannot always be trained on a single computational node due to memory and scalability limitations. Data-parallel SGD is a remarkably scalable variant, in particular on multi-GPU systems [1–10]. However, despite its many advantages, distribution introduces new challenges for optimization algorithms. In particular, data-parallel SGD has large communication cost due to the need to transmit potentially huge gradient vectors. Ideally, we want distributed optimization methods that match the performance of SGD on a single hypothetical super machine, while paying a negligible communication cost. A common approach to reducing the communication cost in data-parallel SGD is gradient compression and quantization [4, 11–16]. In full-precision data-parallel SGD, each processor broadcasts its locally computed stochastic gradient vector at every iteration, whereas in quantized data-parallel SGD, each processor compresses its stochastic gradient before broadcasting. Current quantization methods are either designed heuristically or fixed prior to training. Convergence rates in a stochastic optimization problem are controlled by the trace of the gradient covariance matrix, which is referred ⇤Equal contributions. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. as the gradient variance in this paper [17]. As Fig. 1 shows, no fixed method can be optimal throughout the entire training because the distribution of gradients changes. A quantization method that is optimal at the first iteration will not be optimal after only a single epoch. In this paper, we propose two adaptive methods for quantizing the gradients in data-parallel SGD. We study methods that are defined by a norm and a set of quantization levels. In Adaptive Level Quantization (ALQ), we minimize the excess variance of quantization given an estimate of the distribution of the gradients. In Adaptive Multiplier Quantization (AMQ), we minimize the same objective as ALQ by modelling quantization levels as exponentially spaced levels. AMQ solves for the optimal value of a single multiplier parametrizing the exponentially spaced levels. 1.1 Summary of contributions • We propose two adaptive gradient quantization methods, ALQ and AMQ, in which processors update their compression methods in parallel. 
• We establish an upper bound on the excess variance for any arbitrary sequence of quantization levels under general normalization that is tight in dimension, an upper bound on the expected number of communication bits per iteration, and strong convergence guarantees on a number of problems under standard assumptions. Our bounds hold for any adaptive method, including ALQ and AMQ. • We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are significantly more robust to the choice of hyperparameters.2 1.2 Related work Adaptive quantization has been used for speech communication and storage [18]. In machine learning, several biased and unbiased schemes have been proposed to compress networks and gradients. Recently, lattice-based quantization has been studied for distributed mean estimation and variance reduction [19]. In this work, we focus on unbiased and coordinate-wise schemes to compress gradients. Alistarh et al. [20] proposed Quantized SGD (QSGD) focusing on the uniform quantization of stochastic gradients normalized to have unit Euclidean norm. Their experiments illustrate a similar quantization method, where gradients are normalized to have unit L1 norm, achieves better performance. We refer to this method as QSGDinf or Qinf in short. Wen et al. [15] proposed TernGrad, which can be viewed as a special case of QSGDinf with three quantization levels. Ramezani-Kebrya et al. [21] proposed nonuniform quantization levels (NUQSGD) and demonstrated superior empirical results compared to QSGDinf. Horváth et al. [22] proposed natural compression and dithering schemes, where the latter is a special case of logarithmic quantization. There have been prior attempts at adaptive quantization methods. Zhang et al. [23] proposed ZipML, which is an optimal quantization method if all points to be quantized are known a priori. To find the optimal sequence of quantization levels, a dynamic program is solved whose computational and memory cost is quadratic in the number of points to be quantized, which in the case of gradients would correspond to their dimension. For this reason, ZipML is impractical for quantizing on the fly, and is in fact used for (offline) dataset compression. They also proposed an approximation where a subsampled set of points is used and proposed to scan the data once to find the subset. However, as we show in this paper, this one-time scan is not enough as the distribution of stochastic gradients changes during the training. Zhang et al. [24] proposed LQ-Net, where weights and activations are quantized such that the inner products can be computed efficiently with bitwise operations. Compared to LQ-Net, our methods do not need additional memory for encoding vectors. Concurrent with our work, Fu et al. [25] proposed to quantize activations and gradients by modelling them with Weibull distributions. In comparison, our proposed methods accommodate general distributions. Further, our approach does not require any assumptions on the upper bound of the gradients. 
2Open source code: http://github.com/tabrizian/learning-to-quantize Input: Local data, parameter vector (local copy) wt, learning rate ↵, and set of update steps U 1 for t = 1 to T do 2 if t 2 U then 3 for i = 1 to M do 4 Compute sufficient statistics and update quantization levels `; 5 for i = 1 to M do 6 Compute gi(wt), encode ci,t ENCODE` gi(wt) , and broadcast ci,t; 7 for j = 1 to M do 8 Receive ci,t from each processor i and decode ĝi(wt) DECODE` ci,t ; 9 Aggregate wt+1 P⌦ wt ↵M PM i=1 ĝi(wt) ; Algorithm 1: Adaptive data-parallel SGD. Loops are executed in parallel on each machine. At certain steps, each processor computes sufficient statistics of a parametric distribution to estimate distribution of normalized coordinates. 2 Preliminaries: data-parallel SGD Consider the problem of training a model parametrized by a high-dimensional vector w 2 Rd. Let ⌦ ✓ Rd denote a closed and compact set. Our goal is to minimize f : ⌦ ! R. Assume we have access to unbiased stochastic gradients of f , which is g, such that E[g(w)] = rf(w) for all w 2 ⌦. The update rule for full-precision SGD is given by wt+1 = P⌦ wt ↵g(wt)) where wt is the current parameter vector, ↵ is the learning rate, and P⌦ is the Euclidean projection onto ⌦. We consider data-parallel SGD, which is a synchronous and distributed framework consisting of M processors. Each processor receives gradients from all other processors and aggregates them. In data-parallel SGD with compression, gradients are compressed by each processor before transmission and decompressed before aggregation [20–23]. A stochastic compression method is unbiased if the vector after decompression is in expectation the same as the original vector. 3 Adaptive quantization In this section, we introduce novel adaptive compression methods that adapt during the training (Algorithm 1). Let v 2 Rd be a vector we seek to quantize and ri = |vi|/kvk be its normalized coordinates for i = 1, . . . , d.3 Let q`(r) : [0, 1] ! [0, 1] denote a random quantization function applied to the normalized coordinate r using adaptable quantization levels, ` = [`0, . . . , `s+1]>, where 0 = `0 < `1 < · · · < `s < `s+1 = 1. For r 2 [0, 1], let ⌧(r) denote the index of a level such that `⌧(r) r < `⌧(r)+1. Let ⇢(r) = (r `⌧(r))/(`⌧(r)+1 `⌧(r)) be the relative distance of r to level ⌧(r) + 1. We define the random variable h(r) such that h(r) = `⌧(r) with probability 1 ⇢(r) and h(r) = `⌧(r)+1 with probability ⇢(r). We define the quantization of v as Q`(v) , [q`(v1), . . . , q`(vd)]> where q`(vi) = kvk · sign(vi) · h(ri) and h = {h(ri)}i=1,...,d are independent random variables. The encoding, ENCODE(v), of a stochastic gradient is the combined encoding of kvk using a standard floating point encoding along with an optimal encoding of h(ri) and binary encoding of sign(vi) for each coordinate i. The decoding, DECODE, recovers the norm, h(ri), and the sign. Additional details of the encoding method are described in Appendix D. We define the variance of vector quantization to be the trace of the covariance matrix, Eh[kQ`(v) vk22] = kvk2 dX i=1 2(ri), (1) where 2(r) = E[(q`(r) r)2] is the variance of quantization for a single coordinate that is given by 2(r) = (`⌧(r)+1 r)(r `⌧(r)). (2) 3In this section, we use k · k to denote a general Lq norm with q 1 for simplicity. Let v be a random vector corresponding to a stochastic gradient and h capture the randomness of quantization for this random vector as defined above. 
We define two minimization problems, expected variance and expected normalized variance minimization: min `2L Ev,h ⇥ kQ`(v) vk22 ⇤ and min `2L Ev,h ⇥ kQ`(v) vk22/kvk2 ⇤ , where L = {` : `j `j+1, 8 j, `0 = 0, `s+1 = 1} denotes the set of feasible solutions. We first focus on the problem of minimizing the expected normalized variance and then extend our methods to minimize the expected variance in Section 3.4. Let F (r) denote the marginal cumulative distribution function (CDF) of a normalized coordinate r. Assuming normalized coordinates ri are i.i.d. given kvk, the expected normalized variance minimization can be written as min `2L (`), where (`) , sX j=0 Z `j+1 `j 2(r) dF (r). (3) The following theorem suggests that solving (3) is challenging in general; however, the sub-problem of optimizing a single level given other levels can be solved efficiently in closed form. Proofs are provided in Appendix B. Theorem 1 (Expected normalized variance minimization). Problem (3) is nonconvex in general. However, the optimal solution to minimize one level given other levels, min`i (`), is given by `⇤i = (`i 1, `i+1), where (a, c) = F 1 ✓ F (c) Z c a r a c a dF (r) ◆ . (4) 3.1 ALQ: Adapting individual levels using coordinate descent Using the single level update rule in Eq. (4) we iteratively adapt individual levels to minimize the expected normalized variance in (3). We denote quantization levels at iteration t by `(t) starting from t = 0. The update rule is `j(t+ 1) = (`j 1(t), `j+1(t)) 8j = 1, . . . , s . (5) Performing the update rule above sequentially over coordinates j is a form of coordinate descent (CD) that is guaranteed to converge to a local minima. CD is particularly interesting because it does not involve any projection step to the feasible set L. In practice, we initialize the levels with either uniform levels [20] or exponentially spaced levels proposed in [21]. We observe that starting from either initialization CD converges in small number of steps (less than 10). 3.2 Gradient descent Computing r using Leibniz’s rule [26], the gradient descent (GD) algorithm to solve (3) is based on the following update rule: `j(t+ 1) = PL ✓ `j(t) ⌘(t) @ (`(t)) @`j ◆ @ (`(t)) @`j = Z `j(t) `j 1(t) (r `j 1(t)) dF (r) Z `j+1(t) `j(t) (`j+1(t) r) dF (r) (6) for t = 0, 1, . . . and j = 1, . . . , s. Note that the projection step in Eq. (6) is itself a convex optimization problem. We propose a projection-free modification of GD update rule to systematically ensure ` 2 L. Let j(t) = min{`j(t) `j 1(t), `j+1(t) `j(t)} denote the minimum distance between two neighbouring levels at iteration t for j = 1, . . . , s. If the change in level j is bounded by j(t)/2, it is guaranteed that ` 2 L. We propose to replace Eq. (6) with the following update rule: `j(t+ 1) = `j(t) sign ✓ @ (`(t)) @`j ◆ min ⇢ ⌘(t) @ (`(t)) @`j , j(t) 2 . (7) 3.3 AMQ: Exponentially spaced levels We now focus on ` = [ 1, p, . . . , ps, ps, . . . , p, 1]>, i.e., exponentially spaced levels with symmetry. We can update p efficiently by gradient descent using the first order derivative 1 2 d (p) dp = Z ps 0 2sp2s 1 dF (r) + s 1X j=0 Z pj pj+1 (jpj 1 + (j + 1)pj)r (2j + 1)p2j dF (r). (8) 3.4 Expected variance minimization In this section, we consider the problem of minimizing the expected variance of quantization: min `2L Ev,h ⇥ kQ`(v) vk22 ⇤ . (9) To solve the expected variance minimization problem, suppose that we observe N stochastic gradients {v1, . . . ,vN}. 
Let Fn(r) and pn(r) denote the CDF and PDF of normalized coordinate conditioned on observing kvnk, respectively. By taking into account randomness in kvk and using the law of total expectation, an approximation of the expected variance in (9) is given by E[kQs(v) vk22] ⇡ 1 N NX n=1 kvnk2 sX j=0 Z `j+1 `j 2(r) dFn(r). (10) The optimal levels to minimize Eq. (10) are a solution to the following problem: `⇤ = argmin `2L NX n=1 kvnk2 sX j=0 Z `j+1 `j 2(r) dFn(r) = argmin `2L sX j=0 Z `j+1 `j 2(r) dF (r), where `⇤ = [`⇤1, . . . , `⇤s]> and F (r) = PN n=1 nFn(r) is the weighted sum of the conditional CDFs with n = kvnk2/ PN n=1 kvnk2. Note that we can accommodate both normal and truncated normal distributions by substituting associated expressions into pn(r) and Fn(r). Exact update rules and analysis of computational complexity of ALQ, GD, and AMQ are discussed in Appendix C. 4 Theoretical guarantees One can alternatively design quantization levels to minimize the worst-case variance. However, compared to an optimal scheme, this worst-case scheme increases the expected variance by ⌦(d), which is prohibitive in deep networks. We quantify the gap in Appendix E. Proofs are in appendices. A stochastic gradient has a second-moment upper bound B when E[kg(w)k22] B for all w 2 ⌦. Similarly, it has a variance upper bound 2 when E[kg(w) rf(w)k22] 2 for all w 2 ⌦. We consider a general adaptively quantized SGD (AQSGD) algorithm, described in Algorithm 1, where compression schemes are updated over the course of training.4 Many convergence results in stochastic optimization rely on a variance bound. We establish such a variance bound for our adaptive methods. Further, we verify that these optimization results can be made to rely only on the average variance. In the following, we provide theoretical guarantees for AQSGD algorithm, obtain variance and code-length bounds, and convergence guarantees for convex, nonconvex, and momentum-based variants of AQSGD. The analysis of nonadaptive methods in [20–23] can be considered as special cases of our theorems with fixed levels over the course of training. A naive adoption of available convergence guarantees results in having worst-case variance bounds over the course of training. In this paper, we show that an average variance bound can be applied on a number of problems. Under general normalization, we first obtain variance upper bound for arbitrary levels, in particular, for those obtained adaptively. Theorem 2 (Variance bound). Let v 2 Rd and q 1. The quantization of v under Lq normalization satisfies E[Q`(v)] = v. Furthermore, we have E[kQ`(v) vk22] ✏Qkvk22, (11) 4Our results hold for any adaptive method, including ALQ and AMQ. where ✏Q = (`j⇤+1/`j⇤ 1)2 4(`j⇤+1/`j⇤ ) + inf0<p<1 Kp`1 (2 p)d 2 p min{q,2} with j⇤ = argmax1js `j+1/`j and Kp = 1 2 p 1 p 2 p (1 p). Theorem 2 implies that if g(w) is a stochastic gradient with a second-moment bound ⌘, then Q`(g(w)) is a stochastic gradient with a variance upper bound ✏Q⌘. Note that, as long as the maximum ratio of two consecutive levels does not change, the variance upper bound decreases with the number of quantization levels. In addition, our bound matches the known ⌦( p d) lower bound in [27]. Theorem 3 (Code-length bound). Let v 2 Rd and q 1. 
The expectation E[|ENCODE(v)|] of the number of communication bits needed to transmit Q`(v) under Lq normalization is bounded by E[|ENCODE(v)|] b+ n`1,d + d(H(L) + 1) b+ n`1,d + d(log2(s+ 2) + 1), (12) where b is a constant, n`1,d = min{`1 q + d1 1/q `1 , d}, H(L) is the entropy of L in bits, and L is a random variable with the probability mass function given by Pr(`j) = Z `j `j 1 r `j 1 `j `j 1 dF (r) + Z `j+1 `j `j+1 r `j+1 `j dF (r) for j = 1, . . . , s. In addition, we have Pr(`0 = 0) = Z `1 0 1 r `1 dF (r) and Pr(`s+1 = 1) = Z 1 `s r `s 1 `s dF (r). Theorem 3 provides a bound on the expected number of communication bits to encode the quantized stochastic gradients. As expected, the upper bound in (12) increases monotonically with d and s. We can combine variance and code-length upper bounds and obtain convergence guarantees for AQSGD when applied to various learning problems where we have convergence guarantees for full-precision SGD under standard assumptions. Let {`1, . . . , `K} denote the set of quantization levels that AQSGD experiences on the optimization trajectory. Suppose that `k is used for Tk iterations with PK k=1 Tk = T . For each particular `k, we can obtain corresponding variance bound ✏Q,k by substituting `k into (11). Then the average variance upper bound is given by ✏Q = PK k=1 Tk✏Q,k/T . For each particular `k, we can obtain corresponding expected code-length bound NQ,k by substituting random variable Lk into (12). The average expected code-length bound is given by NQ = PK k=1 TkNQ,k/T . On convex problems, convergence guarantees can be established along the lines of [17, Theorems 6.1]. Theorem 4 (AQSGD for nonsmooth convex optimization). Let f : ⌦! R denote a convex function and let R2 , supw2⌦ kw w0k22. Let B̂ = (1 + ✏Q)B and f⇤ = infw2⌦ f(w). Suppose that AQSGD is executed for T iterations with a learning rate ↵ = RM/(B̂ p T ) on M processors, each with access to independent stochastic gradients of f with a second-moment bound B, such that quantization levels are updated K times where `k with variance bound ✏Q,k and code-length bound NQ,k is used for Tk iterations. Then AQSGD satisfies E h f ⇣ 1 T PT t=0 wt ⌘i f⇤ RB̂/(M p T ). In addition, AQSGD requires at most NQ communication bits per iteration in expectation. In Appendix H and Appendix I, we obtain convergence guarantees on nonconvex problems and for momentum-based variants of AQSGD under standard assumptions, respectively. Theoretical guarantees for levels with symmetry are established in Appendix J. 5 Experimental evaluation In this section, we showcase the effectiveness of our adaptive quantization methods in speeding up training deep models. We compare our methods to the following baselines: single-GPU SGD (SGD), full-precision multi-GPU SGD (SuperSGD), uniform levels under L1 normalization (QSGDinf) [20], ternary levels under L1 normalization (TRN) [15], and exponential levels under L2 normalization with exponential factor p = 0.5 (NUQSGD) [21, 22]. We present results for the following variations of our proposed methods: ALQ and AMQ (with norm adjustments in Section 3.4), and their normalized variations ALQ-N and AMQ-N (Sections 3.1 and 3.3). We present full training results on ImageNet in Appendix K along with additional experimental details. We compare methods in terms of the number of training iterations that is independent of a particular distributed setup. In Table 1, we present results for training ResNet-32 and ResNet-110 [28] on CIFAR-10 [29], and ResNet-18 on ImageNet [30]. 
5 Experimental evaluation

In this section, we showcase the effectiveness of our adaptive quantization methods in speeding up the training of deep models. We compare our methods to the following baselines: single-GPU SGD (SGD), full-precision multi-GPU SGD (SuperSGD), uniform levels under $L_\infty$ normalization (QSGDinf) [20], ternary levels under $L_\infty$ normalization (TRN) [15], and exponential levels under $L_2$ normalization with exponential factor $p = 0.5$ (NUQSGD) [21, 22]. We present results for the following variants of our proposed methods: ALQ and AMQ (with the norm adjustments of Section 3.4), and their normalized variants ALQ-N and AMQ-N (Sections 3.1 and 3.3). We present full training results on ImageNet in Appendix K along with additional experimental details. We compare methods in terms of the number of training iterations, which is independent of any particular distributed setup. In Table 1, we present results for training ResNet-32 and ResNet-110 [28] on CIFAR-10 [29], and ResNet-18 on ImageNet [30]. We simulate training with 4 GPUs on a single GPU by quantizing and dequantizing the gradients from 4 mini-batches in each training iteration. These simulations allow us to compare the performance of the quantization methods to the hypothetical full-precision SuperSGD.

All quantization methods studied in this section share two hyperparameters: the number of bits (log2 of the number of quantization levels) and a bucket size. A common trick used in normalized quantization is to encode and decode a high-dimensional vector in buckets, such that each coordinate is normalized by the norm of its corresponding bucket instead of the norm of the entire vector [20]. The bucket size controls the tradeoff between extra communication cost and loss of precision: with a small bucket size, there are more bucket norms to communicate, while with a large bucket size, we lose numerical precision as a result of dividing each coordinate by a large number. In Section 5.1, we provide an empirical study of these hyperparameters.

Matching the accuracy of SuperSGD. Using only 3 bits (8 levels), our adaptive methods match the performance of SuperSGD on CIFAR-10 and close the gap on ImageNet (bold in Table 1). Our most flexible method, ALQ, achieves the best overall performance on ImageNet, and on CIFAR-10 its gap to ALQ-N is less than 0.3%. There is a gap of at least 1.4% between our best performing method and previous work on each model. To the best of our knowledge, matching the validation loss of SuperSGD has not been achieved in any previous work using only 3 bits. Fig. 3 shows the test loss and Fig. 4 shows the average gradient variance, where the average is taken over gradient coordinates. Our adaptive methods successfully achieve lower variance during training.

Comparison on the trajectory of SGD. Fig. 5 shows the average variance on the optimization trajectory of a single GPU without quantization. This provides a fairer comparison of the quantization error of the different methods, decoupled from their impact on the optimization trajectory. ALQ effectively finds an improved set of levels that reduces the quantization variance, and it matches the variance of SuperSGD on ResNet-110 (Fig. 5b). In Figs. 5b and 5c, the variance of QSGDinf is as high as that of TRN in the first half of training. This shows that extra levels (8 uniform levels) do not perform better unless they are designed carefully. As expected, the variance of SuperSGD is always smaller than the variance of SGD by a constant factor equal to the number of GPUs.

Negligible computational overhead. Our adaptive methods have per-step computation and communication costs similar to previous methods. On ImageNet, we save at least 60 hours of the 95 hours of training and add only an additional cost of at most 10 minutes in total to adapt the quantization. For the bucket sizes 8192 and 16384 and the 3–8 bits used in our experiments, the per-step cost relative to SuperSGD (32 bits) is 21–25% for ResNet-18 on ImageNet and 32–36% for ResNet-50. That is the same as the cost of NUQSGD and QSGDinf without additional coding or pruning with the same number of bits and bucket sizes. The cost of the additional update specific to ALQ is 0.4–0.5% of the total training time. In Appendix K.3, we provide tables with detailed timing results for varying bucket sizes and bits.
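The bucketing trick described earlier in this section can be sketched as follows. This is a simplified quantize-then-dequantize simulation with 3-bit uniform levels and per-bucket $L_2$ norms, meant only to show how the relative quantization error changes with the bucket size; it is not the paper's actual encoder.

```python
import numpy as np

def quantize_bucketed(v, levels, bucket_size, rng):
    """Quantize v in buckets: each coordinate is normalized by the norm of its
    own bucket, trading extra norms to transmit for better numerical precision."""
    out = np.empty_like(v)
    for start in range(0, len(v), bucket_size):
        b = v[start:start + bucket_size]
        norm = np.linalg.norm(b)
        if norm == 0.0:
            out[start:start + bucket_size] = 0.0
            continue
        r = np.abs(b) / norm
        tau = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
        rho = (r - levels[tau]) / (levels[tau + 1] - levels[tau])
        h = np.where(rng.random(b.shape) < rho, levels[tau + 1], levels[tau])
        out[start:start + bucket_size] = norm * np.sign(b) * h
    return out

rng = np.random.default_rng(0)
v = rng.normal(size=2 ** 14)
levels = np.linspace(0.0, 1.0, 9)                      # 3-bit uniform levels for the sketch
for bucket in (64, 1024, 16384):
    err = np.linalg.norm(quantize_bucketed(v, levels, bucket, rng) - v) / np.linalg.norm(v)
    print(f"bucket size {bucket:5d}: relative error {err:.3f}")
```

With uniform levels the relative error grows as the bucket size grows, which mirrors the precision side of the tradeoff discussed above; the communication side (one norm per bucket) is not accounted for in this sketch.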
5.1 Hyperparameter studies

Fig. 6 shows the quantization levels of each method at the end of training ResNet-32 on CIFAR-10. The quantization levels of our adaptive methods are more concentrated near zero. In Figs. 7a and 7b, we study the impact of the bucket size and the number of bits on the best validation accuracy achieved by each quantization method. Adaptive levels are the best quantization methods across all values of bucket size and number of bits, with ALQ and ALQ-N performing best. The good performance of ALQ-N is unexpected, as it suggests that a quantization scheme can be shared by vectors with different norms. In practice, ALQ-N is easier to implement and faster to update than ALQ. We observe a similar relation between AMQ and AMQ-N. Adaptive multiplier methods perform worse than adaptive level methods when the bucket size grows significantly (above 10^4) or shrinks (below 100), as well as for very few bits (2). Note that there exists a known generalization gap between SGD and SuperSGD on ResNet-110 that can be closed by extensive hyperparameter tuning [31]; our adaptive methods reduce this gap with standard hyperparameters.

Bucket size significantly impacts non-adaptive methods. For bucket size 100 and 3 bits, NUQSGD performs nearly as well as the adaptive methods but quickly loses accuracy as the bucket size grows or shrinks. QSGDinf stays competitive for a wider range of bucket sizes but still loses accuracy faster than the adaptive methods. This shows the impact of bucketing as an understudied trick in the evaluation of quantization methods.

Adaptive methods successfully scale to a large number of GPUs. Table 2 shows the results of training ResNet-32 on CIFAR-10 using 16 and 32 GPUs. Note that with 32 GPUs, TRN achieves almost the accuracy of SuperSGD with only 3 quantization levels, which is expected because TRN is unbiased and the variance of the aggregated gradient decreases linearly with the number of GPUs.

6 Conclusions

To reduce the communication costs of data-parallel SGD, we introduce two adaptive quantization methods, ALQ and AMQ, that learn and adapt the gradient quantization scheme on the fly. In both methods, processors also learn and adapt their coding methods in parallel by efficiently computing sufficient statistics of a parametric distribution. We establish tight upper bounds on the excess variance for any arbitrary sequence of quantization levels under general normalization and on the expected number of communication bits per iteration. Under standard assumptions, we establish a number of convergence guarantees for our adaptive methods. We demonstrate the superiority of ALQ and AMQ over nonadaptive methods empirically on deep models and large datasets.

Broader impact

This work provides additional understanding of the statistical behaviour of deep machine learning models. We aim to train deep models with the popular SGD algorithm as fast as possible without compromising the learning outcome. As the amount of data gathered through the web and through the plethora of sensors deployed everywhere (e.g., IoT applications) drastically increases, the design of efficient machine learning algorithms capable of processing large-scale data in a reasonable time can improve everyone's quality of life. Our compression schemes can be used in federated learning settings, where a deep model is trained on data distributed among multiple owners without exposing that data. Developing privacy-preserving learning algorithms is an integral part of responsible and ethical AI. However, the long-term impacts of our schemes may depend on how machine learning is used in society.
Acknowledgement

The authors would like to thank Blair Bilodeau, David Fleet, Mufan Li, and Jeffrey Negrea for helpful discussions. FF was supported by an OGS Scholarship. DA and IM were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223, ScaleML). DMR was supported by an NSERC Discovery Grant. ARK was supported by an NSERC Postdoctoral Fellowship. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
1. What is the focus and contribution of the paper regarding adaptive quantization?
2. What are the strengths of the proposed approach, particularly in terms of its mathematical formulation?
3. What are the weaknesses of the paper, especially regarding its experiments and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any concerns or suggestions regarding the paper's methodology or conclusions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper proposed a new adaptive quantization scheme to reduce the communication cost during parallel training.

Strengths
The paper is well written and also provides step-by-step math computations.

Weaknesses
-- Why is the performance of SGD/SuperSGD so different?
-- Figure 7 uses ResNet-8, which is weird to me since it is never used in previous examples in the paper.
-- Lines 243–250: the computation overhead is based on bucket size 64, however all main results are based on bucket size 8k/16k.
-- Though the method is proposed to speed up parallel training, the number of nodes (or simulated nodes) is only 4.
-- For AMQ, what p are you using? Except for p = 1/2, other values should be hard to implement efficiently in practice.
-- The theoretical results are useful but trivial to obtain. The authors should consider shortening that part.
-- Could you compare your results with other quantization methods that aim to use a better quantization scheme, like LQ-Net?
NIPS
Title
Adaptive Gradient Quantization for Data-Parallel SGD

Abstract
Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.

*Equal contributions. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

1 Introduction

Stochastic gradient descent (SGD) and its variants are currently the method of choice for training deep models. Yet, large datasets cannot always be trained on a single computational node due to memory and scalability limitations. Data-parallel SGD is a remarkably scalable variant, in particular on multi-GPU systems [1–10]. However, despite its many advantages, distribution introduces new challenges for optimization algorithms. In particular, data-parallel SGD has a large communication cost due to the need to transmit potentially huge gradient vectors. Ideally, we want distributed optimization methods that match the performance of SGD on a single hypothetical super machine, while paying a negligible communication cost. A common approach to reducing the communication cost in data-parallel SGD is gradient compression and quantization [4, 11–16]. In full-precision data-parallel SGD, each processor broadcasts its locally computed stochastic gradient vector at every iteration, whereas in quantized data-parallel SGD, each processor compresses its stochastic gradient before broadcasting. Current quantization methods are either designed heuristically or fixed prior to training. Convergence rates in a stochastic optimization problem are controlled by the trace of the gradient covariance matrix, which is referred to as the gradient variance in this paper [17]. As Fig. 1 shows, no fixed method can be optimal throughout the entire training because the distribution of gradients changes: a quantization method that is optimal at the first iteration will not be optimal after only a single epoch.

In this paper, we propose two adaptive methods for quantizing the gradients in data-parallel SGD. We study methods that are defined by a norm and a set of quantization levels. In Adaptive Level Quantization (ALQ), we minimize the excess variance of quantization given an estimate of the distribution of the gradients. In Adaptive Multiplier Quantization (AMQ), we minimize the same objective as ALQ by modelling the quantization levels as exponentially spaced levels; AMQ solves for the optimal value of a single multiplier parametrizing the exponentially spaced levels.

1.1 Summary of contributions

• We propose two adaptive gradient quantization methods, ALQ and AMQ, in which processors update their compression methods in parallel.
• We establish an upper bound on the excess variance for any arbitrary sequence of quantization levels under general normalization that is tight in dimension, an upper bound on the expected number of communication bits per iteration, and strong convergence guarantees on a number of problems under standard assumptions. Our bounds hold for any adaptive method, including ALQ and AMQ.
• We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are significantly more robust to the choice of hyperparameters. (Open source code: http://github.com/tabrizian/learning-to-quantize)

1.2 Related work

Adaptive quantization has been used for speech communication and storage [18]. In machine learning, several biased and unbiased schemes have been proposed to compress networks and gradients. Recently, lattice-based quantization has been studied for distributed mean estimation and variance reduction [19]. In this work, we focus on unbiased and coordinate-wise schemes to compress gradients. Alistarh et al. [20] proposed Quantized SGD (QSGD), focusing on the uniform quantization of stochastic gradients normalized to have unit Euclidean norm. Their experiments illustrate that a similar quantization method, where gradients are normalized to have unit $L_\infty$ norm, achieves better performance; we refer to this method as QSGDinf (or Qinf in short). Wen et al. [15] proposed TernGrad, which can be viewed as a special case of QSGDinf with three quantization levels. Ramezani-Kebrya et al. [21] proposed nonuniform quantization levels (NUQSGD) and demonstrated superior empirical results compared to QSGDinf. Horváth et al. [22] proposed natural compression and dithering schemes, where the latter is a special case of logarithmic quantization.

There have been prior attempts at adaptive quantization methods. Zhang et al. [23] proposed ZipML, which is an optimal quantization method if all points to be quantized are known a priori. To find the optimal sequence of quantization levels, a dynamic program is solved whose computational and memory cost is quadratic in the number of points to be quantized, which in the case of gradients would correspond to their dimension. For this reason, ZipML is impractical for quantizing on the fly and is in fact used for (offline) dataset compression. The authors also proposed an approximation in which a subsampled set of points is used, obtained by scanning the data once. However, as we show in this paper, this one-time scan is not enough, because the distribution of stochastic gradients changes during training. Zhang et al. [24] proposed LQ-Net, where weights and activations are quantized such that the inner products can be computed efficiently with bitwise operations. Compared to LQ-Net, our methods do not need additional memory for encoding vectors. Concurrent with our work, Fu et al. [25] proposed to quantize activations and gradients by modelling them with Weibull distributions. In comparison, our proposed methods accommodate general distributions, and our approach does not require any assumptions on an upper bound of the gradients.
Input: Local data, parameter vector (local copy) $w_t$, learning rate $\alpha$, and set of update steps $U$
1: for $t = 1$ to $T$ do
2:   if $t \in U$ then
3:     for $i = 1$ to $M$ do
4:       Compute sufficient statistics and update quantization levels $\ell$;
5:   for $i = 1$ to $M$ do
6:     Compute $g_i(w_t)$, encode $c_{i,t} \leftarrow \mathrm{ENCODE}_\ell(g_i(w_t))$, and broadcast $c_{i,t}$;
7:   for $j = 1$ to $M$ do
8:     Receive $c_{i,t}$ from each processor $i$ and decode $\hat g_i(w_t) \leftarrow \mathrm{DECODE}_\ell(c_{i,t})$;
9:   Aggregate $w_{t+1} \leftarrow P_\Omega\bigl(w_t - \frac{\alpha}{M}\sum_{i=1}^{M} \hat g_i(w_t)\bigr)$;
Algorithm 1: Adaptive data-parallel SGD. Loops are executed in parallel on each machine. At certain steps, each processor computes sufficient statistics of a parametric distribution to estimate the distribution of normalized coordinates.

2 Preliminaries: data-parallel SGD

Consider the problem of training a model parametrized by a high-dimensional vector $w \in \mathbb{R}^d$. Let $\Omega \subseteq \mathbb{R}^d$ denote a closed and compact set. Our goal is to minimize $f : \Omega \to \mathbb{R}$. Assume we have access to unbiased stochastic gradients $g$ of $f$, such that $\mathbb{E}[g(w)] = \nabla f(w)$ for all $w \in \Omega$. The update rule for full-precision SGD is given by $w_{t+1} = P_\Omega(w_t - \alpha g(w_t))$, where $w_t$ is the current parameter vector, $\alpha$ is the learning rate, and $P_\Omega$ is the Euclidean projection onto $\Omega$. We consider data-parallel SGD, a synchronous and distributed framework consisting of $M$ processors. Each processor receives gradients from all other processors and aggregates them. In data-parallel SGD with compression, gradients are compressed by each processor before transmission and decompressed before aggregation [20–23]. A stochastic compression method is unbiased if the vector after decompression is, in expectation, the same as the original vector.

3 Adaptive quantization

In this section, we introduce novel adaptive compression methods that adapt during training (Algorithm 1). Let $v \in \mathbb{R}^d$ be a vector we seek to quantize and $r_i = |v_i|/\|v\|$ be its normalized coordinates for $i = 1, \ldots, d$ (in this section, $\|\cdot\|$ denotes a general $L_q$ norm with $q \ge 1$ for simplicity). Let $q_\ell(r) : [0,1] \to [0,1]$ denote a random quantization function applied to the normalized coordinate $r$ using adaptable quantization levels $\ell = [\ell_0, \ldots, \ell_{s+1}]^\top$, where $0 = \ell_0 < \ell_1 < \cdots < \ell_s < \ell_{s+1} = 1$. For $r \in [0,1]$, let $\tau(r)$ denote the index of a level such that $\ell_{\tau(r)} \le r < \ell_{\tau(r)+1}$, and let $\rho(r) = (r - \ell_{\tau(r)})/(\ell_{\tau(r)+1} - \ell_{\tau(r)})$ be the relative distance of $r$ to level $\tau(r)+1$. We define the random variable $h(r)$ such that $h(r) = \ell_{\tau(r)}$ with probability $1 - \rho(r)$ and $h(r) = \ell_{\tau(r)+1}$ with probability $\rho(r)$. We define the quantization of $v$ as $Q_\ell(v) \triangleq [q_\ell(v_1), \ldots, q_\ell(v_d)]^\top$, where $q_\ell(v_i) = \|v\| \cdot \mathrm{sign}(v_i) \cdot h(r_i)$ and the $h(r_i)$, $i = 1, \ldots, d$, are independent random variables.

The encoding ENCODE(v) of a stochastic gradient is the combined encoding of $\|v\|$ using a standard floating-point encoding, an optimal encoding of $h(r_i)$, and a binary encoding of $\mathrm{sign}(v_i)$ for each coordinate $i$. The decoding DECODE recovers the norm, $h(r_i)$, and the sign. Additional details of the encoding method are described in Appendix D. We define the variance of vector quantization to be the trace of the covariance matrix,
$$\mathbb{E}_h[\|Q_\ell(v) - v\|_2^2] = \|v\|^2 \sum_{i=1}^{d} \sigma^2(r_i), \qquad (1)$$
where $\sigma^2(r) = \mathbb{E}[(q_\ell(r) - r)^2]$ is the variance of quantization for a single coordinate, given by
$$\sigma^2(r) = (\ell_{\tau(r)+1} - r)(r - \ell_{\tau(r)}). \qquad (2)$$
Let $v$ be a random vector corresponding to a stochastic gradient and let $h$ capture the randomness of quantization for this random vector as defined above.
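For concreteness, here is a minimal NumPy sketch of the unbiased stochastic quantizer $Q_\ell$ defined above, together with a Monte Carlo check of its unbiasedness. The level set and the use of $L_2$ normalization are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np

def quantize(v, levels, rng):
    """Unbiased stochastic quantization of v onto the given levels.

    `levels` must be an increasing array starting at 0.0 and ending at 1.0
    (the boundary levels l_0 = 0 and l_{s+1} = 1 are included)."""
    norm = np.linalg.norm(v)                # L2 normalization for this sketch
    if norm == 0.0:
        return np.zeros_like(v)
    r = np.abs(v) / norm                    # normalized coordinates in [0, 1]
    # index tau(r) of the level just below each coordinate
    tau = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[tau], levels[tau + 1]
    rho = (r - lo) / (hi - lo)              # relative distance to the upper level
    h = np.where(rng.random(v.shape) < rho, hi, lo)   # round up with probability rho
    return norm * np.sign(v) * h

# Unbiasedness check: the average of many quantizations approaches v.
levels = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
v = np.array([0.3, -0.7, 0.05, 0.0])
rng = np.random.default_rng(0)
est = np.mean([quantize(v, levels, rng) for _ in range(20_000)], axis=0)
print(np.round(est, 3))                     # close to [0.3, -0.7, 0.05, 0.0]
```

The per-coordinate variance of this scheme is exactly the $\sigma^2(r)$ of Eq. (2), which is what the adaptive methods below minimize by moving the levels.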
We define two minimization problems, expected variance minimization and expected normalized variance minimization:
$$\min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\bigl[\|Q_\ell(v) - v\|_2^2\bigr] \quad \text{and} \quad \min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\bigl[\|Q_\ell(v) - v\|_2^2/\|v\|^2\bigr],$$
where $\mathcal{L} = \{\ell : \ell_j \le \ell_{j+1}\ \forall j,\ \ell_0 = 0,\ \ell_{s+1} = 1\}$ denotes the set of feasible solutions. We first focus on the problem of minimizing the expected normalized variance and then extend our methods to minimize the expected variance in Section 3.4. Let $F(r)$ denote the marginal cumulative distribution function (CDF) of a normalized coordinate $r$. Assuming the normalized coordinates $r_i$ are i.i.d. given $\|v\|$, the expected normalized variance minimization can be written as $\min_{\ell \in \mathcal{L}} \Phi(\ell)$, where
$$\Phi(\ell) \triangleq \sum_{j=0}^{s} \int_{\ell_j}^{\ell_{j+1}} \sigma^2(r)\, \mathrm{d}F(r). \qquad (3)$$
The following theorem suggests that solving (3) is challenging in general; however, the sub-problem of optimizing a single level given the other levels can be solved efficiently in closed form. Proofs are provided in Appendix B.

Theorem 1 (Expected normalized variance minimization). Problem (3) is nonconvex in general. However, the optimal solution to minimizing one level given the other levels, $\min_{\ell_i} \Phi(\ell)$, is given by $\ell_i^* = \psi(\ell_{i-1}, \ell_{i+1})$, where
$$\psi(a, c) = F^{-1}\Bigl(F(c) - \int_a^c \frac{r - a}{c - a}\, \mathrm{d}F(r)\Bigr). \qquad (4)$$

3.1 ALQ: Adapting individual levels using coordinate descent

Using the single-level update rule in Eq. (4), we iteratively adapt individual levels to minimize the expected normalized variance in (3). We denote the quantization levels at iteration $t$ by $\ell(t)$, starting from $t = 0$. The update rule is
$$\ell_j(t+1) = \psi(\ell_{j-1}(t), \ell_{j+1}(t)) \quad \forall j = 1, \ldots, s. \qquad (5)$$
Performing the update rule above sequentially over coordinates $j$ is a form of coordinate descent (CD) that is guaranteed to converge to a local minimum. CD is particularly interesting because it does not involve any projection step onto the feasible set $\mathcal{L}$. In practice, we initialize the levels with either uniform levels [20] or the exponentially spaced levels proposed in [21]. We observe that, starting from either initialization, CD converges in a small number of steps (fewer than 10).

3.2 Gradient descent

Computing $\nabla\Phi$ using Leibniz's rule [26], the gradient descent (GD) algorithm to solve (3) is based on the following update rule:
$$\ell_j(t+1) = P_\mathcal{L}\Bigl(\ell_j(t) - \eta(t)\, \frac{\partial \Phi(\ell(t))}{\partial \ell_j}\Bigr), \quad \frac{\partial \Phi(\ell(t))}{\partial \ell_j} = \int_{\ell_{j-1}(t)}^{\ell_j(t)} (r - \ell_{j-1}(t))\, \mathrm{d}F(r) - \int_{\ell_j(t)}^{\ell_{j+1}(t)} (\ell_{j+1}(t) - r)\, \mathrm{d}F(r) \qquad (6)$$
for $t = 0, 1, \ldots$ and $j = 1, \ldots, s$. Note that the projection step in Eq. (6) is itself a convex optimization problem. We therefore propose a projection-free modification of the GD update rule that systematically ensures $\ell \in \mathcal{L}$. Let $\delta_j(t) = \min\{\ell_j(t) - \ell_{j-1}(t),\ \ell_{j+1}(t) - \ell_j(t)\}$ denote the minimum distance between two neighbouring levels at iteration $t$ for $j = 1, \ldots, s$. If the change in level $j$ is bounded by $\delta_j(t)/2$, it is guaranteed that $\ell \in \mathcal{L}$. We propose to replace Eq. (6) with the following update rule:
$$\ell_j(t+1) = \ell_j(t) - \mathrm{sign}\Bigl(\frac{\partial \Phi(\ell(t))}{\partial \ell_j}\Bigr) \min\Bigl\{\eta(t)\,\Bigl|\frac{\partial \Phi(\ell(t))}{\partial \ell_j}\Bigr|,\ \frac{\delta_j(t)}{2}\Bigr\}. \qquad (7)$$

3.3 AMQ: Exponentially spaced levels

We now focus on $\ell = [-1, -p, \ldots, -p^s, p^s, \ldots, p, 1]^\top$, i.e., exponentially spaced levels with symmetry. We can update $p$ efficiently by gradient descent using the first-order derivative
$$\frac{1}{2}\frac{\mathrm{d}\Phi(p)}{\mathrm{d}p} = \int_0^{p^s} 2s\, p^{2s-1}\, \mathrm{d}F(r) + \sum_{j=0}^{s-1} \int_{p^{j+1}}^{p^j} \bigl[(j\, p^{j-1} + (j+1)\, p^j)\, r - (2j+1)\, p^{2j}\bigr]\, \mathrm{d}F(r). \qquad (8)$$

3.4 Expected variance minimization

In this section, we consider the problem of minimizing the expected variance of quantization:
$$\min_{\ell \in \mathcal{L}} \mathbb{E}_{v,h}\bigl[\|Q_\ell(v) - v\|_2^2\bigr]. \qquad (9)$$
To solve the expected variance minimization problem, suppose that we observe $N$ stochastic gradients $\{v_1, \ldots, v_N\}$.
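The closed-form update of Theorem 1 and the ALQ coordinate-descent sweep of Eq. (5) can be sketched as follows. For simplicity, this version plugs an empirical CDF of sampled normalized coordinates into $\psi$, whereas the method described in the paper fits a parametric (truncated normal) model and uses its sufficient statistics; the Gaussian sample, level count, and sweep count are illustrative assumptions.

```python
import numpy as np

def psi(a, c, r):
    """Empirical version of the closed-form update in Theorem 1:
    psi(a, c) = F^{-1}( F(c) - int_a^c (x - a)/(c - a) dF(x) ),
    with F replaced by the empirical CDF of the sample r."""
    F_c = np.mean(r <= c)
    mask = (r > a) & (r <= c)
    integral = np.mean(np.where(mask, (r - a) / (c - a), 0.0))
    target = np.clip(F_c - integral, 0.0, 1.0)
    return np.quantile(r, target)           # empirical F^{-1}

def alq_levels(r, s=7, sweeps=10):
    """ALQ coordinate descent (Eq. 5): update each interior level given its
    neighbours, keeping l_0 = 0 and l_{s+1} = 1 fixed."""
    levels = np.linspace(0.0, 1.0, s + 2)   # uniform initialization
    for _ in range(sweeps):
        for j in range(1, s + 1):
            levels[j] = psi(levels[j - 1], levels[j + 1], r)
    return levels

rng = np.random.default_rng(0)
g = rng.normal(size=50_000)
r = np.abs(g) / np.linalg.norm(g)           # normalized coordinates
print(np.round(alq_levels(r), 4))           # adapted levels concentrate near zero
```

On this toy Gaussian sample the adapted levels cluster near zero, mirroring the behaviour of the adaptive levels reported later in the experimental section.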
1. What is the focus and contribution of the paper on distributed learning?
2. What are the strengths of the proposed approach, particularly in terms of theoretical guarantees and convergence bounds?
3. What are the weaknesses of the paper regarding its experimental methodology and comparison with prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper studies quantizing gradients in a distributed learning scenario in order to reduce the communication burden. Specifically, the authors learn the quantization levels for both uniform and exponential quantization by minimizing the expected variance between the quantized and the original values. They further prove theoretical bounds on this variance. The authors also show experimentally that they can quantize the gradients down to 3 bits without loss of accuracy.

Strengths
The proposed method is established using theoretical guarantees of convergence and derives the necessary communication bounds, which can be used for comparison with other gradient compression methods.

Weaknesses
Some aspects of the evaluation methodology have not been fully explained. It might be better to provide overall communication load requirements instead of quantization bitwidth for comparison with previous works, as they may not compress all gradient values with a uniform bit-width (e.g., pruning).
NIPS
Acknowledgement The authors would like to thank Blair Bilodeau, David Fleet, Mufan Li, and Jeffrey Negrea for helpful discussions. FF was supported by an OGS Scholarship. DA and IM were supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). DMR was supported by an NSERC Discovery Grant. ARK was supported by an NSERC Postdoctoral Fellowship. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
1. What are the key contributions of the paper in the context of distributed SGD setups? 2. What are the strengths of the proposed adaptive quantization techniques, particularly in terms of their ability to minimize variance and code length? 3. What are the weaknesses of the paper's claims, especially regarding its robustness to different hyperparameters? 4. How do the provided empirical results support the paper's claims, and what further experiments would be necessary to fully validate its findings? 5. Are there any concerns or limitations regarding the practical applicability of the proposed techniques in real-world scenarios?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper proposes two adaptive quantization techniques for efficient communication in distributed SGD setups. Quantization levels are chosen to minimize the expected (normalized) variance of the gradient and are either: 1. exponentially spaced levels controlled by a single multiplier, or 2. individually adaptable levels chosen to minimize the variance. The paper provides theoretical worst-case guarantees for the variance and code-length.
Strengths
- Establishes an upper bound on variance and code length.
- Empirical results support the claims made in the paper.
Weaknesses
- The claim "robust to all values of hyperparameters" looks hard to defend, since the experiments only vary the bucket size and the number of bits. What about momentum, batch size, etc.?
NIPS
Title Approximate optimization of convex functions with outlier noise Abstract We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by outlier noise. Specifically, we assume the function values at some points of the domain are corrupted arbitrarily by an adversary, with the only restriction being that the total volume of corrupted points is bounded. The goal then is to find a point close to the function’s minimizer using access to the corrupted oracle. We first prove a lower bound result showing that, somewhat surprisingly, one cannot hope to approximate the minimizer nearly as well as one might expect, even if one is allowed an unbounded number of queries to the oracle. Complementing this negative result, we then develop an efficient algorithm that outputs a point close to the minimizer of the convex function, where the specific distance matches exactly, up to constant factors, the distance bound shown in our lower bound result. 1 Introduction Unconstrained convex optimization is among the most well-studied problems in mathematical optimization and has extensive applications in machine learning [7]. In the classic unconstrained convex minimization problem, we are given oracle access to a convex function f : Rd → R, and seek to efficiently find a point x̃ that is close to the minimizer of f (call it x∗)1. To obtain meaningful guarantees for approximating the minimizer x∗, one needs to make certain assumptions on the convexity and smoothness of f . In particular, f is commonly assumed to be α-strongly convex and β-smooth for some β > α > 0. This means that, when f is twice differentiable, its second derivative in any direction is between α and β. For ease of presentation, we will say a convex function is (α, β)-nice if it satisfies these two conditions. It is well known that the minimizer of an (α, β)-nice function can be approximated arbitrarily well in polynomial time by the classic gradient descent algorithm, or its accelerated version due to Nesterov [18]. To formally state the performance of these two algorithms, let us suppose an (α, β)-nice function f : Rd → R is given to us as a zeroth order oracle, which returns the function value f(x) on any input point x ∈ Rd. Then we have: Theorem 1.1 ([7]). Given any initial point x0 ∈ Rd and > 0, the classic gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d(β/α) log ‖x0−x∗‖ ) oracle queries. 1Sometimes, instead of finding a point close to x∗, the goal is to find a point whose function value is close to f(x∗). However, as we point out later, it is more natural to study approximations in the domain in our setting. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Theorem 1.2 ([18]). Given any initial point x0 ∈ Rd and > 0, the accelerated gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d √ β/α log ‖x0−x ∗‖ ) oracle queries. We note that the quantity β/α is called the condition number of the function f . The efficiency of a convex minimization algorithm is usually measured with respect to such a quantity. Given that the complexity of convex minimization with exact oracles has been well understood, a natural question is “can we still minimize a convex function efficiently if the oracle is to some extent inaccurate?” There has in fact been a rich body of work in efforts to answer this question. Roughly, they can be divided into two categories based on their assumptions on the oracle’s inaccuracy. 
The first category studies the case when the oracle is corrupted by stochastic noise2 — namely, the errors of the oracle are assumed to be random and independently drawn from some distribution [10, 20, 19]. Notice that as the oracle is correct in expectation, one can obtain good estimates to the true values efficiently by averaging over sufficiently many points in a small neighborhood. As a result, it is still possible to approximate the minimizer x∗ to within arbitrarily small distance in polynomial time. The main focus of these results is thus to obtain optimal algorithm efficiency, ideally matching that of Nesterov’s accelerated gradient descent. The second category, on the other hand, considers the adversarial noise. That is, the errors are no longer drawn from certain distributions, but rather are added adversarially, while obeying certain constraints. A representative such model is what we call pointwise-bounded noise model [22, 5], in which the only assumption on the noise is that it is pointwise bounded in magnitude; other than that, the specific perturbation on each point can be arbitrary. Formally, it is allowed for the oracle to return, on an input point x, a value in the range [f(x)− , f(x) + ] (absolute errors) or [(1 − )f(x), (1 + )f(x)] (relative errors), for some > 0. [5] shows that when the errors are absolute and is on the order of 1/d, there is a polynomial-time algorithm that can find a point with function value arbitrarily close to f(x∗). Whereas [22] shows that, in sharp contrast, when is about 1/ √ d or larger, no polynomial-time algorithm is able to find a point with function value within some constant error of f(x∗), for both absolute and relative errors. Our model. Following the second line of research above, in this paper we study another type of adversarial noise model, called outlier noise model, which can be seen as a natural variant of the pointwise-bounded noise model in [22, 5]. In this model, the only assumption we make is a bound on the “number” of points corrupted by the noise; apart from that, we do not assume any bounds on the magnitude of the errors on the corrupted points. Formally, we are given access to an exact zeroth order oracle of a function f̂ that differs from the true convex function f only on a set C ⊂ Rd, with the guarantee that the volume of C is at most that of a d-dimensional ball of radius K, for some K > 0. Notice that although C is bounded in volume, for any x ∈ C, f̂(x) and f(x) can differ arbitrarily. Though variants of each other, both the pointwise-bounded noise model in [22, 5] and our model may be considered as special cases of a more general type of adversarial noise model, namely the `p-bounded noise model. To see this, let us write the noise as a function η : Rd → R, such that the noisy zeroth order oracle given to us corresponds to the function f + η. Then, in the (absolute) pointwise-bounded noise model, η is bounded in `∞-norm (they assume ‖η‖∞ ≤ ), whereas in the outlier noise model, η is bounded in `0-norm (we assume ‖η‖0 ≤ vol(radius-K ball)). It will certainly be an interesting future direction to explore what can be achieved for minimizing convex functions with general `p-bounded noise. We would also like to point out that in our noise model, it is more natural to consider getting as close as possible to x∗ as opposed to finding a point with function value close to f(x∗). 
This is because our only assumption on the noise is that the corrupted region C, which is in the domain of the function f , is bounded in volume, and in particular, it may be located around x∗. Therefore, we believe it makes the most sense to measure the quality of the solution of a minimization algorithm by its distance to the optimal in the domain (specifically, how the distance compares with the corruption radius K). Our results. Let us first consider what one might expect to achieve in the outlier noise model. By the observation made above, it is not hard to see that we cannot hope to always find a point that is within distance K of the minimizer x∗, as an adversary could potentially corrupt some radius-K ball 2This model more often deals with the first order oracle, which gives the (noisy) gradient∇f(x) at a point x. around x∗, making it impossible for us to know where the true minimizer lies. However, is it possible to obtain a distance bound that is close to K (e.g. O(K))? Our first result in this paper is a lower bound showing that, somewhat surprisingly, this goal is in general impossible to achieve, even for algorithms that are allowed an unbounded number of queries to the oracle. In fact, our lower bound indicates that the best distance bound one can hope for is at least Ω(K √ β/α), where we recall that α,β are the “niceness” of the function. This result is a consequence of the existence of two (α, β)-nice functions that only differ in a small region of the domain, but whose minimizers are sufficiently far apart. We then develop, as our second result, an efficient algorithm that finds a point within distance O(K √ β/α) of the minimizer x∗, thus matching our lower bound up to constant factors. Roughly, our algorithm performs two stages of gradient descent, using approximate gradients computed from the noisy zeroth order oracle. We state our results formally in the theorems below. For simplicity, let us say a zeroth order oracle of a function f is K-corrupted if it is perturbed by an outlier noise of corruption radius K. Theorem 1.3 (Lower bound; formal version appears as Theorem 3.1). For α, β,K > 0 and sufficiently large d, there exist two (α, β)-nice functions f0, f1 : Rd → R which satisfy the following conditions: (i) The `2-distance between the minimizers of f0, f1 is Ω(K √ β/α); (ii) The total volume of points where f0 and f1 differ is at most that of a d-dimensional ball of radius K. To see the implication of this theorem, consider an adversary that randomly picks an index i ∈ {0, 1} and lets fi be the true convex function, but always uses f0 as the K-corrupted oracle. Then no matter which point our algorithm outputs, with probability at least 1/2 over the randomness of i, it is Ω(K √ β/α) away from the minimizer of the true convex function fi. Theorem 1.4 (Algorithm; formal version appears as Theorems 4.2, 5.1). Let α, β,K > 0 and d be sufficiently large. There is an algorithm that, given access to a K-corrupted oracle of an (α, β)-nice function f : Rd → R and an initial point x0 ∈ Rd, makes Õ(d · (β2/α2) · log ‖x0 − x∗‖)3 queries to the oracle and outputs a point x̂ s.t. ‖x̂− x∗‖2 ≤ O(K √ β/α). Here x∗ is the minimizer of f . We note that in both our results, our conditions on d are roughly that d ≥ Ω(log β/α). In other words, we require the function’s condition number β/α to be at most 2O(d), a quantity exponential in d. Our techniques. We now briefly describe our techniques used to obtain the above results. 
Our lower bound proceeds by proving the existence of two (α, β)-nice functions f0 and f1 s.t. (i) they differ only in an ellipsoid with volume at most that of a ball of radius K; (ii) their minimizers are at `2-distance Ω(K √ β/α)-away from each other. As mentioned above, this implies that even with infinitely many queries, one cannot approximate the minimizer x∗ to within distance o(K √ β/α). More concretely, we start by letting f0 be a simple (α, β)-nice quadratic function whose minimizer is at the origin. Then the existence of our desired function f1 will follow by two steps: first we show that there exists another function f ′ with certain good properties, then we obtain f1 by taking a piecewise combination of f0 and f ′. Specifically, the properties that we need f ′ to satisfy is as follows: (i) f ′ is (α, β)-nice; (ii) both the function values and the gradients of f0 and f ′ agree on all points on the periphery of an ellipsoid E that is centered at the origin and has bounded volume; (iii) the minimizer of f ′ has coordinate of the form (Ω(K √ β/α), 0, . . . , 0). We show that this essentially boils down to a convex function interpolation task where the function values and gradients at some infinitely many points are given. To this end, we adapt an interpolation result from [23], which was proposed to accommodate interpolation from finitely many points, to prove the existence of f ′. Finally, we construct the function f1 by letting f1 = f ′ inside the ellipsoid E , but f1 = f0 outside of E . For the upper bound, our algorithm proceeds in two stages. In the first stage, we come to within distance O(K(β/α)) of the minimizer x∗ and in the subsequent stage, we improve it to O(K √ β/α). The first algorithm essentially follows a gradient descent, but using an approximate gradient at each step. In order to estimate the gradient at a point x, we set up a system of linear equations, where each equation adds a constraint on the derivative at x along a uniformly random direction. There are however two types of noise in these equations, one from using a zeroth order oracle to compute first 3Here and throughout the paper, we write Õ(f) to denote O(f · poly(log f)). order information, and the other from the outlier noise added to the oracle. While we can solve this noisy linear system with good enough accuracy via an exhaustive search, we show that using an LP decoding routine in [12], we can solve it more efficiently in polynomial time. We note that using the latter is only to improve the running time, as both approaches result in the same query complexity. However, the one bottleneck in this approach is that at any point, the ball of radius < K around it could potentially be (nearly) completely corrupted. Thus, to get a meaningful estimate of the gradient, we have to sample points which are more than distance K-far apart. This tradeoff eventually allows us to get O(K(β/α))-close to the minimizer. To get to the optimal closeness of O(K √ β/α), we next start at a point which is guaranteed to be O(K(β/α))-close to the true minimizer. We now consider the function f̄ which is defined as the average of f in a ball of radius (say) 2K. It is not hard to verify that f̄ continues to be (α, β)-nice. Moreover, the minimizers of f̄ and f can be shown to be O(K √ β/α)-close to each other. Thus, it suffices to get close to the minimizer of f̄ . We will do so by simulating a gradient descent on f̄ . It therefore boils down to how we can efficiently approximate the gradients of f̄ . 
By the definition of f̄ , the gradient∇f̄(x) is also equal to the average of the gradients∇f(y) over all points y within distance 2K of x. This suggests that we can approximate ∇f̄(x) by averaging the gradients ∇f(y) at sufficiently many randomly sampled y’s, and bounding the error using concentration inequalities for sums of random vectors. Now a key observation is that, since these y’s are sampled randomly from a radius 2K-ball, it is highly likely that each sampled y sits in a mostly uncorrupted neighborhood, as long as the dimension is sufficiently high. Consequently, we can use the LP decoding approach above to obtain an accurate estimate of the gradient at each of these points. Prior work on noisy convex optimization. Other than the works mentioned above, noisy convex optimization has also been investigated in the context of multi-armed bandits and regret minimization [2, 8, 1]. In the direction of convex optimization under adversarial noise, the early results in fact date back to the 90s by [3]. Specifically for the pointwise-bounded noise model, there are subsequent works such as [21, 24, 17] that have improved on the guarantees of [5]. Due to space limitation, we include some other related work in Appendix A. Organization. In Section 2, we set up a few notations and give some basic definitions and technical preliminaries. In Section 3, we prove our lower bound result. In Section 4, we give a first algorithm that gets us to within distance O(K(β/α)) of x∗. In Section 5, we give a second algorithm that gets us to within distance O(K √ β/α) of x∗. In Section 6, we propose several future directions. 2 Preliminaries Note that due to space limitation, we defer some of the preliminaries to Appendix B. While there are many known equivalent definitions of strong convexity and smoothness of a function, the specific ones that we use in this paper are as follows. Definition 2.1. A function f : Rd → R is β-smooth if it is differentiable and for all pairs x, y ∈ Rd we have ‖∇f(x)−∇f(y)‖ ≤ β ‖x− y‖4. Definition 2.2. A function f : Rd → R is α-strongly convex if f(x)− α2 ‖x‖ 2 is convex. We next set up notations of a ball and the uniform distribution over it. Definition 2.3. Let B(x, r) def= {y : ‖y − x‖ ≤ r} denote the ball of radius r centered at x. Let U(x, r) denote the uniform distribution over all points in the ball B(x, r). As a result, the fraction of corrupted volume in a ball B(x, r) is Pry∼U(x,r)[f(y) 6= f̂(y)], where we recall that f̂ denotes the corrupted version of f . For our second algorithm in Section 5, we will need to consider the “average” function, whose value at a point x is the average of f(y)’s where y is within some distance of x. 4Here and going forward, all norms are `2-norms unless stated otherwise Definition 2.4. For any r > 0, define the function f̄r as f̄r(x) def = Ey∼U(x,r) [f(y)]. It is not hard to verify the strong convexity and smoothness of f̄r: Lemma 2.5. If f is α-strongly convex and β-smooth, f̄r is also α-strongly convex and β-smooth. As a result of α-strong convexity and β-smoothness, we can upper bound the distance between the minimizers of f and f̄r by O(r √ β/α). Lemma 2.6. Let x∗, x̄r be the minimizers of f and f̄r respectively. Then ‖x∗ − x̄r‖ ≤ 2r √ β/α. A proof of this lemma is included in Appendix B. 3 An Ω(K √ β/α) lower bound In this section we show that getting O(K √ β/α)-close to x∗ is the best we can hope for even if we are allowed to query the function value at every point of the domain. 
We will prove this by showing that when the dimension is sufficiently high in terms of β/α, there exist two α-strongly convex, β-smooth functions that differ only in an ellipsoid of volume equal to a ball of radius K, but whose minimizers are Ω(K √ β/α)-apart. Theorem 3.1. Given 0 < α ≤ β with 1 + log βα ≤ d where d is the dimension, and a K > 0, there exist two α-strongly convex, β-smooth functions whose values differ only in an ellipsoid of volume equal to a radius-K ball, but whose minimizers are Ω( √ β αK)-far from each other. In order to prove Theorem 3.1, we shall prove several intermediate lemmas first, which are built on the interpolation results from [23]. We remark that the main results in [23] are stated for interpolating a set of finitely many points, while for our purpose we need to interpolate infinitely many. Therefore we cannot use their results directly in a black-box manner, but instead have to make certain adaptations. 3.1 Some interpolation results from [23] First let us define the notion of (α, β)-interpolability. Definition 3.2 ((α, β)-interpolability). Suppose we are given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where each xi, gi ∈ Rd, fi ∈ R. Let α ∈ R≥0, β ∈ R≥0∪{+∞} where α < β. We say this set is (α, β)-interpolable if there is a proper and closed convex function f : Rd → R∪{+∞} that is α-strongly convex and β-smooth such that for all i ∈ I , gi ∈ ∂f(xi) and f(xi) = fi, where ∂f(xi) denotes the set of subgradients of f at xi. Note here that when α = 0, we only require f to be convex. When β =∞, we do not require f to be smooth and thus f is not necessarily differentiable; when β <∞, the condition gi ∈ ∂f(xi) is equivalent to gi = ∇f(xi) as the gradient is unique at any point when f is differentiable. The following two lemmas are proved in [23]. The first lemma enables us to reduce the (α, β)interpolation of some tuple set to the (0, β′)-interpolation of another tuple set, while the second lemma allows us to further reduce it to the (α′,∞)-interpolation of some other tuple set. We note that although [23] only states these lemmas for sets containing finitely many tuples, their proofs work for sets containing infinitely many tuples as well. Lemma 3.3. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. { (xi, gi − αxi, fi − α2 ‖xi‖ 2 ) } i∈I is (0, β − α)-interpolable. Lemma 3.4. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (0, β)-interpolable. 2. { (gi, xi, x > i gi − fi) } i∈I is (1/β,∞)-interpolable. Then as in [23], by alternately applying Lemmas 3.3 and 3.4 twice each, we are able to reduce any (α, β)-interpolation problem to a (0,∞)-interpolation problem, where we only want to interpolate some points with a proper and closed convex function. Formally, we have the following lemma, whose proof is deferred to Appendix C. Lemma 3.5. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. {( βxi β−α − gi β−α , gi − αxi, αx>i gi β−α + fi − βα‖xi‖2 2(β−α) − ‖gi‖2 2(β−α) )} i∈I is (0,∞)-interpolable. 
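To make the reduction concrete, the snippet below applies the transformation of statement 2 in Lemma 3.5 (as read from the statement above) to a finite set of tuples and then checks the standard pairwise subgradient inequalities, which characterize convex (0,∞)-interpolability of finite data [23]. It is only an illustrative numerical sanity check, not part of the paper's proof; the toy quadratic and the function names are assumptions of the example.

```python
import numpy as np

def lemma_3_5_map(xs, gs, fs, alpha, beta):
    """Map tuples (x_i, g_i, f_i) to the tuples of statement 2 in Lemma 3.5 (requires alpha < beta)."""
    d = beta - alpha
    xs2 = (beta * xs - gs) / d
    gs2 = gs - alpha * xs
    fs2 = (alpha * np.einsum("ij,ij->i", xs, gs) / d
           + fs
           - beta * alpha * np.sum(xs ** 2, axis=1) / (2 * d)
           - np.sum(gs ** 2, axis=1) / (2 * d))
    return xs2, gs2, fs2

def convex_interpolable(xs, gs, fs, tol=1e-9):
    """Finite-data (0, infinity)-interpolability: f_i >= f_j + <g_j, x_i - x_j> for all i, j."""
    inner = np.einsum("jk,ijk->ij", gs, xs[:, None, :] - xs[None, :, :])
    return bool(np.all(fs[:, None] - fs[None, :] - inner >= -tol))

# f(x) = ||x||^2 is 2-strongly convex and 2-smooth, so its (point, gradient, value) tuples
# are (1, 4)-interpolable; after the Lemma 3.5 map the convexity inequalities should hold.
rng = np.random.default_rng(0)
xs = rng.standard_normal((20, 5))
print(convex_interpolable(*lemma_3_5_map(xs, 2 * xs, np.sum(xs ** 2, axis=1), alpha=1.0, beta=4.0)))
```

For tuples sampled from an (α, β)-nice function the check should pass, mirroring the equivalence the lower-bound construction relies on.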
3.2 Our lower bound We first show that there exists an Ω(1)-strongly convex, O(1)-smooth function whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with the quadratic function ‖x‖2 on all points on the surface of a unit ball. Formally, we have the following lemma. For ease of presentation, let us define X=1 def = {x : ‖x‖ = 1} and similarly X≥1 def= {x : ‖x‖ ≥ 1}. We also write e1 = (1, 0, . . . , 0)T to denote the first standard basis vector. Lemma 3.6. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. There exists a 12 -strongly convex, 16-smooth function f̃ such that 1. f̃ ’s minimizer is 12e1. 2. For all x ∈ X=1 we have f̃(x) = f(x) and ∇f̃(x) = ∇f(x). The proof of this lemma is deferred to Appendix C. Roughly, the proof consists of three steps: (i) formulate proving the existence of f̃ as a ( 12 , 16)-interpolation problem; (ii) use Lemma 3.5 to reduce it to the (0,∞)-interpolation of some infinitely many points; (iii) explicitly construct a proper and closed convex function that does interpolate these points. Now by taking a piecewise combination of the function f̃ in Lemma 3.6 and the quadratic function ‖x‖2, we can show that there exists an Ω(1)-strongly convex, O(1)-smooth function f̂ whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with ‖x‖2 on every point with `2-norm greater than or equal to 1. Lemma 3.7. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. Define f̂ such that f̂(x) = f̃(x) if ‖x‖ ≤ 1 and f̂(x) = f(x) otherwise (‖x‖ > 1). Then we have 1. f̂ is 12 -strongly convex and 16-smooth. 2. f̂ ’s minimizer is 12e1. 3. For all x ∈ X≥1 we have f̂(x) = f(x) and ∇f̂(x) = ∇f(x). The proof of this lemma is included in Appendix C. Then by scaling the domains of f, f̂ in Lemma 3.7, we prove that for any κ ≥ 1, when the dimension is sufficiently high in terms of κ, there exist two Ω(1/κ)-strongly convex, O(1)-smooth functions whose function values and gradients agree on every point outside of an ellipsoid of volume equal to a unit ball, but whose minimizers are √ κ/2-apart. Here κ shall be thought of as β/α where β = Θ(1). Lemma 3.8. Given κ ≥ 1 with 1+log κ ≤ dwhere d is the dimension. Let γ def= (1/κ) 1d−1 ∈ [1/2, 1]. Let Sd×d = DIAG(κ, γ, . . . , γ). Define s(x) = x>S−1x, which is (2/κ)-strongly convex and (2/γ)smooth. Let Xs≥1 = {x : s(x) ≥ 1}. Also define ŝ(x) = f̂(S−1/2x). Then we have 1. ŝ is 1/(2κ)-strongly convex and (16/γ)-smooth. 2. ŝ’s minimizer is √ κ 2 e1. 3. For all x ∈ Xs≥1 we have ŝ(x) = s(x) and ∇ŝ(x) = ∇s(x). Finally, by further scaling (the domain and the function values of) s, ŝ in Lemma 3.8, we can prove Theorem 3.1. The proofs of Lemma 3.8 and Theorem 3.1 are both included in Appendix C. 4 An O(K(β/α))-close algorithm In this section we give an algorithm GDSTAGEI that finds a pointO(K(β/α))-close to the minimizer of f . GDSTAGEI essentially implements a gradient descent algorithm, but uses approximate gradient computed from the noisy oracle at each step. To begin with, we present a subroutine GRADIENTCOMP for computing the gradient at a point where a small neighborhood is mostly uncorrupted. Algorithm 1: GRADIENTCOMP(f̂ , x, β, τ) Input : f̂ : Rd → R, x ∈ Rd, β > 0, and τ > 0. Output : g ∈ Rd. 1 Randomly choose 1000d pairs of points a1, b1 . . . , a1000d, b1000d in the ball B(x, τ). 2 Query the function values f̂(aj), f̂(bj) for all j = 1, 2, . . . , 1000d. 
3 Let g ∈ Rd be any vector such that, for at least 800d of the j’s, the following holds: ∣∣∣g>(bj − aj)− ( f̂(bj)− f̂(aj) )∣∣∣ ‖bj − aj‖ ≤ βτ. (1) If no such g exists, set g to be an arbitrary vector. We summarize the performance of GRADIENTCOMP below, with the proof deferred to Appendix D. Essentially, the error in the gradient computed by GRADIENTCOMP tends to zero as τ → 0. Lemma 4.1. Fix d > 0 and β > 0. There exists a function err(τ) satisfying limτ→0+ err(τ) = 0 such that the following holds. Fix any x ∈ Rd and τ > 0 such that the radius-τ ball centered at x is mostly uncorrupted: Pry∼U(x,τ) [ f(y) 6= f̂(y) ] ≤ 1 100 . (2) Then we have that with probability 1− 2−3d, the vector g returned by GRADIENTCOMP satisfies ‖g −∇f(x)‖ ≤ err(τ). (3) The number of queries made by GRADIENTCOMP is O(d). We now describe GDSTAGEI in Algorithm 2. Its performance is characterized in Theorem 4.2. Algorithm 2: GDSTAGEI(f̂ , α, β, x0, R0, δ) Input : f̂ : Rd → R, 0 < α < β, x0 ∈ Rd, R0 ≥ ‖x0 − x∗‖, and δ ∈ (0, 1). Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log R0(β/α)K . 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 200 log(T/δ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 99K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for some sufficiently small τ > 0. 6 Find a vector ĝ ∈ Rd such that at least (2s)/3 of the gi’s are within euclidean distance 99.5βK of ĝ; if no such ĝ exists, set ĝ to be an arbitrary vector. 7 Perform a descent step: xt+1 ← xt − 12β ĝ. 8 return xt. Theorem 4.2. Let d ≥ 2. Given an initial point x0 with ‖x0 − x∗‖ ≤ R0 and a δ ∈ (0, 1), the algorithm GDSTAGEI returns a point x̂ with ‖x̂− x∗‖ ≤ 10000(β/α)K with probability 1−δ, where x∗ is the minimizer of f . The number of queries made by GDSTAGEI is Õ(d(β/α) log R0(β/α)K log(1/δ)). Crucial to proving this theorem is to show that the gradients used by GDSTAGEI are accurate enough: Lemma 4.3. Let d ≥ 2. The ĝ computed at Line 6 of GDSTAGEI satisfies with probability 1− δT that ‖ĝ −∇f(xt)‖ ≤ 200βK. Full proofs of Theorem 4.2 and Lemma 4.3 are presented in Appendix D. A note on the running time of our algorithms. While we are mainly concerned about the query complexity, we remark that both of our algorithms above can be implemented in polynomial time. For GRADIENTCOMP, all steps except Line 3 are easily seen to be implementable in polynomial time (in particular, linear time). Therefore it suffices to show that Line 3 can be done efficiently. Claim 4.4. A vector g satisfying the condition at Line 3 of GRADIENTCOMP, if it exists, can be found in time polynomial in d. Our proof of this claim proceeds by presenting a poly(d)-time algorithm based on an LP-decoding routine in [12]. Therefore let us first introduce the specific result that we need from [12]. Let A ∈ Rn×d be a matrix and z ∈ Rd be a vector. Consider the linear system Ax = Az, to which x = z is clearly a solution. If n ≥ d and A has full rank, then we can retrieve the vector z given A and Az by solving the linear system Ax = Az in polynomial time using, e.g., Gaussian elimination. Now suppose the RHS of the linear system is corrupted by some noise e ∈ Rn, and we are only given A and the corrupted RHS Az + e, then can we still retrieve the vector z efficiently? [12] showed that under certain assumptions, we can obtain good estimates of z in poly-time by linear programming. Theorem 4.5 ([12]). There exist constants ρ∗ ≈ 0.239 and γ ≥ 1 such that the following holds. 
Suppose n ≥ γd and An×d’s entries are drawn independently from a standard Gaussian distribution. Suppose also the noise e can be written as e = e1 + e2 where ‖e1‖0 ≤ ρ∗n. Then given A and Az + e, we can find in polynomial time a vector z′ s.t. ‖z − z′‖2 ≤ O(‖e2‖∞), for any z ∈ Rd. Basically, this theorem assumes that the noise can be decomposed into the sum of two parts, one with small nonzero support, and the other with small entry-wise magnitude. Then the `2-error of the solution is on the order of the largest entry-wise magnitude of the second part of the noise. Proof of Claim 4.4. Let us define a matrix B ∈ R1000d×d whose ith row is equal to (bi−ai) T ‖bi−ai‖ , where ai, bi’s are the sampled points at Line 1 of GRADIENTCOMP. We also define vectors b, b̂ ∈ R1000d with b(i) = ∇f(x) T (bi−ai) ‖bi−ai‖ and b̂(i) = f̂(bi)−f̂(ai) ‖bi−ai‖ , where x is the input point of GRADIENTCOMP. Notice that each b(i) is the inner product of the ith row of B and ∇f(x), and therefore we have B∇f(x) = b. Consequently, given B and b we can retrieve ∇f(x) by solving the linear system By = b (y are the variables). Thus our goal becomes solving this linear system when only B and b̂ (a.k.a. a corrupted version of b) are given. While this looks like the task in Theorem 4.5, note that the entries of B are not drawn from independent Gaussian distributions, so we will need go a step further. As the ai, bi’s are sampled uniformly at random from a ball, each row (bi−ai)T ‖bi−ai‖ of B is a unit vector with a uniformly random direction. It is well known that a vector with independent standard Gaussian entries also points to a uniformly random direction. In fact, we can sample a d-dimensional such vector by a three-step process: (i) sample a unit vector with a random direction; (ii) sample a length ` from the χ2-distribution with d degrees of freedom (i.e., the sum of the squares of d independent standard Gaussians); (iii) scale the unit vector by √ `. In light of this, let us generate a diagonal matrix D ∈ R1000d×1000d such that each D(i, i) is independently sampled as in step (ii). Then we consider the linear system D1/2By = D1/2b to which y = ∇f(x) is a solution. Notice that we now have that each entry of D1/2B follows a standard Gaussian. By thinking of D1/2b̂ as a corrupted version of Db, we then need to show that the noise e def= D1/2(b̂− b) can be written as e1 + e2 such that ‖e1‖0 ≤ ρ∗(1000d) and ‖e2‖∞ is small. To this end, we notice that for each i such that both ai, bi are uncorrupted, we have by β-smoothness that ∣∣∣b(i)− b̂(i) ∣∣∣ = ∣∣∣∣∣ ∇f(x)T (bi − ai) ‖bi − ai‖ − f̂(bi)− f̂(ai)‖bi − ai‖ ∣∣∣∣∣ ≤ O(βτ), (4) where τ is the radius of the ball B(x, τ) from which ai, bi’s are sampled. Thus, if B(x, τ) is mostly (say 99%) uncorrupted, with probability 1− exp(−Ω(d)), (4) holds for most (say 90%) of the i’s. Also, by standard Markov’s inequality and Chernoff bounds, with probability 1− exp(−Ω(d)), for most (say 99%) of the i’s we have D(i, i) ≤ O(d). Combining this with (4), we have for 80% of the i’s that √ D(i, i) ∣∣∣b̂(i)− b(i) ∣∣∣ ≤ O( √ dβτ), implying the existence of e1, e2 s.t. e = e1 + e2 and ‖e1‖0 ≤ 0.2(1000d), ‖e2‖∞ ≤ O( √ dβτ). This means that by Theorem 4.5 we can use linear programming to find a g with ‖g −∇f(x)‖ ≤ O( √ dβτ), matching the guarantee in Lemma 4.1. Finally, to address a technicality about the constant γ in Theorem 4.5, we note that we can increase the number of sampled ai, bi pairs to max {1000d, γd}, and the rest of the analysis still follows. 
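As a rough illustration of the gradient-estimation step inside GRADIENTCOMP, the sketch below forms the random directional-difference equations discussed above and solves them for g. It substitutes a plain least-squares solve for the LP-decoding routine of [12] (and for the robustness test at Line 3), so it is only a stand-in that assumes essentially all sampled pairs are uncorrupted; the sampling scheme, constants, and function names are illustrative assumptions.

```python
import numpy as np

def gradient_comp(oracle, x, tau, n_pairs, rng):
    """Estimate grad f(x) from zeroth-order queries by solving random directional-difference equations.

    Each pair (a_j, b_j) near x contributes one equation
        <g, (b_j - a_j) / ||b_j - a_j||> ~= (oracle(b_j) - oracle(a_j)) / ||b_j - a_j||,
    whose right-hand side is within O(beta * tau) of the true directional derivative when
    neither point is corrupted.  A least-squares solve is used here instead of the paper's
    LP-decoding step, so this sketch tolerates essentially no corrupted equations.
    """
    d = x.shape[0]
    A = np.empty((n_pairs, d))
    y = np.empty(n_pairs)
    for j in range(n_pairs):
        a = x + tau * rng.standard_normal(d) / np.sqrt(d)   # rough stand-in for sampling in B(x, tau)
        b = x + tau * rng.standard_normal(d) / np.sqrt(d)
        u = b - a
        nrm = np.linalg.norm(u)
        A[j] = u / nrm
        y[j] = (oracle(b) - oracle(a)) / nrm
    g, *_ = np.linalg.lstsq(A, y, rcond=None)
    return g

# Usage on an uncorrupted quadratic f(z) = ||z||^2: the estimate should approach 2*x0 as tau -> 0.
rng = np.random.default_rng(0)
x0 = np.ones(10)
print(gradient_comp(lambda z: float(z @ z), x0, tau=1e-4, n_pairs=1000, rng=rng))
```

In the regime the paper actually targets, where a constant fraction of the equations can be arbitrarily wrong, the solve would have to be replaced by the exhaustive-search or LP-decoding step analysed above.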
Then we consider the running time of GDSTAGEI. By Claim 4.4 and straightforward observations, all steps other than Line 6 run in polynomial time. Thus we focus on the efficiency of Line 6. Claim 4.6. A vector ĝ satisfying the condition at Line 6 of GDSTAGEI, if it exists, can be found in nearly-linear time in s, at the cost of an extra constant factor in the radius of the ball. We note that an extra constant factor in the radius of the ball will not affect the final distance to x∗ by more than a constant factor. The proof of this claim is deferred to Appendix D. Roughly, the proof proceeds by sampling Õ(1) points from g1, . . . , gs and checking for each sampled gj if at least 2/3 fraction of the total points are within euclidean distance 200βK of gj . Note that the radius now becomes 200βK as opposed to 100βK at Line 6 of GDSTAGEI. 5 An O(K √ β/α)-close algorithm In this section we give an algorithm GDSTAGEII that, when given an initial point which is O(K(β/α))-close to the minimizer x∗ of f , finds a point that is O(K √ β/α)-close to x∗. GDSTAGEII basically performs a gradient descent on the average function f̄2K (Definition 2.4). Algorithm 3: GDSTAGEII(f̂ , α, β, x0) Input : f̂ : Rd → R, 0 < α < β, and x0 ∈ Rd with ‖x0 − x∗‖ ≤ 10000(β/α)K. Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log( β α + 1). 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 400βα log(dT ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 2K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for sufficiently small τ > 0, and their average ḡ ← 1s ∑s i=1 gi. 6 Perform a descent step: xt+1 ← xt − 12β ḡ. 7 return xt. The performance of GDSTAGEII is characterized in Theorem 5.1, with the proof in Appendix E. Note that while the success probability in Theorem 5.1 is not arbitrarily large, we can amplify it to any 1− δ by repeating the algorithm O(log(1/δ)) times, as we show in Corollary E.2. Theorem 5.1. Suppose that d ≥ 100 log(β/α + 1). Then given an initial point x0 that satisfies ‖x0 − x∗‖ ≤ 10000(β/α)K, GDSTAGEII returns a point x̂ with ‖x̂− x∗‖ ≤ 1000 √ β/αK with probability at least 1 − 2−d/8, where x∗ is the minimizer of f . The number of queries made by GDSTAGEII is Õ(d(β/α)2). Moreover, the algorithm runs in polynomial time. The proof of Theorem 5.1 builds on a lemma showing that the gradients that GDSTAGEII uses are sufficiently precise. The lemma relies on an `2-concentration inequality for the sum of random vectors (i.e., the Vector Bernstein Inequality in Theorem E.1). Its proof also appears in Appendix E. Lemma 5.2. Let d ≥ 100 log(β/α+ 1). The vector ḡ computed at Line 5 of GDSTAGEII satisfies the following with probability at least 1− 2−d/8/T : ∥∥ḡ −∇f̄2K(xt) ∥∥ ≤ 16√αβK. 6 Future directions We obtained asymptotically matching upper and lower bounds on how well the minimizer of a convex function can be identified in presence of outlier noise. There are several natural directions for future work. First, while our algorithm’s query complexity has essentially the same dependence on d and ‖x0 − x∗‖ as Nesterov’s accelerated gradient descent, it is still off by a factor of (β/α)1.5. It will thus be interesting to understand if this remaining performance gap can be eliminated. Also, we note that both our results require the dimension to be sufficiently high. While we believe the high dimension regime is of the most interest, it will be an interesting exercise to understand how these bounds change in the low-dimensional setting. 
Finally, as pointed out in the introduction, an appealing future direction is to study convex minimization with the more general ℓp-bounded noise. Acknowledgments and Disclosure of Funding We thank the anonymous reviewers for their valuable feedback. This work was supported in part by NSF awards CCF-1763514, CCF-1934876, CCF-2008305, CCF-1910534, CCF-1926872, and CCF-2045128.
1. What is the main contribution of the paper regarding optimization problems? 2. What is the twist considered in the study, and how does it affect the results? 3. What are the lower and upper bounds established in the paper, and how do they relate to the number of queries? 4. How does the paper show that one cannot get better than K β/α close with an unbounded number of queries? 5. What is the significance of the fact that the results only apply to high-dimensional scenarios?
Summary Of The Paper Review
Summary Of The Paper
The paper studies the question of optimizing a strongly convex function f given a zeroth-order oracle for the function. This is one of the most studied problems in optimization and there is a wealth of literature on the problem. The twist considered here is that the zeroth-order oracle has outlier noise: on some points in the domain, the oracle can output an arbitrary value (completely corrupted). The corrupted part of the domain is assumed to have volume at most that of a ball of radius K. Clearly, since the adversary is allowed to completely corrupt a ball of radius K, one cannot expect to get closer than K to the true minimizer, even with an unbounded number of queries. Interestingly, the authors also show that if the function f is α-strongly convex and β-smooth, one in fact cannot even get better than K√(β/α)-close with an unbounded number of queries. The paper also shows that one can get relatively close: if the function f is α-strongly convex and β-smooth, then one can get to a point that is O(K√(β/α))-close, and this with roughly Õ(d(β/α)²) queries. The model and results are nice. One drawback, though, is that the results only apply to high-dimensional scenarios.
Review
Lower bound: The lower bound showing that one cannot get better than Ω(K√(β/α))-close is obtained by explicitly constructing two functions whose minima are K√(β/α) apart but which differ from each other only on a region of volume equal to that of a ball of radius K. This critically relies on the space being high-dimensional.
Upper bound: The authors first use plain gradient descent to argue that one can indeed get O(K(β/α))-close relatively easily. They then bootstrap via a clever argument that averages the gradients in a ball of radius 2K. Once again, as we are in high dimensions, the fraction of a radius-2K ball's volume covered by a radius-K ball is exponentially small, and as such most sampled points will give the correct answer.
NIPS
Title Approximate optimization of convex functions with outlier noise Abstract We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by outlier noise. Specifically, we assume the function values at some points of the domain are corrupted arbitrarily by an adversary, with the only restriction being that the total volume of corrupted points is bounded. The goal then is to find a point close to the function’s minimizer using access to the corrupted oracle. We first prove a lower bound result showing that, somewhat surprisingly, one cannot hope to approximate the minimizer nearly as well as one might expect, even if one is allowed an unbounded number of queries to the oracle. Complementing this negative result, we then develop an efficient algorithm that outputs a point close to the minimizer of the convex function, where the specific distance matches exactly, up to constant factors, the distance bound shown in our lower bound result. 1 Introduction Unconstrained convex optimization is among the most well-studied problems in mathematical optimization and has extensive applications in machine learning [7]. In the classic unconstrained convex minimization problem, we are given oracle access to a convex function f : Rd → R, and seek to efficiently find a point x̃ that is close to the minimizer of f (call it x∗)1. To obtain meaningful guarantees for approximating the minimizer x∗, one needs to make certain assumptions on the convexity and smoothness of f . In particular, f is commonly assumed to be α-strongly convex and β-smooth for some β > α > 0. This means that, when f is twice differentiable, its second derivative in any direction is between α and β. For ease of presentation, we will say a convex function is (α, β)-nice if it satisfies these two conditions. It is well known that the minimizer of an (α, β)-nice function can be approximated arbitrarily well in polynomial time by the classic gradient descent algorithm, or its accelerated version due to Nesterov [18]. To formally state the performance of these two algorithms, let us suppose an (α, β)-nice function f : Rd → R is given to us as a zeroth order oracle, which returns the function value f(x) on any input point x ∈ Rd. Then we have: Theorem 1.1 ([7]). Given any initial point x0 ∈ Rd and > 0, the classic gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d(β/α) log ‖x0−x∗‖ ) oracle queries. 1Sometimes, instead of finding a point close to x∗, the goal is to find a point whose function value is close to f(x∗). However, as we point out later, it is more natural to study approximations in the domain in our setting. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Theorem 1.2 ([18]). Given any initial point x0 ∈ Rd and > 0, the accelerated gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d √ β/α log ‖x0−x ∗‖ ) oracle queries. We note that the quantity β/α is called the condition number of the function f . The efficiency of a convex minimization algorithm is usually measured with respect to such a quantity. Given that the complexity of convex minimization with exact oracles has been well understood, a natural question is “can we still minimize a convex function efficiently if the oracle is to some extent inaccurate?” There has in fact been a rich body of work in efforts to answer this question. Roughly, they can be divided into two categories based on their assumptions on the oracle’s inaccuracy. 
The first category studies the case when the oracle is corrupted by stochastic noise2 — namely, the errors of the oracle are assumed to be random and independently drawn from some distribution [10, 20, 19]. Notice that as the oracle is correct in expectation, one can obtain good estimates to the true values efficiently by averaging over sufficiently many points in a small neighborhood. As a result, it is still possible to approximate the minimizer x∗ to within arbitrarily small distance in polynomial time. The main focus of these results is thus to obtain optimal algorithm efficiency, ideally matching that of Nesterov’s accelerated gradient descent. The second category, on the other hand, considers the adversarial noise. That is, the errors are no longer drawn from certain distributions, but rather are added adversarially, while obeying certain constraints. A representative such model is what we call pointwise-bounded noise model [22, 5], in which the only assumption on the noise is that it is pointwise bounded in magnitude; other than that, the specific perturbation on each point can be arbitrary. Formally, it is allowed for the oracle to return, on an input point x, a value in the range [f(x)− , f(x) + ] (absolute errors) or [(1 − )f(x), (1 + )f(x)] (relative errors), for some > 0. [5] shows that when the errors are absolute and is on the order of 1/d, there is a polynomial-time algorithm that can find a point with function value arbitrarily close to f(x∗). Whereas [22] shows that, in sharp contrast, when is about 1/ √ d or larger, no polynomial-time algorithm is able to find a point with function value within some constant error of f(x∗), for both absolute and relative errors. Our model. Following the second line of research above, in this paper we study another type of adversarial noise model, called outlier noise model, which can be seen as a natural variant of the pointwise-bounded noise model in [22, 5]. In this model, the only assumption we make is a bound on the “number” of points corrupted by the noise; apart from that, we do not assume any bounds on the magnitude of the errors on the corrupted points. Formally, we are given access to an exact zeroth order oracle of a function f̂ that differs from the true convex function f only on a set C ⊂ Rd, with the guarantee that the volume of C is at most that of a d-dimensional ball of radius K, for some K > 0. Notice that although C is bounded in volume, for any x ∈ C, f̂(x) and f(x) can differ arbitrarily. Though variants of each other, both the pointwise-bounded noise model in [22, 5] and our model may be considered as special cases of a more general type of adversarial noise model, namely the `p-bounded noise model. To see this, let us write the noise as a function η : Rd → R, such that the noisy zeroth order oracle given to us corresponds to the function f + η. Then, in the (absolute) pointwise-bounded noise model, η is bounded in `∞-norm (they assume ‖η‖∞ ≤ ), whereas in the outlier noise model, η is bounded in `0-norm (we assume ‖η‖0 ≤ vol(radius-K ball)). It will certainly be an interesting future direction to explore what can be achieved for minimizing convex functions with general `p-bounded noise. We would also like to point out that in our noise model, it is more natural to consider getting as close as possible to x∗ as opposed to finding a point with function value close to f(x∗). 
This is because our only assumption on the noise is that the corrupted region C, which is in the domain of the function f , is bounded in volume, and in particular, it may be located around x∗. Therefore, we believe it makes the most sense to measure the quality of the solution of a minimization algorithm by its distance to the optimal in the domain (specifically, how the distance compares with the corruption radius K). Our results. Let us first consider what one might expect to achieve in the outlier noise model. By the observation made above, it is not hard to see that we cannot hope to always find a point that is within distance K of the minimizer x∗, as an adversary could potentially corrupt some radius-K ball 2This model more often deals with the first order oracle, which gives the (noisy) gradient∇f(x) at a point x. around x∗, making it impossible for us to know where the true minimizer lies. However, is it possible to obtain a distance bound that is close to K (e.g. O(K))? Our first result in this paper is a lower bound showing that, somewhat surprisingly, this goal is in general impossible to achieve, even for algorithms that are allowed an unbounded number of queries to the oracle. In fact, our lower bound indicates that the best distance bound one can hope for is at least Ω(K √ β/α), where we recall that α,β are the “niceness” of the function. This result is a consequence of the existence of two (α, β)-nice functions that only differ in a small region of the domain, but whose minimizers are sufficiently far apart. We then develop, as our second result, an efficient algorithm that finds a point within distance O(K √ β/α) of the minimizer x∗, thus matching our lower bound up to constant factors. Roughly, our algorithm performs two stages of gradient descent, using approximate gradients computed from the noisy zeroth order oracle. We state our results formally in the theorems below. For simplicity, let us say a zeroth order oracle of a function f is K-corrupted if it is perturbed by an outlier noise of corruption radius K. Theorem 1.3 (Lower bound; formal version appears as Theorem 3.1). For α, β,K > 0 and sufficiently large d, there exist two (α, β)-nice functions f0, f1 : Rd → R which satisfy the following conditions: (i) The `2-distance between the minimizers of f0, f1 is Ω(K √ β/α); (ii) The total volume of points where f0 and f1 differ is at most that of a d-dimensional ball of radius K. To see the implication of this theorem, consider an adversary that randomly picks an index i ∈ {0, 1} and lets fi be the true convex function, but always uses f0 as the K-corrupted oracle. Then no matter which point our algorithm outputs, with probability at least 1/2 over the randomness of i, it is Ω(K √ β/α) away from the minimizer of the true convex function fi. Theorem 1.4 (Algorithm; formal version appears as Theorems 4.2, 5.1). Let α, β,K > 0 and d be sufficiently large. There is an algorithm that, given access to a K-corrupted oracle of an (α, β)-nice function f : Rd → R and an initial point x0 ∈ Rd, makes Õ(d · (β2/α2) · log ‖x0 − x∗‖)3 queries to the oracle and outputs a point x̂ s.t. ‖x̂− x∗‖2 ≤ O(K √ β/α). Here x∗ is the minimizer of f . We note that in both our results, our conditions on d are roughly that d ≥ Ω(log β/α). In other words, we require the function’s condition number β/α to be at most 2O(d), a quantity exponential in d. Our techniques. We now briefly describe our techniques used to obtain the above results. 
Our lower bound proceeds by proving the existence of two (α, β)-nice functions f0 and f1 s.t. (i) they differ only in an ellipsoid with volume at most that of a ball of radius K; (ii) their minimizers are at ℓ2-distance Ω(K√(β/α)) from each other. As mentioned above, this implies that even with infinitely many queries, one cannot approximate the minimizer x∗ to within distance o(K√(β/α)). More concretely, we start by letting f0 be a simple (α, β)-nice quadratic function whose minimizer is at the origin. Then the existence of our desired function f1 will follow in two steps: first we show that there exists another function f′ with certain good properties, then we obtain f1 by taking a piecewise combination of f0 and f′. Specifically, the properties that we need f′ to satisfy are as follows: (i) f′ is (α, β)-nice; (ii) both the function values and the gradients of f0 and f′ agree on all points on the periphery of an ellipsoid E that is centered at the origin and has bounded volume; (iii) the minimizer of f′ has coordinates of the form (Ω(K√(β/α)), 0, . . . , 0). We show that this essentially boils down to a convex function interpolation task where the function values and gradients at infinitely many points are given. To this end, we adapt an interpolation result from [23], which was proposed to accommodate interpolation from finitely many points, to prove the existence of f′. Finally, we construct the function f1 by letting f1 = f′ inside the ellipsoid E, but f1 = f0 outside of E.

For the upper bound, our algorithm proceeds in two stages. In the first stage, we come to within distance O(K(β/α)) of the minimizer x∗, and in the subsequent stage, we improve this to O(K√(β/α)). The first algorithm essentially follows a gradient descent, but using an approximate gradient at each step. In order to estimate the gradient at a point x, we set up a system of linear equations, where each equation adds a constraint on the derivative at x along a uniformly random direction. There are however two types of noise in these equations, one from using a zeroth order oracle to compute first order information, and the other from the outlier noise added to the oracle. While we can solve this noisy linear system with good enough accuracy via an exhaustive search, we show that using an LP decoding routine in [12], we can solve it more efficiently in polynomial time. We note that using the latter is only to improve the running time, as both approaches result in the same query complexity. However, the one bottleneck in this approach is that at any point, the ball of radius < K around it could potentially be (nearly) completely corrupted. Thus, to get a meaningful estimate of the gradient, we have to sample points which are more than distance K apart. This tradeoff eventually allows us to get O(K(β/α))-close to the minimizer.

To get to the optimal closeness of O(K√(β/α)), we next start at a point which is guaranteed to be O(K(β/α))-close to the true minimizer. We now consider the function f̄ which is defined as the average of f over a ball of radius (say) 2K. It is not hard to verify that f̄ continues to be (α, β)-nice. Moreover, the minimizers of f̄ and f can be shown to be O(K√(β/α))-close to each other. Thus, it suffices to get close to the minimizer of f̄. We will do so by simulating a gradient descent on f̄. It therefore boils down to how we can efficiently approximate the gradients of f̄. By the definition of f̄, the gradient ∇f̄(x) is also equal to the average of the gradients ∇f(y) over all points y within distance 2K of x. This suggests that we can approximate ∇f̄(x) by averaging the gradients ∇f(y) at sufficiently many randomly sampled y’s, and bounding the error using concentration inequalities for sums of random vectors. Now a key observation is that, since these y’s are sampled randomly from a radius-2K ball, it is highly likely that each sampled y sits in a mostly uncorrupted neighborhood, as long as the dimension is sufficiently high. Consequently, we can use the LP decoding approach above to obtain an accurate estimate of the gradient at each of these points.
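The following short NumPy sketch illustrates this averaging step in an idealized setting where the gradient of f at each sampled point is available exactly (in the paper, that local estimate is what the GRADIENTCOMP subroutine of Section 4 provides). The helper names sample_ball and grad_f_bar_estimate are ours, not the paper's.

```python
import numpy as np

def sample_ball(x, r, rng):
    """Draw one point uniformly from the ball B(x, r)."""
    d = x.shape[0]
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    radius = r * rng.random() ** (1.0 / d)   # uniform in the ball, not on the sphere
    return x + radius * direction

def grad_f_bar_estimate(grad_f, x, r, n_samples, rng):
    """Estimate the gradient of f_bar_r(x) = E_{y ~ U(x, r)} f(y)
    by averaging grad_f at uniformly sampled points of B(x, r)."""
    grads = [grad_f(sample_ball(x, r, rng)) for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# sanity check on the quadratic f(x) = ||x||^2: since E[y] = x for y ~ U(x, r),
# the averaged gradient equals 2x in expectation, so the estimate should be near 2x
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])
g = grad_f_bar_estimate(lambda y: 2 * y, x, r=2.0, n_samples=20000, rng=rng)
print(np.round(g, 2), 2 * x)
```

In the actual algorithm the exact gradient oracle grad_f is replaced by a robust per-point estimate computed from the corrupted zeroth order oracle, and r is set to 2K.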
Prior work on noisy convex optimization. Other than the works mentioned above, noisy convex optimization has also been investigated in the context of multi-armed bandits and regret minimization [2, 8, 1]. In the direction of convex optimization under adversarial noise, the early results in fact date back to the 90s by [3]. Specifically for the pointwise-bounded noise model, there are subsequent works such as [21, 24, 17] that have improved on the guarantees of [5]. Due to space limitation, we include some other related work in Appendix A.

Organization. In Section 2, we set up a few notations and give some basic definitions and technical preliminaries. In Section 3, we prove our lower bound result. In Section 4, we give a first algorithm that gets us to within distance O(K(β/α)) of x∗. In Section 5, we give a second algorithm that gets us to within distance O(K√(β/α)) of x∗. In Section 6, we propose several future directions.

2 Preliminaries

Note that due to space limitation, we defer some of the preliminaries to Appendix B. While there are many known equivalent definitions of strong convexity and smoothness of a function, the specific ones that we use in this paper are as follows. Here and going forward, all norms are ℓ2-norms unless stated otherwise.

Definition 2.1. A function f : Rd → R is β-smooth if it is differentiable and for all pairs x, y ∈ Rd we have ‖∇f(x) − ∇f(y)‖ ≤ β‖x − y‖.

Definition 2.2. A function f : Rd → R is α-strongly convex if f(x) − (α/2)‖x‖² is convex.

We next set up notations of a ball and the uniform distribution over it.

Definition 2.3. Let B(x, r) := {y : ‖y − x‖ ≤ r} denote the ball of radius r centered at x. Let U(x, r) denote the uniform distribution over all points in the ball B(x, r).

As a result, the fraction of corrupted volume in a ball B(x, r) is Pr_{y∼U(x,r)}[f(y) ≠ f̂(y)], where we recall that f̂ denotes the corrupted version of f. For our second algorithm in Section 5, we will need to consider the “average” function, whose value at a point x is the average of f(y)’s where y is within some distance of x.

Definition 2.4. For any r > 0, define the function f̄r as f̄r(x) := E_{y∼U(x,r)}[f(y)].

It is not hard to verify the strong convexity and smoothness of f̄r:

Lemma 2.5. If f is α-strongly convex and β-smooth, f̄r is also α-strongly convex and β-smooth.

As a result of α-strong convexity and β-smoothness, we can upper bound the distance between the minimizers of f and f̄r by O(r√(β/α)).

Lemma 2.6. Let x∗, x̄r be the minimizers of f and f̄r respectively. Then ‖x∗ − x̄r‖ ≤ 2r√(β/α).

A proof of this lemma is included in Appendix B.

3 An Ω(K√(β/α)) lower bound

In this section we show that getting O(K√(β/α))-close to x∗ is the best we can hope for even if we are allowed to query the function value at every point of the domain.
We will prove this by showing that when the dimension is sufficiently high in terms of β/α, there exist two α-strongly convex, β-smooth functions that differ only in an ellipsoid of volume equal to a ball of radius K, but whose minimizers are Ω(K √ β/α)-apart. Theorem 3.1. Given 0 < α ≤ β with 1 + log βα ≤ d where d is the dimension, and a K > 0, there exist two α-strongly convex, β-smooth functions whose values differ only in an ellipsoid of volume equal to a radius-K ball, but whose minimizers are Ω( √ β αK)-far from each other. In order to prove Theorem 3.1, we shall prove several intermediate lemmas first, which are built on the interpolation results from [23]. We remark that the main results in [23] are stated for interpolating a set of finitely many points, while for our purpose we need to interpolate infinitely many. Therefore we cannot use their results directly in a black-box manner, but instead have to make certain adaptations. 3.1 Some interpolation results from [23] First let us define the notion of (α, β)-interpolability. Definition 3.2 ((α, β)-interpolability). Suppose we are given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where each xi, gi ∈ Rd, fi ∈ R. Let α ∈ R≥0, β ∈ R≥0∪{+∞} where α < β. We say this set is (α, β)-interpolable if there is a proper and closed convex function f : Rd → R∪{+∞} that is α-strongly convex and β-smooth such that for all i ∈ I , gi ∈ ∂f(xi) and f(xi) = fi, where ∂f(xi) denotes the set of subgradients of f at xi. Note here that when α = 0, we only require f to be convex. When β =∞, we do not require f to be smooth and thus f is not necessarily differentiable; when β <∞, the condition gi ∈ ∂f(xi) is equivalent to gi = ∇f(xi) as the gradient is unique at any point when f is differentiable. The following two lemmas are proved in [23]. The first lemma enables us to reduce the (α, β)interpolation of some tuple set to the (0, β′)-interpolation of another tuple set, while the second lemma allows us to further reduce it to the (α′,∞)-interpolation of some other tuple set. We note that although [23] only states these lemmas for sets containing finitely many tuples, their proofs work for sets containing infinitely many tuples as well. Lemma 3.3. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. { (xi, gi − αxi, fi − α2 ‖xi‖ 2 ) } i∈I is (0, β − α)-interpolable. Lemma 3.4. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (0, β)-interpolable. 2. { (gi, xi, x > i gi − fi) } i∈I is (1/β,∞)-interpolable. Then as in [23], by alternately applying Lemmas 3.3 and 3.4 twice each, we are able to reduce any (α, β)-interpolation problem to a (0,∞)-interpolation problem, where we only want to interpolate some points with a proper and closed convex function. Formally, we have the following lemma, whose proof is deferred to Appendix C. Lemma 3.5. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. {( βxi β−α − gi β−α , gi − αxi, αx>i gi β−α + fi − βα‖xi‖2 2(β−α) − ‖gi‖2 2(β−α) )} i∈I is (0,∞)-interpolable. 
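As a side note, the forward direction of Lemma 3.3 can be seen from the following standard calculation, stated here for twice-differentiable f purely for intuition (the general, non-smooth case is handled in [23] via subgradients).

```latex
Suppose $f$ is $\alpha$-strongly convex, $\beta$-smooth, and satisfies $f(x_i)=f_i$, $\nabla f(x_i)=g_i$ for all $i \in I$.
Define $h(x) := f(x) - \tfrac{\alpha}{2}\|x\|^2$. If $f$ is twice differentiable, then
\[
  \alpha I \preceq \nabla^2 f(x) \preceq \beta I
  \quad\Longrightarrow\quad
  0 \preceq \nabla^2 h(x) = \nabla^2 f(x) - \alpha I \preceq (\beta - \alpha) I ,
\]
so $h$ is convex and $(\beta-\alpha)$-smooth. Moreover,
\[
  h(x_i) = f_i - \tfrac{\alpha}{2}\|x_i\|^2
  \qquad\text{and}\qquad
  \nabla h(x_i) = g_i - \alpha x_i ,
\]
which is exactly the tuple set appearing in the second item of Lemma 3.3.
```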
3.2 Our lower bound We first show that there exists an Ω(1)-strongly convex, O(1)-smooth function whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with the quadratic function ‖x‖2 on all points on the surface of a unit ball. Formally, we have the following lemma. For ease of presentation, let us define X=1 def = {x : ‖x‖ = 1} and similarly X≥1 def= {x : ‖x‖ ≥ 1}. We also write e1 = (1, 0, . . . , 0)T to denote the first standard basis vector. Lemma 3.6. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. There exists a 12 -strongly convex, 16-smooth function f̃ such that 1. f̃ ’s minimizer is 12e1. 2. For all x ∈ X=1 we have f̃(x) = f(x) and ∇f̃(x) = ∇f(x). The proof of this lemma is deferred to Appendix C. Roughly, the proof consists of three steps: (i) formulate proving the existence of f̃ as a ( 12 , 16)-interpolation problem; (ii) use Lemma 3.5 to reduce it to the (0,∞)-interpolation of some infinitely many points; (iii) explicitly construct a proper and closed convex function that does interpolate these points. Now by taking a piecewise combination of the function f̃ in Lemma 3.6 and the quadratic function ‖x‖2, we can show that there exists an Ω(1)-strongly convex, O(1)-smooth function f̂ whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with ‖x‖2 on every point with `2-norm greater than or equal to 1. Lemma 3.7. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. Define f̂ such that f̂(x) = f̃(x) if ‖x‖ ≤ 1 and f̂(x) = f(x) otherwise (‖x‖ > 1). Then we have 1. f̂ is 12 -strongly convex and 16-smooth. 2. f̂ ’s minimizer is 12e1. 3. For all x ∈ X≥1 we have f̂(x) = f(x) and ∇f̂(x) = ∇f(x). The proof of this lemma is included in Appendix C. Then by scaling the domains of f, f̂ in Lemma 3.7, we prove that for any κ ≥ 1, when the dimension is sufficiently high in terms of κ, there exist two Ω(1/κ)-strongly convex, O(1)-smooth functions whose function values and gradients agree on every point outside of an ellipsoid of volume equal to a unit ball, but whose minimizers are √ κ/2-apart. Here κ shall be thought of as β/α where β = Θ(1). Lemma 3.8. Given κ ≥ 1 with 1+log κ ≤ dwhere d is the dimension. Let γ def= (1/κ) 1d−1 ∈ [1/2, 1]. Let Sd×d = DIAG(κ, γ, . . . , γ). Define s(x) = x>S−1x, which is (2/κ)-strongly convex and (2/γ)smooth. Let Xs≥1 = {x : s(x) ≥ 1}. Also define ŝ(x) = f̂(S−1/2x). Then we have 1. ŝ is 1/(2κ)-strongly convex and (16/γ)-smooth. 2. ŝ’s minimizer is √ κ 2 e1. 3. For all x ∈ Xs≥1 we have ŝ(x) = s(x) and ∇ŝ(x) = ∇s(x). Finally, by further scaling (the domain and the function values of) s, ŝ in Lemma 3.8, we can prove Theorem 3.1. The proofs of Lemma 3.8 and Theorem 3.1 are both included in Appendix C. 4 An O(K(β/α))-close algorithm In this section we give an algorithm GDSTAGEI that finds a pointO(K(β/α))-close to the minimizer of f . GDSTAGEI essentially implements a gradient descent algorithm, but uses approximate gradient computed from the noisy oracle at each step. To begin with, we present a subroutine GRADIENTCOMP for computing the gradient at a point where a small neighborhood is mostly uncorrupted. Algorithm 1: GRADIENTCOMP(f̂ , x, β, τ) Input : f̂ : Rd → R, x ∈ Rd, β > 0, and τ > 0. Output : g ∈ Rd. 1 Randomly choose 1000d pairs of points a1, b1 . . . , a1000d, b1000d in the ball B(x, τ). 2 Query the function values f̂(aj), f̂(bj) for all j = 1, 2, . . . , 1000d. 
3 Let g ∈ Rd be any vector such that, for at least 800d of the j’s, the following holds: ∣∣∣g>(bj − aj)− ( f̂(bj)− f̂(aj) )∣∣∣ ‖bj − aj‖ ≤ βτ. (1) If no such g exists, set g to be an arbitrary vector. We summarize the performance of GRADIENTCOMP below, with the proof deferred to Appendix D. Essentially, the error in the gradient computed by GRADIENTCOMP tends to zero as τ → 0. Lemma 4.1. Fix d > 0 and β > 0. There exists a function err(τ) satisfying limτ→0+ err(τ) = 0 such that the following holds. Fix any x ∈ Rd and τ > 0 such that the radius-τ ball centered at x is mostly uncorrupted: Pry∼U(x,τ) [ f(y) 6= f̂(y) ] ≤ 1 100 . (2) Then we have that with probability 1− 2−3d, the vector g returned by GRADIENTCOMP satisfies ‖g −∇f(x)‖ ≤ err(τ). (3) The number of queries made by GRADIENTCOMP is O(d). We now describe GDSTAGEI in Algorithm 2. Its performance is characterized in Theorem 4.2. Algorithm 2: GDSTAGEI(f̂ , α, β, x0, R0, δ) Input : f̂ : Rd → R, 0 < α < β, x0 ∈ Rd, R0 ≥ ‖x0 − x∗‖, and δ ∈ (0, 1). Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log R0(β/α)K . 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 200 log(T/δ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 99K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for some sufficiently small τ > 0. 6 Find a vector ĝ ∈ Rd such that at least (2s)/3 of the gi’s are within euclidean distance 99.5βK of ĝ; if no such ĝ exists, set ĝ to be an arbitrary vector. 7 Perform a descent step: xt+1 ← xt − 12β ĝ. 8 return xt. Theorem 4.2. Let d ≥ 2. Given an initial point x0 with ‖x0 − x∗‖ ≤ R0 and a δ ∈ (0, 1), the algorithm GDSTAGEI returns a point x̂ with ‖x̂− x∗‖ ≤ 10000(β/α)K with probability 1−δ, where x∗ is the minimizer of f . The number of queries made by GDSTAGEI is Õ(d(β/α) log R0(β/α)K log(1/δ)). Crucial to proving this theorem is to show that the gradients used by GDSTAGEI are accurate enough: Lemma 4.3. Let d ≥ 2. The ĝ computed at Line 6 of GDSTAGEI satisfies with probability 1− δT that ‖ĝ −∇f(xt)‖ ≤ 200βK. Full proofs of Theorem 4.2 and Lemma 4.3 are presented in Appendix D. A note on the running time of our algorithms. While we are mainly concerned about the query complexity, we remark that both of our algorithms above can be implemented in polynomial time. For GRADIENTCOMP, all steps except Line 3 are easily seen to be implementable in polynomial time (in particular, linear time). Therefore it suffices to show that Line 3 can be done efficiently. Claim 4.4. A vector g satisfying the condition at Line 3 of GRADIENTCOMP, if it exists, can be found in time polynomial in d. Our proof of this claim proceeds by presenting a poly(d)-time algorithm based on an LP-decoding routine in [12]. Therefore let us first introduce the specific result that we need from [12]. Let A ∈ Rn×d be a matrix and z ∈ Rd be a vector. Consider the linear system Ax = Az, to which x = z is clearly a solution. If n ≥ d and A has full rank, then we can retrieve the vector z given A and Az by solving the linear system Ax = Az in polynomial time using, e.g., Gaussian elimination. Now suppose the RHS of the linear system is corrupted by some noise e ∈ Rn, and we are only given A and the corrupted RHS Az + e, then can we still retrieve the vector z efficiently? [12] showed that under certain assumptions, we can obtain good estimates of z in poly-time by linear programming. Theorem 4.5 ([12]). There exist constants ρ∗ ≈ 0.239 and γ ≥ 1 such that the following holds. 
Suppose n ≥ γd and An×d’s entries are drawn independently from a standard Gaussian distribution. Suppose also the noise e can be written as e = e1 + e2 where ‖e1‖0 ≤ ρ∗n. Then given A and Az + e, we can find in polynomial time a vector z′ s.t. ‖z − z′‖2 ≤ O(‖e2‖∞), for any z ∈ Rd. Basically, this theorem assumes that the noise can be decomposed into the sum of two parts, one with small nonzero support, and the other with small entry-wise magnitude. Then the `2-error of the solution is on the order of the largest entry-wise magnitude of the second part of the noise. Proof of Claim 4.4. Let us define a matrix B ∈ R1000d×d whose ith row is equal to (bi−ai) T ‖bi−ai‖ , where ai, bi’s are the sampled points at Line 1 of GRADIENTCOMP. We also define vectors b, b̂ ∈ R1000d with b(i) = ∇f(x) T (bi−ai) ‖bi−ai‖ and b̂(i) = f̂(bi)−f̂(ai) ‖bi−ai‖ , where x is the input point of GRADIENTCOMP. Notice that each b(i) is the inner product of the ith row of B and ∇f(x), and therefore we have B∇f(x) = b. Consequently, given B and b we can retrieve ∇f(x) by solving the linear system By = b (y are the variables). Thus our goal becomes solving this linear system when only B and b̂ (a.k.a. a corrupted version of b) are given. While this looks like the task in Theorem 4.5, note that the entries of B are not drawn from independent Gaussian distributions, so we will need go a step further. As the ai, bi’s are sampled uniformly at random from a ball, each row (bi−ai)T ‖bi−ai‖ of B is a unit vector with a uniformly random direction. It is well known that a vector with independent standard Gaussian entries also points to a uniformly random direction. In fact, we can sample a d-dimensional such vector by a three-step process: (i) sample a unit vector with a random direction; (ii) sample a length ` from the χ2-distribution with d degrees of freedom (i.e., the sum of the squares of d independent standard Gaussians); (iii) scale the unit vector by √ `. In light of this, let us generate a diagonal matrix D ∈ R1000d×1000d such that each D(i, i) is independently sampled as in step (ii). Then we consider the linear system D1/2By = D1/2b to which y = ∇f(x) is a solution. Notice that we now have that each entry of D1/2B follows a standard Gaussian. By thinking of D1/2b̂ as a corrupted version of Db, we then need to show that the noise e def= D1/2(b̂− b) can be written as e1 + e2 such that ‖e1‖0 ≤ ρ∗(1000d) and ‖e2‖∞ is small. To this end, we notice that for each i such that both ai, bi are uncorrupted, we have by β-smoothness that ∣∣∣b(i)− b̂(i) ∣∣∣ = ∣∣∣∣∣ ∇f(x)T (bi − ai) ‖bi − ai‖ − f̂(bi)− f̂(ai)‖bi − ai‖ ∣∣∣∣∣ ≤ O(βτ), (4) where τ is the radius of the ball B(x, τ) from which ai, bi’s are sampled. Thus, if B(x, τ) is mostly (say 99%) uncorrupted, with probability 1− exp(−Ω(d)), (4) holds for most (say 90%) of the i’s. Also, by standard Markov’s inequality and Chernoff bounds, with probability 1− exp(−Ω(d)), for most (say 99%) of the i’s we have D(i, i) ≤ O(d). Combining this with (4), we have for 80% of the i’s that √ D(i, i) ∣∣∣b̂(i)− b(i) ∣∣∣ ≤ O( √ dβτ), implying the existence of e1, e2 s.t. e = e1 + e2 and ‖e1‖0 ≤ 0.2(1000d), ‖e2‖∞ ≤ O( √ dβτ). This means that by Theorem 4.5 we can use linear programming to find a g with ‖g −∇f(x)‖ ≤ O( √ dβτ), matching the guarantee in Lemma 4.1. Finally, to address a technicality about the constant γ in Theorem 4.5, we note that we can increase the number of sampled ai, bi pairs to max {1000d, γd}, and the rest of the analysis still follows. 
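For illustration, here is a self-contained NumPy/SciPy sketch of this gradient-estimation step: it forms the random finite-difference constraints of GRADIENTCOMP and then recovers g by ℓ1-regression, i.e. a linear-programming relaxation in the spirit of the decoder of [12]. We omit the Gaussian rescaling step used in the analysis above and solve the LP with a generic solver, so this should be read as a schematic rather than a faithful implementation; all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def sample_ball(x, r, rng):
    """One point drawn uniformly from the ball B(x, r)."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return x + r * rng.random() ** (1.0 / d) * u

def gradient_comp(f_hat, x, tau, n_pairs, rng):
    """Schematic GRADIENTCOMP: estimate grad f(x) from n_pairs random finite
    differences through the (possibly corrupted) oracle f_hat, decoding with
    l1-regression instead of the exact routine of [12]."""
    d = x.shape[0]
    A = np.stack([sample_ball(x, tau, rng) for _ in range(n_pairs)])
    B = np.stack([sample_ball(x, tau, rng) for _ in range(n_pairs)])
    diffs = B - A
    norms = np.linalg.norm(diffs, axis=1)
    rows = diffs / norms[:, None]                  # unit directions (b_j - a_j)/||b_j - a_j||
    rhs = np.array([f_hat(b) - f_hat(a) for a, b in zip(A, B)]) / norms
    # l1 regression  min_g  sum_j |rows_j . g - rhs_j|,  written as an LP in (g, t):
    # minimize 1^T t  subject to  -t <= rows g - rhs <= t,  t >= 0
    c = np.concatenate([np.zeros(d), np.ones(n_pairs)])
    A_ub = np.block([[rows, -np.eye(n_pairs)], [-rows, -np.eye(n_pairs)]])
    b_ub = np.concatenate([rhs, -rhs])
    bounds = [(None, None)] * d + [(0, None)] * n_pairs
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]
```

With an uncorrupted oracle this reduces to ordinary finite-difference gradient estimation; the point of the ℓ1 objective is that a small fraction of grossly wrong difference quotients (coming from corrupted queries) is largely ignored, mirroring the role of LP decoding in the proof of Claim 4.4. The paper uses 1000d pairs; for a quick numerical test a much smaller n_pairs already suffices when d is small.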
Then we consider the running time of GDSTAGEI. By Claim 4.4 and straightforward observations, all steps other than Line 6 run in polynomial time. Thus we focus on the efficiency of Line 6. Claim 4.6. A vector ĝ satisfying the condition at Line 6 of GDSTAGEI, if it exists, can be found in nearly-linear time in s, at the cost of an extra constant factor in the radius of the ball. We note that an extra constant factor in the radius of the ball will not affect the final distance to x∗ by more than a constant factor. The proof of this claim is deferred to Appendix D. Roughly, the proof proceeds by sampling Õ(1) points from g1, . . . , gs and checking for each sampled gj if at least 2/3 fraction of the total points are within euclidean distance 200βK of gj . Note that the radius now becomes 200βK as opposed to 100βK at Line 6 of GDSTAGEI. 5 An O(K √ β/α)-close algorithm In this section we give an algorithm GDSTAGEII that, when given an initial point which is O(K(β/α))-close to the minimizer x∗ of f , finds a point that is O(K √ β/α)-close to x∗. GDSTAGEII basically performs a gradient descent on the average function f̄2K (Definition 2.4). Algorithm 3: GDSTAGEII(f̂ , α, β, x0) Input : f̂ : Rd → R, 0 < α < β, and x0 ∈ Rd with ‖x0 − x∗‖ ≤ 10000(β/α)K. Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log( β α + 1). 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 400βα log(dT ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 2K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for sufficiently small τ > 0, and their average ḡ ← 1s ∑s i=1 gi. 6 Perform a descent step: xt+1 ← xt − 12β ḡ. 7 return xt. The performance of GDSTAGEII is characterized in Theorem 5.1, with the proof in Appendix E. Note that while the success probability in Theorem 5.1 is not arbitrarily large, we can amplify it to any 1− δ by repeating the algorithm O(log(1/δ)) times, as we show in Corollary E.2. Theorem 5.1. Suppose that d ≥ 100 log(β/α + 1). Then given an initial point x0 that satisfies ‖x0 − x∗‖ ≤ 10000(β/α)K, GDSTAGEII returns a point x̂ with ‖x̂− x∗‖ ≤ 1000 √ β/αK with probability at least 1 − 2−d/8, where x∗ is the minimizer of f . The number of queries made by GDSTAGEII is Õ(d(β/α)2). Moreover, the algorithm runs in polynomial time. The proof of Theorem 5.1 builds on a lemma showing that the gradients that GDSTAGEII uses are sufficiently precise. The lemma relies on an `2-concentration inequality for the sum of random vectors (i.e., the Vector Bernstein Inequality in Theorem E.1). Its proof also appears in Appendix E. Lemma 5.2. Let d ≥ 100 log(β/α+ 1). The vector ḡ computed at Line 5 of GDSTAGEII satisfies the following with probability at least 1− 2−d/8/T : ∥∥ḡ −∇f̄2K(xt) ∥∥ ≤ 16√αβK. 6 Future directions We obtained asymptotically matching upper and lower bounds on how well the minimizer of a convex function can be identified in presence of outlier noise. There are several natural directions for future work. First, while our algorithm’s query complexity has essentially the same dependence on d and ‖x0 − x∗‖ as Nesterov’s accelerated gradient descent, it is still off by a factor of (β/α)1.5. It will thus be interesting to understand if this remaining performance gap can be eliminated. Also, we note that both our results require the dimension to be sufficiently high. While we believe the high dimension regime is of the most interest, it will be an interesting exercise to understand how these bounds change in the low-dimensional setting. 
Finally, as pointed out in the introduction, an appealing future direction is to study convex minimization with the more general ℓp-bounded noise.

Acknowledgments and Disclosure of Funding

We thank the anonymous reviewers for their valuable feedback. This work was supported in part by NSF awards CCF-1763514, CCF-1934876, CCF-2008305, CCF-1910534, CCF-1926872, and CCF-2045128.
1. What is the focus of the paper regarding convex minimization problems? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. Do you have any concerns or questions regarding the paper's assumptions, models, or results? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor comments or suggestions for improving the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors consider a convex minimization problem with a certain noisy oracle. Specifically, the function to be minimized is assumed to be alpha-strongly convex and beta-smooth (referred to as (alpha, beta)-nice). It is also assumed that we have access to the function values everywhere except for an ellipsoid with bounded volume. Within the mentioned ellipsoid, the oracle may select the values "adversarially" so as to confuse the minimization algorithm. In this setting, the authors first show the existence of two (alpha, beta)-nice functions that differ on a bounded ball and such that their minimizers are a certain distance apart. This provides a bound on what one could possibly achieve. Next, they provide two algorithms that are guaranteed to produce approximate minimizers that are sufficiently close to the minimizer (within the bounds permitted by the first negative result), along with an upper bound on the number of queries that the algorithms make.

Review
The paper is a purely theoretical paper that delivers what it promises. It is clearly written and well-presented, with detailed proofs in an appendix. I think some of the intuition is hidden in the proofs, but that doesn't make the paper unreadable. There are no experiments to demonstrate how the framework could be useful in practice, and I think that's a weakness of the paper. While the authors place their noise model within the context of others' works, I think a stronger motivation is lacking -- especially since the whole framework (e.g., the selection of the family of functions to work with) relies on it. Overall, I think this paper would be of high interest to a few readers -- and I'd welcome authors' comments that may serve to broaden the readership.

I have only a few minor comments:
- In presenting the model (starting line 54), the description reads as if there's no restriction on C, other than its volume. It becomes clear later that we also need C to be contained in an ellipsoid -- in fact, otherwise the oracle could have simply chosen to corrupt each query, adding up to a total of zero volume, since there are only finitely many queries. Please clarify this part.
- About Thm 3.1: While I don't have any objection to this statement, I found it a bit pessimistic. How about a statement in the following form: "Given an arbitrary (alpha, beta)-nice f, there exists an (alpha, beta)-nice g such that f and g differ only on a set contained in a ball of radius R." Does such a statement follow from your result? If not, is it possible that some functions are "harder" for the oracle to scramble?
- In the appendix, Remark D.1 mentions that, for simplicity, the gradients will be assumed to be exact, with a certain probability. Does this really not affect any of the coming proofs? When you work with a bounded error in the gradient (again with the same small probability), if the proofs extend easily, why do you make this assumption? The appendix is already detailed, so you might as well drop this assumption. If there's a compelling reason to keep it, I'd suggest making it part of the statements of the results.

Update after author responses: I read the other reviews and the authors' responses. Generally, I agree that the model considered is a bit unusual, but I think the manuscript is potentially interesting for a small community, and might spark a more useful discussion. I'm still somewhat positive, and will keep my score as is.
NIPS
Title Approximate optimization of convex functions with outlier noise Abstract We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by outlier noise. Specifically, we assume the function values at some points of the domain are corrupted arbitrarily by an adversary, with the only restriction being that the total volume of corrupted points is bounded. The goal then is to find a point close to the function’s minimizer using access to the corrupted oracle. We first prove a lower bound result showing that, somewhat surprisingly, one cannot hope to approximate the minimizer nearly as well as one might expect, even if one is allowed an unbounded number of queries to the oracle. Complementing this negative result, we then develop an efficient algorithm that outputs a point close to the minimizer of the convex function, where the specific distance matches exactly, up to constant factors, the distance bound shown in our lower bound result. 1 Introduction Unconstrained convex optimization is among the most well-studied problems in mathematical optimization and has extensive applications in machine learning [7]. In the classic unconstrained convex minimization problem, we are given oracle access to a convex function f : Rd → R, and seek to efficiently find a point x̃ that is close to the minimizer of f (call it x∗)1. To obtain meaningful guarantees for approximating the minimizer x∗, one needs to make certain assumptions on the convexity and smoothness of f . In particular, f is commonly assumed to be α-strongly convex and β-smooth for some β > α > 0. This means that, when f is twice differentiable, its second derivative in any direction is between α and β. For ease of presentation, we will say a convex function is (α, β)-nice if it satisfies these two conditions. It is well known that the minimizer of an (α, β)-nice function can be approximated arbitrarily well in polynomial time by the classic gradient descent algorithm, or its accelerated version due to Nesterov [18]. To formally state the performance of these two algorithms, let us suppose an (α, β)-nice function f : Rd → R is given to us as a zeroth order oracle, which returns the function value f(x) on any input point x ∈ Rd. Then we have: Theorem 1.1 ([7]). Given any initial point x0 ∈ Rd and > 0, the classic gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d(β/α) log ‖x0−x∗‖ ) oracle queries. 1Sometimes, instead of finding a point close to x∗, the goal is to find a point whose function value is close to f(x∗). However, as we point out later, it is more natural to study approximations in the domain in our setting. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Theorem 1.2 ([18]). Given any initial point x0 ∈ Rd and > 0, the accelerated gradient descent algorithm outputs a point x̃ s.t. ‖x̃− x∗‖2 ≤ using O(d √ β/α log ‖x0−x ∗‖ ) oracle queries. We note that the quantity β/α is called the condition number of the function f . The efficiency of a convex minimization algorithm is usually measured with respect to such a quantity. Given that the complexity of convex minimization with exact oracles has been well understood, a natural question is “can we still minimize a convex function efficiently if the oracle is to some extent inaccurate?” There has in fact been a rich body of work in efforts to answer this question. Roughly, they can be divided into two categories based on their assumptions on the oracle’s inaccuracy. 
The first category studies the case when the oracle is corrupted by stochastic noise2 — namely, the errors of the oracle are assumed to be random and independently drawn from some distribution [10, 20, 19]. Notice that as the oracle is correct in expectation, one can obtain good estimates to the true values efficiently by averaging over sufficiently many points in a small neighborhood. As a result, it is still possible to approximate the minimizer x∗ to within arbitrarily small distance in polynomial time. The main focus of these results is thus to obtain optimal algorithm efficiency, ideally matching that of Nesterov’s accelerated gradient descent. The second category, on the other hand, considers the adversarial noise. That is, the errors are no longer drawn from certain distributions, but rather are added adversarially, while obeying certain constraints. A representative such model is what we call pointwise-bounded noise model [22, 5], in which the only assumption on the noise is that it is pointwise bounded in magnitude; other than that, the specific perturbation on each point can be arbitrary. Formally, it is allowed for the oracle to return, on an input point x, a value in the range [f(x)− , f(x) + ] (absolute errors) or [(1 − )f(x), (1 + )f(x)] (relative errors), for some > 0. [5] shows that when the errors are absolute and is on the order of 1/d, there is a polynomial-time algorithm that can find a point with function value arbitrarily close to f(x∗). Whereas [22] shows that, in sharp contrast, when is about 1/ √ d or larger, no polynomial-time algorithm is able to find a point with function value within some constant error of f(x∗), for both absolute and relative errors. Our model. Following the second line of research above, in this paper we study another type of adversarial noise model, called outlier noise model, which can be seen as a natural variant of the pointwise-bounded noise model in [22, 5]. In this model, the only assumption we make is a bound on the “number” of points corrupted by the noise; apart from that, we do not assume any bounds on the magnitude of the errors on the corrupted points. Formally, we are given access to an exact zeroth order oracle of a function f̂ that differs from the true convex function f only on a set C ⊂ Rd, with the guarantee that the volume of C is at most that of a d-dimensional ball of radius K, for some K > 0. Notice that although C is bounded in volume, for any x ∈ C, f̂(x) and f(x) can differ arbitrarily. Though variants of each other, both the pointwise-bounded noise model in [22, 5] and our model may be considered as special cases of a more general type of adversarial noise model, namely the `p-bounded noise model. To see this, let us write the noise as a function η : Rd → R, such that the noisy zeroth order oracle given to us corresponds to the function f + η. Then, in the (absolute) pointwise-bounded noise model, η is bounded in `∞-norm (they assume ‖η‖∞ ≤ ), whereas in the outlier noise model, η is bounded in `0-norm (we assume ‖η‖0 ≤ vol(radius-K ball)). It will certainly be an interesting future direction to explore what can be achieved for minimizing convex functions with general `p-bounded noise. We would also like to point out that in our noise model, it is more natural to consider getting as close as possible to x∗ as opposed to finding a point with function value close to f(x∗). 
This is because our only assumption on the noise is that the corrupted region C, which is in the domain of the function f , is bounded in volume, and in particular, it may be located around x∗. Therefore, we believe it makes the most sense to measure the quality of the solution of a minimization algorithm by its distance to the optimal in the domain (specifically, how the distance compares with the corruption radius K). Our results. Let us first consider what one might expect to achieve in the outlier noise model. By the observation made above, it is not hard to see that we cannot hope to always find a point that is within distance K of the minimizer x∗, as an adversary could potentially corrupt some radius-K ball 2This model more often deals with the first order oracle, which gives the (noisy) gradient∇f(x) at a point x. around x∗, making it impossible for us to know where the true minimizer lies. However, is it possible to obtain a distance bound that is close to K (e.g. O(K))? Our first result in this paper is a lower bound showing that, somewhat surprisingly, this goal is in general impossible to achieve, even for algorithms that are allowed an unbounded number of queries to the oracle. In fact, our lower bound indicates that the best distance bound one can hope for is at least Ω(K √ β/α), where we recall that α,β are the “niceness” of the function. This result is a consequence of the existence of two (α, β)-nice functions that only differ in a small region of the domain, but whose minimizers are sufficiently far apart. We then develop, as our second result, an efficient algorithm that finds a point within distance O(K √ β/α) of the minimizer x∗, thus matching our lower bound up to constant factors. Roughly, our algorithm performs two stages of gradient descent, using approximate gradients computed from the noisy zeroth order oracle. We state our results formally in the theorems below. For simplicity, let us say a zeroth order oracle of a function f is K-corrupted if it is perturbed by an outlier noise of corruption radius K. Theorem 1.3 (Lower bound; formal version appears as Theorem 3.1). For α, β,K > 0 and sufficiently large d, there exist two (α, β)-nice functions f0, f1 : Rd → R which satisfy the following conditions: (i) The `2-distance between the minimizers of f0, f1 is Ω(K √ β/α); (ii) The total volume of points where f0 and f1 differ is at most that of a d-dimensional ball of radius K. To see the implication of this theorem, consider an adversary that randomly picks an index i ∈ {0, 1} and lets fi be the true convex function, but always uses f0 as the K-corrupted oracle. Then no matter which point our algorithm outputs, with probability at least 1/2 over the randomness of i, it is Ω(K √ β/α) away from the minimizer of the true convex function fi. Theorem 1.4 (Algorithm; formal version appears as Theorems 4.2, 5.1). Let α, β,K > 0 and d be sufficiently large. There is an algorithm that, given access to a K-corrupted oracle of an (α, β)-nice function f : Rd → R and an initial point x0 ∈ Rd, makes Õ(d · (β2/α2) · log ‖x0 − x∗‖)3 queries to the oracle and outputs a point x̂ s.t. ‖x̂− x∗‖2 ≤ O(K √ β/α). Here x∗ is the minimizer of f . We note that in both our results, our conditions on d are roughly that d ≥ Ω(log β/α). In other words, we require the function’s condition number β/α to be at most 2O(d), a quantity exponential in d. Our techniques. We now briefly describe our techniques used to obtain the above results. 
Our lower bound proceeds by proving the existence of two (α, β)-nice functions f0 and f1 s.t. (i) they differ only in an ellipsoid with volume at most that of a ball of radius K; (ii) their minimizers are at `2-distance Ω(K √ β/α)-away from each other. As mentioned above, this implies that even with infinitely many queries, one cannot approximate the minimizer x∗ to within distance o(K √ β/α). More concretely, we start by letting f0 be a simple (α, β)-nice quadratic function whose minimizer is at the origin. Then the existence of our desired function f1 will follow by two steps: first we show that there exists another function f ′ with certain good properties, then we obtain f1 by taking a piecewise combination of f0 and f ′. Specifically, the properties that we need f ′ to satisfy is as follows: (i) f ′ is (α, β)-nice; (ii) both the function values and the gradients of f0 and f ′ agree on all points on the periphery of an ellipsoid E that is centered at the origin and has bounded volume; (iii) the minimizer of f ′ has coordinate of the form (Ω(K √ β/α), 0, . . . , 0). We show that this essentially boils down to a convex function interpolation task where the function values and gradients at some infinitely many points are given. To this end, we adapt an interpolation result from [23], which was proposed to accommodate interpolation from finitely many points, to prove the existence of f ′. Finally, we construct the function f1 by letting f1 = f ′ inside the ellipsoid E , but f1 = f0 outside of E . For the upper bound, our algorithm proceeds in two stages. In the first stage, we come to within distance O(K(β/α)) of the minimizer x∗ and in the subsequent stage, we improve it to O(K √ β/α). The first algorithm essentially follows a gradient descent, but using an approximate gradient at each step. In order to estimate the gradient at a point x, we set up a system of linear equations, where each equation adds a constraint on the derivative at x along a uniformly random direction. There are however two types of noise in these equations, one from using a zeroth order oracle to compute first 3Here and throughout the paper, we write Õ(f) to denote O(f · poly(log f)). order information, and the other from the outlier noise added to the oracle. While we can solve this noisy linear system with good enough accuracy via an exhaustive search, we show that using an LP decoding routine in [12], we can solve it more efficiently in polynomial time. We note that using the latter is only to improve the running time, as both approaches result in the same query complexity. However, the one bottleneck in this approach is that at any point, the ball of radius < K around it could potentially be (nearly) completely corrupted. Thus, to get a meaningful estimate of the gradient, we have to sample points which are more than distance K-far apart. This tradeoff eventually allows us to get O(K(β/α))-close to the minimizer. To get to the optimal closeness of O(K √ β/α), we next start at a point which is guaranteed to be O(K(β/α))-close to the true minimizer. We now consider the function f̄ which is defined as the average of f in a ball of radius (say) 2K. It is not hard to verify that f̄ continues to be (α, β)-nice. Moreover, the minimizers of f̄ and f can be shown to be O(K √ β/α)-close to each other. Thus, it suffices to get close to the minimizer of f̄ . We will do so by simulating a gradient descent on f̄ . It therefore boils down to how we can efficiently approximate the gradients of f̄ . 
By the definition of f̄ , the gradient∇f̄(x) is also equal to the average of the gradients∇f(y) over all points y within distance 2K of x. This suggests that we can approximate ∇f̄(x) by averaging the gradients ∇f(y) at sufficiently many randomly sampled y’s, and bounding the error using concentration inequalities for sums of random vectors. Now a key observation is that, since these y’s are sampled randomly from a radius 2K-ball, it is highly likely that each sampled y sits in a mostly uncorrupted neighborhood, as long as the dimension is sufficiently high. Consequently, we can use the LP decoding approach above to obtain an accurate estimate of the gradient at each of these points. Prior work on noisy convex optimization. Other than the works mentioned above, noisy convex optimization has also been investigated in the context of multi-armed bandits and regret minimization [2, 8, 1]. In the direction of convex optimization under adversarial noise, the early results in fact date back to the 90s by [3]. Specifically for the pointwise-bounded noise model, there are subsequent works such as [21, 24, 17] that have improved on the guarantees of [5]. Due to space limitation, we include some other related work in Appendix A. Organization. In Section 2, we set up a few notations and give some basic definitions and technical preliminaries. In Section 3, we prove our lower bound result. In Section 4, we give a first algorithm that gets us to within distance O(K(β/α)) of x∗. In Section 5, we give a second algorithm that gets us to within distance O(K √ β/α) of x∗. In Section 6, we propose several future directions. 2 Preliminaries Note that due to space limitation, we defer some of the preliminaries to Appendix B. While there are many known equivalent definitions of strong convexity and smoothness of a function, the specific ones that we use in this paper are as follows. Definition 2.1. A function f : Rd → R is β-smooth if it is differentiable and for all pairs x, y ∈ Rd we have ‖∇f(x)−∇f(y)‖ ≤ β ‖x− y‖4. Definition 2.2. A function f : Rd → R is α-strongly convex if f(x)− α2 ‖x‖ 2 is convex. We next set up notations of a ball and the uniform distribution over it. Definition 2.3. Let B(x, r) def= {y : ‖y − x‖ ≤ r} denote the ball of radius r centered at x. Let U(x, r) denote the uniform distribution over all points in the ball B(x, r). As a result, the fraction of corrupted volume in a ball B(x, r) is Pry∼U(x,r)[f(y) 6= f̂(y)], where we recall that f̂ denotes the corrupted version of f . For our second algorithm in Section 5, we will need to consider the “average” function, whose value at a point x is the average of f(y)’s where y is within some distance of x. 4Here and going forward, all norms are `2-norms unless stated otherwise Definition 2.4. For any r > 0, define the function f̄r as f̄r(x) def = Ey∼U(x,r) [f(y)]. It is not hard to verify the strong convexity and smoothness of f̄r: Lemma 2.5. If f is α-strongly convex and β-smooth, f̄r is also α-strongly convex and β-smooth. As a result of α-strong convexity and β-smoothness, we can upper bound the distance between the minimizers of f and f̄r by O(r √ β/α). Lemma 2.6. Let x∗, x̄r be the minimizers of f and f̄r respectively. Then ‖x∗ − x̄r‖ ≤ 2r √ β/α. A proof of this lemma is included in Appendix B. 3 An Ω(K √ β/α) lower bound In this section we show that getting O(K √ β/α)-close to x∗ is the best we can hope for even if we are allowed to query the function value at every point of the domain. 
We will prove this by showing that when the dimension is sufficiently high in terms of β/α, there exist two α-strongly convex, β-smooth functions that differ only in an ellipsoid of volume equal to a ball of radius K, but whose minimizers are Ω(K √ β/α)-apart. Theorem 3.1. Given 0 < α ≤ β with 1 + log βα ≤ d where d is the dimension, and a K > 0, there exist two α-strongly convex, β-smooth functions whose values differ only in an ellipsoid of volume equal to a radius-K ball, but whose minimizers are Ω( √ β αK)-far from each other. In order to prove Theorem 3.1, we shall prove several intermediate lemmas first, which are built on the interpolation results from [23]. We remark that the main results in [23] are stated for interpolating a set of finitely many points, while for our purpose we need to interpolate infinitely many. Therefore we cannot use their results directly in a black-box manner, but instead have to make certain adaptations. 3.1 Some interpolation results from [23] First let us define the notion of (α, β)-interpolability. Definition 3.2 ((α, β)-interpolability). Suppose we are given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where each xi, gi ∈ Rd, fi ∈ R. Let α ∈ R≥0, β ∈ R≥0∪{+∞} where α < β. We say this set is (α, β)-interpolable if there is a proper and closed convex function f : Rd → R∪{+∞} that is α-strongly convex and β-smooth such that for all i ∈ I , gi ∈ ∂f(xi) and f(xi) = fi, where ∂f(xi) denotes the set of subgradients of f at xi. Note here that when α = 0, we only require f to be convex. When β =∞, we do not require f to be smooth and thus f is not necessarily differentiable; when β <∞, the condition gi ∈ ∂f(xi) is equivalent to gi = ∇f(xi) as the gradient is unique at any point when f is differentiable. The following two lemmas are proved in [23]. The first lemma enables us to reduce the (α, β)interpolation of some tuple set to the (0, β′)-interpolation of another tuple set, while the second lemma allows us to further reduce it to the (α′,∞)-interpolation of some other tuple set. We note that although [23] only states these lemmas for sets containing finitely many tuples, their proofs work for sets containing infinitely many tuples as well. Lemma 3.3. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. { (xi, gi − αxi, fi − α2 ‖xi‖ 2 ) } i∈I is (0, β − α)-interpolable. Lemma 3.4. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (0, β)-interpolable. 2. { (gi, xi, x > i gi − fi) } i∈I is (1/β,∞)-interpolable. Then as in [23], by alternately applying Lemmas 3.3 and 3.4 twice each, we are able to reduce any (α, β)-interpolation problem to a (0,∞)-interpolation problem, where we only want to interpolate some points with a proper and closed convex function. Formally, we have the following lemma, whose proof is deferred to Appendix C. Lemma 3.5. Given a set of (possibly infinitely many) tuples {(xi, gi, fi)}i∈I where xi, gi ∈ Rd, fi ∈ R and 0 ≤ α < β ≤ +∞. The following two statements are equivalent: 1. {(xi, gi, fi)}i∈I is (α, β)-interpolable. 2. {( βxi β−α − gi β−α , gi − αxi, αx>i gi β−α + fi − βα‖xi‖2 2(β−α) − ‖gi‖2 2(β−α) )} i∈I is (0,∞)-interpolable. 
3.2 Our lower bound We first show that there exists an Ω(1)-strongly convex, O(1)-smooth function whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with the quadratic function ‖x‖2 on all points on the surface of a unit ball. Formally, we have the following lemma. For ease of presentation, let us define X=1 def = {x : ‖x‖ = 1} and similarly X≥1 def= {x : ‖x‖ ≥ 1}. We also write e1 = (1, 0, . . . , 0)T to denote the first standard basis vector. Lemma 3.6. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. There exists a 12 -strongly convex, 16-smooth function f̃ such that 1. f̃ ’s minimizer is 12e1. 2. For all x ∈ X=1 we have f̃(x) = f(x) and ∇f̃(x) = ∇f(x). The proof of this lemma is deferred to Appendix C. Roughly, the proof consists of three steps: (i) formulate proving the existence of f̃ as a ( 12 , 16)-interpolation problem; (ii) use Lemma 3.5 to reduce it to the (0,∞)-interpolation of some infinitely many points; (iii) explicitly construct a proper and closed convex function that does interpolate these points. Now by taking a piecewise combination of the function f̃ in Lemma 3.6 and the quadratic function ‖x‖2, we can show that there exists an Ω(1)-strongly convex, O(1)-smooth function f̂ whose minimizer is 1/2-far from the origin, but whose function values and gradients agree with ‖x‖2 on every point with `2-norm greater than or equal to 1. Lemma 3.7. Let f(x) def= ‖x‖2 which is 2-strongly convex and 2-smooth. Define f̂ such that f̂(x) = f̃(x) if ‖x‖ ≤ 1 and f̂(x) = f(x) otherwise (‖x‖ > 1). Then we have 1. f̂ is 12 -strongly convex and 16-smooth. 2. f̂ ’s minimizer is 12e1. 3. For all x ∈ X≥1 we have f̂(x) = f(x) and ∇f̂(x) = ∇f(x). The proof of this lemma is included in Appendix C. Then by scaling the domains of f, f̂ in Lemma 3.7, we prove that for any κ ≥ 1, when the dimension is sufficiently high in terms of κ, there exist two Ω(1/κ)-strongly convex, O(1)-smooth functions whose function values and gradients agree on every point outside of an ellipsoid of volume equal to a unit ball, but whose minimizers are √ κ/2-apart. Here κ shall be thought of as β/α where β = Θ(1). Lemma 3.8. Given κ ≥ 1 with 1+log κ ≤ dwhere d is the dimension. Let γ def= (1/κ) 1d−1 ∈ [1/2, 1]. Let Sd×d = DIAG(κ, γ, . . . , γ). Define s(x) = x>S−1x, which is (2/κ)-strongly convex and (2/γ)smooth. Let Xs≥1 = {x : s(x) ≥ 1}. Also define ŝ(x) = f̂(S−1/2x). Then we have 1. ŝ is 1/(2κ)-strongly convex and (16/γ)-smooth. 2. ŝ’s minimizer is √ κ 2 e1. 3. For all x ∈ Xs≥1 we have ŝ(x) = s(x) and ∇ŝ(x) = ∇s(x). Finally, by further scaling (the domain and the function values of) s, ŝ in Lemma 3.8, we can prove Theorem 3.1. The proofs of Lemma 3.8 and Theorem 3.1 are both included in Appendix C. 4 An O(K(β/α))-close algorithm In this section we give an algorithm GDSTAGEI that finds a pointO(K(β/α))-close to the minimizer of f . GDSTAGEI essentially implements a gradient descent algorithm, but uses approximate gradient computed from the noisy oracle at each step. To begin with, we present a subroutine GRADIENTCOMP for computing the gradient at a point where a small neighborhood is mostly uncorrupted. Algorithm 1: GRADIENTCOMP(f̂ , x, β, τ) Input : f̂ : Rd → R, x ∈ Rd, β > 0, and τ > 0. Output : g ∈ Rd. 1 Randomly choose 1000d pairs of points a1, b1 . . . , a1000d, b1000d in the ball B(x, τ). 2 Query the function values f̂(aj), f̂(bj) for all j = 1, 2, . . . , 1000d. 
3 Let g ∈ Rd be any vector such that, for at least 800d of the j’s, the following holds: ∣∣∣g>(bj − aj)− ( f̂(bj)− f̂(aj) )∣∣∣ ‖bj − aj‖ ≤ βτ. (1) If no such g exists, set g to be an arbitrary vector. We summarize the performance of GRADIENTCOMP below, with the proof deferred to Appendix D. Essentially, the error in the gradient computed by GRADIENTCOMP tends to zero as τ → 0. Lemma 4.1. Fix d > 0 and β > 0. There exists a function err(τ) satisfying limτ→0+ err(τ) = 0 such that the following holds. Fix any x ∈ Rd and τ > 0 such that the radius-τ ball centered at x is mostly uncorrupted: Pry∼U(x,τ) [ f(y) 6= f̂(y) ] ≤ 1 100 . (2) Then we have that with probability 1− 2−3d, the vector g returned by GRADIENTCOMP satisfies ‖g −∇f(x)‖ ≤ err(τ). (3) The number of queries made by GRADIENTCOMP is O(d). We now describe GDSTAGEI in Algorithm 2. Its performance is characterized in Theorem 4.2. Algorithm 2: GDSTAGEI(f̂ , α, β, x0, R0, δ) Input : f̂ : Rd → R, 0 < α < β, x0 ∈ Rd, R0 ≥ ‖x0 − x∗‖, and δ ∈ (0, 1). Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log R0(β/α)K . 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 200 log(T/δ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 99K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for some sufficiently small τ > 0. 6 Find a vector ĝ ∈ Rd such that at least (2s)/3 of the gi’s are within euclidean distance 99.5βK of ĝ; if no such ĝ exists, set ĝ to be an arbitrary vector. 7 Perform a descent step: xt+1 ← xt − 12β ĝ. 8 return xt. Theorem 4.2. Let d ≥ 2. Given an initial point x0 with ‖x0 − x∗‖ ≤ R0 and a δ ∈ (0, 1), the algorithm GDSTAGEI returns a point x̂ with ‖x̂− x∗‖ ≤ 10000(β/α)K with probability 1−δ, where x∗ is the minimizer of f . The number of queries made by GDSTAGEI is Õ(d(β/α) log R0(β/α)K log(1/δ)). Crucial to proving this theorem is to show that the gradients used by GDSTAGEI are accurate enough: Lemma 4.3. Let d ≥ 2. The ĝ computed at Line 6 of GDSTAGEI satisfies with probability 1− δT that ‖ĝ −∇f(xt)‖ ≤ 200βK. Full proofs of Theorem 4.2 and Lemma 4.3 are presented in Appendix D. A note on the running time of our algorithms. While we are mainly concerned about the query complexity, we remark that both of our algorithms above can be implemented in polynomial time. For GRADIENTCOMP, all steps except Line 3 are easily seen to be implementable in polynomial time (in particular, linear time). Therefore it suffices to show that Line 3 can be done efficiently. Claim 4.4. A vector g satisfying the condition at Line 3 of GRADIENTCOMP, if it exists, can be found in time polynomial in d. Our proof of this claim proceeds by presenting a poly(d)-time algorithm based on an LP-decoding routine in [12]. Therefore let us first introduce the specific result that we need from [12]. Let A ∈ Rn×d be a matrix and z ∈ Rd be a vector. Consider the linear system Ax = Az, to which x = z is clearly a solution. If n ≥ d and A has full rank, then we can retrieve the vector z given A and Az by solving the linear system Ax = Az in polynomial time using, e.g., Gaussian elimination. Now suppose the RHS of the linear system is corrupted by some noise e ∈ Rn, and we are only given A and the corrupted RHS Az + e, then can we still retrieve the vector z efficiently? [12] showed that under certain assumptions, we can obtain good estimates of z in poly-time by linear programming. Theorem 4.5 ([12]). There exist constants ρ∗ ≈ 0.239 and γ ≥ 1 such that the following holds. 
Suppose n ≥ γd and An×d’s entries are drawn independently from a standard Gaussian distribution. Suppose also the noise e can be written as e = e1 + e2 where ‖e1‖0 ≤ ρ∗n. Then given A and Az + e, we can find in polynomial time a vector z′ s.t. ‖z − z′‖2 ≤ O(‖e2‖∞), for any z ∈ Rd. Basically, this theorem assumes that the noise can be decomposed into the sum of two parts, one with small nonzero support, and the other with small entry-wise magnitude. Then the `2-error of the solution is on the order of the largest entry-wise magnitude of the second part of the noise. Proof of Claim 4.4. Let us define a matrix B ∈ R1000d×d whose ith row is equal to (bi−ai) T ‖bi−ai‖ , where ai, bi’s are the sampled points at Line 1 of GRADIENTCOMP. We also define vectors b, b̂ ∈ R1000d with b(i) = ∇f(x) T (bi−ai) ‖bi−ai‖ and b̂(i) = f̂(bi)−f̂(ai) ‖bi−ai‖ , where x is the input point of GRADIENTCOMP. Notice that each b(i) is the inner product of the ith row of B and ∇f(x), and therefore we have B∇f(x) = b. Consequently, given B and b we can retrieve ∇f(x) by solving the linear system By = b (y are the variables). Thus our goal becomes solving this linear system when only B and b̂ (a.k.a. a corrupted version of b) are given. While this looks like the task in Theorem 4.5, note that the entries of B are not drawn from independent Gaussian distributions, so we will need go a step further. As the ai, bi’s are sampled uniformly at random from a ball, each row (bi−ai)T ‖bi−ai‖ of B is a unit vector with a uniformly random direction. It is well known that a vector with independent standard Gaussian entries also points to a uniformly random direction. In fact, we can sample a d-dimensional such vector by a three-step process: (i) sample a unit vector with a random direction; (ii) sample a length ` from the χ2-distribution with d degrees of freedom (i.e., the sum of the squares of d independent standard Gaussians); (iii) scale the unit vector by √ `. In light of this, let us generate a diagonal matrix D ∈ R1000d×1000d such that each D(i, i) is independently sampled as in step (ii). Then we consider the linear system D1/2By = D1/2b to which y = ∇f(x) is a solution. Notice that we now have that each entry of D1/2B follows a standard Gaussian. By thinking of D1/2b̂ as a corrupted version of Db, we then need to show that the noise e def= D1/2(b̂− b) can be written as e1 + e2 such that ‖e1‖0 ≤ ρ∗(1000d) and ‖e2‖∞ is small. To this end, we notice that for each i such that both ai, bi are uncorrupted, we have by β-smoothness that ∣∣∣b(i)− b̂(i) ∣∣∣ = ∣∣∣∣∣ ∇f(x)T (bi − ai) ‖bi − ai‖ − f̂(bi)− f̂(ai)‖bi − ai‖ ∣∣∣∣∣ ≤ O(βτ), (4) where τ is the radius of the ball B(x, τ) from which ai, bi’s are sampled. Thus, if B(x, τ) is mostly (say 99%) uncorrupted, with probability 1− exp(−Ω(d)), (4) holds for most (say 90%) of the i’s. Also, by standard Markov’s inequality and Chernoff bounds, with probability 1− exp(−Ω(d)), for most (say 99%) of the i’s we have D(i, i) ≤ O(d). Combining this with (4), we have for 80% of the i’s that √ D(i, i) ∣∣∣b̂(i)− b(i) ∣∣∣ ≤ O( √ dβτ), implying the existence of e1, e2 s.t. e = e1 + e2 and ‖e1‖0 ≤ 0.2(1000d), ‖e2‖∞ ≤ O( √ dβτ). This means that by Theorem 4.5 we can use linear programming to find a g with ‖g −∇f(x)‖ ≤ O( √ dβτ), matching the guarantee in Lemma 4.1. Finally, to address a technicality about the constant γ in Theorem 4.5, we note that we can increase the number of sampled ai, bi pairs to max {1000d, γd}, and the rest of the analysis still follows. 
Then we consider the running time of GDSTAGEI. By Claim 4.4 and straightforward observations, all steps other than Line 6 run in polynomial time. Thus we focus on the efficiency of Line 6. Claim 4.6. A vector ĝ satisfying the condition at Line 6 of GDSTAGEI, if it exists, can be found in nearly-linear time in s, at the cost of an extra constant factor in the radius of the ball. We note that an extra constant factor in the radius of the ball will not affect the final distance to x∗ by more than a constant factor. The proof of this claim is deferred to Appendix D. Roughly, the proof proceeds by sampling Õ(1) points from g1, . . . , gs and checking for each sampled gj if at least 2/3 fraction of the total points are within euclidean distance 200βK of gj . Note that the radius now becomes 200βK as opposed to 100βK at Line 6 of GDSTAGEI. 5 An O(K √ β/α)-close algorithm In this section we give an algorithm GDSTAGEII that, when given an initial point which is O(K(β/α))-close to the minimizer x∗ of f , finds a point that is O(K √ β/α)-close to x∗. GDSTAGEII basically performs a gradient descent on the average function f̄2K (Definition 2.4). Algorithm 3: GDSTAGEII(f̂ , α, β, x0) Input : f̂ : Rd → R, 0 < α < β, and x0 ∈ Rd with ‖x0 − x∗‖ ≤ 10000(β/α)K. Output : x̂ ∈ Rd. 1 Let the iteration count be T ← 100βα log( β α + 1). 2 for t = 0, 1, . . . , T − 1 do 3 Let the sample count be s← 400βα log(dT ). 4 Sample s random points y1, y2, . . . , ys in the ball B(xt, 2K). 5 Compute gradients gi ← GRADIENTCOMP(f̂ , yi, β, τ) for sufficiently small τ > 0, and their average ḡ ← 1s ∑s i=1 gi. 6 Perform a descent step: xt+1 ← xt − 12β ḡ. 7 return xt. The performance of GDSTAGEII is characterized in Theorem 5.1, with the proof in Appendix E. Note that while the success probability in Theorem 5.1 is not arbitrarily large, we can amplify it to any 1− δ by repeating the algorithm O(log(1/δ)) times, as we show in Corollary E.2. Theorem 5.1. Suppose that d ≥ 100 log(β/α + 1). Then given an initial point x0 that satisfies ‖x0 − x∗‖ ≤ 10000(β/α)K, GDSTAGEII returns a point x̂ with ‖x̂− x∗‖ ≤ 1000 √ β/αK with probability at least 1 − 2−d/8, where x∗ is the minimizer of f . The number of queries made by GDSTAGEII is Õ(d(β/α)2). Moreover, the algorithm runs in polynomial time. The proof of Theorem 5.1 builds on a lemma showing that the gradients that GDSTAGEII uses are sufficiently precise. The lemma relies on an `2-concentration inequality for the sum of random vectors (i.e., the Vector Bernstein Inequality in Theorem E.1). Its proof also appears in Appendix E. Lemma 5.2. Let d ≥ 100 log(β/α+ 1). The vector ḡ computed at Line 5 of GDSTAGEII satisfies the following with probability at least 1− 2−d/8/T : ∥∥ḡ −∇f̄2K(xt) ∥∥ ≤ 16√αβK. 6 Future directions We obtained asymptotically matching upper and lower bounds on how well the minimizer of a convex function can be identified in presence of outlier noise. There are several natural directions for future work. First, while our algorithm’s query complexity has essentially the same dependence on d and ‖x0 − x∗‖ as Nesterov’s accelerated gradient descent, it is still off by a factor of (β/α)1.5. It will thus be interesting to understand if this remaining performance gap can be eliminated. Also, we note that both our results require the dimension to be sufficiently high. While we believe the high dimension regime is of the most interest, it will be an interesting exercise to understand how these bounds change in the low-dimensional setting. 
Finally, as pointed out in the introduction, an appealing future direction is to study convex minimization with the more general ℓp-bounded noise.
Acknowledgments and Disclosure of Funding
We thank the anonymous reviewers for their valuable feedback. This work was supported in part by NSF awards CCF-1763514, CCF-1934876, CCF-2008305, CCF-1910534, CCF-1926872, and CCF-2045128.
1. What is the focus of the paper regarding convex optimization?
2. What are the strengths of the proposed algorithm, particularly in handling adversarial noise?
3. What are the weaknesses or limitations of the paper's problem setup and assumptions?
4. How does the reviewer assess the significance and impact of the result, considering the constraints and conditions involved?
5. Are there any concerns or suggestions regarding the presentation, clarity, and reference to standard concepts in convex optimization?
Summary Of The Paper Review
Summary Of The Paper
The paper studies the problem of convex optimization using a zeroth-order oracle with adversarial noise. In particular, the paper studies the setting where there is some convex function f : R^d → R and the learner may interact with an oracle that returns the value of f at a given point, except that the oracle is corrupted on some set of measure equal to Vol(B(0, K)) (where B(0, K) is the ball of radius K) on which the oracle may return an arbitrary value. The paper gives an algorithm that outputs a point that is within Kβ/α of the minimizer of f (where β/α is the condition number of f). They also prove a matching information-theoretic lower bound that no algorithm can guarantee to get closer than Kβ/α. At a high level, their algorithm is based on estimating the gradient at each point using the corrupted zeroth-order oracle and then running gradient descent.
Review
The overall contributions are ok. The presentation is also reasonable, but occasionally handwavy, which is understandable due to space constraints. One point that was not clear to me was the following: in the algorithm description, what does it mean for a ball centered at ĝ to trap another vector g_i? Does it just mean that the ball contains the vector? I have some concerns about the overall significance of the result because of limitations in the problem setup. My concern is that the authors rely on the dimension being sufficiently high so that failure probabilities from sampling are small. But in high dimensions, a ball of radius K only represents an exponentially small fraction of the entire domain, so the result is really only able to deal with an exponentially small fraction of noise. Also, one minor comment: it seems that with α and β, the only thing that actually matters is the ratio β/α, which is just the condition number of the function. This is a very standard concept in convex optimization but is never referenced.
After Author Response and Reviewer Discussion:
Thanks for the clarifications! I do believe that this paper could be important in driving further research and have updated my score accordingly. I am a bit confused about a few points in the author response though. In terms of the fraction of corruptions, the paper assumes that the learner is given a starting point within some radius R_0 of the optimum x* (which is of course a fine and standard assumption). Thus, the learner is optimizing over a bounded set. My point was that almost all of the points in this set are actually uncorrupted, and even within any ball of radius 2K, almost all of the points are uncorrupted. Also, in terms of the comment about log(β/α) ≤ d, I'm not sure I understand the comparison to gradient descent needing β/α ≤ poly(d). The algorithm presented in this paper still needs d poly(β/α) runtime, so it needs the stronger condition as well to run in polynomial time in high dimensions. Moreover, gradient descent works in any dimension, even low dimensions such as d = 1, whereas the algorithm presented here actually needs the dimension to be high.
NIPS
Title Mixtape: Breaking the Softmax Bottleneck Efficiently Abstract The softmax bottleneck has been shown to limit the expressiveness of neural language models. Mixture of Softmaxes (MoS) is an effective approach to address such a theoretical limitation, but are expensive compared to softmax in terms of both memory and time. We propose Mixtape, an output layer that breaks the softmax bottleneck more efficiently with three novel techniques—logit space vector gating, sigmoid tree decomposition, and gate sharing. On four benchmarks including language modeling and machine translation, the Mixtape layer substantially improves the efficiency over the MoS layer by 3.5x to 10.5x while obtaining similar performance. A network equipped with Mixtape is only 20% to 34% slower than a softmax-based network with 10-30K vocabulary sizes, and outperforms softmax in perplexity and translation quality. 1 Introduction Softmax has been a standard output layer for a wide variety of neural networks, including the majority of neural language models [5, 2, 3, 8, 11]. However, as pointed out by [19], softmax is a fundamental limitation of the expressiveness of neural language models, because it constrains the output representations to be low-rank, which might not be sufficient for modeling the complexity of natural language. Such a limitation is called the softmax bottleneck. To break the softmax bottleneck, [19] proposed Mixture of Softmaxes (MoS) that introduces discrete latent variables into the output layer so that the log probability matrix is high-rank because of the log-sum-exp nonlinear transformation. However, MoS is expensive compared to softmax in terms of both memory and time, which makes it less practically useful when computational budgets are limited. To reduce the computational cost of MoS, we propose a novel output layer Mixtape to break the softmax bottleneck efficiently. Mixtape can be plugged into any existing networks as an additional layer before the cross entropy loss. Instead of employing a scalar mixture in the probability space as in MoS, Mixtape applies a vector gating mechanism in the logit space to avoid using multiple expensive softmaxes. In addition, Mixtape uses two more novel techniques to further reduce the computational cost. First, the vector gating mechanism is expensive because we need to compute a softmax gate for each word in the vocabulary. We propose sigmoid tree decomposition that decomposes a softmax probability gating distribution into a depth-2 binary tree structure, where each branch carries a portion of the probability mass determined by a sigmoid function. Sigmoid tree decomposition is much more efficient because it avoids the reduction and division operations in softmax. The other technique gate sharing is to share the gate values for all infrequent words, resulting in partially high-rank representations. This technique saves a considerable amount of memory and computation without affecting the performance because the gate values of infrequent words are usually hard to accurately estimate even without sharing the gates. With all the above techniques combined, Mixtape substantially improves the efficiency of MoS while obtaining comparable or even better performances on four benchmarks, including language modeling and machine translation. 
With normal vocabulary sizes (e.g., 10K-30K), the Mixtape layer is 1.6x to 11.5x faster than the MoS layer given the same batch size, and is 3.5x to 10.5x faster given the same memory budget. With normal vocabulary sizes, a Mixtape-based network is only 5% to 18% slower than a softmax-based network given the same batch size, and is only 20% to 34% slower given the same memory budget. With a large vocabulary of 100K tokens, a Mixtape-based network is still only 60% slower than a softmax-based network. Both Mixtape and MoS outperform softmax in perplexity and translation quality. Interestingly, these benchmarks have varied vocabulary sizes ranging from 10K to 100K and different input representations including words and BPE subwords, which demonstrates that Mixtape is effective and robust with a variety of inputs.
2 Softmax Bottleneck
In the following, we will introduce the notation and review the softmax bottleneck problem pointed out by [19]. Consider a general setting of language modeling and text generation, where given the context C we want to estimate the conditional distribution of the next token P*(X|C). Here we use P* to denote the true data distribution. The context C denotes the tokens that have occurred so far. For example, given a corpus (X1, X2, · · · , XT), for each time step t, we aim to estimate the probability P*(Xt|C = X<t). For conditional generation, the probability is additionally conditioned on other inputs, which are omitted in our discussion without loss of generality. We consider a natural language modeling task as the problem of modeling a finite set of pairs of a context and its conditional next-token distribution L = {(c1, P*(X|c1)), · · · , (cN, P*(X|cN))}, where N is the number of possible contexts. The validity of the finiteness assumption has been discussed in [19] and does not affect the conclusion that follows. A commonly-used approach for language modeling is to use neural networks to encode the context and the next token into vector representations hc and wx respectively. The conditional distribution is then modeled by a softmax function,
Pθ(x|c) = exp(h_c^T w_x) / Σ_{x′} exp(h_c^T w_{x′}),
where θ denotes the model parameters. The dot products between the two embeddings are called logits, and the corresponding feature space is termed a logit space. We write down the context embeddings, token embeddings, and log probabilities in matrix form as follows: Hθ is the N × d matrix whose rows are the context embeddings h_{c1}^T, h_{c2}^T, · · · , h_{cN}^T; Wθ is the M × d matrix whose rows are the token embeddings w_{x1}^T, · · · , w_{xM}^T; and A is the N × M matrix whose (i, j) entry is log P*(x_j | c_i), where M is the number of possible next tokens. The language modeling problem is now turned into a matrix factorization problem of finding model parameters θ such that
Hθ Wθ^T = A + row-wise shift. (1)
The row-wise shift operation is defined as A + Λ J_{N,M}, where Λ is a diagonal matrix of size N × N and J_{N,M} is an all-ones matrix of size N × M. Given the matrix factorization formulation, it follows that the rank of the LHS in Eq. (1) is upper bounded by the embedding size d. Based on this key observation, the softmax bottleneck problem is identified as follows.
Corollary 1 (Softmax bottleneck) [19] If d < rank(A) − 1, for any function family U and any model parameter θ, there exists a context c in L such that Pθ(X|c) ≠ P*(X|c).
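As a quick numerical check of this corollary before unpacking its consequences, the following toy numpy sketch (arbitrary sizes and random embeddings of our own choosing, not values from the paper) shows the rank bound directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 50, 40, 8                      # contexts, vocabulary size, embedding size

H = rng.normal(size=(N, d))              # context embeddings h_c
W = rng.normal(size=(M, d))              # token embeddings w_x
logits = H @ W.T
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))   # row-wise log softmax

print(np.linalg.matrix_rank(logits))     # <= d (here 8): the factorization bound of Eq. (1)
print(np.linalg.matrix_rank(log_probs))  # <= d + 1: the row-wise shift adds at most rank 1

# A target log-probability matrix A drawn without the low-rank constraint is
# full rank with probability 1, so no (H, W) of width d can reproduce it exactly.
A = np.log(rng.dirichlet(np.ones(M), size=N))
print(np.linalg.matrix_rank(A))          # min(N, M) = 40 >> d + 1
```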
In other words, given that most neural language models use distributed low-dimensional context and token embeddings, the softmax bottleneck indicates that these models do not have sufficient expressiveness to model complex, high-rank natural language. 3 Breaking the Softmax Bottleneck Efficiently Mixture of Softmaxes (MoS) [19] is an effective approach to break the softmax bottleneck. Specifically, MoS uses the following formulation for the conditional distribution, Pθ(x|c) = ∑K k=1 πc,k exph>c,kwx∑ x′ exph > c,kwx′ ; s.t. ∑K k=1 πc,k = 1 where the priors πc,k are obtained by another softmax-based function of the last-layer hidden states, and K is the number of mixture components. This formulation is not limited by the softmax bottleneck because the log probability matrix A is modeled by ÂMoS = log ∑K k=1 Πk exp(Hθ,kW > θ ) where the log-sum-exp nonlinearity produces a high-rank matrix ÂMoS. However, softmax involves applying nonlinear exp transformations for each token in the vocabulary, performing reduction across the vocabulary, followed by division, which are all computationally intensive. Moreover, softmax is memory intensive because it has to store the pre-activations h>∗ w∗, the post-activations exp(·), and the output probabilities for each token in the vocabulary. Since a normal vocabulary size is in the magnitude of 104, MoS dramatically increases the computational cost by using multiple softmaxes. Another approach to break the softmax bottleneck was recently introduced [9]. However, this approach is less efficient than MoS because it computes a mixture of sigmoid functions in addition to softmaxes. To alleviate the efficiency issue, we will introduce our novel method Mixtape that improves the efficiency over MoS without sacrificing the ability to learn high-rank representations. 3.1 Logit Space Vector Gating Since the most expensive part of MoS is to compute K softmaxes, significant computational budget can be saved if we manage to use only one softmax to compute the final probability distribution. It is tempting to move the mixture from the probability space into the logit space; i.e., mixing the representations before the softmax operation. This leads to the following conditional distribution, Pθ(x|c) = exp( ∑K k=1 πc,khc,k) > wx∑ x′ exp( ∑K k=1 πc,khc,k) > wx′ . However, as pointed out in [19], such a formulation will result in a low-rank representation because the matrix factorization form in Eq. (1) still applies. Nevertheless, we will now show that with a small modification, applying mixture operations in the logit space leads to high-rank representations. The key idea is to use a vector gating mechanism instead of scalar mixtures. In other words, instead of using a shared set of mixture weights for every token, we use a different set of weights for different tokens. Formally, with vector gating, the conditional distribution can be written as Pθ(x|c) = exp ∑K k=1 πc,x,kh > c,kwx∑ x′ exp ∑K k=1 πc,x′,kh > c,kwx′ ; s.t. K∑ k=1 πc,x,k = 1 (2) The log probability matrix A is now modeled as ÂMixtape = ∑K k=1 Πk ( Hθ,kW > θ ) Due to the elementwise multiplication introduced, the matrix factorization form in Eq. (1) does not apply, and the log probability matrix is therefore high-rank. In addition, the vector gating mechanism removes the necessity of computing K softmax probability distributions, which makes efficiency improvement possible. However, there is still a remaining obstacle before Mixtape is actually efficient enough. 
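To see numerically why the element-wise (vector) gating of Eq. (2) escapes the factorization of Eq. (1) while a scalar logit-space mixture does not, here is a small numpy sketch. The sizes, the random parameters, and the use of Dirichlet draws as stand-ins for valid gate priors Π_k are illustrative assumptions of ours:

```python
import numpy as np

def mixtape_logits(H_ck, W, Pi):
    """Vector gating of Eq. (2): sum_k Pi_k * (H_k W^T), with Pi normalised over k.
    H_ck: (K, N, d), W: (M, d), Pi: (K, N, M); returns the (N, M) gated logit matrix."""
    per_component = np.einsum('knd,md->knm', H_ck, W)    # h_{c,k}^T w_x for every (c, x, k)
    return (Pi * per_component).sum(axis=0)

def scalar_mixture_logits(H_ck, W, pi):
    """For contrast: a scalar logit-space mixture (pi: (K, N, 1)). This collapses to
    (sum_k pi_k h_{c,k}) W^T, so its rank is still bounded by d."""
    return (pi * np.einsum('knd,md->knm', H_ck, W)).sum(axis=0)

rng = np.random.default_rng(0)
K, N, M, d = 4, 50, 40, 8
H_ck = rng.normal(size=(K, N, d))
W = rng.normal(size=(M, d))
Pi = rng.dirichlet(np.ones(K), size=(N, M)).transpose(2, 0, 1)   # per-(c, x) priors over k
pi = rng.dirichlet(np.ones(K), size=N).T[:, :, None]             # per-c priors over k

print(np.linalg.matrix_rank(mixtape_logits(H_ck, W, Pi)))        # typically min(N, M) = 40
print(np.linalg.matrix_rank(scalar_mixture_logits(H_ck, W, pi))) # <= d = 8
```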
Notice that since the priors π_{c,x,k} need to sum to one (Footnote 1) for each context-token pair (c, x), a naive implementation requires computing a softmax over the prior probabilities for each token x given the context c. Let l_{c,x,k} be the pre-activation priors; we have
π_{c,x,k} = exp(l_{c,x,k}) / Σ_{k′=1}^{K} exp(l_{c,x,k′}).
Unfortunately, this will be even slower than MoS because the number of tokens in the vocabulary is usually large. In the following, we will introduce a novel technique that avoids such an efficiency trap.
3.2 Sigmoid Tree Decomposition
Now, we introduce how to efficiently compute the priors π_{c,x,k}. Instead of using a softmax, we propose to decompose a softmax distribution into a tree structure of sigmoid functions. Specifically, we compute (K − 1) sigmoid outputs and use them to define the probabilities along the tree branches. For example, with K = 4, the priors are defined as:
γ_{c,x,k} = σ(l_{c,x,k}) for k = 1, . . . , K − 1
π_{c,x,1} = γ_{c,x,1} γ_{c,x,2}
π_{c,x,2} = γ_{c,x,1} (1 − γ_{c,x,2})
π_{c,x,3} = (1 − γ_{c,x,1}) γ_{c,x,3}
π_{c,x,4} = (1 − γ_{c,x,1}) (1 − γ_{c,x,3}) (3)
where γ_* denotes the sigmoid probabilities and σ is the sigmoid function. The above equations are illustrated in Figure ??. We call this technique sigmoid tree decomposition. Such a decomposition is able to fully recover a K-way probability distribution with (K − 1) sigmoid functions. Using sigmoid functions removes the reduction and division operations in softmax and is more efficient. Although the sigmoid tree decomposition technique can be used with any K, in our experiments we always use K = 4, for two reasons. First, we find Mixtape is effective with K = 4 for all the tasks in our experiments. Second, speed is core to Mixtape and we fix K to be the minimal possible value. Compared to MoS, using a fixed number of components K means Mixtape requires less hyperparameter tuning effort. Moreover, K = 4 is relatively small compared to the number of components in MoS, which further reduces the computational cost. Let g_c be the d1-dimensional last-layer hidden state given context c. The pre-activation priors l_* are computed as
l_{c,x,k} = v_x^T tanh(U_k g_c) + u_k^T g_c + b_{x,k} (4)
where v_x ∈ R^{d2}, U_k ∈ R^{d2×d1}, u_k ∈ R^{d1}, and b_{x,k} ∈ R are model parameters. Here d2 is a hyperparameter that denotes the gate embedding size and is usually chosen to be much smaller than the normal word embedding size d. The context embeddings are obtained by
h_{c,k} = tanh(H_k g_c) (5)
where H_k ∈ R^{d×d1} is a model parameter. (Footnote 1: We were not able to get good performance with unnormalized priors.)
3.3 Gate Sharing
So far we have arrived at an efficient high-rank model, but there is still room for further improvement. One observation is that we still have to compute a gate prior for each token in the vocabulary, which becomes the efficiency bottleneck. However, for infrequent tokens it is hard to estimate the gate priors accurately due to the lack of training samples, and thus learning gate priors for infrequent tokens might simply be a waste of computation. Leveraging this observation, the core idea of gate sharing is to share the same gate priors across all infrequent words. Specifically, for an infrequent token x, the pre-activation gate priors are defined as
l_{c,x,k} = u_k^T g_c (6)
which remains constant across different infrequent tokens x for a given c and k. The resulting representations are partially high-rank. Suppose the token indices are ranked by frequency.
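Before moving on to the structure of the resulting log-probability matrix, here is a minimal numpy sketch of the sigmoid tree decomposition in Eq. (3) for K = 4; the only thing it checks is that the four gate priors always form a valid probability distribution. The function and array names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_tree_priors(l):
    """Eq. (3) for K = 4: map (..., 3) pre-activation gate logits l_{c,x,1..3}
    to (..., 4) gate priors pi_{c,x,1..4}, which sum to one by construction."""
    g1, g2, g3 = sigmoid(l[..., 0]), sigmoid(l[..., 1]), sigmoid(l[..., 2])
    return np.stack([g1 * g2,
                     g1 * (1 - g2),
                     (1 - g1) * g3,
                     (1 - g1) * (1 - g3)], axis=-1)

l = np.random.default_rng(0).normal(size=(2, 5, 3))   # e.g. (batch, tokens, K-1)
pi = sigmoid_tree_priors(l)
print(np.allclose(pi.sum(axis=-1), 1.0))              # True: a valid 4-way distribution
```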
The log probability matrix is now modeled by  = [ K∑ k=1 Π (1) k ( H (1) θ,kW (1)> θ ) ;H (2) θ W (2) θ ] where the superscripts (1) and (2) denote the representations for frequent and infrequent tokens respectively. We have high-rank and low-rank representations for frequent and infrequent tokens respectively. For infrequent tokens, our formulation is equivalent to performing logit space scalar mixtures, also known as Mixture of Contexts in [19]. Similar ideas have been demonstrated in previous work [8] where infrequent tokens use less-expressive representations (smaller embedding sizes) to save memory and computation without affecting performance. With gate sharing, we use the shared gate prior to mix the context embedding hc,k before multiplication with the token embeddings wx, which saves memory because no gate logits are stored for infrequent tokens. Gate sharing also speeds up the computation by computing only one set of gate priors for all infrequent tokens. Let S be the number of frequent tokens and let r = S/M with M being the vocabulary size. In our experiments, we set r = 0.5 for machine translation and r = 0.1 for language modeling. 3.4 Summary and Discussion The Mixtape layer is summarized as follows: 1. Given the last-layer hidden states gc, compute the context embeddings hc,k using Eq. (5). 2. For each frequent token x, compute the pre-activation gate priors lc,x,k using Eq. (4). 3. For all infrequent tokens, compute a shared pre-activation gate prior lc,x,k using Eq. (6). 4. Use sigmoid tree decomposition to compute the gate priors πc,x,k as in Eq. (3). 5. Use vector gating to obtain the next-token probabilities using Eq. (2). The architecture of the Mixtape layer is illustrated in Figure 1. In our implementation, we also add biases to the matrix multiplication operations in Eq. (2), (4) and (5), which were omitted in the above text for simplicity. It is also optional to employ weight normalization [14] for the parameter Uk in Eq. (4). Different from [14], we use a constant scale instead of a learnable one as it leads to more stable optimization. In our experiments, we use weight normalization for language modeling but did not observe improvement on machine translation tasks. We also apply dropout on tanh(Ukgc) and hc,k in Eq. (4) and (5). To further regularize the networks, we also add a small amount of Gaussian noise on the pre-activation priors l∗ in the forward pass. If we neglect cheap operations and only consider matrix multiplication and softmax, MoS has 2(d1dK + dKM) FLOPs for matrix multiplication and K M -way softmaxes. For comparison, Mixtape has 2(d1dK + dKS) FLOPs for matrix multiplication and one M -way softmax. The speedup of Mixtape comes from a smaller number of softmaxes, a smaller K, and a smaller S < M . Suppose anM -way softmax uses 8M bytes for storing intermediate and final results. If we again only consider major operations of matrix multiplication and softmax, with FP32 tensors, MoS roughly uses (4dK + 12MK) bytes and Mixtape uses (4dK + 12SK + 8M) bytes. Mixtape uses less memory due to a smaller S and a smaller K. 4 Experiments Our experiments consist of three parts. First, we demonstrate that the proposed Mixtape layer is able to improve state-of-the-art machine translation systems by breaking the softmax bottleneck. Second, we compare the perplexity, translation quality, speed, and memory constraints of Mixtape, MoS, and softmax, to demonstrate that Mixtape is able to achieve a good balance between effectiveness and efficiency. 
Third, through ablation studies, we show the benefits of gate sharing. 4.1 Datasets We test Mixtape on two tasks, language modeling and machine translation. For language modeling, we exactly follow the settings in [19] on Penn Treebank [12] and One Billion Word [4] for fair comparison. We implement the same recurrent network architectures and follow the regularization and optimization techniques used in [19]. We tune the model size of Mixtape such that Mixtape has the same number of parameters as MoS in the corresponding settings. On One Billion Word, we also replicate the data preprocessing pipeline that lower-cases the text and chooses the top 100K tokens as the vocabulary. This results in a non-standard setting, but it enables fair comparison with MoS as well as excluding the orthogonal effects of techniques for a larger vocabulary such as adaptive softmax [8]. For machine translation, our experiments are based on two widely-used WMT’14 benchmarks, English to German (En-De) and English to French (En-Fr), following the setups in [13, 18]. For En-De, we train on the WMT’16 training data and test on newstest14. For En-Fr, we train on the WMT’14 training data and test on newstest14. We use BPE encodings [15] with a vocabulary size of 32K. Following [17], we use sacrebleu for evaluation. The statistics of different datasets and settings are shown in Table 2. The selected datasets present a degree of diversity in sizes, input units, and vocabulary sizes, which enables evaluating the robustness of Mixtape. 4.2 WMT’14 Results We apply Mixtape on top of Transformers [18] to have a comparison with state-of-the-art systems on WMT’14 benchmarks. We also incorporate relative positional encodings [16] in our architecture. On En-De, we employ a 6-layer Transformer with embedding size 1024, inner layer size 4096, and 16 attention heads. We train for 300K steps with a learning rate of 2.5, a batch size of 4096, and 16K warmup steps. We apply a dropout of 0.3 on the layer outputs, a dropout of 0.15 on attention probabilities, a dropout of 0.2 on tanh(Ukgc) in Eq. (4), and a Gaussian noise with 0.1 stdev on pre-activation gate priors. On En-Fr, we employ a 6-layer Transformer with embedding size 2048, inner layer size 8192, and 16 attention heads. We train for 1.2M steps with a learning rate of 2.0, a batch size of 4096, and 16K warmup steps. We apply a dropout 0.25 on the layer outputs, dropouts of 0.15 on attention probabilities and tanh(Ukgc) in Eq. (4), and a Gaussian noise with 0.1 stdev on pre-activation gate priors. The results of our method are shown in Table 1. Mixtape with Transformers achieves state-of-theart results on both En-De and En-Fr. Interestingly, Mixtape outperforms baselines that use MoS [10]. This demonstrates that breaking the softmax bottleneck significantly contributes to achieving state-of-the-art performance for machine translation, and Mixtape is an effective approach to break such a bottleneck. On En-Fr, Mixtape obtains the same performance with Transformers trained with Mesh Tensorflow [17]. However, Mixtape is much more parameter-efficient, using only 0.8 billion parameters v.s. 2.9 billion parameters in Mesh Tensorflow. Moreover, Mixtape outperforms Mesh Tensorflow by a large margin on En-De, demonstrating more robustness and generalization capabilities on relatively small datasets. Note that [7] reports better performance with back translation, which is not comparable with our setting. 
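For reference, the two training setups described above can be written side by side as plain configuration dictionaries. The key names below are illustrative choices of ours, not names from the authors' code, and only values stated in the text are included:

```python
# WMT'14 Transformer + Mixtape training configurations as described in Section 4.2.
wmt14_en_de = {
    "layers": 6, "embedding_size": 1024, "inner_layer_size": 4096, "attention_heads": 16,
    "train_steps": 300_000, "learning_rate": 2.5, "batch_size": 4096, "warmup_steps": 16_000,
    "dropout_layer_output": 0.3, "dropout_attention": 0.15, "dropout_gate_tanh": 0.2,
    "gate_prior_noise_stdev": 0.1,
}
wmt14_en_fr = {
    "layers": 6, "embedding_size": 2048, "inner_layer_size": 8192, "attention_heads": 16,
    "train_steps": 1_200_000, "learning_rate": 2.0, "batch_size": 4096, "warmup_steps": 16_000,
    "dropout_layer_output": 0.25, "dropout_attention": 0.15, "dropout_gate_tanh": 0.15,
    "gate_prior_noise_stdev": 0.1,
}
```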
4.3 Ablation Study and Comparison with Baselines We now compare the performance of Mixtape with MoS and softmax, as well as studying the effects of gate sharing. We report the training time used for both the sole output layer and the entire network. To take the memory usage of different methods into consideration, in addition to reporting training time with the same batch size, we also consider the training time with the same memory budget. In other words, a model that uses more memory will have a smaller batch size, and thus training time per instance will increase. The results of different methods on Penn Treebank, One Billion Word, WMT’14 En-De, and WMT’14 En-Fr are shown in Tables 3, 4, 5, and 6. We use baseline MoS results from [19, 10] whenever possible and avoid using our own implementation for fair comparison. There are three main messages delivered in these results. First, compared to softmax, Mixtape is comparably efficient while being more accurate at language modeling and translation. On tasks with normal vocabulary sizes including Penn Treebank, WMT’14 En-De, and WMT’14 En-Fr, a Mixtape-based network is only 5% to 18% slower than a softmax-based network given the same batch size and only 20% to 34% slower given the same memory budget. Even on One Billion Word with a 100K vocabulary, a Mixtape-based network is only 60% slower than a softmax-based network. On the other hand, Mixtape improves the perplexity over MoS by 2.8 points and 6.25 points on Penn Treebank and One Billion Word respectively. On translation tasks, Mixtape improves the BLUE scores from 29.0 to 29.3 on En-De and from 43.0 to 43.9 on En-Fr. Second, compared to MoS, Mixtape achieves similar or better performance in perplexity and BLEU while being much more efficient. Mixtape is 1.6x to 11.5x faster than MoS given the same batch size and 3.5x to 10.5x faster given the same memory budget. The speedup is usually more significant with the memory budget constraints, demonstrating that the ability to save memory also contributes to the efficiency of Mixtape. Mixtape has better performance than MoS on translation and comparable performance on language modeling. Third, gate sharing substantially reduces the computational cost without sacrificing accuracy. In Tables 3 and 4, the perplexities of Mixtape with and without gate sharing only have an almost negligible difference. Gating sharing improves the speed by 4.3x and 4.0x on Penn Treebank and One Billion Word respectively given the same memory budget. The speedup is 3.5x and 3.2x given the same batch size. This indicates that gate sharing reduces the memory cost as well as training time per forward-backward pass. 5 Conclusions We propose Mixtape to break the softmax bottleneck more efficiently. Compared to MoS, Mixtape is more computationally efficient. Compared to softmax, Mixtape has comparable efficiency and is superior in terms of accuracy. Based on the above results, it is possible that Mixtape can be used as a plug-and-play layer to improve conditional and unconditional text generation in general. In the future, it will be intriguing to further investigate more applications of Mixtape.
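To make the layer summary in Section 3.4 concrete, the following is a minimal numpy sketch of a single Mixtape forward pass with K = 4. The argument names and parameter shapes are our own choices for illustration; biases on Eqs. (2) and (5), dropout, weight normalisation and the Gaussian gate noise are all omitted, so this is a structural sketch rather than the authors' implementation.

```python
import numpy as np

def mixtape_forward(g, H, U, u, v, b, W, S):
    """One Mixtape forward pass (steps 1-5 of Section 3.4) with K = 4 components.

    g : (B, d1)      last-layer hidden states g_c
    H : (4, d, d1)   context projections H_k, Eq. (5)
    U : (3, d2, d1)  gate projections U_k, Eq. (4)   (only K-1 = 3 gates are needed)
    u : (3, d1)      shared gate directions u_k, Eqs. (4) and (6)
    v : (S, d2)      gate embeddings v_x of the S frequent tokens
    b : (S, 3)       gate biases b_{x,k} of the frequent tokens
    W : (M, d)       token embeddings; the first S rows are the frequent tokens
    returns the (B, M) next-token probabilities.
    """
    def sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    def tree(l):                                   # Eq. (3): (..., 3) logits -> (..., 4) priors
        t1, t2, t3 = sig(l[..., 0]), sig(l[..., 1]), sig(l[..., 2])
        return np.stack([t1 * t2, t1 * (1 - t2), (1 - t1) * t3, (1 - t1) * (1 - t3)], -1)

    h = np.tanh(np.einsum('kdi,bi->bkd', H, g))                    # Eq. (5): h_{c,k}
    shared = np.einsum('ki,bi->bk', u, g)                          # Eq. (6): shared gate logits
    t = np.tanh(np.einsum('kei,bi->bke', U, g))
    l_freq = np.einsum('se,bke->bsk', v, t) + shared[:, None, :] + b[None]   # Eq. (4)

    pi_freq = tree(l_freq)                                         # (B, S, 4)
    pi_shared = tree(shared[:, None, :])                           # (B, 1, 4), gate sharing

    logits = np.einsum('bkd,md->bkm', h, W)                        # h_{c,k}^T w_x
    gated = np.concatenate(
        [np.einsum('bsk,bks->bs', pi_freq, logits[:, :, :S]),      # vector gating, Eq. (2)
         np.einsum('bok,bkm->bm', pi_shared, logits[:, :, S:])],   # shared gate for rare tokens
        axis=1)

    gated -= gated.max(axis=1, keepdims=True)                      # one softmax over the vocab
    p = np.exp(gated)
    return p / p.sum(axis=1, keepdims=True)
```

Note that only a single M-way softmax appears at the very end, which is the source of Mixtape's efficiency advantage over MoS discussed above.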
1. What is the focus of the paper regarding the softmax bottleneck problem?
2. What is the proposed solution to address the issue, and how does it differ from previous approaches?
3. How effective is the proposed method in terms of computational efficiency and empirical performance?
4. Are there any concerns or suggestions regarding the proposed approach, particularly regarding its theoretical analysis or empirical results?
5. Are there any questions or issues that the reviewer has raised but not explicitly stated, such as potential limitations or areas for future research?
Review
Review
This work deals with the softmax bottleneck problem, which was noted by Yang et al. (2017). Departing from the mixture of softmaxes used by previous work, which is computationally expensive, this work proposes an elementwise gating technique, which can potentially get around the softmax bottleneck problem while requiring only a single softmax computation. The gating vectors are efficiently computed with a series of sigmoid functions organized in a tree structure, and gate sharing is used to further promote efficiency. Empirical evaluation on machine translation and language modeling shows that the proposed method is able to achieve accuracy comparable to the mixture of softmaxes baseline at a much lower computation cost. Overall I think this is solid work: it clearly presents an interesting idea, which yields strong empirical performance.
Details:
- Broken pointer in line 132.
- Line 115, on the rank of \hat{A}_{Mixtape}. If I understand it correctly, the $\Pi_k$ matrix and elementwise product could bump the rank up, due to rank(A\odot B) \leq rank(A)rank(B). Therefore a Mixtape model with K=1 could potentially yield a high-rank log probability matrix. Have the authors considered comparing to this baseline?
- Adding onto the above point, I suggest the authors tone down a bit the argument that it is `therefore high-rank`, unless a lower bound can be derived (which I think is tricky to do). And it would be interesting to empirically see its rank in practice.
NIPS
1. What is the main contribution of the paper regarding the softmax bottleneck problem?
2. What are the strengths and weaknesses of the proposed approach compared to existing solutions?
3. How does the reviewer assess the impact and practicality of the paper's contributions?
4. Are there any concerns or suggestions regarding the technical aspects and details of the paper?
5. Do you have any questions about the experimental setup, hyperparameter optimization, and comparisons with other works?
Review
Review
POST-AUTHOR FEEDBACK
I thank the authors for their feedback and clarifications. I have increased my score based on those answers, trusting that the promised modifications will appear in the final version. I would strongly encourage the authors to make the release of the code as easy to use as possible, ideally with plugins for major platforms. This would not only increase citations, but also have a direct impact on a number of use cases.

ORIGINAL REVIEW
This paper addresses the softmax bottleneck problem: resolving it has been shown to significantly improve results when the output is over a large space (e.g., NLP). However, current solutions are very costly. This paper contributes a tradeoff between accuracy and cost: it obtains worse results than the full mixture-of-softmax, but does so much more cheaply. There is much to like about this paper, as it contributes an important tool. I believe however that the impact would be much higher if the authors provided a "plug-and-play" layer for at least one popular deep learning toolkit. People don't use the best algorithm; they use the best one available.

Improvement-wise:
- Some technical comments are only glossed over and would merit a more detailed discussion:
  o Line 117: "is therefore high-rank". This seems very important, but this claim is never proved or expanded upon.
  o The same applies to line 157: "partially high-rank". What does this mean?
  o Line 121 (footnote 1, p. 4): the priors need to sum to one. I don't see why they need to. The footnote just states that worse performance is obtained. This seems rather a crucial point, as solving it would render 3.2 and 3.3 unnecessary. Similarly, the authors seem to assume that softmax is the only normalization technique. Why not try a simple (l1) norm? This would avoid the exp computation.
- There is a huge amount of hyper-parameter optimization going on. Contrary to what is stated in the reproducibility criteria ("The range of hyper-parameters considered, method to select the best hyper-parameter configuration, and specification of all hyper-parameters used to generate results."), it is never specified how this is done. This includes setting r (line 169), and non-standard decisions like adding Gaussian noise (line 185). At the same time, it is not clear which experiments were run by the authors: it seems the translation experiments were not, but then training time is reported in Table 5.
- No comparison with [9] is reported.

Other comments:
- It seems that hierarchical softmax could be a way of solving the efficiency problem of the softmax for MoS. As it shares the tree-structure idea of sigmoid tree decomposition, I believe this merits a discussion.
- The gate sharing idea is reminiscent of some interpolation techniques from the time of n-gram LMs (giving different weights to frequent and infrequent tokens). As was done then, this idea can be used at very different levels to bin parameters: not one per word or one for all infrequent words, but clustering them and sharing the gates across clusters.
- Line 132: the Fig cross-reference is not resolved.
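As an aside, the reviewer's l1-normalization suggestion can be made concrete with a minimal sketch (illustrative only, not from the paper) contrasting softmax with l1 normalization of non-negative scores, which avoids the exp computation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                        # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def l1_normalize(scores, eps=1e-8):
    # Assumes non-negative scores (e.g. after a ReLU); no exp is needed.
    s = np.clip(scores, 0.0, None)
    return s / (s.sum() + eps)

z = np.array([2.0, 0.5, -1.0, 3.0])
print(softmax(z))                          # exp-based normalization
print(l1_normalize(np.maximum(z, 0.0)))    # l1 normalization of rectified scores
```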
NIPS
Title A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance

Abstract We consider sequential optimization of an unknown function in a reproducing kernel Hilbert space. We propose a Gaussian process-based algorithm and establish its order-optimal regret performance (up to a poly-logarithmic factor). This is the first GP-based algorithm with an order-optimal regret guarantee. The proposed algorithm is rooted in the methodology of domain shrinking realized through a sequence of tree-based region pruning and refining to concentrate queries in increasingly smaller high-performing regions of the function domain. The search for high-performing regions is localized and guided by an iterative estimation of the optimal function value to ensure both learning efficiency and computational efficiency. Compared with the prevailing GP-UCB family of algorithms, the proposed algorithm reduces computational complexity by a factor of O(T^{2d-1}) (where T is the time horizon and d the dimension of the function domain).

1 Introduction
Consider a black-box optimization problem with an unknown objective function f : X → R, where X ⊂ R^d is a convex and compact set. The learner can access the function only through a noisy oracle, which, when queried with a point x ∈ X, returns a noisy function value at that point. The learning objective is to approach the maximizer x∗ of the function through a sequence of query points {x_t}_{t=1}^T chosen sequentially in time. The learning efficiency is measured by the cumulative regret given by

R(T) = \sum_{t=1}^{T} [f(x∗) − f(x_t)].   (1)

This cumulative regret measure dictates the online nature of the problem: every query point during the learning process carries loss, not just the end point x_T after learning concludes. The classical exploration-exploitation tradeoff in online learning hence ensues.

1.1 Gaussian Process Models
The above problem is ill-posed unless certain structure of the unknown objective function f is assumed to make learning x∗ feasible. One such structural assumption is the convexity of f, which leads to the class of stochastic convex optimization problems. Another class of black-box optimization problems that is gaining interest in recent years is kernel-based learning, where f is assumed to live in a Reproducing Kernel Hilbert Space (RKHS) associated with a positive-definite kernel. An effective approach to kernel-based black-box optimization is Bayesian optimization, which adopts a fictitious prior on the unknown function f. In other words, while f is deterministic, it is viewed internally by the learning algorithm as a realization of a random process over X. A natural choice is the Gaussian process (GP) with a Gaussian prior due to the conjugate property that significantly simplifies the analytical form of the posterior distribution at each newly obtained observation. In a celebrated work, Srinivas et al. [1] proposed the GP-UCB algorithm that constructs a proxy of f using the upper confidence bound (UCB) concept first introduced in the classical multi-armed bandit problem [2, 3]. Specifically, at each time instant t, a UCB of f is constructed using the closed-form posterior mean and standard deviation of the GP model of f. The algorithm then sets the next query point to be the maximizer of the UCB. Several variations of GP-UCB, tailored for different settings (see Sec. 1.3), have since been developed.
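As a small illustrative aside (not part of the original paper), the cumulative regret in Eq. (1) can be tracked as follows, where f, the maximizer x_star, and the query sequence are placeholders supplied by the experiment:

```python
def cumulative_regret(f, x_star, queries):
    # R(T) = sum_{t=1}^{T} [f(x*) - f(x_t)], Eq. (1): every query carries loss.
    return sum(f(x_star) - f(x_t) for x_t in queries)
```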
The GP-UCB family of algorithms generally enjoys good empirical performance in terms of regret. The analytical guarantees of their regret performance, however, leave considerable gaps to the existing lower bound [4]. More significantly, the state-of-the-art regret bound of GP-UCB does not guarantee a sublinear order in T for certain kernels, hence a lack of guaranteed convergence to f(x∗) [4, 5]. Another difficulty with the GP-UCB family of algorithms is their computational complexity, which can be prohibitive as the dimension d and/or the horizon length T grows. The computational complexity has two main sources: (i) the inversion of the covariance matrix in updating the posterior GP distribution, which has an O(t^3) complexity with t samples; (ii) the maximization of the UCB proxy over the entire domain X at each time instant. In particular, due to the multi-modality of the UCB score, its maximization is often carried out using a grid search with an increasingly finer discretization of the entire domain. Specifically, due to analytical requirements, the discretization is typically assumed to grow in the order of O(t^{2d}) [1, 6], resulting in an overall computational complexity of O(T^{2d+3}). Several studies exist that tackle the first source of high complexity of GP-UCB, using sparse matrix approximation techniques to reduce the complexity of the inversion of the covariance matrix (see, e.g., [7, 8]). The second source, which is the dominating factor, has not been effectively addressed.

1.2 Main results
The goal of this work is to develop a GP-based Bayesian optimization algorithm with a regret guarantee that closes the gap to the lower bound. Furthermore, we tackle the second source of the complexity to ensure both learning efficiency and computational efficiency. Referred to as GP-ThreDS (Thresholded Domain Shrinking), the proposed algorithm is rooted in the methodology of domain shrinking: it continuously prunes sub-performing regions of the domain X and zooms into increasingly smaller high-performing regions of X as time goes on. The purpose of the domain shrinking is twofold. First, it ensures high learning efficiency by focusing queries on regions of X with function values approaching f(x∗). Second, it achieves computational efficiency by avoiding a global maximization of the proxy function over the entire domain X. Our specific approach to domain shrinking is built upon a sequence of localized searches on a growing binary tree that forms successively refined partitions of X. Starting from the root of the tree that represents the entire domain, the search progresses down the tree by adaptively pruning nodes that do not contain the maximizer with high probability, consequently zooming into increasingly smaller high-performing regions of X as the search deepens. Another progressive thread in this sequence of localized searches is the criterion for pruning the tree. Each localized search aims to identify nodes at a certain depth of the tree that contain points with function values exceeding a given threshold. The threshold is updated iteratively to approach the maximum function value f(x∗). More succinctly, the proposed algorithm is a sequence of localized searches in the domain of the function guided by an iterative search in the range of the function. The above domain shrinking approach via localized search is the primary contributing factor to improved performance in terms of both regret guarantee and computational complexity.
In particular, the rate of domain shrinking is controlled to ensure not only the concentration of query points in high-performing regions, but also a constant-sized discretization at all times when estimating the function values. This constant-sized discretization allows a tighter regret analysis and results in a regret upper bound for GP-ThreDS that matches the lower bound (up to a poly-logarithmic factor). We show that the regret of GP-ThreDS is O(√(T γ_T)) (up to a poly-logarithmic factor), where γ_T denotes the maximum information gain after T steps and is representative of the effective dimension of the problem [9, 10]. In the case of Matérn and Squared Exponential (SE) kernels where the lower bounds on regret are known, on substituting the improved bounds on γ_T from [11], our results match the lower bounds and close the gap reported in [4, 12]. In comparison, the state-of-the-art analysis of GP-UCB yields an O(γ_T √T) regret bound [e.g., see, 6, Theorem 3]. The O(√γ_T) gap between the regret guarantees of GP-UCB and the proposed GP-ThreDS is significant: it can grow polynomially in T (e.g., in the case of the Matérn kernel). Computation-wise, the constant-sized discretization contrasts sharply with the growing (at rate O(t^{2d}) with time t) discretization required by the GP-UCB family of algorithms. Another factor contributing to the reduced complexity is the relaxed search criterion that aims to determine only the existence of threshold-exceeding points, in contrast to finding a global maximizer as in the GP-UCB family of algorithms. As a result, GP-ThreDS reduces the computational complexity from O(T^{2d+3}), as required by the GP-UCB family of algorithms, to O(T^4).

1.3 Related Work
There is a vast body of literature on numerical and theoretical analysis of Bayesian optimization algorithms. With our focus on a computationally efficient algorithm with a provable regret guarantee, the most relevant results to ours are [1] and [6] discussed above. [6] also proved that the same O(γ_T √T) regret holds for GP-TS, a Bayesian optimization algorithm based on the Thompson sampling principle. Augmenting GP models with local polynomial estimators, [13] introduced LP-GP-UCB and established improved regret bounds for it under special cases [see, 13, Sec. 3.2]. However, for other cases, the regret guarantees for LP-GP-UCB remain of the same order as GP-UCB. More recently, [5] introduced π-GP-UCB, specific to the Matérn family of kernels, which constructs a cover of the search space, as many hypercubes, and fits an independent GP to each cover element. This algorithm was proven to achieve sublinear regret across all parameters of the Matérn family. Almost all other algorithms in the GP-UCB family have a regret guarantee of O(γ_T √T), which is O(√γ_T) greater than the lower bound and can grow polynomially in T. Two exceptions to this are the SupKernelUCB and the RIPS algorithms proposed in [10] and [14], which achieve a regret of O(√(T γ_T)) for discrete action spaces. While this may be extendable to continuous spaces via a discretization argument, as recently pointed out in [5, 12], the required discretization needs to grow polynomially in T, making it computationally expensive. Moreover, it has been noted that SupKernelUCB performs poorly in practice [5, 8, 12]. GP-ThreDS, on the other hand, is a computationally efficient algorithm that achieves tight regret bounds with good empirical performance (see Sec. 5).
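Returning to the complexity comparison made in Sec. 1.2, the claimed savings can be sanity-checked with a crude cost model (an illustrative sketch, not a measured benchmark; the constant grid size of 64 is an arbitrary assumption): per step, GP-UCB evaluates the UCB on a grid of roughly t^{2d} points at about t^2 each plus an O(t^3) inversion, while GP-ThreDS keeps a constant-sized grid.

```python
def gp_ucb_cost(T, d):
    # ~t^{2d} UCB evaluations at ~t^2 each, plus the ~t^3 inversion, per step;
    # summing over t gives the O(T^{2d+3}) scaling discussed above.
    return sum(t ** (2 * d) * t ** 2 + t ** 3 for t in range(1, T + 1))

def gp_threds_cost(T, grid_size=64):
    # Constant-sized discretization: ~grid_size * t^2 evaluations plus the
    # ~t^3 inversion per step, i.e. O(T^4) overall.
    return sum(grid_size * t ** 2 + t ** 3 for t in range(1, T + 1))

print(gp_ucb_cost(200, d=2) / gp_threds_cost(200))   # grows roughly like T^{2d-1}
```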
A comparison with other related works, including those in different settings such as noise-free observations and random f, is deferred to the supplementary.

2 Problem Statement
2.1 Problem Formulation
We consider the problem of optimizing a fixed and unknown function f : X → R, where X ⊂ R^d is a convex and compact domain. A sequential optimization algorithm chooses a point x_t ∈ X at each time instant t = 1, 2, . . . , and observes y_t = f(x_t) + ε_t, where the noise sequence {ε_t}_{t=1}^∞ is assumed to be i.i.d. over t and R-sub-Gaussian for a fixed constant R ≥ 0, i.e., E[e^{ζ ε_t}] ≤ exp(ζ^2 R^2 / 2) for all ζ ∈ R and t ∈ N. We assume a regularity condition on the objective function f that is commonly adopted under kernelized learning models. Specifically, we assume that f lives in a Reproducing Kernel Hilbert Space (RKHS) associated with a positive definite kernel k : X × X → R. (Footnote 1: The RKHS, denoted by H_k, is a Hilbert space associated with a positive definite kernel k(·, ·) and is fully specified by the kernel and vice versa. It is endowed with an inner product 〈·, ·〉_k that obeys the reproducing property, i.e., g(x) = 〈g, k(x, ·)〉_k for all g ∈ H_k. The inner product also induces a norm ‖g‖_k = √〈g, g〉_k. This norm is a measure of the smoothness of the function f with respect to the kernel k and is finite if and only if f ∈ H_k.) The RKHS norm of f is assumed to be bounded by a known constant B, that is, ‖f‖_k ≤ B. We further assume that f is α-Hölder continuous, that is, |f(x) − f(x′)| ≤ L‖x − x′‖^α for all x, x′ ∈ X for some α ∈ (0, 1] and L > 0. This is a mild assumption, as it is a direct consequence of the RKHS assumption for commonly used kernels, as shown in [13]. We also assume the knowledge of an interval [a, b] such that f(x∗) ∈ [a, b]. This is also a mild assumption, as domain-specific knowledge often provides us with such bounds. For example, a common application of black-box optimization is hyperparameter tuning in deep learning models. The unknown function represents the accuracy of the model for a given set of hyperparameters. Since f represents the accuracy of the model, we have f(x∗) ∈ [0, 1]. For simplicity of notation, we assume X = [0, 1]^d and f(x∗) ∈ [0, 1]. It is straightforward to relax these assumptions to general compact domains and arbitrary bounded ranges [a, b]. Our objective is a computationally efficient algorithm with a guarantee on regret performance as defined in (1). We provide high-probability regret bounds that hold with probability at least 1 − δ_0 for any given δ_0 ∈ (0, 1), a stronger performance guarantee than bounds on expected regret.

2.2 Preliminaries on Gaussian processes
Under the GP model, the unknown function f is treated hypothetically as a realization of a Gaussian process over X. A Gaussian process {F(x)}_{x∈X} is fully specified by its mean function µ(·) and covariance function k(·, ·). All finite samples of the process are jointly Gaussian with mean E[F(x_i)] = µ(x_i) and covariance E[(F(x_i) − µ(x_i))(F(x_j) − µ(x_j))] = k(x_i, x_j) for 1 ≤ i, j ≤ n and n ∈ N [15]. The noise ε_t is also viewed as Gaussian. The conjugate property of Gaussian processes with Gaussian noise allows for a closed-form expression of the posterior distribution. Consider a set of observations H_t = {x_t, y_t}, where x_t = (x_1, x_2, . . . , x_t)^T and y_t = (y_1, y_2, . . . , y_t)^T. Here y_s = f(x_s) + ε_s, where x_s ∈ X and the ε_s are the zero-mean noise terms, i.i.d. over s for s ∈ N.
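For concreteness, a minimal sketch of the observation model y_t = f(x_t) + ε_t on [0, 1]^2, with a synthetic smooth test function taking values in [0, 1] (any bounded-norm RKHS function would do) and Gaussian, hence R-sub-Gaussian, noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative smooth objective on [0, 1]^2 with values in [0, 1].
    return 0.5 + 0.4 * np.sin(3.0 * x[0]) * np.cos(2.0 * x[1])

def noisy_oracle(x, noise_std=0.1):
    # y_t = f(x_t) + eps_t, with zero-mean Gaussian noise.
    return f(x) + noise_std * rng.normal()

x_t = rng.uniform(size=2)
y_t = noisy_oracle(x_t)
```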
Conditioned on the history of observations H_t, the posterior for f is also a Gaussian process with mean and covariance functions given as

µ_t(x) = E[F(x) | H_t] = k_{x_t,x}^T (K_{x_t,x_t} + λI)^{-1} y_t,   (2)

k_t(x, x′) = E[(F(x) − µ_t(x))(F(x′) − µ_t(x′)) | H_t] = k(x, x′) − k_{x_t,x}^T (K_{x_t,x_t} + λI)^{-1} k_{x_t,x′}.   (3)

In the above expressions, k_{x_t,x} = [k(x_1, x), . . . , k(x_t, x)]^T, K_{x_t,x_t} is the t × t covariance matrix [k(x_i, x_j)]_{i,j=1}^t, I is the t × t identity matrix, and λ is the variance of the Gaussian model assumed for the noise terms. Gaussian processes are powerful non-parametric Bayesian models for functions in RKHSs [16]. In particular, the mean function of the GP regression (eqn. (2)) lies in the RKHS with kernel k(·, ·) with high probability. We emphasize that the GP model of f and the Gaussian noise assumption are internal to the learning algorithm. The underlying objective function f is an arbitrary deterministic function in an RKHS, and the noise obeys an arbitrary R-sub-Gaussian distribution.

3 The GP-ThreDS Algorithm
In Sec. 3.1, we present the basic domain-shrinking structure of GP-ThreDS that continuously prunes sub-performing regions of X and zooms into increasingly smaller high-performing regions of X. In Sec. 3.2, we present the method for identifying high-performing regions of X.

3.1 Thresholded domain shrinking
GP-ThreDS operates in epochs. Each epoch completes one cycle of pruning, refining, and threshold updating as detailed below.
(i) Pruning: removing sub-performing regions of X from future consideration;
(ii) Refining: splitting high-performing regions of X into smaller regions for refined search (i.e., zooming in) in future epochs;
(iii) Threshold updating: updating the threshold on function values that defines the criterion for high/sub-performance to be used in the next epoch.
The pruning and refining conform to a binary-tree representation of X with nodes representing regions of X and edges the subset relation (i.e., region splitting). Throughout the paper, we use nodes and regions of X interchangeably. We explain the details with an example. Consider a one-dimensional function over X = [0, 1] as shown in Fig. 1. Assume that it is known that f(x∗) ∈ [0, 1.4]. The function threshold τ_1 defining the pruning criterion in the first epoch is set to the mid-point: τ_1 = 0.7. In epoch 1, the domain X is represented by a tree of height 1 with the root representing the entire domain [0, 1] and the two leaf nodes representing the two sub-intervals [0, 0.5] and (0.5, 1] (see Fig. 1).
[Figure 1: Thresholded domain shrinking. Figure 2: An illustration of the random-walk based search. (Node 6 is the single high-performing leaf node. If the random walk is currently at node 2, the correct direction is along the shortest path to node 6: via node 1 and then node 3.)]
In the pruning stage of this epoch, the algorithm determines, with a required confidence, whether each leaf node contains a point with function value exceeding τ_1. Such threshold-exceeding leaf nodes are referred to as high-performing nodes. Otherwise, they are called sub-performing nodes and are pruned, along with their ancestors, from the tree. Suppose that in this example, both sub-intervals [0, 0.5] and (0.5, 1] are identified as high-performing (see Sec. 3.2 on identifying high-performing nodes).
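A minimal numerical sketch of the posterior updates in Eqs. (2)-(3) (the SE kernel and its lengthscale are illustrative choices, not the paper's code):

```python
import numpy as np

def se_kernel(A, B, lengthscale=0.2):
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dist / lengthscale ** 2)

def gp_posterior(X, y, X_query, lam=0.01):
    """Posterior mean (Eq. (2)) and variance (diagonal of Eq. (3)) at X_query."""
    K_inv = np.linalg.inv(se_kernel(X, X) + lam * np.eye(len(X)))   # (K + lambda*I)^{-1}
    k_star = se_kernel(X, X_query)                                  # k_{x_t, x} per query point
    mu = k_star.T @ K_inv @ y                                       # Eq. (2)
    var = 1.0 - np.einsum('ij,ik,kj->j', k_star, K_inv, k_star)     # Eq. (3) with k(x, x) = 1
    return mu, np.maximum(var, 0.0)

# Toy usage: posterior over a random grid given five noisy observations.
X = np.random.rand(5, 2); y = np.random.rand(5)
grid = np.random.rand(100, 2)
mu, var = gp_posterior(X, y, grid)
ucb = mu + 2.0 * np.sqrt(var)   # a UCB score of the kind used by GP-UCB/IGP-UCB
```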
Consequently, no node is pruned, and the algorithm proceeds to the refining stage, where each sub-interval splits, and the tree grows to a height of 2 with four leaf nodes. The threshold is then updated to τ_2 = 0.95 (see below on threshold updating). The increased threshold reflects an adjustment toward more aggressive pruning in the next epoch, as suggested by the presence of (multiple) high-performing nodes in the current epoch. In the second epoch, the pruning stage aims to identify high-performing (defined by τ_2) nodes among the four leaf nodes. Suppose that leaf node (0.25, 0.5] is determined to be the only high-performing node. Then the nodes [0, 0.25], (0.5, 0.75] and (0.75, 1] and all their ancestors are pruned. In the refining stage, the high-performing node (0.25, 0.5] splits into two. The threshold is updated to τ_3. The algorithm then progresses into the third epoch, facing the same decision problem on the two leaf nodes (the two children of (0.25, 0.5]) of the pruned tree and following the same pruning-refining-threshold updating cycle. For a general d-dimensional problem, the basic structure is the same with three simple generalizations. First, the two children of any given node are formed by equally splitting the longest edge of the corresponding d-dimensional cuboid (ties broken arbitrarily). Second, in each epoch, the tree grows by d levels (d = 1 in the above example) in the refining stage by following successive binary splitting d times. Third, if no leaf node is identified as high-performing in an epoch k, then the refining stage is bypassed, and the algorithm repeats the search on the same tree (no pruning or refining) with a decreased threshold τ_{k+1} in the next epoch. The decreased threshold reflects a lowered estimate of f(x∗) based on the absence of high-performing nodes in the current epoch.

The thresholds {τ_k}_{k≥1} are updated iteratively using a binary search to approach f(x∗). For each epoch k, the algorithm maintains an interval [a_k, b_k] which is believed to contain f(x∗). The threshold τ_k is set to the mid-point of [a_k, b_k]. The initial interval [a_1, b_1] is set to the known range [a, b] of f(x∗). At the end of epoch k, if no leaf node is identified as high-performing, we set a_{k+1} = a_k − (b_k − a_k)/2 and b_{k+1} = b_k − (b_k − a_k)/2, which leads to a decreased threshold in the next epoch. Otherwise, we set a_{k+1} = τ_k − c·2^{−αρ_k/d + 1} and b_{k+1} = b_k, where ρ_k is the height of the tree before the pruning stage of epoch k, and c ∈ (0, 1/2) is a hyperparameter (specified in Sec. 3.2.2). We emphasize that while the proposed domain-shrinking approach conforms to an ever-growing tree, the algorithm can be implemented without storing the entire tree. The only information about the tree that needs to be maintained is the set D_k of high-performing leaf nodes identified in the pruning stage of each epoch k. A pseudo-code of GP-ThreDS is provided in the supplementary material.

3.2 Identifying high-performing nodes
We now specify the local algorithm for identifying high-performing nodes in a given epoch k. Recall that D_k denotes the set of high-performing nodes identified in epoch k. Each node in D_k has grown d levels and produced 2^d leaf nodes in the refining stage of epoch k. The objective of epoch k + 1 is to determine which of the 2^d |D_k| newly grown leaves are high-performing nodes defined by τ_{k+1}. In epoch k + 1, the only portion of the tree that is of interest is the |D_k| subtrees, each of height d with a root in D_k.
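A minimal sketch of the threshold-updating rule of Sec. 3.1 as described above (the function and argument names are ours; the pruning decision itself comes from the local sequential test of Sec. 3.2.2):

```python
def update_threshold(a_k, b_k, tau_k, found_high_performing, rho_k, d, alpha, c=0.25):
    """Binary-search style update of the interval [a_k, b_k] believed to contain f(x*)."""
    if not found_high_performing:
        shift = (b_k - a_k) / 2.0
        a_next, b_next = a_k - shift, b_k - shift      # shift interval down: lower threshold
    else:
        # Raise the lower end to tau_k minus the approximation term c * 2^(-alpha*rho_k/d + 1).
        a_next = tau_k - c * 2.0 ** (-alpha * rho_k / d + 1)
        b_next = b_k
    tau_next = (a_next + b_next) / 2.0                 # next threshold: the mid-point
    return a_next, b_next, tau_next
```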
Our approach is to treat these subtrees separately, one at a time. We can thus focus on one subtree to describe the algorithm for identifying which of the 2^d leaves are high-performing. The terms root, node, and leaf all pertain to this subtree. We also omit the epoch index for simplicity.

3.2.1 A random-walk based search for high-performing nodes
A straightforward approach to identifying the high-performing nodes is to test each of the 2^d leaf nodes directly. This, however, results in a large number of samples at suboptimal points when the dimension d is high. Our approach is inspired by the RWT (Random Walk on a Tree) algorithm recently proposed as a robust and adaptive algorithm for stochastic convex optimization [17, 18, 19]. Assume first that there is exactly one high-performing node among the 2^d leaf nodes. The basic idea is to devise a biased random walk on the tree that initiates at the root and walks towards the high-performing node at the leaf level. As illustrated in Fig. 2 with d = 2, at a non-leaf node, the random walk can take one of three directions: towards the parent or towards one of the two children (the parent of the root is itself). The correct direction is to walk along the shortest path to the high-performing leaf node. With the subset relation encoded by the tree, this implies moving to the child containing a threshold-exceeding point, or to the parent when neither child contains threshold-exceeding points. Hence, to guide the random walk, a local sequential test is carried out on the two children, one at a time, to determine, at a required confidence level, whether each is threshold-exceeding (see Sec. 3.2.2). The walk then moves to the first child identified as threshold-exceeding (if any) or to the parent otherwise. The confidence level of the local sequential test at each non-leaf node is only required to ensure the walk is correctly biased, i.e., the probability of walking in the correct direction is greater than 1/2. On reaching a leaf node, the algorithm enters the verification stage to determine whether this node is the high-performing leaf node. If the decision is no, it moves back to the parent of this leaf node, and the random walk resumes. If yes, the algorithm exits (under the assumption of a single high-performing leaf node). This decision can be made by carrying out the same local sequential test that guides the random walk at non-leaf nodes. The only difference is in the required confidence level. Given that a false positive at a leaf cannot be corrected due to exiting, while a false negative only resumes the random walk (hence is retractable in the future), the confidence level for a positive decision needs to be sufficiently high to ensure the overall regret performance, while a negative decision only needs to ensure the bias of the walk (as at the non-leaf nodes). When the number of high-performing leaf nodes is unknown and arbitrary in {0, 1, 2, . . . , 2^d}, multiple runs of the random walk are carried out to identify them one by one. In addition, a termination test on the root node is carried out before each run to determine whether there are still unidentified high-performing leaf nodes. See the supplementary for details along with a pseudo-code. We emphasize that the local test is carried out using only observations from the current visit to this node; observations from past visits are forgotten. This is to ensure the random-walk nature of the process for a tight analysis.
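A schematic sketch of the biased random walk, with the local sequential test abstracted into a boolean callback (the Node class and the callback are hypothetical scaffolding; the termination test over the root that handles the no-remaining-leaf case is omitted):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    region: tuple                                   # e.g. ((lo, hi), ...) per dimension
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)
    def is_leaf(self) -> bool:
        return not self.children

def rwt_search(root: Node, tau: float,
               test: Callable[[Node, float, float], bool],
               walk_conf: float = 0.6, leaf_conf: float = 0.99) -> Node:
    """Walk towards a single high-performing leaf; test(node, tau, conf) stands in
    for the local sequential test of Sec. 3.2.2 (accuracy > 1/2 suffices off-leaf)."""
    node = root
    while True:
        if node.is_leaf():
            if test(node, tau, leaf_conf):          # verification stage: high confidence
                return node
            node = node.parent or node              # false negative: resume the walk
            continue
        for child in node.children:
            if test(child, tau, walk_conf):
                node = child                        # move to the first positive child
                break
        else:
            node = node.parent or node              # neither child positive: move up
```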
A computational benefit is that the matrices being inverted to compute the posterior distribution are always small, improving the run-time efficiency of the algorithm.

3.2.2 The local sequential test
The last piece of the puzzle in GP-ThreDS is the local sequential test on a given node of a subtree. Given a node/region D ⊆ X, a threshold τ, and a confidence parameter η ∈ (0, 1), the local sequential test needs to determine, with a 1 − η confidence level, whether D contains a point with function value exceeding τ. The test first builds a discretization of the region D, denoted by the set D_g = {x_i}_{i=1}^{|D_g|}. The points in D_g are chosen to ensure that sup_{x∈D} inf_{y∈D_g} ‖x − y‖ ≤ ∆. A simple way to construct such a discretization is to use uniform grids parallel to the axes with a resolution small enough to satisfy the above constraint. The parameter ∆ in epoch k is set to ∆_k = (c/L)^{1/α} 2^{−ρ_k/d} and is used to control the approximation of the function values in D. Recall that L is the Hölder continuity constant while c ∈ (0, 1/2) is a hyperparameter. The local test sequentially queries points in the set D_g to locally estimate f. To determine whether there exists a point x ∈ D with f(x) ≥ τ, the test builds a pair of Upper and Lower Confidence Bounds using sequentially drawn samples and compares each of them to prescribed values. If the UCB goes below τ − L∆^α, indicating that the node is unlikely to contain a τ-exceeding point, the test terminates and outputs a negative outcome. On the other hand, if the LCB exceeds τ, then there is a τ-exceeding point with the required confidence level; the test terminates and outputs a positive outcome. If both the UCB and LCB are within their prescribed "uncertainty" range, the test draws one more sample and repeats the process. A cap is imposed on the total number of samples. Specifically, the test terminates and outputs a positive outcome when the total number of samples exceeds S̄(p, L∆^α). A description of the test for s ≥ 1 after being initialized with a point x_1 ∈ D_g is given in Fig. 3. We would like to emphasize that the posterior mean and variance µ_{s−1} and σ^2_{s−1} considered in the description below are constructed only from the samples collected during that particular visit to the current node. The parameter β_s(ν) := B + R √(2(γ_{s−1} + 1 + log(1/ν))) for ν ∈ (0, 1). γ_t is the maximum information gain at time t, defined as γ_t := max_{A⊂X : |A|=t} I(y_A; f_A). Here, I(y_A; f_A) denotes the mutual information between f_A = [f(x)]_{x∈A} and y_A = f_A + ε_A. Bounds on γ_t for several common kernels are known [20, 21] and are sublinear functions of t. The cap S̄(η, L∆^α) on the maximum number of samples is given by

S̄(η, L∆^α) = min{ t ∈ N : 2(1 + 2λ) β_t(η) |D_g|^{1/2} / ((L∆^α) √t) ≤ 1 } + 1.   (4)

The cap on the total number of samples prevents the algorithm from wasting too many queries on suboptimal nodes. Without such a cap, the expected number of queries issued by the local test is inversely proportional to |f(x∗_{D_g}) − τ|, where x∗_{D_g} = arg max_{x∈D_g} f(x). Consequently, small values of |f(x∗_{D_g}) − τ| would lead to a large number of queries at highly suboptimal points when f(x∗_{D_g}) is far from f(x∗). The cap on the number of samples thus helps control the growth of regret at the cost of a potential increase in the approximation error. It also reduces the cost of computing the posterior distribution by limiting the number of queries at a node.
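A schematic sketch of the local sequential test loop; the posterior and the β_s-based confidence width follow Eqs. (2)-(4), while the query-selection rule and the helper signatures here are simplifying assumptions rather than the paper's exact procedure:

```python
import numpy as np

def local_sequential_test(grid, oracle, tau, L, delta, alpha,
                          beta_fn, posterior_fn, max_samples):
    """Return +1 if the region discretized by `grid` appears to contain a
    tau-exceeding point, -1 otherwise (sketch only)."""
    X, y, s = [], [], 0
    slack = L * delta ** alpha
    while True:
        x = grid[s % len(grid)]                      # simple round-robin query rule
        X.append(x); y.append(oracle(x)); s += 1
        mu, var = posterior_fn(np.array(X), np.array(y), grid)   # Eqs. (2)-(3)
        width = beta_fn(s) * np.sqrt(var)
        if (mu + width).max() < tau - slack:
            return -1                                # confidently no tau-exceeding point
        if (mu - width).max() > tau:
            return +1                                # confidently found one
        if s >= max_samples:                         # the cap S-bar of Eq. (4)
            return +1
```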
Note that when the sequential test reaches the maximum allowable number of samples and exits with an outcome of +1, it is possible that f(x∗_{D_g}) < τ (i.e., there are no τ-exceeding points in D_g). Thus, τ may not be a lower bound for the updated belief of f(x∗), as one would expect in the case of an output of +1 from the sequential test. However, using Lemma 2, we can obtain a high-probability lower bound on τ − f(x∗_{D_g}). This additional error term is taken into account while updating the threshold as described in Sec. 3.1. The hyperparameter c trades off this error with the size of the discretization. The sequential test can be easily modified to offer asymmetric confidence levels for declaring positive and negative outcomes (as required in the verification stage of the RWT search) by changing the confidence parameter in β_s. Details are given in the supplementary material. We point out that the construction of the UCB is based on the UCB score employed in IGP-UCB [6]. It is straightforward to replace it with other types of UCB scores. The basic thresholded domain shrinking structure of the proposed algorithm is independent of the specific UCB score, hence generally applicable as a method for improving the computational efficiency and regret performance of the GP-UCB family of algorithms.

4 Performance Analysis
In this section, we analyze the regret and computational complexity of GP-ThreDS. Throughout the section, D ⊆ X denotes a node visited by GP-ThreDS, D_g denotes its associated discretization, constructed as described in Sec. 3.2.2, and x∗_{D_g} = arg max_{x∈D_g} f(x).

4.1 Regret Analysis
The following theorem establishes the regret order of GP-ThreDS.

Theorem 1. Consider the GP-ThreDS algorithm as described in Sec. 3. Then, for any δ_0 ∈ (0, 1), with probability at least 1 − δ_0, the regret incurred by the algorithm is given as R(T) = O(√(T γ_T) · log T · (log T + √(log T · log(1/δ_0)))).

We provide here a sketch of the proof. The regret incurred by GP-ThreDS is analysed by decomposing it into two terms: the regret in the first k_0 epochs, referred to as R_1, and the regret after the completion of the first k_0 epochs, referred to as R_2, where k_0 = max{k : ρ_k ≤ (d/(2α)) log T}. To bound R_1, we first bound the regret incurred at any node visited during the first k_0 epochs using the following decomposition of the instantaneous regret: f(x∗) − f(x_t) = [f(x∗) − τ_k + L∆_k^α] + [τ_k − f(x∗_{D_g}) − L∆_k^α] + [f(x∗_{D_g}) − f(x_t)]. In the above decomposition, k denotes the epoch index during which the node is visited. Each of these three terms is then bounded separately. The third term in the expression is bounded using an approach similar to the analysis of IGP-UCB [6] (notice that x_t is the maximizer of the UCB score), namely, bounding it by the cumulative standard deviation ∑_{s=1}^t σ_{s−1}(x_s).

Lemma 1. For any set of sampling points {x_1, x_2, . . . , x_t} chosen from D_g (under any choice of algorithm), the following relation holds: ∑_{s=1}^t σ_{s−1}(x_s) ≤ (1 + 2λ) √(|D_g| t), where σ_s(x) is defined in (3).

Since GP-ThreDS ensures a constant-sized discretization at all times (see Lemma 4), the above lemma implies that the sum of posterior standard deviations is O(√t), resulting in a tight bound on the third term (a factor of O(√γ_t) tighter than the bound for IGP-UCB, which optimizes the UCB score over the entire domain). The first two terms are bounded using the following lemma with an appropriate choice of ∆_f.

Lemma 2.
If the local test is terminated by the termination condition at instant S̄(δ_2, ∆_f) as defined in (4), then with probability at least 1 − δ_2, we have τ − L∆^α − ∆_f ≤ f(x∗_{D_g}) ≤ τ + ∆_f.

The final bound on R_1 is obtained by combining the upper bound on the regret at each node with the bound on the total number of nodes visited by GP-ThreDS, captured in the following lemma.

Lemma 3. Consider the random-walk based routine described in Section 3.2 with a local confidence parameter p ∈ (0, 1/2). Then with probability at least 1 − δ_1, one iteration of RWT visits fewer than log(d/δ_1) / (2(p − 1/2)^2) nodes before termination.

To bound R_2, we bound the difference in function values using the Hölder continuity of the function along with the upper bound on the diameter of the nodes after k_0 epochs. Adding the bounds on R_1 and R_2, we arrive at the theorem. The detailed proofs are provided in the supplementary material. We would like to point out that the regret analysis depends on the choice of the UCB score. While we have used the UCB score of IGP-UCB, the analysis is straightforward to extend to other UCB scores.

Remark 1. We note that our assumptions are consistent with those used in proving the lower bounds. In particular, the lower bounds are proven for the Matérn family of kernels, including the SE kernel, in [4]. [13, Proposition 1] proves the Hölder continuity of this family of kernels. Thus, our assumption on Hölder continuity is consistent with the lower bound. In addition, the proof of the lower bound considers a class of functions whose RKHS norm is upper bounded by a known constant [4, Sec 1.1]. This upper bound translates to an upper bound on the absolute value of f, which is consistent with our assumption of a finite range for f.

4.2 Computational Complexity
The following theorem bounds the worst-case overall computational complexity of GP-ThreDS.

Theorem 2. The worst-case overall computational complexity of GP-ThreDS is O(T^4), where T is the time horizon.

The proof of the theorem follows from the following lemma.

Lemma 4. The number of points in the discretization, |D_g|, for any node D, is upper bounded by a constant, independent of time, i.e., |D_g| = O(1) for all t ≤ T.

From the lemma, we can conclude that the number of UCB score evaluations in GP-ThreDS is constant at all times t; hence matrix inversion becomes the dominant source of computational complexity. Since no more than t samples are used to compute the posterior distribution at time t, the worst-case cost associated with the matrix inversion step is O(t^3), and consequently the worst-case computational complexity of GP-ThreDS is O(T^4), leading to computational savings of O(T^{2d-1}) over the GP-UCB family of algorithms. Lemma 4 is proven by showing that the rate of domain-size shrinking matches the rate of refinement of the discretization across epochs. Thus, the size of the discretization does not need to increase with time. Please refer to the supplementary for a detailed proof. While the discretization does not grow with t, it is exponential in d. Since non-convex optimization is NP-hard, such an exponential dependence on d is inevitable for maintaining the optimal learning efficiency. In this work, we focus on reducing the computational complexity with respect to the time horizon T. The proposed domain shrinking technique can be used in conjunction with dimension reduction techniques (e.g., [22]) to achieve efficiency in both T and d (although at the price of invalidating the regret bounds).
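As a brief worked check, the O(T^4) total follows from summing the dominant per-step inversion cost over the horizon:

\sum_{t=1}^{T} t^3 = (T(T+1)/2)^2 = O(T^4).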
5 Empirical Studies
In this section, we compare the performance of GP-ThreDS with several commonly used Bayesian optimization algorithms: IGP-UCB [6], Adaptive Discretization (AD) [23], Expected Improvement (EI) [24] and Probability of Improvement (PI) [25]. For the local test of GP-ThreDS we use the exact same UCB score as the one in IGP-UCB. We compare these algorithms on two standard benchmark functions for Bayesian optimization: Branin and Rosenbrock (see [26, 27] as well as the supplementary material for their analytical expressions). We use the SE kernel with a lengthscale of l = 0.2 on the domain [0, 1]^2. We use Gaussian noise with a variance of 0.01. The parameters λ in the GP model and R in β_t are also set to 0.01. The value of δ_0 is set to 10^{-3}. To limit the computational cost in the standard implementation of IGP-UCB, we consider a maximum of 6400 points in the grid. Figures 4a and 4b show the per-sample average regret, in log scale, measured every 0.1 seconds of wall-clock time. Specifically, within a given time (the X-axis of the figures), different algorithms process different numbers of samples, determined by their computational complexity. The average per-sample regret is then shown against the time taken. The plots are the average performance over 10 Monte Carlo runs. As expected from the theoretical results, GP-ThreDS achieves the best performance, especially as time grows. Figure 4c directly compares the computation time of all algorithms for processing 1000 samples, averaged over 10 Monte Carlo runs. GP-ThreDS enjoys a much smaller computation cost in terms of time taken (in seconds). The details of the algorithm parameters and benchmark functions, as well as additional experiments on hyperparameter tuning of a convolutional neural network for image classification, are in the supplementary.

6 Conclusion
A GP-based algorithm with a regret of Õ(√(T γ_T)) for black-box optimization under noisy bandit feedback was proposed. This is order-optimal, up to poly-logarithmic factors, for the cases where a lower bound on regret is known. The proposed approach is rooted in the methodology of domain shrinking realized through a sequence of tree-based region pruning and refining to concentrate queries in high-performing regions of the function domain. It offers high learning efficiency, allows a tight regret analysis, and achieves a computational saving of O(T^{2d-1}) over the GP-UCB family of algorithms.

Acknowledgments and Disclosure of Funding
The work of Sudeep Salgia and Qing Zhao was supported by the National Science Foundation under Grants CCF-1815559 and CCF-1934985. The work of Sattar Vakili was supported by MediaTek Research. We would also like to thank the anonymous reviewers for their constructive comments.
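For reference, the standard analytical forms of the two benchmark functions used in Sec. 5 are sketched below (the paper optimizes over [0, 1]^2, so inputs are presumably rescaled to the functions' usual domains and the minimization benchmarks negated for maximization; the exact transformation is in the supplementary):

```python
import numpy as np

def branin(x1, x2):
    # Standard Branin function, usually defined for x1 in [-5, 10], x2 in [0, 15].
    a, b, c = 1.0, 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

def rosenbrock(x1, x2):
    # Standard 2-D Rosenbrock function; global minimum 0 at (1, 1).
    return (1.0 - x1) ** 2 + 100.0 * (x2 - x1 ** 2) ** 2
```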
1. What is the primary contribution of the paper in terms of Bayesian optimization?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and empirical results?
3. Are there any suggestions or questions regarding the clarity and illustration of the method's concept?
4. How can the algorithm's time efficiency be improved?
5. What are some potential limitations or areas for improvement regarding the benchmark functions used in the paper's experiments?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a Gaussian process-based Bayesian optimization algorithm for solving black-box optimization problems with an order-optimal regret guarantee. The main contribution of the paper is the domain-shrinking strategy, implemented through tree-based region pruning and focusing on high-performing regions, to offer high learning and computational efficiency. The paper presents solid theoretical proofs in its performance analysis, as well as empirical results on benchmark problems.

Review
The paper is well-written, clear, and easy to follow. It solves an interesting problem with many potential applications. It is theoretically sound and includes good empirical studies. A few comments and questions for potential improvement/clarification of the method:
- Section 1.2: an illustration of the general idea (the domain-shrinking concept) would help in understanding the method. Potentially simply refer to Figure 1.
- Figure 1: Extend the caption to explain what is presented in the figure and what each symbol stands for.
- Could the algorithm be parallelized to improve the time efficiency?
- Only 2 benchmark functions are shown in the results of the paper. More test problems would be appreciated.
NIPS
Title A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance Abstract We consider sequential optimization of an unknown function in a reproducing kernel Hilbert space. We propose a Gaussian process-based algorithm and establish its order-optimal regret performance (up to a poly-logarithmic factor). This is the first GP-based algorithm with an order-optimal regret guarantee. The proposed algorithm is rooted in the methodology of domain shrinking realized through a sequence of tree-based region pruning and refining to concentrate queries in increasingly smaller high-performing regions of the function domain. The search for high-performing regions is localized and guided by an iterative estimation of the optimal function value to ensure both learning efficiency and computational efficiency. Compared with the prevailing GP-UCB family of algorithms, the proposed algorithm reduces computational complexity by a factor of O(T 2d−1) (where T is the time horizon and d the dimension of the function domain). 1 Introduction Consider a black-box optimization problem with an unknown objective function f : X → R, where X ⊂ Rd is a convex and compact set. The learner can access the function only through a noisy oracle, which, when queried with a point x ∈ X , returns a noisy function value at that point. The learning objective is to approach the maximizer x∗ of the function through a sequence of query points {xt}Tt=1 chosen sequentially in time. The learning efficiency is measured by cumulative regret given by R(T ) = T∑ t=1 [f(x∗)− f(xt)] . (1) This cumulative regret measure dictates the online nature of the problem: every query point during the learning process carries loss, not just the end point xT after learning concludes. The classical exploration-exploitation tradeoff in online learning hence ensues. 1.1 Gaussian Process Models The above problem is ill-posed unless certain structure of the unknown objective function f is assumed to make learning x∗ feasible. One such structural assumption is the convexity of f , which leads to the class of stochastic convex optimization problems. Another class of black-box optimization problems that is gaining interest in recent years is kernel-based learning where f is assumed to live in a Reproducing Kernel Hilbert Space (RKHS) associated with a positive-definite kernel. An effective approach to kernel-based black-box optimization is Bayesian optimization that adopts a fictitious prior on the unknown function f . In other words, while f is deterministic, it is viewed internally by 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the learning algorithm as a realization of a random process over X . A natural choice is the Gaussian process (GP) with a Gaussian prior due to the conjugate property that significantly simplifies the analytical form of the posterior distribution at each newly obtained observation. In a celebrated work, Srinivas et al. [1] proposed the GP-UCB algorithm that constructs a proxy of f using the upper confidence bound (UCB) concept first introduced in the classical multi-armed bandit problem [2, 3]. Specifically, at each time instant t, a UCB of f is constructed using the closed-form posterior mean and standard deviation of the GP model of f . The algorithm then sets the next query point to be the maximizer of the UCB. Several variations of GP-UCB, tailored for different settings (see Sec 1.3), have since been developed. 
The GP-UCB family of algorithms generally enjoy good empirical performance in terms of regret. The analytical guarantees of their regret performance, however, leave considerable gaps to the existing lower bound [4]. More significantly, the state-of-the-art regret bound of GP-UCB does not guarantee a sublinear order in T for certain kernels, hence a lack of guaranteed convergence to f(x∗) [4, 5]. Another difficulty with the GP-UCB family of algorithms is their computational complexity, which can be prohibitive as the dimension d and/or the horizon length T grows. The computational complexity has two main sources: (i) the inversion of the covariance matrix in updating the posterior GP distribution, which has an O(t3) complexity with t samples; (ii) the maximization of the UCB proxy over the entire domain X at each time instant. In particular, due to the multi-modality of the UCB score, its maximization is often carried out using a grid search with an increasingly finer discretization of the entire domain. Specifically, due to analytical requirements, the discretization is typically assumed to grow in the order of O(t2d) [1, 6], resulting in an overall computational complexity of O(T 2d+3). Several studies exist that tackle the first source of high complexity of GP-UCB, using sparse matrix approximation techniques to reduce the complexity in the inversion of the covariance matrix (see, e.g., [7, 8]). The second source, which is the dominating factor, has not been effectively addressed. 1.2 Main results The goal of this work is to develop a GP-based Bayesian optimization algorithm with a regret guarantee that closes the gap to the lower bound. Furthermore, we tackle the second source of the complexity to ensure both learning efficiency and computational efficiency. Referred to as GP-ThreDS (Thresholded Domain Shrinking), the proposed algorithm is rooted in the methodology of domain shrinking: it continuously prunes sub-performing regions of the domain X and zooms into increasingly smaller high-performing regions of X as time goes. The purpose of the domain shrinking is twofold. First, it ensures high learning efficiency by focusing queries on regions of X with function values approaching f(x∗). Second, it achieves computational efficiency by avoiding a global maximization of the proxy function over the entire domain X . Our specific approach to domain shrinking is built upon a sequence of localized searches on a growing binary tree that forms successively refined partitions of X . Starting from the root of the tree that represents the entire domain, the search progresses down the tree by adaptively pruning nodes that do not contain the maximizer with high probability, consequently zooming into increasingly smaller high-performing regions of X as the search deepens. Another progressive thread in this sequence of localized searches is the criterion for pruning the tree. Each localized search aims to identify nodes at a certain depth of the tree that contain points with function values exceeding a given threshold. The threshold is updated iteratively to approach the maximum function value f(x∗). More succinctly, the proposed algorithm is a sequence of localized searches in the domain of the function guided by an iterative search in the range of the function. The above domain shrinking approach via localized search is the primary contributing factor to improved performance in terms of both regret guarantee and computational complexity. 
In particular, the rate of domain shrinking is controlled to ensure not only the concentration of query points in highperforming regions, but also a constant-sized discretization at all times when estimating the function values. This constant-sized discretization allows a tighter regret analysis and results in a regret upper bound for GP-ThreDS that matches with the lower bound (up to a poly-logarithmic factor). We show that the regret of GP-ThreDS is O( √ TγT ) (up to a poly-logarithmic factor), where γT denotes the maximum information gain after T steps and is representative of the effective dimension of the problem [9, 10]. In the case of Matérn and Squared Exponential (SE) kernels where the lower bounds on regret are known, on substituting the improved bounds on γT from [11], our results match the lower bounds and close the gap reported in [4, 12]. In comparison, the state-of-the-art analysis of GP-UCB yields an O(γT √ T ) regret bound [e.g., see, 6, Theorem 3]. The O( √ γT ) gap between the regret guarantees of GP-UCB and the proposed GP-ThreDS is significant: it can grow polynomially in T (e.g. in the case of Matérn kernel). Computation-wise, the constant-sized discretization contrasts sharply with the growing (at rateO(t2d) with time t) discretization required by the GP-UCB family of algorithms. Another factor contributing to the reduced complexity is the relaxed search criterion that aims to determine only the existence of threshold-exceeding points, in contrast to finding a global maximizer as in the GP-UCB family of algorithms. As a result, GP-ThreDS reduces the computational complexity from O(T 2d+3) as required by GP-UCB family of algorithms to O(T 4). 1.3 Related Work There is a vast body of literature on numerical and theoretical analysis of Bayesian optimization algorithms. With our focus on a computationally efficient algorithm with a provable regret guarantee, the most relevant results to ours are [1] and [6] discussed above. [6] also proved the same O(γT √ T ) regret holds for GP-TS, a Bayesian optimization algorithm based on Thompson sampling principle. Augmenting GP models with local polynomial estimators, [13] introduced LP-GP-UCB and established improved regret bounds for it under special cases [see, 13, Sec. 3.2]. However, for other cases, the regret guarantees for LP-GP-UCB remain in the same order as GP-UCB. More recently, [5] introduced π-GP-UCB, specific to Matérn family of kernels, that constructs a cover for the search space, as many hypercubes, and fits an independent GP to each cover element. This algorithm was proven to achieve sublinear regret across all parameters of the Matérn family. Almost all other algorithms in the GP-UCB family have a regret guarantee of O(γT √ T ), which is O( √ γT ) greater than the lower bound and can grow polynomially in T . Two exceptions to this are the SupKernelUCB and the RIPS algorithms proposed in [10] and [14] which achieve a regret of O( √ TγT ) for discrete action spaces.While this may be extendable to continuous spaces via a discretization argument as recently pointed out in [5, 12], the required discretization needs to grow polynomially in T , making it computationally expensive. Moreover, it has been noted that SupKernelUCB performs poorly in practice [5, 8, 12]. GP-ThreDS, on the other hand, is a computationally efficient algorithm that achieves tight regret bounds with good empirical performance (see Sec. 5). 
A comparison with other related works including the ones in different settings such as noise-free observations and random f are deferred to the supplementary. 2 Problem Statement 2.1 Problem Formulation We consider the problem of optimizing a fixed and unknown function f : X → R, where X ⊂ Rd is a convex and compact domain. A sequential optimization algorithm chooses a point xt ∈ X at each time instant t = 1, 2, . . . , and observes yt = f(xt)+ t, where the noise sequence { t}∞t=1 is assumed to be i.i.d. over t and R-sub-Gaussian for a fixed constant R ≥ 0, i.e., E [ eζ t ] ≤ exp ( ζ2R2/2 ) for all ζ ∈ R and t ∈ N. We assume a regularity condition on the objective function f that is commonly adopted under kernelized learning models. Specifically, we assume that f lives in a Reproducing Kernel Hilbert Space (RKHS)1 associated with a positive definite kernel k : X × X → R. The RKHS norm of f is assumed to be bounded by a known constant B, that is, ‖f‖k ≤ B. We further assume that f is α-Hölder continuous, that is, |f(x) − f(x′)| ≤ L‖x − x′‖α for all x, x′ ∈ X for some α ∈ (0, 1] and L > 0. This is a mild assumption as this is a direct consequence of RKHS assumption for commonly used kernels as shown in [13]. We also assume the knowledge of an interval [a, b], such 1The RKHS, denoted by Hk, is a Hilbert space associated with a positive definite kernel k(·, ·) and is fully specified by the kernel and vice versa. It is endowed with an inner product 〈·〉k that obeys the reproducing property, i.e., g(x) = 〈g, k(x, ·)〉k for all g ∈ Hk. The inner product also induces a norm ‖g‖k = 〈g, g〉k. This norm is a measure of the smoothness of the function f with respect to the kernel k and is finite if and only if f ∈ Hk. that f(x∗) ∈ [a, b]. This is also a mild assumption as domain-specific knowledge often provides us with bounds. For example, a common application of black-box optimization is hyperparameter tuning in deep learning models. The unknown function represents the accuracy of the model for a given set of hyperparameters. Since f represents the accuracy of the model, we have f(x∗) ∈ [0, 1]. For simplicity of notation, we assume X = [0, 1]d and f(x∗) ∈ [0, 1]. It is straightforward to relax these assumptions to general compact domains and arbitrary bounded ranges [a, b]. Our objective is a computationally efficient algorithm with a guarantee on regret performance as defined in (1). We provide high probability regret bounds that hold with probability at least 1− δ0 for any given δ0 ∈ (0, 1), a stronger performance guarantee than bounds on expected regret. 2.2 Preliminaries on Gaussian processes Under the GP model, the unknown function f is treated hypothetically as a realization of a Gaussian process over X . A Gaussian Process {F (x)}x∈X is fully specified by its mean function µ(·) and covariance function k(·, ·). All finite samples of the process are jointly Gaussian with mean E[F (xi)] = µ(xi) and covariance E[(F (xi)−µ(xi))(F (xj)−µ(xj))] = k(xi, xj) for 1 ≤ i, j ≤ n and n ∈ N [15]. The noise t is also viewed as Gaussian. The conjugate property of Gaussian processes with Gaussian noise allows for a closed-form expression of the posterior distribution. Consider a set of observations Ht = {xt,yt} where xt = (x1, x2, . . . , xt) T and yt = (y1, y2, . . . , yt)T . Here ys = f(xs) + s where xs ∈ X and s are the zero-mean noise terms, i.i.d. over s for s ∈ N. 
Conditioned on the history of observations Ht, the posterior for f is also a Gaussian process with mean and covariance functions given as µt(x) = E [F (x)|Ht] = kTxt,x (Kxt,xt + λI) −1 yt (2) kt(x, x ′) = E [(F (x)− µt(x))(F (x′)− µt(x′))|Ht] = k(x, x′)− kTxt,x (Kxt,xt + λI) −1 kxt,x′ . (3) In the above expressions, kxt,x = [k(x1, x), . . . , k(xt, x)] T , Kxt,xt is the t × t covariance matrix [k(xi, xj)] t i,j=1, I is the t× t identity matrix and λ is the variance of the Gaussian model assumed for the noise terms. Gaussian processes are powerful non-parametric Bayesian models for functions in RKHSs [16]. In particular, the mean function of the GP regression (eqn. (2)) lies in the RKHS with kernel k(·, ·) with high probability. We emphasize that the GP model of f and the Gaussian noise assumption are internal to the learning algorithm. The underlying objective function f is an arbitrary deterministic function in an RKHS, and the noise obeys an arbitrary R-sub-Gaussian distribution. 3 The GP-ThreDS Algorithm In Sec. 3.1, we present the basic domain-shrinking structure of GP-ThreDS that continuously prunes sub-performing regions of X and zooms into increasingly smaller high-performing regions of X . In Sec. 3.2, we present the method for identifying high-performing regions of X . 3.1 Thresholded domain shrinking GP-ThreDS operates in epochs. Each epoch completes one cycle of pruning, refining, and threshold updating as detailed below. (i) Pruning: removing sub-performing regions of X from future consideration; (ii) Refining: splitting high-performing regions of X into smaller regions for refined search (i.e., zooming in) in future epochs; (iii) Threshold updating: updating the threshold on function values that defines the criterion for high/sub-performance to be used in the next epoch. The pruning and refining conform to a binary-tree representation of X with nodes representing regions of X and edges the subset relation (i.e., region splitting). Throughout the paper, we use nodes and regions of X interchangeably. We explain the details with an example. Consider a one-dimensional function over X = [0, 1] as shown in Fig. 1. Assume that it is known f(x∗) ∈ [0, 1.4]. The function threshold τ1 defining the pruning criterion in the first epoch is set to the mid-point: τ1 = 0.7. In epoch 1, the domain X is represented by a tree of height 1 with the root representing the entire domain [0, 1] and the two leaf f(x) x 0 0 0 0.25 0.5 0.5 0.75 1 1 1 ⌧1 = 0.7 ⌧2 = 0.95 D = 0 D = 1 D = 2 Figure 1: Thresholded domain shrinking. 1 2 3 4 5 6 7 Figure 2: An illustration of the random-walk based search. (Node 6 is the single highperforming leaf node. If the random walk is currently at node 2, the correct direction is along the shortest path to node 6: via node 1 and then node 3.) nodes representing the two sub-intervals [0, 0.5] and (0.5, 1] (see Fig. 1). In the pruning stage of this epoch, the algorithm determines, with a required confidence, whether each leaf node contains a point with function value exceeding τ1. Such threshold-exceeding leaf nodes are referred to as high-performing nodes. Otherwise, they are called sub-performing nodes and are pruned, along with their ancestors, from the tree. Suppose that in this example, both sub-intervals [0, 0.5] and (0.5, 1] are identified as high-performing (see Sec. 3.2 on identifying high-performing nodes). 
Consequently, no node is pruned, and the algorithm proceeds to the refining stage, where each sub-interval splits and the tree grows to a height of 2 with four leaf nodes. The threshold is then updated to τ_2 = 0.95 (see below on threshold updating). The increased threshold reflects an adjustment toward more aggressive pruning in the next epoch, as suggested by the presence of (multiple) high-performing nodes in the current epoch. In the second epoch, the pruning stage aims to identify high-performing (defined by τ_2) nodes among the four leaf nodes. Suppose that leaf node (0.25, 0.5] is determined to be the only high-performing node. Then the nodes [0, 0.25], (0.5, 0.75] and (0.75, 1] and all their ancestors are pruned. In the refining stage, the high-performing node (0.25, 0.5] splits into two. The threshold is updated to τ_3. The algorithm then progresses into the third epoch, facing the same decision problem on the two leaf nodes (the two children of (0.25, 0.5]) of the pruned tree and following the same pruning-refining-threshold-updating cycle.

For a general d-dimensional problem, the basic structure is the same, with three simple generalizations. First, the two children of any given node are formed by equally splitting the longest edge of the corresponding d-dimensional cuboid (ties broken arbitrarily). Second, in each epoch, the tree grows by d levels (d = 1 in the above example) in the refining stage by following successive binary splitting d times. The last detail to specify is that if no leaf node is identified as high-performing in an epoch k, then the refining stage is bypassed, and the algorithm repeats the search on the same tree (no pruning or refining) with a decreased threshold τ_{k+1} in the next epoch. The decreased threshold reflects a lowered estimate of f(x∗) based on the absence of high-performing nodes in the current epoch.

The thresholds {τ_k}_{k≥1} are updated iteratively using a binary search to approach f(x∗). For each epoch k, the algorithm maintains an interval [a_k, b_k] which is believed to contain f(x∗). The threshold τ_k is set to the mid-point of [a_k, b_k]. The initial interval [a_1, b_1] is set to the known range [a, b] of f(x∗). At the end of epoch k, if no leaf node is identified as high-performing, we set a_{k+1} = a_k − (b_k − a_k)/2 and b_{k+1} = b_k − (b_k − a_k)/2, which leads to a decreased threshold in the next epoch. Otherwise, we set a_{k+1} = τ_k − c·2^{−αρ_k/d + 1} and b_{k+1} = b_k, where ρ_k is the height of the tree before the pruning stage of epoch k, and c ∈ (0, 1/2) is a hyperparameter (specified in Sec. 3.2.2).

We emphasize that while the proposed domain-shrinking approach conforms to an ever-growing tree, the algorithm can be implemented without storing the entire tree. The only information about the tree that needs to be maintained is the set D_k of high-performing leaf nodes identified in the pruning stage of each epoch k. A pseudo-code of GP-ThreDS is provided in the supplementary material.

3.2 Identifying high-performing nodes

We now specify the local algorithm for identifying high-performing nodes in a given epoch k. Recall that D_k denotes the set of high-performing nodes identified in epoch k. Each node in D_k has grown d levels and produced 2^d leaf nodes in the refining stage of epoch k. The objective of epoch k + 1 is to determine which of the 2^d |D_k| newly grown leaves are high-performing nodes defined by τ_{k+1}. In epoch k + 1, the only portion of the tree that is of interest is the |D_k| subtrees, each of height d with a root in D_k.
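Before moving on, the two mechanical pieces of Sec. 3.1 above, the binary-search style threshold update and the longest-edge split used in the refining stage, can be transcribed directly into code. The sketch below is only an illustration of those rules; the parameter values are placeholders and the test for high-performing nodes is abstracted away.

```python
import numpy as np

def update_interval(a_k, b_k, tau_k, found_high_performing, rho_k, c=0.25, alpha=1.0, d=2):
    # Threshold-updating rule of Sec. 3.1: shift the interval down when no
    # high-performing node is found; otherwise raise its lower end.
    if not found_high_performing:
        shift = (b_k - a_k) / 2.0
        return a_k - shift, b_k - shift
    return tau_k - c * 2.0 ** (-alpha * rho_k / d + 1), b_k

def split_longest_edge(lo, hi):
    # Refining step: split a d-dimensional cuboid [lo, hi] into two children
    # along its longest edge (ties broken by argmax).
    j = int(np.argmax(hi - lo))
    mid = (lo[j] + hi[j]) / 2.0
    lo_right, hi_left = lo.copy(), hi.copy()
    hi_left[j] = mid
    lo_right[j] = mid
    return (lo, hi_left), (lo_right, hi)

# In every epoch, the pruning threshold is the mid-point of the current interval:
# tau_k = (a_k + b_k) / 2
```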
Our approach is to treat these subtrees separately, one at a time. We can thus focus on one subtree to describe the algorithm for identifying which of its 2^d leaves are high-performing. The terms root, node, and leaf all pertain to this subtree. We also omit the epoch index for simplicity.

3.2.1 A random-walk based search for high-performing nodes

A straightforward approach to identifying the high-performing nodes is to test each of the 2^d leaf nodes directly. This, however, results in a large number of samples at suboptimal points when the dimension d is high. Our approach is instead inspired by the RWT (Random Walk on a Tree) algorithm recently proposed as a robust and adaptive algorithm for stochastic convex optimization [17, 18, 19].

Assume first that there is exactly one high-performing node among the 2^d leaf nodes. The basic idea is to devise a biased random walk on the tree that starts at the root and walks towards the high-performing node at the leaf level. As illustrated in Fig. 2 with d = 2, at a non-leaf node the random walk can take one of three directions: towards the parent or towards one of the two children (the parent of the root is itself). The correct direction is to walk along the shortest path to the high-performing leaf node. With the subset relation encoded by the tree, this means moving to the child containing a threshold-exceeding point, or to the parent when neither child contains threshold-exceeding points. Hence, to guide the random walk, a local sequential test is carried out on the two children, one at a time, to determine, at a required confidence level, whether each is threshold-exceeding (see Sec. 3.2.2). The walk then moves to the first child identified as threshold-exceeding (if any), or to the parent otherwise. The confidence level of the local sequential test at each non-leaf node is only required to ensure the walk is correctly biased, i.e., that the probability of walking in the correct direction is greater than 1/2.

On reaching a leaf node, the algorithm enters the verification stage to determine whether this node is the high-performing leaf node. If the decision is no, it moves back to the parent of this leaf node, and the random walk resumes. If yes, the algorithm exits (under the assumption of a single high-performing leaf node). This decision can be made by carrying out the same local sequential test that guides the random walk at non-leaf nodes; the only difference is in the required confidence level. Given that a false positive at a leaf cannot be corrected (since the algorithm exits), while a false negative only resumes the random walk (and is hence retractable in the future), the confidence level for a positive decision needs to be sufficiently high to ensure the overall regret performance, whereas a negative decision only needs to preserve the bias of the walk (as at the non-leaf nodes).

When the number of high-performing leaf nodes is unknown and arbitrary in {0, 1, 2, . . . , 2^d}, multiple runs of the random walk are carried out to identify them one by one. In addition, a termination test on the root node is carried out before each run to determine whether there are still unidentified high-performing leaf nodes. See the supplementary for details along with a pseudo-code.

We emphasize that the local test is carried out using only observations from the current visit to a node; observations from past visits are forgotten. This is to preserve the random-walk nature of the process, which enables a tight analysis.
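The control flow of this search reduces to a short loop once the local sequential test is available. The sketch below is our reading of the procedure rather than the paper's pseudo-code: the node interface (is_leaf, parent, children), the step budget, and the two test routines are hypothetical stand-ins supplied by the caller.

```python
def rwt_search(root, tau, local_test, verify_test, max_steps=10_000):
    """Biased random walk towards a single high-performing leaf (Sec. 3.2.1).

    local_test(node, tau)  -> bool, confidence just above 1/2 (guides the walk)
    verify_test(node, tau) -> bool, higher confidence (verification at a leaf)
    Both stand in for the sequential test of Sec. 3.2.2.
    """
    node = root
    for _ in range(max_steps):
        if node.is_leaf():
            if verify_test(node, tau):
                return node                    # identified as the high-performing leaf
            node = node.parent or node         # negative at a leaf: step back and resume
            continue
        for child in node.children:            # test the two children one at a time
            if local_test(child, tau):
                node = child                   # move towards a threshold-exceeding child
                break
        else:
            node = node.parent or node         # neither child exceeds tau: move to the parent
    return None                                # no leaf verified within the step budget
```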
A further computational benefit of using only current-visit observations is that the matrices being inverted to compute the posterior distribution are always small, improving the run-time efficiency of the algorithm.

3.2.2 The local sequential test

The last piece of the puzzle in GP-ThreDS is the local sequential test on a given node of a subtree. Given a node/region D ⊆ X, a threshold τ, and a confidence parameter η ∈ (0, 1), the local sequential test needs to determine, with a 1 − η confidence level, whether D contains a point with function value exceeding τ.

The test first builds a discretization of the region D, denoted by the set D_g = {x_i}_{i=1}^{|D_g|}. The points in D_g are chosen to ensure that sup_{x∈D} inf_{y∈D_g} ‖x − y‖ ≤ Δ. A simple way to construct such a discretization is to use uniform grids parallel to the axes with a resolution small enough to satisfy the above constraint. The parameter Δ in epoch k is set to Δ_k = (c/L)^{1/α} 2^{−ρ_k/d} and is used to control the approximation of the function values in D. Recall that L is the Hölder continuity constant while c ∈ (0, 1/2) is a hyperparameter.

The local test sequentially queries points in the set D_g to locally estimate f. To determine whether there exists a point x ∈ D with f(x) ≥ τ, the test builds a pair of Upper and Lower Confidence Bounds (UCB and LCB) from the sequentially drawn samples and compares each of them to a prescribed value. If the UCB goes below τ − LΔ^α, indicating that the node is unlikely to contain a τ-exceeding point, the test terminates and outputs a negative outcome. On the other hand, if the LCB exceeds τ, then there is a τ-exceeding point with the required confidence level; the test terminates and outputs a positive outcome. If both the UCB and LCB are within their prescribed "uncertainty" range, the test draws one more sample and repeats the process. A cap is imposed on the total number of samples: the test terminates and outputs a positive outcome when the total number of samples exceeds S̄(η, LΔ^α). A description of the test for s ≥ 1, after being initialized with a point x_1 ∈ D_g, is given in Fig. 3. We would like to emphasize that the posterior mean and variance µ_{s−1} and σ²_{s−1} considered in that description are constructed only from the samples collected during that particular visit to the current node. The parameter β_s(ν) := B + R√(2(γ_{s−1} + 1 + log(1/ν))) for ν ∈ (0, 1). Here γ_t is the maximum information gain at time t, defined as γ_t := max_{A⊂X: |A|=t} I(y_A; f_A), where I(y_A; f_A) denotes the mutual information between f_A = [f(x)]_{x∈A} and y_A = f_A + ε_A. Bounds on γ_t for several common kernels are known [20, 21] and are sublinear functions of t. The cap S̄(η, LΔ^α) on the maximum number of samples is given by
$$\bar{S}(\eta, L\Delta^{\alpha}) = \min\Big\{ t \in \mathbb{N} : \frac{2(1 + 2\lambda)\,\beta_t(\eta)\,|D_g|^{1/2}}{(L\Delta^{\alpha})\sqrt{t}} \le 1 \Big\} + 1. \qquad (4)$$

The cap on the total number of samples prevents the algorithm from wasting too many queries on suboptimal nodes. Without such a cap, the expected number of queries issued by the local test is inversely proportional to |f(x*_{D_g}) − τ|, where x*_{D_g} = arg max_{x∈D_g} f(x). Consequently, small values of |f(x*_{D_g}) − τ| would lead to a large number of queries at highly suboptimal points when f(x*_{D_g}) is far from f(x∗). The cap on the number of samples thus helps control the growth of regret at the cost of a potential increase in the approximation error. It also reduces the cost of computing the posterior distribution by limiting the number of queries at a node.
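A compact rendition of the sequential test just described (our reading of it, not a copy of the paper's Fig. 3): it queries the UCB maximizer on the grid, stops with a negative outcome when the best UCB drops below τ − LΔ^α, stops with a positive outcome when the best LCB exceeds τ, and otherwise stops positive at the sample cap. It reuses the gp_posterior sketch given after eqns. (2)-(3); the information-gain bound gamma_of and all constants are assumptions supplied by the caller.

```python
import numpy as np

def beta(s, gamma_prev, B=1.0, R=0.1, nu=0.05):
    # beta_s(nu) = B + R * sqrt(2 * (gamma_{s-1} + 1 + log(1/nu)))   (Sec. 3.2.2)
    return B + R * np.sqrt(2.0 * (gamma_prev + 1.0 + np.log(1.0 / nu)))

def local_sequential_test(Dg, oracle, tau, L, Delta, alpha, nu, s_max, gamma_of):
    """Return +1 / -1 for 'D contains a tau-exceeding point' using the grid Dg."""
    X, y = [], []
    slack = L * Delta ** alpha
    for s in range(1, s_max + 1):
        if X:
            mu, var = gp_posterior(np.array(X), np.array(y), Dg)   # per-visit posterior only
            sig = np.sqrt(var)
        else:
            mu, sig = np.zeros(len(Dg)), np.ones(len(Dg))          # prior for the SE kernel
        b = beta(s, gamma_of(s - 1), nu=nu)
        ucb, lcb = mu + b * sig, mu - b * sig
        if ucb.max() < tau - slack:
            return -1                          # no tau-exceeding point, with confidence 1 - nu
        if lcb.max() > tau:
            return +1                          # a tau-exceeding point exists
        i = int(np.argmax(ucb))                # draw the next sample at the UCB maximizer
        X.append(Dg[i])
        y.append(oracle(Dg[i]))
    return +1                                  # sample cap reached: declare a positive outcome
```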
Note that when the sequential test reaches the maximum allowable number of samples and exits with an outcome of +1, it is possible that f(x*_{D_g}) < τ (i.e., there is no τ-exceeding point in D_g). Thus, τ may not be a lower bound for the updated belief of f(x∗), as one would expect in the case of an output of +1 from the sequential test. However, using Lemma 2, we can obtain a high-probability lower bound on τ − f(x*_{D_g}). This additional error term is taken into account while updating the threshold, as described in Sec. 3.1. The hyperparameter c trades off this error against the size of the discretization.

The sequential test can be easily modified to offer asymmetric confidence levels for declaring positive and negative outcomes (as required in the verification stage of the RWT search) by changing the confidence parameter in β_s. Details are given in the supplementary material. We point out that the construction of the UCB is based on the UCB score employed in IGP-UCB [6]. It is straightforward to replace it with other types of UCB scores. The basic thresholded domain-shrinking structure of the proposed algorithm is independent of the specific UCB score, and is hence generally applicable as a method for improving the computational efficiency and regret performance of the GP-UCB family of algorithms.

4 Performance Analysis

In this section, we analyze the regret and computational complexity of GP-ThreDS. Throughout the section, D ⊆ X denotes a node visited by GP-ThreDS, D_g denotes its associated discretization, constructed as described in Sec. 3.2.2, and x*_{D_g} = arg max_{x∈D_g} f(x).

4.1 Regret Analysis

The following theorem establishes the regret order of GP-ThreDS.

Theorem 1. Consider the GP-ThreDS algorithm as described in Sec. 3. Then, for any δ_0 ∈ (0, 1), with probability at least 1 − δ_0, the regret incurred by the algorithm satisfies
$$R(T) = O\big(\sqrt{T\gamma_T}\,\log T\,(\log T + \sqrt{\log T \log(1/\delta_0)})\big).$$

We provide here a sketch of the proof. The regret incurred by GP-ThreDS is analysed by decomposing it into two terms: the regret in the first k_0 epochs, referred to as R_1, and the regret after the completion of the first k_0 epochs, referred to as R_2, where k_0 = max{k : ρ_k ≤ (d/(2α)) log T}.

To bound R_1, we first bound the regret incurred at any node visited during the first k_0 epochs using the following decomposition of the instantaneous regret:
$$f(x^*) - f(x_t) = [f(x^*) - \tau_k + L\Delta_k^{\alpha}] + [\tau_k - f(x^*_{D_g}) - L\Delta_k^{\alpha}] + [f(x^*_{D_g}) - f(x_t)].$$
In the above decomposition, k denotes the epoch index during which the node is visited. Each of these three terms is then bounded separately. The third term in the expression is bounded using an approach similar to the analysis of IGP-UCB [6] (notice that x_t is the maximizer of the UCB score), namely, bounding it by the cumulative standard deviation ∑_{s=1}^{t} σ_{s−1}(x_s).

Lemma 1. For any set of sampling points {x_1, x_2, . . . , x_t} chosen from D_g (under any choice of algorithm), the following relation holds: ∑_{s=1}^{t} σ_{s−1}(x_s) ≤ (1 + 2λ)√(|D_g| t), where σ_s(x) is defined in (3).

Since GP-ThreDS ensures a constant-sized discretization at all times (see Lemma 4), the above lemma implies that the sum of posterior standard deviations is O(√t), resulting in a tight bound for the third term (a factor of O(√γ_t) tighter than the corresponding bound for IGP-UCB, which optimizes the UCB score over the entire domain). The first two terms are bounded using the following lemma with an appropriate choice of Δ_f.

Lemma 2.
If the local test is terminated by the termination condition at instant S̄(δ_2, Δ_f) as defined in (4), then with probability at least 1 − δ_2, we have τ − LΔ^α − Δ_f ≤ f(x*_{D_g}) ≤ τ + Δ_f.

The final bound on R_1 is obtained by combining the upper bound on the regret at each node with the bound on the total number of nodes visited by GP-ThreDS, captured in the following lemma.

Lemma 3. Consider the random-walk based routine described in Section 3.2 with a local confidence parameter p ∈ (0, 1/2). Then with probability at least 1 − δ_1, one iteration of RWT visits fewer than log(d/δ_1) / (2(p − 1/2)^2) nodes before termination.

To bound R_2, we bound the difference in function values using the Hölder continuity of the function along with the upper bound on the diameter of the nodes after k_0 epochs. Adding the bounds on R_1 and R_2, we arrive at the theorem. The detailed proofs are provided in the supplementary material. We would like to point out that the regret analysis depends on the choice of the UCB score. While we have used the UCB score of IGP-UCB, the analysis is straightforward to extend to other UCB scores.

Remark 1. We note that our assumptions are consistent with those used in proving the lower bounds. In particular, the lower bounds are proven for the Matérn family of kernels, including the SE kernel, in [4]. [13, Proposition 1] proves the Hölder continuity of this family of kernels. Thus, our assumption on Hölder continuity is consistent with the lower bound. In addition, the proof of the lower bound considers a class of functions whose RKHS norm is upper bounded by a known constant [4, Sec 1.1]. This upper bound translates to an upper bound on the absolute value of f, which is consistent with our assumption of a finite range for f.

4.2 Computational Complexity

The following theorem bounds the worst-case overall computational complexity of GP-ThreDS.

Theorem 2. The worst-case overall computational complexity of GP-ThreDS is O(T^4), where T is the time horizon.

The proof of the theorem follows from the following lemma.

Lemma 4. The number of points in the discretization, |D_g|, for any node D is upper bounded by a constant, independent of time, i.e., |D_g| = O(1) for all t ≤ T.

From the lemma, we can conclude that the number of UCB score evaluations in GP-ThreDS is constant at all times t; hence matrix inversion becomes the dominant source of computational complexity. Since no more than t samples are used to compute the posterior distribution at time t, the worst-case cost associated with the matrix inversion step is O(t^3), and consequently the worst-case computational complexity of GP-ThreDS is O(T^4), leading to computational savings of O(T^{2d−1}) over the GP-UCB family of algorithms.

Lemma 4 is proven by showing that the rate of domain-size shrinking matches the rate at which the granularity of the discretization is refined across epochs. Thus, the size of the discretization does not need to grow with time. Please refer to the supplementary for a detailed proof. While the discretization does not grow with t, it is exponential in d. Since non-convex optimization is NP-hard, such an exponential dependence on d is inevitable for maintaining the optimal learning efficiency. In this work, we focus on reducing the computational complexity with respect to the time horizon T. The proposed domain-shrinking technique can be used in conjunction with dimension reduction techniques (e.g., [22]) to achieve efficiency in both T and d (although at the price of invalidating the regret bounds).
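The mechanism behind Lemma 4 can be checked with a few lines of arithmetic: after ρ_k levels of longest-edge splits a node's side length is roughly 2^{−ρ_k/d}, while the grid resolution is Δ_k = (c/L)^{1/α} 2^{−ρ_k/d} (Sec. 3.2.2), so their ratio, and hence the per-node grid size, does not change with the epoch. The constants below are illustrative only.

```python
c, L, alpha, d = 0.25, 1.0, 1.0, 2          # illustrative values with c in (0, 1/2)
for rho_k in (2, 6, 10, 20):
    side = 2.0 ** (-rho_k / d)               # approximate node side length after rho_k splits
    delta_k = (c / L) ** (1.0 / alpha) * 2.0 ** (-rho_k / d)   # grid resolution of Sec. 3.2.2
    per_dim = side / delta_k                  # = (L/c)^(1/alpha), independent of rho_k
    print(f"rho_k={rho_k:2d}: points per dimension ~ {per_dim:.1f}, |Dg| ~ {per_dim ** d:.0f}")
```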
5 Empirical Studies

In this section, we compare the performance of GP-ThreDS with several commonly used Bayesian optimization algorithms: IGP-UCB [6], Adaptive Discretization (AD) [23], Expected Improvement (EI) [24] and Probability of Improvement (PI) [25]. For the local test of GP-ThreDS we use exactly the same UCB score as the one in IGP-UCB.

We compare these algorithms on two standard benchmark functions for Bayesian optimization: Branin and Rosenbrock (see [26, 27] as well as the supplementary material for their analytical expressions). We use the SE kernel with a lengthscale of l = 0.2 on the domain [0, 1]^2 and Gaussian noise with a variance of 0.01. The parameters λ in the GP model and R in β_t are also set to 0.01. The value of δ_0 is set to 10^{-3}. To limit the computational cost in the standard implementation of IGP-UCB, we consider a maximum of 6400 points in the grid.

Figures 4a and 4b show the per-sample average regret, in log scale, measured every 0.1 seconds of wall-clock time. Specifically, within a given time budget (the X-axis of the figures), different algorithms process different numbers of samples, determined by their computational complexity. The average per-sample regret is then shown against the time taken. The plots show the average performance over 10 Monte Carlo runs. As expected from the theoretical results, GP-ThreDS achieves the best performance, especially as time grows. Figure 4c directly compares the computation time of all algorithms for processing 1000 samples, averaged over 10 Monte Carlo runs. GP-ThreDS enjoys a much smaller computation cost in terms of time taken (in seconds). The details of the algorithm parameters and benchmark functions, as well as additional experiments on hyperparameter tuning of a convolutional neural network for image classification, are in the supplementary.

6 Conclusion

A GP-based algorithm with a regret of Õ(√(Tγ_T)) for black-box optimization under noisy bandit feedback was proposed. This regret is order-optimal, up to poly-logarithmic factors, for the cases where a lower bound on regret is known. The proposed approach is rooted in the methodology of domain shrinking, realized through a sequence of tree-based region pruning and refining steps that concentrate queries in high-performing regions of the function domain. It offers high learning efficiency, allows a tight regret analysis, and achieves a computational saving of O(T^{2d−1}) over the GP-UCB family of algorithms.

Acknowledgments and Disclosure of Funding

The work of Sudeep Salgia and Qing Zhao was supported by the National Science Foundation under Grants CCF-1815559 and CCF-1934985. The work of Sattar Vakili was supported by MediaTek Research. We would also like to thank the anonymous reviewers for their constructive comments.
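Referring back to the benchmarks of Sec. 5: for readers who want to reproduce a comparison, the standard forms of the two objectives, rescaled to the unit square used in the experiments, are given below. The exact scalings used in the paper are in its supplementary material, so the constants here may differ; and since the paper maximizes f while these are classical minimization benchmarks, one would negate or otherwise transform them in practice.

```python
import numpy as np

def branin(u):
    # Standard Branin function; u is a point in [0, 1]^2 rescaled to the usual domain
    # x in [-5, 10], y in [0, 15].
    x = 15.0 * u[0] - 5.0
    y = 15.0 * u[1]
    return (y - 5.1 / (4 * np.pi ** 2) * x ** 2 + 5.0 / np.pi * x - 6.0) ** 2 \
           + 10.0 * (1.0 - 1.0 / (8 * np.pi)) * np.cos(x) + 10.0

def rosenbrock2d(u):
    # Standard 2-d Rosenbrock; u is a point in [0, 1]^2 rescaled to [-2, 2]^2.
    x = 4.0 * np.asarray(u) - 2.0
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
```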
1. What is the focus of the paper regarding GP optimization? 2. What is the novel aspect of the proposed algorithm compared to previous works? 3. What is the regret guarantee achieved by the algorithm, and how does it compare to other methods? 4. How does the algorithm's computational complexity fare against other approaches? 5. Are there any limitations or areas for improvement in the paper's content or proposed method?
Summary Of The Paper Review
Summary Of The Paper

This paper introduces a GP optimization algorithm under the RKHS setting. The algorithm uses a tree-based domain-shrinking approach where elimination, region splitting, and threshold updating are conducted in each epoch. This algorithm achieves a regret guarantee of O(√(T γ_T)) and has a low computational complexity of O(T^4).

Review

This paper is well-written overall and easy to understand. The tree-based domain-shrinking strategy presents its novelty over prior works. The nearly optimal regret guarantee is also a highlight. Most of the proofs seem correct and easy to follow. A comprehensive set of experiments shows that the proposed algorithm also performs well empirically. Some suggestions are given in the Limitations section below.
NIPS
1. What is the focus and contribution of the paper regarding Bayesian optimization? 2. What are the strengths of the proposed algorithm, particularly in terms of computational complexity and theoretical performance? 3. What are the weaknesses of the paper, especially regarding experiment presentation and interpretation? 4. Do you have any concerns or questions about the implementation and analysis of the algorithm? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a GP-based Bayesian optimization algorithm for regret minimization. The paper uses a domain-shrinking approach where the domain is explored as a tree. Using smoothness assumptions on the true function, it is possible to get upper and lower bounds on the function value over an entire region, and this allows the algorithm to decide whether to subdivide a region to explore further or remove it entirely. A key idea and contribution of this work is that the number of leaves in the tree at any one time is a constant. Hence, the total computational complexity of this algorithm is reduced greatly compared to the standard complexity incurred by the famed GP-UCB algorithm. Additionally, the algorithm achieves order-optimal performance, closing a gap of \sqrt(\gamma_t) observed in previous work. The paper closes with simple numerical experiments.

Review
Strengths: The paper is very well written. The algorithm, while complicated, is clearly stated and easily understood. The theoretical performance is strong as well, and the paper closes a known gap. Furthermore, the computational gain is large.

Weaknesses: Error bars are omitted from the experiments in the main paper and moved to the appendix, where they are somewhat large. Presumably this can be fixed by running more repetitions for a final draft. Axis and legend labels on the figures should also be larger to be more legible. Additionally, it is somewhat hard to interpret the experimental results. The results are stated in terms of wall clock time, but no significant discussion of the implementations is given, though this can have a large impact on wall clock performance. Additionally, the theoretical results of this paper state a gain in terms of the regret bound. However, as the results are given for wall-clock time, it is not possible to evaluate this empirically, though this seems like a significant contribution of this paper.

Other comments and questions:
- Saying that the prior is fictitious is somewhat misleading. In terms of implementing the algorithm that is true, but all of the analysis assumes that the prior is real, and so does this work.
- For the statement of Thm 1, \delta_0 is presumably user provided? Please clarify.
NIPS
Title Adversarial Robustness of Streaming Algorithms through Importance Sampling Abstract Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm allows coreset constructions in the streaming setting, we thus obtain robust algorithms for k-means, k-median, k-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. To the best of our knowledge, these are the first adversarially robust results for these problems yet require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust. 1 Introduction Robustness against adversarial attacks have recently been at the forefront of algorithmic design for machine learning tasks [GSS15, CW17, AEIK18, MMS+18, TSE+19]. We extend this line of work by studying adversarially robust streaming algorithms. In the streaming model, data points are generated one at a time in a stream and the goal is to compute some meaningful function of the input points while using a limited amount of memory, typically 35th Conference on Neural Information Processing Systems (NeurIPS 2021). sublinear in the total size of the input. The streaming model is applicable in many algorithmic and ML related tasks where the size of the data far exceeds the available storage. Applications of the streaming model include monitoring IP traffic flow, analyzing web search queries [LMV+16], processing large scientific data, feature selection in machine learning [HZZ21, GRB+19, WYWD10], and estimating word statistics in natural language processing [GDC12] to name a few. Streaming algorithms have also been implemented in popular data processing libraries such as Apache Spark which have implementations for streaming tasks such as clustering and linear regression [ZXW+16a]. 
In the adversarial streaming model [MBN+17, BMSC17, AMYZ19, BY20, BJWY20, HKM+20, WZ20, ABD+21, KMNS21], an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. Studying when adversarially robust streaming algorithms are possible is an important problem in lieu of recent interest in adversarial attacks in ML with applications to adaptive data analysis. Formally, we define the model as a two-player game between a streaming algorithm StreamAlg and a source Adversary of adaptive and adversarial input to StreamAlg. At the beginning of the game, a fixed queryQ is determined and asks for a fixed function for the underlying dataset implicitly defined by the stream. The game then proceeds in rounds, and in the t-th round, (1) Adversary computes an update ut ∈ [n] for the stream, which depends on all previous stream updates and all previous outputs from StreamAlg. (2) StreamAlg uses ut to update its data structures Dt, acquires a fresh batch Rt of random bits, and outputs a response At to the query Q. (3) Adversary observes and records the response At. The goal of Adversary is to induce StreamAlg to make an incorrect response At to the query Q at some time t ∈ [m] throughout the stream. Related Works. Adversarial robustness of streaming algorithms has been an important topic of recent research. On the positive note, [BJWY20] gave a robust framework for estimating the Lp norm of points in a stream in the insertion-only model, where previous stream updates cannot later be deleted. Their work thus shows that deletions are integral to the attack of [HW13]. Subsequently, [HKM+20] introduced a new algorithmic design for robust Lp norm estimation algorithms, by using differential privacy to protect the internal randomness of algorithms against the adversary. Although [WZ20] tightened these bounds, showing that essentially no losses related to the size of the input n or the accuracy parameter ε were needed, [KMNS21] showed that this may not be true in general. Specifically, they showed a separation between oblivious and adversarial streaming in the adaptive data analysis problem. [BY20] showed that sampling is not necessarily adversarially robust; they introduce an exponentially sized set system where a constant number of samples, corresponding to the VC-dimension of the set system, may result in a very unrepresentative set of samples. However, they show that with an additional logarithmic overhead in the number of samples, then Bernoulli and or reservoir sampling are adversarially robust. This notion is further formalized by [ABD+21], who showed that the classes that are online learnable requires essentially sample-complexity proportional to the Littlestone’s dimension of the underlying set system, rather than VC dimension. However, these sampling procedures are uniform in the sense that each item in the stream is sampled with the same probability. 
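This round structure can be made concrete with a toy sketch (our own illustration; the Morris-counter estimator and the adversary's rule are stand-ins, not constructions from the paper). The point is only the order of operations in each round: the adversary chooses u_t after seeing all past answers, while the algorithm draws its fresh random bits R_t only after u_t is fixed.

import random

class StreamAlg:
    # Toy randomized streaming algorithm: a Morris counter estimating the
    # number of updates seen so far (a stand-in for the fixed query Q).
    def __init__(self):
        self.c = 0
    def process(self, u_t):
        if random.random() < 2.0 ** (-self.c):   # fresh random bits R_t
            self.c += 1
        return 2 ** self.c - 1                   # response A_t to the query Q

class Adversary:
    # Chooses u_t as a function of all previously observed responses.
    def next_update(self, past_responses, n):
        if past_responses and past_responses[-1] > len(past_responses):
            return 0                             # try to exploit an observed overestimate
        return random.randrange(n)

def adversarial_game(rounds=30, n=10):
    alg, adv, responses = StreamAlg(), Adversary(), []
    for t in range(rounds):
        u_t = adv.next_update(responses, n)      # (1) adversary picks the update
        a_t = alg.process(u_t)                   # (2) algorithm updates and answers
        responses.append(a_t)                    # (3) adversary records the answer
    return responses

print(adversarial_game())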
Thus the sampling probability of each item is oblivious to the identity of the item. By contrast, we show the robustness for a variety of algorithms based on non-oblivious sampling, where each stream item is sampled with probability roughly proportional to the “importance” of the item. 1.1 Our Contributions Our main contribution is a powerful yet simple statement that algorithms based on non-oblivious sampling are adversarially robust if informally speaking, the process of sampling each item in the stream can be viewed as using fresh randomness independent of previous steps, even if the sampling probabilities depend on previous steps. Let us describe, very informally, our meta-approach. Suppose we have an adversarial stream of elements given by u1, . . . , un. Our algorithm A will maintain a data structure At at time t which updates as the stream progresses. A will use a function g(At, ut) to determine the probability of sampling item ut to update At to At+1. The function g measures the ‘importance’ of the element ut to the overall problem that we wish to solve. For example, if our application is k-means clustering and ut is a point far away from all previously seen points so far, we want to sample it with a higher probability. We highlight that even though the sampling probability for ut given by g(At, ut) is adversarial, since the adversary designs ut and previous streaming elements, the coin toss performed by our algorithm A to keep item ut is independent of any events that have occurred so far, including the adversary’s actions. This new randomness introduced by the independent coin toss is a key conceptual step in the analysis for all of the applications listed in Figure 1. Contrast this to the situation where a “fixed” data structure or sketch is specified upfront. In this case, we would not be adaptive to which inputs ut the adversary designs to be “important” for our problem which would lead us to potentially disregard such important items rendering the algorithm ineffective. As applications of our meta-approach, we introduce adversarially robust streaming algorithms for two central machine learning tasks, regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. We show that several methods from the streaming algorithms “toolbox”, namely merge and reduce, online leverage score sampling, and edge sampling are adversarially robust “for free.” As a result, existing (and future) streaming algorithms that use these tools are robust as well. We discuss our results in more detail below and provide a summary of our results and applications in Figure 1. We first show that the well-known merge and reduce paradigm is adversarially robust. Since the merge and reduce paradigm defines coreset constructions, we thus obtain robust algorithms for k-means, k-median, Bregman clustering, projective clustering, principal component analysis (PCA), non-negative matrix factorization (NNMF) [LK17]. Theorem 1.1 (Merge and reduce is adversarially robust) Given an offline ε-coreset construction, the merge and reduce framework gives an adversarially robust streaming construction for an ε-coreset with high probability. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model, in which the adversary generates a sequence of row vectors a1, . . . ,an in d-dimensional vector space. For t ∈ [n], the t-th prefix of the stream induces a matrix At ∈ Rt×d with rows a1, . . . ,at. 
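Before turning to the row arrival model in detail, the meta-approach above can be written as a short template (a sketch under our own naming, not the authors' code): the sampling probability g(A_t, u_t) may depend arbitrarily on the adversarially chosen history, but the coin toss that decides whether u_t is kept uses randomness drawn only after u_t is fixed, and kept items are reweighted by 1/p_t.

import random

def importance_sampling_stream(stream, g, update):
    # stream : iterable of adversarially chosen items u_1, ..., u_n
    # g      : g(A_t, u_t) -> sampling probability in [0, 1] ("importance")
    # update : update(A_t, u_t, p_t) -> A_{t+1}, applied when u_t is kept
    A = []  # data structure A_t (here simply the weighted sample kept so far)
    for u in stream:
        p = min(1.0, g(A, u))
        # Fresh coin toss: independent of everything the adversary has observed.
        if random.random() < p:
            A = update(A, u, p)
        yield list(A)  # output after each prefix

# Hypothetical instantiation: keep points far from the current sample with
# higher probability (a crude stand-in for a clustering-style importance).
def far_point_importance(A, u):
    if not A:
        return 1.0
    d = min(abs(u - x) for x, _ in A)
    return min(1.0, 0.1 + d)

def keep(A, u, p):
    return A + [(u, 1.0 / p)]   # reweight kept items by 1/p_t

stream = [random.gauss(0, 1) for _ in range(100)] + [50.0]  # a distant "important" point
for prefix_sample in importance_sampling_stream(stream, far_point_importance, keep):
    pass
print(len(prefix_sample), prefix_sample[-1])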
We denote this matrix as At = a1 ◦ . . . ◦ at and define κ to be an upper bound on the largest condition number (the ratio of the largest and smallest nonzero singular values) of the matrices A1, . . . , An.

Theorem 1.2 (Row sampling is adversarially robust) There exists a row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]:

(1) Outputs a matrix Mt such that (1 − ε)At^⊤At ⪯ Mt^⊤Mt ⪯ (1 + ε)At^⊤At, while sampling O((d^2 κ / ε^2) log n log κ) rows (spectral approximation/subspace embedding/linear regression/generalized regression).

(2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ R^{d×d}, (1 − ε)‖At − AtP‖_F^2 ≤ ‖Mt − MtP‖_F^2 ≤ (1 + ε)‖At − AtP‖_F^2, while sampling O((dkκ / ε^2) log n log^2 κ) rows (projection-cost preservation/low-rank approximation).

(3) Outputs a matrix Mt such that (1 − ε)‖At x‖_1 ≤ ‖Mt x‖_1 ≤ (1 + ε)‖At x‖_1, while sampling O((d^2 κ / ε^2) log^2 n log κ) rows (L1 subspace embedding).

Finally, we show that our analysis also applies to algorithms for graph sparsification, in which edges are sampled according to their “importance”. Define κ as the ratio of the largest and the smallest cut sizes in G (see Section 4 and Supplementary Section C for exact details).

Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1 ± ε cut sparsifier with O(κ^2 n log n / ε^2) edges with probability 1 − 1/poly(n).

Sketching vs Sampling Algorithms. A central tool for randomized streaming algorithms is the use of linear sketches. These methods maintain a data structure f such that after the (i + 1)-th input xi, we can update f by computing a linear function of xi. Typically, these methods employ a random matrix. For example, if the input consists of vectors, sketching methods will use a random matrix to project the vector into a space of much smaller dimension. In [HW13], it was proved that no linear sketch can approximate the L2-norm within a polynomial multiplicative factor against such an adaptive adversary. In general, streaming algorithms that use sketching are highly susceptible to the type of attack described in [HW13], where the adversary can effectively ‘learn’ the kernel of the linear function used and send inputs along the kernel. For example, if an adversary knows the kernel of the random matrix used to project the input points, then by sending points that lie in the kernel of the matrix as inputs, the adversary can render the whole streaming algorithm useless. On the other hand, we employ a different family of streaming algorithms that are based on sampling the input rather than sketching it. Surprisingly, this simple change allows one to automatically obtain many adversarially robust algorithms either “for free” or without new algorithmic overheads. For more information, see Section 1.1. We emphasize that while our techniques are not theoretically sophisticated, we believe their power lies in the simple message that sampling is often superior to sketching for adversarial robustness. In addition to downstream algorithmic and ML applications, this provides an interesting separation and trade-off between the two paradigms; for non-adversarial inputs, sketching often gives similar or better performance guarantees for many tasks [BYKS01].

2 Merge and Reduce

We show that the general merge and reduce paradigm is adversarially robust.
Merge and reduce is widely used for the construction of a coreset, which provides dimensionality reduction on the size of an underlying dataset, so that algorithms for downstream applications can run more efficiently: Definition 2.1 (ε coreset) Let P ⊂ X be a set of elements from a universe X , z ≥ 0, ε ∈ (0, 1), and (P,dist, Q) be a query space. Then a subset C equipped with a weight function w : P → R is called an ε-coreset with respect to the query space (P,dist, Q) if (1− ε) ∑ p∈P dist(p, Q)z ≤ ∑ p∈C w(p) dist(p, Q)z ≤ (1 + ε) ∑ p∈P dist(p, Q)z. The study of efficient offline coreset constructions for a variety of geometric and algebraic problems forms a long line of active research. For example, offline coreset constructions are known for linear regression, low-rank approximation, L1-subspace embedding, k-means clustering, k-median clustering, k-center, support vector machine, Gaussian mixture models, M -estimators, Bregman clustering, projective clustering, principal component analysis, k-line center, j-subspace approximation, and so on. Thus, our result essentially shows that using the merge and reduce paradigm, these offline coreset constructions can be extended to obtain robust and accurate streaming algorithms. The merge and reduce paradigm works as follows. Suppose we have a stream p1, . . . , pn of length n = 2k for some Theorem 2.2 There exists a merge-and-reduce row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]: (1) Outputs a matrix Mt such that (1 − ε)A>t At M>t Mt (1 + ε)A>t At, while sampling O ( d2 ε2 log 4 n log κ ) rows (spectral approximation/subspace embedding/linear regres- sion/generalized regression). (2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖At −AtP‖2F ≤ ‖Mt −MtP‖ 2 F ≤ (1 + ε) ‖At −AtP‖ 2 F , while sampling O ( k ε2 log 4 n log2 κ ) rows (projection-cost preservation/low-rank approximation). (3) Outputs a matrix Mt such that (1 − ε) ‖Atx‖1 ≤ ‖Mtx‖1 ≤ (1 + ε) ‖Atx‖1, while sampling O ( d ε2 log 5 n log κ ) rows (L1 subspace embedding). Using coresets of [HV20], then Theorem 1.1 also gives applications for (k, z)-clustering such as k-median for z = 1 and k-means for z = 2. Moreover, [LK17] noted that constructions of [FL11] give coresets for Bregman clustering, which handles µ-similar Bregman divergences such as the Itakura-Saito distance, KL-divergence, Mahalanobis distance, etc. Theorem 2.3 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal (k, z)clustering, k-means clustering (z = 2), and k-median clustering (z = 1), while storing O ( 1 ε2z+2 k log 2z+2 n log k log k lognε ) points. (2) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal k-Bregman clustering, while storing O ( 1 ε2 dk 3 log3 n ) points. Using the sensitivity bounds of [VX12a, VX12b] and the coreset constructions of [BFL16], then Theorem 1.1 also gives applications for the following shape fitting problems: Theorem 2.4 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of lines that gives a (1 + ε)-approximation to the optimal k-lines clustering, while storing O ( d ε2 f(d, k)k f(d,k) log4 n ) points of Rd, for a fixed function f(d, k). 
(2) Outputs a subspace that gives a (1 + ε)-approximation to the optimal dimension j subspace approximation, while storing O((d/ε^2) g(d, j) k^{g(d,j)} log^4 n) points of R^d, for a fixed function g(d, j).

(3) Outputs a set of subspaces that gives a (1 + ε)-approximation to the optimal (j, k)-projective clustering, while storing O((d/ε^2) h(d, j, k) log^3 n (log n)^{h(d,j,k)}) points of R^d, for a fixed function h(d, j, k), for a set of input points with integer coordinates.

Adversarially robust approximation algorithms for Bayesian logistic regression, Gaussian mixture models, generative adversarial networks (GANs), and support vector machines can be obtained from Theorem 1.1 and the coreset constructions of [HCB16, FKW19, SZG+20, TBFR20]; a significant number of additional applications of Theorem 1.1 using coreset constructions can be seen from recent surveys on coresets, e.g., see [LK17, Fel20]. The merge-and-reduce framework also has applications to a large number of other problems, such as finding heavy hitters [MG82] or frequent directions [GLPW16], and in various settings, such as the sliding window model [DGIM02], time decay models [BLUZ19], or for at-the-time or back-in-time queries [SZP+21].

3 Adversarial Robustness of Subspace Embedding and Applications

We use [n] to represent the set {1, . . . , n} for an integer n > 0. We typically use bold font to denote vectors and matrices. For a matrix A, we use A^{−1} to denote the Moore-Penrose inverse of A. We first formally define the goals of our algorithms:

Problem 3.1 (Spectral Approximation) Given a matrix A ∈ R^{n×d} and an approximation parameter ε > 0, the goal is to output a matrix M ∈ R^{m×d} with m ≪ n such that (1 − ε)‖Ax‖_2 ≤ ‖Mx‖_2 ≤ (1 + ε)‖Ax‖_2 for all x ∈ R^d or, equivalently, (1 − ε)A^⊤A ⪯ M^⊤M ⪯ (1 + ε)A^⊤A. We note that linear regression is a well-known specific application of spectral approximation.

Problem 3.2 (Projection-Cost Preservation) Given a matrix A ∈ R^{n×d}, a rank parameter k > 0, and an approximation parameter ε > 0, the goal is to find a matrix M ∈ R^{m×d} with m ≪ n such that for all rank k orthogonal projection matrices P ∈ R^{d×d}, (1 − ε)‖A − AP‖_F^2 ≤ ‖M − MP‖_F^2 ≤ (1 + ε)‖A − AP‖_F^2. Note that if M is a projection-cost preservation of A, then its best low-rank approximation can be used to find a projection matrix that gives an approximation of the best low-rank approximation to A.

Problem 3.3 (Low-Rank Approximation) Given a matrix A ∈ R^{n×d}, a rank parameter k > 0, and an approximation parameter ε > 0, find a rank k matrix M ∈ R^{n×d} such that (1 − ε)‖A − A(k)‖_F^2 ≤ ‖A − M‖_F^2 ≤ (1 + ε)‖A − A(k)‖_F^2, where A(k) for a matrix A denotes the best rank k approximation to A.

Problem 3.4 (L1-Subspace Embedding) Given a matrix A ∈ R^{n×d} and an approximation parameter ε > 0, the goal is to output a matrix M ∈ R^{m×d} with m ≪ n such that (1 − ε)‖Ax‖_1 ≤ ‖Mx‖_1 ≤ (1 + ε)‖Ax‖_1 for all x ∈ R^d.

We consider the general class of row sampling algorithms, e.g., [CMP16, BDM+20]. Here we maintain an Lp subspace embedding of the underlying matrix by approximating the online Lp sensitivities of each row as a measure of importance to perform sampling. For more details, see Algorithm 1.

Definition 3.5 (Online Lp Sensitivities) For a matrix A = a1 ◦ . . . ◦ an ∈ R^{n×d}, the online sensitivity of row ai for each i ∈ [n] is the quantity max_{x ∈ R^d} |〈ai, x〉|^p / ‖Ai x‖_p^p, where Ai−1 = a1 ◦ . . . ◦ ai−1.

Algorithm 1 Row sampling based algorithms, e.g., [CMP16, BDM+20]
Input: A stream of rows a1, . . . , an ∈ R^d, a parameter p > 0, and an accuracy parameter ε > 0
Output: A (1 + ε) Lp subspace embedding.
1: M ← ∅
2: α ← (Cd/ε^2) log n, with a sufficiently large parameter C > 0
3: for each row ai, i ∈ [n] do
4:   if ai ∈ span(M) then
5:     τi ← 2 · max_{x ∈ R^d, x ∈ span(M)} |〈ai, x〉|^p / (‖Mx‖_p^p + |〈ai, x〉|^p)   ▷ See Remark 3.6
6:   else
7:     τi ← 1
8:   pi ← min(1, ατi)
9:   With probability pi, M ← M ◦ (ai / pi^{1/p})   ▷ Online sensitivity sampling
10: return M

We remark on standard computation or approximation of the online Lp sensitivities, e.g., see [CEM+15, CMP16, CMM17, BDM+20].

Remark 3.6 We note that for p = 1, a constant-factor approximation to any online Lp sensitivity τi such that τi > 1/poly(n) can be computed in polynomial time using (offline) linear programming, while for p = 2, τi is equivalent to the online leverage score of ai, which has the closed-form expression ai^⊤(Ai^⊤Ai)^{−1}ai and can be approximated by ai^⊤(M^⊤M)^{−1}ai, conditioned on M being a good approximation to Ai−1 when ai is in the span of M. Otherwise, τi takes value 1 when ai is not in the span of M.

Lemma 3.7 (Adversarially Robust Lp Subspace Embedding and Linear Regression) Given ε > 0, p ∈ {1, 2}, and a matrix A ∈ R^{n×d} whose rows a1, . . . , an arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) spectral approximation with high probability. The algorithm samples O((d^2 κ^2 / ε^2) log n log κ) rows for p = 2 and O((d^2 λ^2 / ε^2) log^2 n log κ) rows for p = 1, with high probability, where λ is the ratio between upper and lower bounds on ‖A‖1.

We also show robustness of row sampling for low-rank approximation by using online ridge-leverage scores. Together, Lemma 3.7 and Lemma 3.8 give Theorem 1.2.

Lemma 3.8 (Adversarially Robust Low-Rank Approximation) Given an accuracy parameter ε > 0, a rank parameter k > 0, and a matrix A ∈ R^{n×d} whose rows a1, . . . , an arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) low-rank approximation with high probability. The algorithm samples O((kdκ^2 / ε^2) log n log κ) rows with high probability.

4 Graph Sparsification

In this section, we highlight how the sampling paradigm gives rise to an adversarially robust streaming algorithm for graph sparsification. First, we motivate the problem of graph sparsification. Massive graphs arise in many theoretical and applied settings, such as in the analysis of large social or biological networks. A key bottleneck in such analysis is the large computational resources, in both memory and time, that are needed. Therefore, it is desirable to obtain a representation of a graph that takes up far less space while still preserving the underlying “structure” of the graph. Usually the number of vertices is much smaller than the number of edges; for example, in typical real-world graphs, the number of vertices can be several orders of magnitude smaller than the number of edges (for example, see the graph datasets in [RA15]). Hence, a natural benchmark is to reduce the number of edges to be comparable to the number of vertices. The most common notion of graph sparsification is that of preserving the value of all cuts in the graph by keeping a small weighted set of edges of the original graph. More specifically, suppose our graph is G = (V,E) and for simplicity assume all the edges have weight 1. A cut of the graph is a partition of V = (C, V \ C) and the value of a cut, ValG(C), is defined as the number of edges that cross between the vertices in C and V \ C.
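Returning to Algorithm 1: for p = 2 the procedure specializes to online leverage score sampling. The sketch below is our own illustration, not the authors' implementation; it uses the closed form from Remark 3.6 with a small ridge term in place of an explicit span test, and the constants C and ε are chosen for the demo rather than for the theoretical guarantee.

import numpy as np

def online_leverage_score_sampling(rows, eps=1.0, C=1.0, ridge=1e-8, seed=0):
    # Row sampling for p = 2: keep row a_i with probability proportional to an
    # estimate of its online leverage score, then rescale the kept row by p_i^{-1/2}.
    rng = np.random.default_rng(seed)
    n, d = rows.shape
    alpha = C * d * np.log(max(n, 2)) / eps ** 2   # demo constants; the analysis wants C large
    M = np.zeros((0, d))
    G = ridge * np.eye(d)                          # ridge-regularized M^T M, avoids a span test
    for a in rows:
        tau = float(a @ np.linalg.solve(G, a))     # ~ a_i^T (M^T M)^{-1} a_i
        p = min(1.0, alpha * min(tau, 1.0))
        if rng.random() < p:                       # fresh, independent coin toss
            M = np.vstack([M, a / np.sqrt(p)])
            G += np.outer(a, a) / p
    return M

# Tiny demo: M^T M should spectrally approximate A^T A with far fewer rows.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 5))
M = online_leverage_score_sampling(A)
print(M.shape[0], np.linalg.norm(M.T @ M - A.T @ A) / np.linalg.norm(A.T @ A))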
A graph H on the same set of vertices as V is a sparsifier if it preserves the value of every cut in G and has a few number of weighted edges. For a precise formulation, see Problem 4.1. In addition to being algorithmically tractable, this formulation is natural since it preserves the underlying cluster structure of the graph. For example, if there are two well connected components separated by a sparse cut, i.e. two distinct communities, then the sparsifier according to the definition above will ensure that the two communities are still well separated. Conversely, by considering any cut within a well connected component, it will also ensure that any community remains well connected (for more details, see [SPR11] and references therein). Lastly, graph sparsification has been considered in other frameworks such as differential privacy [EKKL20], distributed optimization [WWLZ18], and even learning graph sparsification using deep learning methods [ZZC+20]. The formal problem definition of graph sparsification is as follows. Problem 4.1 (Graph Sparsification) Given a graph weighted G = (V,E) with |V | = n, |E| = m, and an approximation parameter ε > 0, compute a weighted subgraph H of G on the same set of vertices such that (1) every cut in H has value between 1− ε and 1 + ε times its value in G: (1− ε)ValG(C) ≤ ValH(C) ≤ (1 + ε)ValG(C) for all cuts C where ValG(C),ValH(C) denotes the cost of the cut in the graphs G and H respectively and for the latter quantity, the edges are weighted, (2) the number of edges in H is O ( n logn ε2 ) . Ignoring dependence on ε, there are previous results that already get sparsifiers H with O(n log n) edges [BK96, SS08]. Their setting is when the entire graph is present up-front in memory. In contrast, we are interested in the streaming setting where future edges can depend on past edges as well as revealed randomness of an algorithm while processing the edges. Our main goal is to show that the streaming algorithm from [AG09] (presented in Algorithm 2 in the supplementary section), which uses a sampling procedure to sample edges in a stream, is adversarially robust, albeit with a slightly worse guarantee for the number of edges. Following the proof techniques of the non streaming algorithm given in [BK96], it is shown in [AG09] that Algorithm 2 outputs a subgraph H such that H satisfies the conditions of Problem 4.1 with probability 1− 1/ poly(n) where the probability can be boosted by taking a larger constant C. We must show that this still holds true if the edges of the stream are adversarially chosen, i.e., when new edges in the stream depend on the previous edges and the randomness used by the algorithm so far. We thus again use a martingale argument; the full details are given in Supplementary Section C. As in Section 3, we let κ1 and κ2 to be deterministic lower/upper bounds on the size of any cut in G and define κ = κ2/κ1. Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1± ε cut sparsifier with O ( κ2n logn ε2 ) edges with probability 1− 1/ poly(n). 5 Experiments To illustrate the robustness of importance-sampling-based streaming algorithms we devise adversarial settings for clustering and linear regression. 
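Before moving to the concrete experimental settings, the edge-sampling scheme behind Theorem 1.3 can also be summarized in a few lines. The sketch below is a simplified variant for illustration only: it uses weighted degrees as a crude stand-in for the connectivity estimate of [AG09] (Algorithm 2 in the supplementary section), and the constant C and the toy clique stream are our own choices.

import math
import random
from collections import defaultdict

def streaming_cut_sparsifier(edges, n, eps=0.5, C=2.0, seed=0):
    # Each arriving edge is kept with probability inversely proportional to a
    # connectivity estimate between its endpoints in the current sparsifier,
    # then reweighted by 1/p_e so that cut values are preserved in expectation.
    rng = random.Random(seed)
    H = []                                # kept (u, v, weight) triples
    deg = defaultdict(float)              # weighted degrees in H: crude connectivity proxy
    for (u, v) in edges:
        conn = max(1.0, min(deg[u], deg[v]))
        p = min(1.0, C * math.log(max(n, 2)) / (eps ** 2 * conn))
        if rng.random() < p:              # fresh coin toss, drawn after the edge is fixed
            w = 1.0 / p
            H.append((u, v, w))
            deg[u] += w
            deg[v] += w
    return H

# Toy stream: a clique on 50 vertices, one edge at a time.
n = 50
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
H = streaming_cut_sparsifier(edges, n)
print(len(edges), len(H))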
With respect to our adversarial setting, we show that the performance of a merge-and-reduce based streaming k-means algorithm is robust while a popular streaming k-means implementation (not based on importance sampling) is not. Similarly, we show the robustness superiority of a streaming linear regression algorithm based on row sampling over a popular streaming linear regression implementation and over sketching. Streaming k-means In this adversarial clustering setting we consider a series of point batches where all points except those in the last batch are randomly sampled from a two dimensional standard normal distribution and points in the last batch similarly sampled but around a distant center (see the data points realization in both panels of Figure 3). We then feed the point sequence to StreamingKMeans, the streaming k-means implementation of Spark [ZXW+16b] the popular big-data processing framework. As illustrated in the left panel of Figure 3, the resulting two centers are both within the origin. Now, this result occurs regardless of the last batch’s distance from the origin, implying that the per-sample loss performance of the algorithm can be made arbitrarily large. Alternatively, we used a merge-and-reduce based streaming k-means algorithm and show that one of the resulting cluster centers is at the distant cluster (as illustrated in the right panel of Figure 3) thereby keeping the resulting per sample loss at the desired minimum. Specifically we use Streamkm, an implementation of StreamKM++ [AMR+12] from the ClusOpt Core library [Mac20]. Streaming linear regression. Similar to the clustering setting, in the adversarial setting for streaming linear regression all batches except the last one are sampled around a constellation of four points in the plane such that the optimal regression line is of −1 slope through the origin (see the leftmost panel of Figure 4). The last batch of points however, is again far from the origin (L,L) such that the resulting optimal regression line is of slope 1 through the origin2. We compare the performance of LinearRegression from the popular streaming machine learning library River [MHM+20] to our own row sampling based implementation of streaming linear regression along the lines of Algorithm 1 and observe the following: Without the last batch of points, both implementations result in the optimal regression line, however, the River implementation reaches that line only after several iterations, while our implementation is accurate throughout (This is illustrated in the second-left panel of Figure 4). When the last batch is used, nevertheless, Algorithm 1 picks up the drastic change and adapts immediately to a line of the optimal slope (the blue line of the second right panel of Figure 4) while the River implementation update merely moves the line in the desired direction (the orange line in that same panel) but is far from catching up. Finally, the rightmost panel of Figure 4) details the loss trajectory for both implementations. While the River loss skyrockets upon the last batch, the loss of Algorithm 1 remains relatively unaffected, illustrating its adversarial robustness. Note that in both the clustering and linear regression settings above the adversary was not required to consider the algorithms internal randomization to achieve the desired effect (this is due to the local nature of the algorithms computations). This is no longer the case in the following last setting. Sampling vs. sketching. 
Finally, we compare the performance of the leverage sampling Algorithm 1 to sketching. In this setting, for a random unit sketching matrix S (that is, each of its elements is sampled from {−1, 1} with equal probability), we create an adversarial data stream A such that its columns are in the null space of S. As a result, the linear regression as applied to the sketched data S ·A as a whole is unstable and might significantly differ from the resulting linear regression applied to streamed prefixes of the sketched data. As illustrated in Figure 5, this is not the case when applying the linear regression to the original streamed data A using Algorithm 1. Upon the last batch, the performance of the sketching-based regression deteriorates by orders of magnitude, while the performance of Algorithm 1 is not affected. Moreover, the data reduction factor achieved by leveraged sampling3 is almost double compared to the data reduction factor achieved by sketching. Acknowledgments Sandeep Silwal was supported in part by a NSF Graduate Research Fellowship Program. Samson Zhou was supported by a Simons Investigator Award of David P. Woodruff. 2For MSE loss, this occurs for L at least the square root of the number of batches. 3The original stream A contained 2000 samples, each of dimension 10.
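As a final illustration, the null-space attack used in the sampling vs. sketching comparison is easy to reproduce. The sketch below (our own code with made-up dimensions, not the authors' experiment) builds a random ±1 sketching matrix S and streams elements lying in the null space of S: the sketched data never changes, so any estimate computed from it ignores these elements, whereas a sampling-based algorithm still observes them and can choose to keep them.

import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 4
S = rng.choice([-1.0, 1.0], size=(m, d))     # random +/-1 sketching matrix

# Basis for the null space of S: any stream element built from these vectors
# is invisible to the sketch.
_, _, Vt = np.linalg.svd(S)
null_basis = Vt[m:]                          # (d - m) x d

sketch = np.zeros((m, 0))
stream_energy = 0.0
for t in range(100):
    coeffs = rng.standard_normal(d - m)
    a_t = coeffs @ null_basis                # adversarial element in ker(S)
    sketch = np.hstack([sketch, (S @ a_t)[:, None]])   # sketched column is ~ 0
    stream_energy += a_t @ a_t

print(np.linalg.norm(sketch))    # ~ 0: the sketch records essentially nothing
print(stream_energy)             # the stream itself carries plenty of energy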
1. What is the focus of the paper regarding streaming algorithms? 2. What are the strengths and weaknesses of the paper's approach to studying adaptive adversaries? 3. Do you have any questions about the paper's methodology or conclusions? 4. How does the reviewer assess the novelty and significance of the paper's contributions? 5. Are there any specific areas where the reviewer suggests improvements or further discussions?
Summary Of The Paper Review
Summary Of The Paper
Many streaming algorithms developed during the last decades are randomized and have an expected solution guarantee (approximation guarantee). However, the promised guarantee is not with respect to an adaptive adversary that can choose the remaining part of the instance as a function of the decisions taken so far by the algorithm (and thus as a function of the random bits). This has prompted recent work on "adversarial robustness of streaming" to obtain guarantees against these stronger adaptive adversaries. First, it has been shown that sketching-based algorithms (i.e., those that sample a dimension-reduction matrix before the stream) are not robust against adaptive adversaries. This is because the adversary can figure out the used matrix and then feed a worst-case instance. The present paper has a more positive message. They show that a large family of streaming algorithms are in fact also robust against adaptive adversaries. On a high level, they show robustness for sampling-based algorithms that roughly work as follows: before the arrival of an element e, we calculate a threshold for taking that element e based on the current state of the algorithm (and thus already used randomness); however, we use new randomness for deciding whether to take the element or not. Such sampling-based algorithms have been heavily used for, e.g., clustering, graph sparsification and regression. The main result of the paper is this observation together with tons of applications of prior algorithms that the authors list.

Review
The paper is original in that it studies a stronger adversary and makes the insight that a certain kind of sampling-based algorithm is robust against this stronger type of adversary. The paper is less original in the sense that it feels like a long list of applications where the authors have basically verified that the known algorithms "pass the test." (Personally, I'd prefer if they would have focused on one or a few applications in the main body of the paper and showed some nice insights there. But if I understand the authors correctly, the main result is this important insight (point of view) and then all the results basically follow from previous work.) Otherwise, I would say that the paper reads well and is clear (although they don't define all the problems that they list). I also think it is a good paper in terms of significance, in the sense that it is good to know that all these algorithms are in fact robust. So I think the paper is definitely valuable to the research community.

A more detailed comment to the authors: How important is it for your algorithms to know the length of the stream? It seems pretty crucial when you take a union bound over error probabilities, but maybe I am missing something? If not, it would be good to have a discussion of whether it is really necessary to know the length of the stream...
Title Adversarial Robustness of Streaming Algorithms through Importance Sampling Abstract Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm allows coreset constructions in the streaming setting, we thus obtain robust algorithms for k-means, k-median, k-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. To the best of our knowledge, these are the first adversarially robust results for these problems yet require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust. 1 Introduction Robustness against adversarial attacks have recently been at the forefront of algorithmic design for machine learning tasks [GSS15, CW17, AEIK18, MMS+18, TSE+19]. We extend this line of work by studying adversarially robust streaming algorithms. In the streaming model, data points are generated one at a time in a stream and the goal is to compute some meaningful function of the input points while using a limited amount of memory, typically 35th Conference on Neural Information Processing Systems (NeurIPS 2021). sublinear in the total size of the input. The streaming model is applicable in many algorithmic and ML related tasks where the size of the data far exceeds the available storage. Applications of the streaming model include monitoring IP traffic flow, analyzing web search queries [LMV+16], processing large scientific data, feature selection in machine learning [HZZ21, GRB+19, WYWD10], and estimating word statistics in natural language processing [GDC12] to name a few. Streaming algorithms have also been implemented in popular data processing libraries such as Apache Spark which have implementations for streaming tasks such as clustering and linear regression [ZXW+16a]. 
In the adversarial streaming model [MBN+17, BMSC17, AMYZ19, BY20, BJWY20, HKM+20, WZ20, ABD+21, KMNS21], an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. Studying when adversarially robust streaming algorithms are possible is an important problem in lieu of recent interest in adversarial attacks in ML with applications to adaptive data analysis. Formally, we define the model as a two-player game between a streaming algorithm StreamAlg and a source Adversary of adaptive and adversarial input to StreamAlg. At the beginning of the game, a fixed queryQ is determined and asks for a fixed function for the underlying dataset implicitly defined by the stream. The game then proceeds in rounds, and in the t-th round, (1) Adversary computes an update ut ∈ [n] for the stream, which depends on all previous stream updates and all previous outputs from StreamAlg. (2) StreamAlg uses ut to update its data structures Dt, acquires a fresh batch Rt of random bits, and outputs a response At to the query Q. (3) Adversary observes and records the response At. The goal of Adversary is to induce StreamAlg to make an incorrect response At to the query Q at some time t ∈ [m] throughout the stream. Related Works. Adversarial robustness of streaming algorithms has been an important topic of recent research. On the positive note, [BJWY20] gave a robust framework for estimating the Lp norm of points in a stream in the insertion-only model, where previous stream updates cannot later be deleted. Their work thus shows that deletions are integral to the attack of [HW13]. Subsequently, [HKM+20] introduced a new algorithmic design for robust Lp norm estimation algorithms, by using differential privacy to protect the internal randomness of algorithms against the adversary. Although [WZ20] tightened these bounds, showing that essentially no losses related to the size of the input n or the accuracy parameter ε were needed, [KMNS21] showed that this may not be true in general. Specifically, they showed a separation between oblivious and adversarial streaming in the adaptive data analysis problem. [BY20] showed that sampling is not necessarily adversarially robust; they introduce an exponentially sized set system where a constant number of samples, corresponding to the VC-dimension of the set system, may result in a very unrepresentative set of samples. However, they show that with an additional logarithmic overhead in the number of samples, then Bernoulli and or reservoir sampling are adversarially robust. This notion is further formalized by [ABD+21], who showed that the classes that are online learnable requires essentially sample-complexity proportional to the Littlestone’s dimension of the underlying set system, rather than VC dimension. However, these sampling procedures are uniform in the sense that each item in the stream is sampled with the same probability. 
Thus the sampling probability of each item is oblivious to the identity of the item. By contrast, we show the robustness for a variety of algorithms based on non-oblivious sampling, where each stream item is sampled with probability roughly proportional to the “importance” of the item. 1.1 Our Contributions Our main contribution is a powerful yet simple statement that algorithms based on non-oblivious sampling are adversarially robust if informally speaking, the process of sampling each item in the stream can be viewed as using fresh randomness independent of previous steps, even if the sampling probabilities depend on previous steps. Let us describe, very informally, our meta-approach. Suppose we have an adversarial stream of elements given by u1, . . . , un. Our algorithm A will maintain a data structure At at time t which updates as the stream progresses. A will use a function g(At, ut) to determine the probability of sampling item ut to update At to At+1. The function g measures the ‘importance’ of the element ut to the overall problem that we wish to solve. For example, if our application is k-means clustering and ut is a point far away from all previously seen points so far, we want to sample it with a higher probability. We highlight that even though the sampling probability for ut given by g(At, ut) is adversarial, since the adversary designs ut and previous streaming elements, the coin toss performed by our algorithm A to keep item ut is independent of any events that have occurred so far, including the adversary’s actions. This new randomness introduced by the independent coin toss is a key conceptual step in the analysis for all of the applications listed in Figure 1. Contrast this to the situation where a “fixed” data structure or sketch is specified upfront. In this case, we would not be adaptive to which inputs ut the adversary designs to be “important” for our problem which would lead us to potentially disregard such important items rendering the algorithm ineffective. As applications of our meta-approach, we introduce adversarially robust streaming algorithms for two central machine learning tasks, regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. We show that several methods from the streaming algorithms “toolbox”, namely merge and reduce, online leverage score sampling, and edge sampling are adversarially robust “for free.” As a result, existing (and future) streaming algorithms that use these tools are robust as well. We discuss our results in more detail below and provide a summary of our results and applications in Figure 1. We first show that the well-known merge and reduce paradigm is adversarially robust. Since the merge and reduce paradigm defines coreset constructions, we thus obtain robust algorithms for k-means, k-median, Bregman clustering, projective clustering, principal component analysis (PCA), non-negative matrix factorization (NNMF) [LK17]. Theorem 1.1 (Merge and reduce is adversarially robust) Given an offline ε-coreset construction, the merge and reduce framework gives an adversarially robust streaming construction for an ε-coreset with high probability. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model, in which the adversary generates a sequence of row vectors a1, . . . ,an in d-dimensional vector space. For t ∈ [n], the t-th prefix of the stream induces a matrix At ∈ Rt×d with rows a1, . . . ,at. 
We denote this matrix as At = a1 ◦ . . . ◦ at and define κ to be an upper bound on the largest condition number1 of the matrices A1, . . . ,An. Theorem 1.2 (Row sampling is adversarially robust) There exists a row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]: 1the ratio of the largest and smallest nonzero singular values (1) Outputs a matrix Mt such that (1 − ε)A>t At M>t Mt (1 + ε)A>t At, while sampling O ( d2κ ε2 log n log κ ) rows (spectral approximation/subspace embedding/linear regres- sion/generalized regression). (2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖At −AtP‖2F ≤ ‖Mt −MtP‖ 2 F ≤ (1 + ε) ‖At −AtP‖ 2 F , while sampling O ( dkκ ε2 log n log 2 κ ) rows (projection-cost preservation/low-rank approximation). (3) Outputs a matrix Mt such that (1 − ε) ‖Atx‖1 ≤ ‖Mtx‖1 ≤ (1 + ε) ‖Atx‖1, while sampling O ( d2κ ε2 log 2 n log κ ) rows (L1 subspace embedding). Finally, we show that our analysis also applies to algorithms for graph sparsification for in which edges are sampled according to their “importance”. Define κ as the ratio of the largest and the smallest cut sizes in G (see Section 4 and Supplementary Section C for exact details). Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1± ε cut sparsifier with O ( κ2n logn ε2 ) edges with probability 1− 1/ poly(n). Sketching vs Sampling Algorithms. A central tool for randomized streaming algorithms is the use of linear sketches. These methods maintain a data structure f such that after the (i+ 1)-th input xi, we can update f by computing a linear function of xi. Typically, these methods employ a random matrix. For example, if the input consists of vectors, sketching methods will use a random matrix to project the vector into a much smaller dimension space. In [HW13], it was proved no linear sketch can approximate the L2-norm within a polynomial multiplicative factor against such an adaptive adversary. In general, streaming algorithms that use sketching are highly susceptible to the type of attack described in [HW13] where the adversary can effectively ‘learn’ the kernel of the linear function used and send inputs along the kernel. For example, if an adversary knows the kernel of the random matrix used to project the input points, then by sending points that lie on the kernel of the matrix as inputs, the adversary can render the whole streaming algorithm useless. One the other hand, we employ a different family of streaming algorithms that are based on sampling the input rather than sketching it. Surprisingly, this simple change allows one to automatically get many adversarially robust algorithms either “for free” or without new algorithmic overheads. For more information, see Section 1.1. We emphasize that while our techniques are not theoretically sophisticated, we believe its power lies in its simple message that sampling is often superior to sketching for adversarial robustness. In addition to downstream algorithmic and ML applications, this provides an interesting separation and trade-offs between the two paradigms; for non adversarial inputs sketching often gives similar or better performance guarantees for many tasks [BYKS01]. 2 Merge and Reduce We show that the general merge and reduce paradigm is adversarially robust. 
Merge and reduce is widely used for the construction of a coreset, which provides dimensionality reduction on the size of an underlying dataset, so that algorithms for downstream applications can run more efficiently: Definition 2.1 (ε coreset) Let P ⊂ X be a set of elements from a universe X , z ≥ 0, ε ∈ (0, 1), and (P,dist, Q) be a query space. Then a subset C equipped with a weight function w : P → R is called an ε-coreset with respect to the query space (P,dist, Q) if (1− ε) ∑ p∈P dist(p, Q)z ≤ ∑ p∈C w(p) dist(p, Q)z ≤ (1 + ε) ∑ p∈P dist(p, Q)z. The study of efficient offline coreset constructions for a variety of geometric and algebraic problems forms a long line of active research. For example, offline coreset constructions are known for linear regression, low-rank approximation, L1-subspace embedding, k-means clustering, k-median clustering, k-center, support vector machine, Gaussian mixture models, M -estimators, Bregman clustering, projective clustering, principal component analysis, k-line center, j-subspace approximation, and so on. Thus, our result essentially shows that using the merge and reduce paradigm, these offline coreset constructions can be extended to obtain robust and accurate streaming algorithms. The merge and reduce paradigm works as follows. Suppose we have a stream p1, . . . , pn of length n = 2k for some Theorem 2.2 There exists a merge-and-reduce row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]: (1) Outputs a matrix Mt such that (1 − ε)A>t At M>t Mt (1 + ε)A>t At, while sampling O ( d2 ε2 log 4 n log κ ) rows (spectral approximation/subspace embedding/linear regres- sion/generalized regression). (2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖At −AtP‖2F ≤ ‖Mt −MtP‖ 2 F ≤ (1 + ε) ‖At −AtP‖ 2 F , while sampling O ( k ε2 log 4 n log2 κ ) rows (projection-cost preservation/low-rank approximation). (3) Outputs a matrix Mt such that (1 − ε) ‖Atx‖1 ≤ ‖Mtx‖1 ≤ (1 + ε) ‖Atx‖1, while sampling O ( d ε2 log 5 n log κ ) rows (L1 subspace embedding). Using coresets of [HV20], then Theorem 1.1 also gives applications for (k, z)-clustering such as k-median for z = 1 and k-means for z = 2. Moreover, [LK17] noted that constructions of [FL11] give coresets for Bregman clustering, which handles µ-similar Bregman divergences such as the Itakura-Saito distance, KL-divergence, Mahalanobis distance, etc. Theorem 2.3 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal (k, z)clustering, k-means clustering (z = 2), and k-median clustering (z = 1), while storing O ( 1 ε2z+2 k log 2z+2 n log k log k lognε ) points. (2) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal k-Bregman clustering, while storing O ( 1 ε2 dk 3 log3 n ) points. Using the sensitivity bounds of [VX12a, VX12b] and the coreset constructions of [BFL16], then Theorem 1.1 also gives applications for the following shape fitting problems: Theorem 2.4 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of lines that gives a (1 + ε)-approximation to the optimal k-lines clustering, while storing O ( d ε2 f(d, k)k f(d,k) log4 n ) points of Rd, for a fixed function f(d, k). 
(2) Outputs a subspace that gives a (1 + ε)-approximation to the optimal dimension j subspace approximation, while storing O ( d ε2 g(d, j)k g(d,j) log4 n ) points of Rd, for a fixed function g(d, j). (3) Outputs a set of subspaces that gives a (1+ε)-approximation to the optimal (j, k)-projective clustering, while storing O ( d ε2 h(d, j, k) log 3 n(log n)h(d,j,k) ) points of Rds, for a fixed function h(d, j, k), for a set of input points with integer coordinates. Adversarially robust approximation algorithms for Bayesian logistic regression, Gaussian mixture models, generative adversarial networks (GANs), and support vector machine can be obtained from Theorem 1.1 and coreset constructions of [HCB16, FKW19, SZG+20, TBFR20]; a significant number of additional applications of Theorem 1.1 using coreset constructions can be seen from recent surveys on coresets, e.g., see [LK17, Fel20]. The merge-and-reduce framework also has applications to a large number of other problems such as finding heavy-hitters [MG82] or frequent directions [GLPW16] and in various settings, such as the sliding window model [DGIM02], time decay models [BLUZ19] or for at-the-time or back-in-time queries [SZP+21]. 3 Adversarial Robustness of Subspace Embedding and Applications We use [n] to represent the set {1, . . . , n} for an integer n > 0. We typically use bold font to denote vectors and matrices. For a matrix A, we use A−1 to denote the Moore-Penrose inverse of A. We first formally define the goals of our algorithms: Problem 3.1 (Spectral Approximation) Given a matrix A ∈ Rn×d and an approximation parameter ε > 0, the goal is to output a matrix M ∈ Rm×d with m n such that (1 − ε) ‖Ax‖2 ≤ ‖Mx‖2 ≤ (1 + ε) ‖Ax‖2 for all x ∈ Rd or equivalently, (1− ε)A>A M>M (1 + ε)A>A. We note that linear regression is a well-known specific application of spectral approximation. Problem 3.2 (Projection-Cost Preservation) Given a matrix A ∈ Rn×d, a rank parameter k > 0, and an approximation parameter ε > 0, the goal is to find a matrix M ∈ Rm×d with m n such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖A−AP‖2F ≤ ‖M−MP‖ 2 F ≤ (1 + ε) ‖A−AP‖ 2 F . Note if M is a projection-cost preservation of A, then its best low-rank approximation can be used to find a projection matrix that gives an approximation of the best low-rank approximation to A. Problem 3.3 (Low-Rank Approximation) Given a matrix A ∈ Rn×d, a rank parameter k > 0, and an approximation parameter ε > 0, find a rank k matrix M ∈ Rn×d such that (1 − ε) ∥∥A−A(k)∥∥2F ≤ ‖A−M‖2F ≤ (1 + ε)∥∥A−A(k)∥∥2F , where A(k) for a matrix A denotes the best rank k approximation to A. Problem 3.4 (L1-Subspace Embedding) Given a matrix A ∈ Rn×d and an approximation parameter ε > 0, the goal is to output a matrix M ∈ Rm×d with m n such that (1 − ε) ‖Ax‖1 ≤ ‖Mx‖1 ≤ (1 + ε) ‖Ax‖1 for all x ∈ Rd. We consider the general class of row sampling algorithms, e.g., [CMP16, BDM+20]. Here we maintain a Lp subspace embedding of the underlying matrix by approximating the online Lp sensitivities of each row as a measure of importance to perform sampling. For more details, see Algorithm 1. Definition 3.5 (Online Lp Sensitivities) For a matrix A = a1 ◦ . . . ◦ an ∈ Rn×d, the online sensitivity of row ai for each i ∈ [n] is the quantity maxx∈Rd |〈ai,x〉|p ‖Aix‖pp , where Ai−1 = a1 ◦ . . .◦ai−1. Algorithm 1 Row sampling based algorithms, e.g., [CMP16, BDM+20] Input: A stream of rows a1, . . . ,an ∈ Rd, parameter p > 0, and an accuracy parameter ε > 0 Output: A (1 + ε) Lp subspace embedding. 
1: M ← ∅
2: α ← C (d/ε²) log n, for a sufficiently large constant C > 0
3: for each row a_i, i ∈ [n] do
4:   if a_i ∈ span(M) then
5:     τ_i ← 2 · max_{x ∈ span(M)} |⟨a_i, x⟩|^p / (‖Mx‖_p^p + |⟨a_i, x⟩|^p)   ▷ see Remark 3.6
6:   else
7:     τ_i ← 1
8:   p_i ← min(1, α τ_i)
9:   With probability p_i, M ← M ◦ (a_i / p_i^{1/p})   ▷ online sensitivity sampling
10: return M
We remark on standard computation or approximation of the online Lp sensitivities, e.g., see [CEM+15, CMP16, CMM17, BDM+20]. Remark 3.6 We note that for p = 1, a constant-factor approximation to any online Lp sensitivity τ_i such that τ_i > 1/poly(n) can be computed in polynomial time using (offline) linear programming, while for p = 2, τ_i is equivalent to the online leverage score of a_i, which has the closed-form expression a_i^⊤ (A_i^⊤ A_i)^{-1} a_i and can be approximated by a_i^⊤ (M^⊤ M)^{-1} a_i, conditioned on M being a good approximation to A_{i-1} when a_i is in the span of M. Otherwise, τ_i takes value 1 when a_i is not in the span of M. Lemma 3.7 (Adversarially Robust Lp Subspace Embedding and Linear Regression) Given ε > 0, p ∈ {1, 2}, and a matrix A ∈ R^{n×d} whose rows a_1, . . . , a_n arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) spectral approximation with high probability. The algorithm samples O((d^2 κ^2 / ε^2) log n log κ) rows for p = 2 and O((d^2 λ^2 / ε^2) log^2 n log κ) rows for p = 1, with high probability, where λ is a ratio between upper and lower bounds on ‖A‖_1. We also show robustness of row sampling for low-rank approximation by using online ridge-leverage scores. Together, Lemma 3.7 and Lemma 3.8 give Theorem 1.2. Lemma 3.8 (Adversarially Robust Low-Rank Approximation) Given an accuracy parameter ε > 0, a rank parameter k > 0, and a matrix A ∈ R^{n×d} whose rows a_1, . . . , a_n arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) low-rank approximation with high probability. The algorithm samples O((kd κ^2 / ε^2) log n log κ) rows with high probability. 4 Graph Sparsification In this section, we highlight how the sampling paradigm gives rise to an adversarially robust streaming algorithm for graph sparsification. First, we motivate the problem of graph sparsification. Massive graphs arise in many theoretical and applied settings, such as in the analysis of large social or biological networks. A key bottleneck in such analysis is the large computational resources, in both memory and time, needed. Therefore, it is desirable to obtain a representation of a graph that takes up far less space while still preserving the underlying “structure” of the graph. Usually the number of vertices is much smaller than the number of edges; for example, in typical real-world graphs the number of vertices can be several orders of magnitude smaller than the number of edges (for example, see the graph datasets in [RA15]). Hence, a natural benchmark is to reduce the number of edges to be comparable to the number of vertices. The most common notion of graph sparsification is that of preserving the value of all cuts in the graph by keeping a small weighted set of edges of the original graph. More specifically, suppose our graph is G = (V,E) and for simplicity assume all the edges have weight 1. A cut of the graph is a partition of V = (C, V \ C) and the value of a cut, ValG(C), is defined as the number of edges that cross between the vertices in C and V \ C.
A graph H on the same set of vertices as V is a sparsifier if it preserves the value of every cut in G and has a few number of weighted edges. For a precise formulation, see Problem 4.1. In addition to being algorithmically tractable, this formulation is natural since it preserves the underlying cluster structure of the graph. For example, if there are two well connected components separated by a sparse cut, i.e. two distinct communities, then the sparsifier according to the definition above will ensure that the two communities are still well separated. Conversely, by considering any cut within a well connected component, it will also ensure that any community remains well connected (for more details, see [SPR11] and references therein). Lastly, graph sparsification has been considered in other frameworks such as differential privacy [EKKL20], distributed optimization [WWLZ18], and even learning graph sparsification using deep learning methods [ZZC+20]. The formal problem definition of graph sparsification is as follows. Problem 4.1 (Graph Sparsification) Given a graph weighted G = (V,E) with |V | = n, |E| = m, and an approximation parameter ε > 0, compute a weighted subgraph H of G on the same set of vertices such that (1) every cut in H has value between 1− ε and 1 + ε times its value in G: (1− ε)ValG(C) ≤ ValH(C) ≤ (1 + ε)ValG(C) for all cuts C where ValG(C),ValH(C) denotes the cost of the cut in the graphs G and H respectively and for the latter quantity, the edges are weighted, (2) the number of edges in H is O ( n logn ε2 ) . Ignoring dependence on ε, there are previous results that already get sparsifiers H with O(n log n) edges [BK96, SS08]. Their setting is when the entire graph is present up-front in memory. In contrast, we are interested in the streaming setting where future edges can depend on past edges as well as revealed randomness of an algorithm while processing the edges. Our main goal is to show that the streaming algorithm from [AG09] (presented in Algorithm 2 in the supplementary section), which uses a sampling procedure to sample edges in a stream, is adversarially robust, albeit with a slightly worse guarantee for the number of edges. Following the proof techniques of the non streaming algorithm given in [BK96], it is shown in [AG09] that Algorithm 2 outputs a subgraph H such that H satisfies the conditions of Problem 4.1 with probability 1− 1/ poly(n) where the probability can be boosted by taking a larger constant C. We must show that this still holds true if the edges of the stream are adversarially chosen, i.e., when new edges in the stream depend on the previous edges and the randomness used by the algorithm so far. We thus again use a martingale argument; the full details are given in Supplementary Section C. As in Section 3, we let κ1 and κ2 to be deterministic lower/upper bounds on the size of any cut in G and define κ = κ2/κ1. Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1± ε cut sparsifier with O ( κ2n logn ε2 ) edges with probability 1− 1/ poly(n). 5 Experiments To illustrate the robustness of importance-sampling-based streaming algorithms we devise adversarial settings for clustering and linear regression. 
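The row-sampling regression baseline used in these experiments follows Algorithm 1; as a point of reference, a minimal NumPy sketch of that procedure for p = 2 is given below. The function name, the constant C, the stand-in for log n, and the floating-point span test are illustrative assumptions, not the exact implementation used in the experiments.

```python
import numpy as np

def robust_row_sampling(rows, eps=0.5, C=10.0, rng=None):
    """Hedged sketch of Algorithm 1 for p = 2 (online leverage-score sampling).

    `rows` yields d-dimensional arrays one at a time; the returned matrix M is
    intended as a (1 +/- eps) spectral approximation of the stream seen so far.
    """
    rng = rng or np.random.default_rng()
    M = None                     # sampled (rescaled) rows kept so far
    n_guess = 10_000             # illustrative stand-in for log n in alpha
    for a in rows:
        a = np.asarray(a, dtype=float)
        d = a.shape[0]
        alpha = C * d / eps**2 * np.log(n_guess)
        if M is None or np.linalg.matrix_rank(np.vstack([M, a])) > np.linalg.matrix_rank(M):
            tau = 1.0            # a is outside the row span of M: keep it with certainty
        else:
            # online leverage-score approximation a^T (M^T M)^+ a, cf. Remark 3.6
            tau = float(a @ np.linalg.pinv(M.T @ M) @ a)
        p = min(1.0, alpha * tau)
        if rng.random() < p:     # fresh randomness, independent of the adversary's choices
            scaled = a / np.sqrt(p)
            M = scaled[None, :] if M is None else np.vstack([M, scaled])
    return M
```

The fresh coin toss on each row is the point emphasized in Section 1.1: the sampling probability may depend adversarially on the past, but the randomness that decides whether the row is kept does not.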
With respect to our adversarial setting, we show that the performance of a merge-and-reduce based streaming k-means algorithm is robust while a popular streaming k-means implementation (not based on importance sampling) is not. Similarly, we show the robustness superiority of a streaming linear regression algorithm based on row sampling over a popular streaming linear regression implementation and over sketching. Streaming k-means In this adversarial clustering setting we consider a series of point batches where all points except those in the last batch are randomly sampled from a two dimensional standard normal distribution and points in the last batch similarly sampled but around a distant center (see the data points realization in both panels of Figure 3). We then feed the point sequence to StreamingKMeans, the streaming k-means implementation of Spark [ZXW+16b] the popular big-data processing framework. As illustrated in the left panel of Figure 3, the resulting two centers are both within the origin. Now, this result occurs regardless of the last batch’s distance from the origin, implying that the per-sample loss performance of the algorithm can be made arbitrarily large. Alternatively, we used a merge-and-reduce based streaming k-means algorithm and show that one of the resulting cluster centers is at the distant cluster (as illustrated in the right panel of Figure 3) thereby keeping the resulting per sample loss at the desired minimum. Specifically we use Streamkm, an implementation of StreamKM++ [AMR+12] from the ClusOpt Core library [Mac20]. Streaming linear regression. Similar to the clustering setting, in the adversarial setting for streaming linear regression all batches except the last one are sampled around a constellation of four points in the plane such that the optimal regression line is of −1 slope through the origin (see the leftmost panel of Figure 4). The last batch of points however, is again far from the origin (L,L) such that the resulting optimal regression line is of slope 1 through the origin2. We compare the performance of LinearRegression from the popular streaming machine learning library River [MHM+20] to our own row sampling based implementation of streaming linear regression along the lines of Algorithm 1 and observe the following: Without the last batch of points, both implementations result in the optimal regression line, however, the River implementation reaches that line only after several iterations, while our implementation is accurate throughout (This is illustrated in the second-left panel of Figure 4). When the last batch is used, nevertheless, Algorithm 1 picks up the drastic change and adapts immediately to a line of the optimal slope (the blue line of the second right panel of Figure 4) while the River implementation update merely moves the line in the desired direction (the orange line in that same panel) but is far from catching up. Finally, the rightmost panel of Figure 4) details the loss trajectory for both implementations. While the River loss skyrockets upon the last batch, the loss of Algorithm 1 remains relatively unaffected, illustrating its adversarial robustness. Note that in both the clustering and linear regression settings above the adversary was not required to consider the algorithms internal randomization to achieve the desired effect (this is due to the local nature of the algorithms computations). This is no longer the case in the following last setting. Sampling vs. sketching. 
Finally, we compare the performance of the leverage sampling Algorithm 1 to sketching. In this setting, for a random unit sketching matrix S (that is, each of its elements is sampled from {−1, 1} with equal probability), we create an adversarial data stream A such that its columns are in the null space of S. As a result, the linear regression as applied to the sketched data S ·A as a whole is unstable and might significantly differ from the resulting linear regression applied to streamed prefixes of the sketched data. As illustrated in Figure 5, this is not the case when applying the linear regression to the original streamed data A using Algorithm 1. Upon the last batch, the performance of the sketching-based regression deteriorates by orders of magnitude, while the performance of Algorithm 1 is not affected. Moreover, the data reduction factor achieved by leveraged sampling3 is almost double compared to the data reduction factor achieved by sketching. Acknowledgments Sandeep Silwal was supported in part by a NSF Graduate Research Fellowship Program. Samson Zhou was supported by a Simons Investigator Award of David P. Woodruff. 2For MSE loss, this occurs for L at least the square root of the number of batches. 3The original stream A contained 2000 samples, each of dimension 10.
1. What is the focus of the paper regarding streaming algorithms? 2. What are the advantages of the proposed approach, particularly in terms of adversarial robustness? 3. What are the limitations of the paper, especially regarding its technical strength and originality? 4. How does the reviewer assess the significance of the results obtained by the paper? 5. Are there any concerns or questions regarding the paper's content, experimental support, or overall impact?
Summary Of The Paper Review
Summary Of The Paper This paper studies adversarially robust streaming algorithms. In the streaming model, there are many randomized algorithms. An adversary gives a sequence of updates to the algorithm adaptively, trying to learn the random bits used by the algorithm from its outputs. An adversarially robust streaming algorithm is robust to any such adversarial updates. This paper makes the observation that if the streaming algorithm is sampling based and the random bit for each item is fresh, then the algorithm is robust even though the sampling probability depends on the previous random bits.
Review
Pros:
- Adversarially robust streaming algorithms are important in the streaming literature. Based on the observations of this paper, many previous streaming algorithms are automatically adversarially robust. These include many fundamental ML, numerical linear algebra, and graph problems, such as clustering, Gaussian mixture models, SVM, k-clustering, regression, GANs, low-rank approximation, spectral approximation, graph sparsification, etc.
- The paper is well-written in general.
- The experiments support the results.
Cons:
- My major concern is the technical strength of the paper. Although the paper obtains many important results, all of them follow directly from previous streaming algorithms. The authors show that the merge-and-reduce framework and the row sampling framework are adversarially robust in general, but the techniques for the analysis are not hard; basically, the structures of these frameworks make the analysis work in a fairly straightforward way.
Overall, since the results are important but the technical strength is limited, I think the paper is marginally above the bar.
NIPS
Title Adversarial Robustness of Streaming Algorithms through Importance Sampling Abstract Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm allows coreset constructions in the streaming setting, we thus obtain robust algorithms for k-means, k-median, k-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. To the best of our knowledge, these are the first adversarially robust results for these problems yet require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust. 1 Introduction Robustness against adversarial attacks have recently been at the forefront of algorithmic design for machine learning tasks [GSS15, CW17, AEIK18, MMS+18, TSE+19]. We extend this line of work by studying adversarially robust streaming algorithms. In the streaming model, data points are generated one at a time in a stream and the goal is to compute some meaningful function of the input points while using a limited amount of memory, typically 35th Conference on Neural Information Processing Systems (NeurIPS 2021). sublinear in the total size of the input. The streaming model is applicable in many algorithmic and ML related tasks where the size of the data far exceeds the available storage. Applications of the streaming model include monitoring IP traffic flow, analyzing web search queries [LMV+16], processing large scientific data, feature selection in machine learning [HZZ21, GRB+19, WYWD10], and estimating word statistics in natural language processing [GDC12] to name a few. Streaming algorithms have also been implemented in popular data processing libraries such as Apache Spark which have implementations for streaming tasks such as clustering and linear regression [ZXW+16a]. 
In the adversarial streaming model [MBN+17, BMSC17, AMYZ19, BY20, BJWY20, HKM+20, WZ20, ABD+21, KMNS21], an adversary gives an algorithm a sequence of adaptively chosen updates u1, . . . , un as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. Studying when adversarially robust streaming algorithms are possible is an important problem in lieu of recent interest in adversarial attacks in ML with applications to adaptive data analysis. Formally, we define the model as a two-player game between a streaming algorithm StreamAlg and a source Adversary of adaptive and adversarial input to StreamAlg. At the beginning of the game, a fixed queryQ is determined and asks for a fixed function for the underlying dataset implicitly defined by the stream. The game then proceeds in rounds, and in the t-th round, (1) Adversary computes an update ut ∈ [n] for the stream, which depends on all previous stream updates and all previous outputs from StreamAlg. (2) StreamAlg uses ut to update its data structures Dt, acquires a fresh batch Rt of random bits, and outputs a response At to the query Q. (3) Adversary observes and records the response At. The goal of Adversary is to induce StreamAlg to make an incorrect response At to the query Q at some time t ∈ [m] throughout the stream. Related Works. Adversarial robustness of streaming algorithms has been an important topic of recent research. On the positive note, [BJWY20] gave a robust framework for estimating the Lp norm of points in a stream in the insertion-only model, where previous stream updates cannot later be deleted. Their work thus shows that deletions are integral to the attack of [HW13]. Subsequently, [HKM+20] introduced a new algorithmic design for robust Lp norm estimation algorithms, by using differential privacy to protect the internal randomness of algorithms against the adversary. Although [WZ20] tightened these bounds, showing that essentially no losses related to the size of the input n or the accuracy parameter ε were needed, [KMNS21] showed that this may not be true in general. Specifically, they showed a separation between oblivious and adversarial streaming in the adaptive data analysis problem. [BY20] showed that sampling is not necessarily adversarially robust; they introduce an exponentially sized set system where a constant number of samples, corresponding to the VC-dimension of the set system, may result in a very unrepresentative set of samples. However, they show that with an additional logarithmic overhead in the number of samples, then Bernoulli and or reservoir sampling are adversarially robust. This notion is further formalized by [ABD+21], who showed that the classes that are online learnable requires essentially sample-complexity proportional to the Littlestone’s dimension of the underlying set system, rather than VC dimension. However, these sampling procedures are uniform in the sense that each item in the stream is sampled with the same probability. 
Thus the sampling probability of each item is oblivious to the identity of the item. By contrast, we show the robustness for a variety of algorithms based on non-oblivious sampling, where each stream item is sampled with probability roughly proportional to the “importance” of the item. 1.1 Our Contributions Our main contribution is a powerful yet simple statement that algorithms based on non-oblivious sampling are adversarially robust if informally speaking, the process of sampling each item in the stream can be viewed as using fresh randomness independent of previous steps, even if the sampling probabilities depend on previous steps. Let us describe, very informally, our meta-approach. Suppose we have an adversarial stream of elements given by u1, . . . , un. Our algorithm A will maintain a data structure At at time t which updates as the stream progresses. A will use a function g(At, ut) to determine the probability of sampling item ut to update At to At+1. The function g measures the ‘importance’ of the element ut to the overall problem that we wish to solve. For example, if our application is k-means clustering and ut is a point far away from all previously seen points so far, we want to sample it with a higher probability. We highlight that even though the sampling probability for ut given by g(At, ut) is adversarial, since the adversary designs ut and previous streaming elements, the coin toss performed by our algorithm A to keep item ut is independent of any events that have occurred so far, including the adversary’s actions. This new randomness introduced by the independent coin toss is a key conceptual step in the analysis for all of the applications listed in Figure 1. Contrast this to the situation where a “fixed” data structure or sketch is specified upfront. In this case, we would not be adaptive to which inputs ut the adversary designs to be “important” for our problem which would lead us to potentially disregard such important items rendering the algorithm ineffective. As applications of our meta-approach, we introduce adversarially robust streaming algorithms for two central machine learning tasks, regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. We show that several methods from the streaming algorithms “toolbox”, namely merge and reduce, online leverage score sampling, and edge sampling are adversarially robust “for free.” As a result, existing (and future) streaming algorithms that use these tools are robust as well. We discuss our results in more detail below and provide a summary of our results and applications in Figure 1. We first show that the well-known merge and reduce paradigm is adversarially robust. Since the merge and reduce paradigm defines coreset constructions, we thus obtain robust algorithms for k-means, k-median, Bregman clustering, projective clustering, principal component analysis (PCA), non-negative matrix factorization (NNMF) [LK17]. Theorem 1.1 (Merge and reduce is adversarially robust) Given an offline ε-coreset construction, the merge and reduce framework gives an adversarially robust streaming construction for an ε-coreset with high probability. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model, in which the adversary generates a sequence of row vectors a1, . . . ,an in d-dimensional vector space. For t ∈ [n], the t-th prefix of the stream induces a matrix At ∈ Rt×d with rows a1, . . . ,at. 
We denote this matrix as At = a1 ◦ . . . ◦ at and define κ to be an upper bound on the largest condition number1 of the matrices A1, . . . ,An. Theorem 1.2 (Row sampling is adversarially robust) There exists a row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]: 1the ratio of the largest and smallest nonzero singular values (1) Outputs a matrix Mt such that (1 − ε)A>t At M>t Mt (1 + ε)A>t At, while sampling O ( d2κ ε2 log n log κ ) rows (spectral approximation/subspace embedding/linear regres- sion/generalized regression). (2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖At −AtP‖2F ≤ ‖Mt −MtP‖ 2 F ≤ (1 + ε) ‖At −AtP‖ 2 F , while sampling O ( dkκ ε2 log n log 2 κ ) rows (projection-cost preservation/low-rank approximation). (3) Outputs a matrix Mt such that (1 − ε) ‖Atx‖1 ≤ ‖Mtx‖1 ≤ (1 + ε) ‖Atx‖1, while sampling O ( d2κ ε2 log 2 n log κ ) rows (L1 subspace embedding). Finally, we show that our analysis also applies to algorithms for graph sparsification for in which edges are sampled according to their “importance”. Define κ as the ratio of the largest and the smallest cut sizes in G (see Section 4 and Supplementary Section C for exact details). Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1± ε cut sparsifier with O ( κ2n logn ε2 ) edges with probability 1− 1/ poly(n). Sketching vs Sampling Algorithms. A central tool for randomized streaming algorithms is the use of linear sketches. These methods maintain a data structure f such that after the (i+ 1)-th input xi, we can update f by computing a linear function of xi. Typically, these methods employ a random matrix. For example, if the input consists of vectors, sketching methods will use a random matrix to project the vector into a much smaller dimension space. In [HW13], it was proved no linear sketch can approximate the L2-norm within a polynomial multiplicative factor against such an adaptive adversary. In general, streaming algorithms that use sketching are highly susceptible to the type of attack described in [HW13] where the adversary can effectively ‘learn’ the kernel of the linear function used and send inputs along the kernel. For example, if an adversary knows the kernel of the random matrix used to project the input points, then by sending points that lie on the kernel of the matrix as inputs, the adversary can render the whole streaming algorithm useless. One the other hand, we employ a different family of streaming algorithms that are based on sampling the input rather than sketching it. Surprisingly, this simple change allows one to automatically get many adversarially robust algorithms either “for free” or without new algorithmic overheads. For more information, see Section 1.1. We emphasize that while our techniques are not theoretically sophisticated, we believe its power lies in its simple message that sampling is often superior to sketching for adversarial robustness. In addition to downstream algorithmic and ML applications, this provides an interesting separation and trade-offs between the two paradigms; for non adversarial inputs sketching often gives similar or better performance guarantees for many tasks [BYKS01]. 2 Merge and Reduce We show that the general merge and reduce paradigm is adversarially robust. 
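As a concrete illustration of the paradigm analyzed in this section, the following is a minimal Python sketch of a merge-and-reduce skeleton over a stream; the abstract `reduce_fn` stands for any offline ε-coreset construction, and the block size, class name, and interface are illustrative assumptions rather than a prescribed implementation. The coreset definition and the formal guarantees follow.

```python
from typing import Callable, Dict, List, Tuple

Weighted = List[Tuple[tuple, float]]   # (point, weight) pairs

class MergeAndReduce:
    """Hedged sketch: maintain coresets of geometrically growing 'ranks'."""

    def __init__(self, reduce_fn: Callable[[Weighted], Weighted], block_size: int = 64):
        self.reduce_fn = reduce_fn        # any offline eps-coreset construction
        self.block_size = block_size
        self.buffer: Weighted = []
        self.buckets: Dict[int, Weighted] = {}   # rank -> coreset of that rank

    def insert(self, point: tuple) -> None:
        self.buffer.append((point, 1.0))
        if len(self.buffer) == self.block_size:
            self._carry(self.reduce_fn(self.buffer), rank=0)
            self.buffer = []

    def _carry(self, coreset: Weighted, rank: int) -> None:
        # binary-counter style carrying: merge equal-rank coresets, then reduce
        while rank in self.buckets:
            merged = self.buckets.pop(rank) + coreset
            coreset = self.reduce_fn(merged)
            rank += 1
        self.buckets[rank] = coreset

    def query(self) -> Weighted:
        # the union of all stored coresets (plus the raw buffer) serves as the answer
        out: Weighted = list(self.buffer)
        for c in self.buckets.values():
            out.extend(c)
        return out
```

Since at most one bucket per rank is stored, only O(log n) coresets are kept at any time, which is what makes the paradigm attractive in the streaming setting.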
Merge and reduce is widely used for the construction of a coreset, which provides dimensionality reduction on the size of an underlying dataset, so that algorithms for downstream applications can run more efficiently: Definition 2.1 (ε coreset) Let P ⊂ X be a set of elements from a universe X , z ≥ 0, ε ∈ (0, 1), and (P,dist, Q) be a query space. Then a subset C equipped with a weight function w : P → R is called an ε-coreset with respect to the query space (P,dist, Q) if (1− ε) ∑ p∈P dist(p, Q)z ≤ ∑ p∈C w(p) dist(p, Q)z ≤ (1 + ε) ∑ p∈P dist(p, Q)z. The study of efficient offline coreset constructions for a variety of geometric and algebraic problems forms a long line of active research. For example, offline coreset constructions are known for linear regression, low-rank approximation, L1-subspace embedding, k-means clustering, k-median clustering, k-center, support vector machine, Gaussian mixture models, M -estimators, Bregman clustering, projective clustering, principal component analysis, k-line center, j-subspace approximation, and so on. Thus, our result essentially shows that using the merge and reduce paradigm, these offline coreset constructions can be extended to obtain robust and accurate streaming algorithms. The merge and reduce paradigm works as follows. Suppose we have a stream p1, . . . , pn of length n = 2k for some Theorem 2.2 There exists a merge-and-reduce row sampling based framework for adversarially robust streaming algorithms that at each time t ∈ [n]: (1) Outputs a matrix Mt such that (1 − ε)A>t At M>t Mt (1 + ε)A>t At, while sampling O ( d2 ε2 log 4 n log κ ) rows (spectral approximation/subspace embedding/linear regres- sion/generalized regression). (2) Outputs a matrix Mt such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖At −AtP‖2F ≤ ‖Mt −MtP‖ 2 F ≤ (1 + ε) ‖At −AtP‖ 2 F , while sampling O ( k ε2 log 4 n log2 κ ) rows (projection-cost preservation/low-rank approximation). (3) Outputs a matrix Mt such that (1 − ε) ‖Atx‖1 ≤ ‖Mtx‖1 ≤ (1 + ε) ‖Atx‖1, while sampling O ( d ε2 log 5 n log κ ) rows (L1 subspace embedding). Using coresets of [HV20], then Theorem 1.1 also gives applications for (k, z)-clustering such as k-median for z = 1 and k-means for z = 2. Moreover, [LK17] noted that constructions of [FL11] give coresets for Bregman clustering, which handles µ-similar Bregman divergences such as the Itakura-Saito distance, KL-divergence, Mahalanobis distance, etc. Theorem 2.3 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal (k, z)clustering, k-means clustering (z = 2), and k-median clustering (z = 1), while storing O ( 1 ε2z+2 k log 2z+2 n log k log k lognε ) points. (2) Outputs a set of centers that gives a (1 + ε)-approximation to the optimal k-Bregman clustering, while storing O ( 1 ε2 dk 3 log3 n ) points. Using the sensitivity bounds of [VX12a, VX12b] and the coreset constructions of [BFL16], then Theorem 1.1 also gives applications for the following shape fitting problems: Theorem 2.4 There exists a merge-and-reduce importance sampling based framework for adversarially robust streaming algorithms that at each time t: (1) Outputs a set of lines that gives a (1 + ε)-approximation to the optimal k-lines clustering, while storing O ( d ε2 f(d, k)k f(d,k) log4 n ) points of Rd, for a fixed function f(d, k). 
(2) Outputs a subspace that gives a (1 + ε)-approximation to the optimal dimension j subspace approximation, while storing O ( d ε2 g(d, j)k g(d,j) log4 n ) points of Rd, for a fixed function g(d, j). (3) Outputs a set of subspaces that gives a (1+ε)-approximation to the optimal (j, k)-projective clustering, while storing O ( d ε2 h(d, j, k) log 3 n(log n)h(d,j,k) ) points of Rds, for a fixed function h(d, j, k), for a set of input points with integer coordinates. Adversarially robust approximation algorithms for Bayesian logistic regression, Gaussian mixture models, generative adversarial networks (GANs), and support vector machine can be obtained from Theorem 1.1 and coreset constructions of [HCB16, FKW19, SZG+20, TBFR20]; a significant number of additional applications of Theorem 1.1 using coreset constructions can be seen from recent surveys on coresets, e.g., see [LK17, Fel20]. The merge-and-reduce framework also has applications to a large number of other problems such as finding heavy-hitters [MG82] or frequent directions [GLPW16] and in various settings, such as the sliding window model [DGIM02], time decay models [BLUZ19] or for at-the-time or back-in-time queries [SZP+21]. 3 Adversarial Robustness of Subspace Embedding and Applications We use [n] to represent the set {1, . . . , n} for an integer n > 0. We typically use bold font to denote vectors and matrices. For a matrix A, we use A−1 to denote the Moore-Penrose inverse of A. We first formally define the goals of our algorithms: Problem 3.1 (Spectral Approximation) Given a matrix A ∈ Rn×d and an approximation parameter ε > 0, the goal is to output a matrix M ∈ Rm×d with m n such that (1 − ε) ‖Ax‖2 ≤ ‖Mx‖2 ≤ (1 + ε) ‖Ax‖2 for all x ∈ Rd or equivalently, (1− ε)A>A M>M (1 + ε)A>A. We note that linear regression is a well-known specific application of spectral approximation. Problem 3.2 (Projection-Cost Preservation) Given a matrix A ∈ Rn×d, a rank parameter k > 0, and an approximation parameter ε > 0, the goal is to find a matrix M ∈ Rm×d with m n such that for all rank k orthogonal projection matrices P ∈ Rd×d, (1− ε) ‖A−AP‖2F ≤ ‖M−MP‖ 2 F ≤ (1 + ε) ‖A−AP‖ 2 F . Note if M is a projection-cost preservation of A, then its best low-rank approximation can be used to find a projection matrix that gives an approximation of the best low-rank approximation to A. Problem 3.3 (Low-Rank Approximation) Given a matrix A ∈ Rn×d, a rank parameter k > 0, and an approximation parameter ε > 0, find a rank k matrix M ∈ Rn×d such that (1 − ε) ∥∥A−A(k)∥∥2F ≤ ‖A−M‖2F ≤ (1 + ε)∥∥A−A(k)∥∥2F , where A(k) for a matrix A denotes the best rank k approximation to A. Problem 3.4 (L1-Subspace Embedding) Given a matrix A ∈ Rn×d and an approximation parameter ε > 0, the goal is to output a matrix M ∈ Rm×d with m n such that (1 − ε) ‖Ax‖1 ≤ ‖Mx‖1 ≤ (1 + ε) ‖Ax‖1 for all x ∈ Rd. We consider the general class of row sampling algorithms, e.g., [CMP16, BDM+20]. Here we maintain a Lp subspace embedding of the underlying matrix by approximating the online Lp sensitivities of each row as a measure of importance to perform sampling. For more details, see Algorithm 1. Definition 3.5 (Online Lp Sensitivities) For a matrix A = a1 ◦ . . . ◦ an ∈ Rn×d, the online sensitivity of row ai for each i ∈ [n] is the quantity maxx∈Rd |〈ai,x〉|p ‖Aix‖pp , where Ai−1 = a1 ◦ . . .◦ai−1. Algorithm 1 Row sampling based algorithms, e.g., [CMP16, BDM+20] Input: A stream of rows a1, . . . ,an ∈ Rd, parameter p > 0, and an accuracy parameter ε > 0 Output: A (1 + ε) Lp subspace embedding. 
1: M← ∅ 2: α← Cdε2 log n with sufficiently large parameter C > 0 3: for each row ai, i ∈ [n] do 4: if ai ∈ span(M) then 5: τi ← 2 ·maxx∈Rd,x∈span(M) |〈ai,x〉|p ‖Mx‖pp+|〈ai,x〉|p .See Remark 3.6 6: else 7: τi ← 1 8: pi ← min(1, ατi) 9: With probability pi, M←M ◦ ai p 1/p i .Online sensitivity sampling 10: return M We remark on standard computation or approximation of the online Lp sensitivities, e.g., see [CEM+15, CMP16, CMM17, BDM+20]. Remark 3.6 We note that for p = 1, a constant fraction approximation to any online Lp sensitivity τi such that τi > 1poly(n) can be computed in polynomial time using (offline) linear programming while for p = 2, τi is equivalent to the online leverage score of ai, which has the closed form expression a>i (A > i Ai) −1ai, which can be approximated by a>i (M >M)−1ai, conditioned on M being a good approximation to Ai−1 when ai is in the span of M. Otherwise, τi takes value 1 when ai is not in the span of M. Lemma 3.7 (Adversarially Robust Lp Subspace Embedding and Linear Regression) Given ε > 0, p ∈ {1, 2}, and a matrix A ∈ Rn×d whose rows a1, . . . ,an arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) spectral approximation with high probability. The algorithm samples O ( d2κ2 ε2 log n log κ ) rows for p = 2 and O ( d2λ2 ε2 log 2 n log κ ) rows for p = 1, with high probability, where λ is a ratio between upper and lower bounds on ‖A‖1. We also show robustness of row sampling for low-rank approximation by using online ridge-leverage scores. Together, Lemma 3.7 and Lemma 3.8 give Theorem 1.2. Lemma 3.8 (Adversarially Robust Low-Rank Approximation) Given accuracy parameter ε > 0, rank parameter k > 0, and a matrix A ∈ Rn×d whose rows a1, . . . ,an arrive sequentially in a stream with condition number at most κ, there exists an adversarially robust streaming algorithm that outputs a (1 + ε) low-rank approximation with high probability. The algorithm samples O ( kdκ2 ε2 log n log κ ) rows with high probability. 4 Graph Sparsification In this section, we highlight how the sampling paradigm gives rise to an adversarially robust streaming algorithm for graph sparsification. First, we motivate the problem of graph sparsification. Massive graphs arise in many theoretical and applied settings, such as in the analysis of large social or biological networks. A key bottleneck in such analysis is the large computational resources, in both memory and time, needed. Therefore, it is desirable to get a representation of graphs that take up far less space while still preserving the underlying “structure” of the graph. Usually the number of vertices is much fewer than the number of edges; for example in typical real world graphs, the number of vertices can be several orders of magnitude smaller than the number of edges (for example, see the graph datasets in [RA15]). Hence, a natural benchmark is to reduce the number of edges to be comparable to the number of vertices. The most common notion of graph sparsification is that of preserving the value of all cuts in the graph by keeping a small weighted set of edges of the original graph. More specifically, suppose our graph is G = (V,E) and for simplicity assume all the edges have weight 1. A cut of the graph is a partition of V = (C, V \ C) and the value of a cut, ValG(C), is defined as the number of edges that cross between the vertices in C and V \C. 
A graph H on the same set of vertices as V is a sparsifier if it preserves the value of every cut in G and has a few number of weighted edges. For a precise formulation, see Problem 4.1. In addition to being algorithmically tractable, this formulation is natural since it preserves the underlying cluster structure of the graph. For example, if there are two well connected components separated by a sparse cut, i.e. two distinct communities, then the sparsifier according to the definition above will ensure that the two communities are still well separated. Conversely, by considering any cut within a well connected component, it will also ensure that any community remains well connected (for more details, see [SPR11] and references therein). Lastly, graph sparsification has been considered in other frameworks such as differential privacy [EKKL20], distributed optimization [WWLZ18], and even learning graph sparsification using deep learning methods [ZZC+20]. The formal problem definition of graph sparsification is as follows. Problem 4.1 (Graph Sparsification) Given a graph weighted G = (V,E) with |V | = n, |E| = m, and an approximation parameter ε > 0, compute a weighted subgraph H of G on the same set of vertices such that (1) every cut in H has value between 1− ε and 1 + ε times its value in G: (1− ε)ValG(C) ≤ ValH(C) ≤ (1 + ε)ValG(C) for all cuts C where ValG(C),ValH(C) denotes the cost of the cut in the graphs G and H respectively and for the latter quantity, the edges are weighted, (2) the number of edges in H is O ( n logn ε2 ) . Ignoring dependence on ε, there are previous results that already get sparsifiers H with O(n log n) edges [BK96, SS08]. Their setting is when the entire graph is present up-front in memory. In contrast, we are interested in the streaming setting where future edges can depend on past edges as well as revealed randomness of an algorithm while processing the edges. Our main goal is to show that the streaming algorithm from [AG09] (presented in Algorithm 2 in the supplementary section), which uses a sampling procedure to sample edges in a stream, is adversarially robust, albeit with a slightly worse guarantee for the number of edges. Following the proof techniques of the non streaming algorithm given in [BK96], it is shown in [AG09] that Algorithm 2 outputs a subgraph H such that H satisfies the conditions of Problem 4.1 with probability 1− 1/ poly(n) where the probability can be boosted by taking a larger constant C. We must show that this still holds true if the edges of the stream are adversarially chosen, i.e., when new edges in the stream depend on the previous edges and the randomness used by the algorithm so far. We thus again use a martingale argument; the full details are given in Supplementary Section C. As in Section 3, we let κ1 and κ2 to be deterministic lower/upper bounds on the size of any cut in G and define κ = κ2/κ1. Theorem 1.3 Given a weighted graph G = (V,E) with |V | = n whose edges e1, . . . , em arrive sequentially in a stream, there exists an adversarially robust streaming algorithm that outputs a 1± ε cut sparsifier with O ( κ2n logn ε2 ) edges with probability 1− 1/ poly(n). 5 Experiments To illustrate the robustness of importance-sampling-based streaming algorithms we devise adversarial settings for clustering and linear regression. 
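The adversarial streams used in these settings can be generated along the following lines; this is a hedged sketch in which the batch sizes, the offset of the final batch, and the four-point constellation coordinates are illustrative assumptions matching the qualitative description, not the exact experimental configuration.

```python
import numpy as np

def adversarial_kmeans_stream(num_batches=50, batch_size=100, offset=1000.0, seed=0):
    """All batches are 2-D standard normal except the last, which sits far away."""
    rng = np.random.default_rng(seed)
    for b in range(num_batches):
        center = np.zeros(2) if b < num_batches - 1 else np.full(2, offset)
        yield center + rng.standard_normal((batch_size, 2))

def adversarial_regression_stream(num_batches=50, batch_size=100, L=None, seed=0):
    """Early batches follow a slope -1 constellation; the last batch is near (L, L)."""
    rng = np.random.default_rng(seed)
    if L is None:
        L = 2.0 * np.sqrt(num_batches)    # large enough to flip the optimal slope
    constellation = np.array([[1.0, -1.0], [-1.0, 1.0], [2.0, -2.0], [-2.0, 2.0]])
    for b in range(num_batches):
        if b < num_batches - 1:
            base = constellation[rng.integers(0, 4, size=batch_size)]
            yield base + 0.1 * rng.standard_normal((batch_size, 2))
        else:
            yield np.full(2, L) + 0.1 * rng.standard_normal((batch_size, 2))
```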
With respect to our adversarial setting, we show that the performance of a merge-and-reduce based streaming k-means algorithm is robust while a popular streaming k-means implementation (not based on importance sampling) is not. Similarly, we show the robustness superiority of a streaming linear regression algorithm based on row sampling over a popular streaming linear regression implementation and over sketching. Streaming k-means In this adversarial clustering setting we consider a series of point batches where all points except those in the last batch are randomly sampled from a two dimensional standard normal distribution and points in the last batch similarly sampled but around a distant center (see the data points realization in both panels of Figure 3). We then feed the point sequence to StreamingKMeans, the streaming k-means implementation of Spark [ZXW+16b] the popular big-data processing framework. As illustrated in the left panel of Figure 3, the resulting two centers are both within the origin. Now, this result occurs regardless of the last batch’s distance from the origin, implying that the per-sample loss performance of the algorithm can be made arbitrarily large. Alternatively, we used a merge-and-reduce based streaming k-means algorithm and show that one of the resulting cluster centers is at the distant cluster (as illustrated in the right panel of Figure 3) thereby keeping the resulting per sample loss at the desired minimum. Specifically we use Streamkm, an implementation of StreamKM++ [AMR+12] from the ClusOpt Core library [Mac20]. Streaming linear regression. Similar to the clustering setting, in the adversarial setting for streaming linear regression all batches except the last one are sampled around a constellation of four points in the plane such that the optimal regression line is of −1 slope through the origin (see the leftmost panel of Figure 4). The last batch of points however, is again far from the origin (L,L) such that the resulting optimal regression line is of slope 1 through the origin2. We compare the performance of LinearRegression from the popular streaming machine learning library River [MHM+20] to our own row sampling based implementation of streaming linear regression along the lines of Algorithm 1 and observe the following: Without the last batch of points, both implementations result in the optimal regression line, however, the River implementation reaches that line only after several iterations, while our implementation is accurate throughout (This is illustrated in the second-left panel of Figure 4). When the last batch is used, nevertheless, Algorithm 1 picks up the drastic change and adapts immediately to a line of the optimal slope (the blue line of the second right panel of Figure 4) while the River implementation update merely moves the line in the desired direction (the orange line in that same panel) but is far from catching up. Finally, the rightmost panel of Figure 4) details the loss trajectory for both implementations. While the River loss skyrockets upon the last batch, the loss of Algorithm 1 remains relatively unaffected, illustrating its adversarial robustness. Note that in both the clustering and linear regression settings above the adversary was not required to consider the algorithms internal randomization to achieve the desired effect (this is due to the local nature of the algorithms computations). This is no longer the case in the following last setting. Sampling vs. sketching. 
Finally, we compare the performance of the leverage sampling Algorithm 1 to sketching. In this setting, for a random unit sketching matrix S (that is, each of its elements is sampled from {−1, 1} with equal probability), we create an adversarial data stream A such that its columns are in the null space of S. As a result, the linear regression as applied to the sketched data S ·A as a whole is unstable and might significantly differ from the resulting linear regression applied to streamed prefixes of the sketched data. As illustrated in Figure 5, this is not the case when applying the linear regression to the original streamed data A using Algorithm 1. Upon the last batch, the performance of the sketching-based regression deteriorates by orders of magnitude, while the performance of Algorithm 1 is not affected. Moreover, the data reduction factor achieved by leveraged sampling3 is almost double compared to the data reduction factor achieved by sketching. Acknowledgments Sandeep Silwal was supported in part by a NSF Graduate Research Fellowship Program. Samson Zhou was supported by a Simons Investigator Award of David P. Woodruff. 2For MSE loss, this occurs for L at least the square root of the number of batches. 3The original stream A contained 2000 samples, each of dimension 10.
1. What is the focus of the paper regarding streaming algorithms and adversarial robustness? 2. What are the strengths of the proposed approach, particularly in comparison to prior works? 3. What are the weaknesses of the paper, especially regarding its presentation and experimental evaluation? 4. Do you have any concerns or suggestions for improving the figures and their discussion? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper shows how streaming algorithms based on importance sampling can achieve adversarial robustness in a specific but quite general adversarial model, where the adversary observes all the intermediate outputs of the algorithm and may even learn past random bits, but has no access to the "new" random bits used at each time. Any method that uses importance sampling (in the way described in the paper) is adversarially robust, and many existing streaming approaches enjoy this property, e.g., those using coresets and the merge-and-reduce approach, methods for regression using row sampling, and methods for graph sparsification using edge sampling.
Review
Comments after response
I thank the Authors for their response. I read it carefully, as I did the comments from other Reviewers and the responses to them. I believe the paper should be accepted, and I increased my original score to reflect this new opinion. The new figures look good and should be included in the final version of the paper in place of the original ones.
Original Review
The results in this paper seem quite interesting. After [HV20], it is nice to see more work remarking on the beneficial properties of importance sampling, in this case versus "non-adaptive" sketching. One downside of the paper is that the main text contains no proofs at all, not even sketches, roadmaps, intuitions, informal ideas, or anything that may guide the reader, who may not want to delve into the whole supplementary materials, into understanding why the proposed results are true. Indeed the paper, due to space restrictions, reads almost as just a list of results, which is a pity. I wonder whether the Authors could avoid presenting some results (deferring those to the supplementary materials), and present more discussion and possibly some proof sketches. The experimental evaluation shows the large difference between adversarially robust and "non-robust" methods. The settings and experiments feel a bit artificial, but they do drive home the point. The figures and their discussion make reference to colors and other properties that are lost when the paper is printed in grayscale and are also problematic for colorblind readers. Additionally, some figures (e.g., Fig. 4) are too small to be readable when the paper is printed. Using different line styles, bigger markers / thicker lines, and other visual cues would be beneficial. This Reviewer did not check all the details of all the proofs, but the general "flow" of the proofs seems correct.
NIPS
Title Dynamic Bottleneck for Robust Self-Supervised Exploration Abstract Exploration methods based on pseudo-count of transitions or curiosity of dynamics have achieved promising results in solving reinforcement learning with sparse rewards. However, such methods are usually sensitive to environmental dynamicsirrelevant information, e.g., white-noise. To handle such dynamics-irrelevant information, we propose a Dynamic Bottleneck (DB) model, which attains a dynamics-relevant representation based on the information-bottleneck principle. Based on the DB model, we further propose DB-bonus, which encourages the agent to explore state-action pairs with high information gain. We establish theoretical connections between the proposed DB-bonus, the upper confidence bound (UCB) for linear case, and the visiting count for tabular case. We evaluate the proposed method on Atari suits with dynamics-irrelevant noises. Our experiments show that exploration with DB bonus outperforms several state-of-the-art exploration methods in noisy environments. 1 Introduction The tradeoff between exploration and exploitation has long been a major challenge in reinforcement learning (RL) [35, 50, 58]. Generally, excessive exploitation of the experience suffers from the potential risk of being suboptimal, whereas excessive exploration of novel states hinders the improvement of the policy. A straightforward way to tackle the exploration-exploitation dilemma is to enhance exploration efficiency while keeping exploitation in pace. When the extrinsic rewards are dense, reward shaping is commonly adopted for efficient exploration. However, in many real-world applications such as autonomous driving [34], the extrinsic rewards are sparse, making efficient exploration a challenging task in developing practical RL algorithms. The situations become even worse when the extrinsic rewards are entirely unavailable. In such a scenario, the task of collecting informative trajectories from exploration is known as the self-supervised exploration [11]. An effective approach to self-supervised exploration is to design a dense intrinsic reward that motivates the agent to explore novel transitions. Previous attempts include count-based [9] and curiosity-driven [39] explorations. The count-based exploration builds a density model to measure the pseudo-count of state visitation and assign high intrinsic rewards to less frequently visited states. In contrast, the 35th Conference on Neural Information Processing Systems (NeurIPS 2021). curiosity-driven methods maintain a predictive model of the transitions and encourage the agent to visit transitions with high prediction errors. However, all these methods becomes unstable when the states are noisy, e.g., containing dynamics-irrelevant information. For example, in autonomous driving tasks, the states captured by the camera may contain irrelevant objects, such as clouds that behave similar to Brownian movement. Hence, if we measure the novelty of states or the curiosity of transitions through raw observed pixels, exploration are likely to be affected by the dynamics of these irrelevant objects. To encourage the agent to explore the most informative transitions of dynamics, we propose a Dynamic Bottleneck (DB) model, which generates a dynamics-relevant representation Zt of the current state-action pair (St, At) through the Information-Bottleneck (IB) principle [55]. The goal of training DB model is to acquire dynamics-relevant information and discard dynamics-irrelevant features simultaneously. 
To this end, we maximize the mutual-information I(Zt;St+1) between a latent representation Zt and the next state St+1 through maximizing its lower bound and using contrastive learning. Meanwhile, we minimize the mutual-information I([St, At];Zt) between the state-action pair and the corresponding representation to compress dynamics-irrelevant information. Based on our proposed DB model, we further construct a DB-bonus for exploration. DB-bonus measures the novelty of state-action pairs by their information gain with respect to the representation computed from the DB model. We show that the DB-bonus are closely related to the provably efficient UCB-bonus in linear Markov Decision Processes (MDPs) [1] and the visiting count in tabular MDPs [3, 22]. We further estimate the DB-bonus by the learned dynamics-relevant representation from the DB model. We highlight that exploration based on DB-bonus directly utilize the information gain of the transitions, which filters out dynamics-irrelevant noise. We conduct experiments on the Atari suit with dynamics-irrelevant noise injected. Results demonstrate that our proposed selfsupervised exploration with DB-bonus is robust to dynamics-irrelevant noise and outperforms several state-of-the-art exploration methods. 2 Related Work Our work is closely related to previous exploration algorithms that construct intrinsic rewards to quantify the novelty of states and transitions. Several early approaches directly define the pseudocount by certain statistics to measure the novelty of states [41, 30]; more recent methods utilize density model [9, 38] or hash map [52, 42] for state statistics. Nevertheless, these approaches are easily affected by dynamics-irrelevant information such as white-noise. The contingency awareness method [16] addresses such an issue by using an attentive model to locate the agent and computes the pseudocount based on regions around the agent. However, such an approach could ignore features that are distant from the agent but relevant to the transition dynamics. Another line of research measures the novelty through learning a dynamics model and then use the prediction error to generate an intrinsic reward. These methods are known as the curiosity-driven exploration algorithms. Similar to the pseudo-count based methods, curiosity-driven methods become unstable in the presence of noises, because the prediction model is likely to yield high error for stochastic inputs or targets. Some recent attempts improve the curiosity-driven approach by learning the inverse dynamics [39] and variational dynamics [7] to define curiosity, or utilizes the prediction error of a random network to construct intrinsic rewards [12]. However, without explicitly removing dynamics-irrelevant information, these methods are still vulnerable to noises in practice [11]. The entropy-based exploration uses state entropy as the intrinsic reward. VISR [19], APT [29] and APS [28] use unsupervised skill discovery for fast task adaptation. In the unsupervised stage, they use k-nearest-neighbor entropy estimator to measure the entropy of state, and then use it as the intrinsic reward. RE3 [47] and ProtoRL [59] use random encoder and prototypes to learn the representation and use state-entropy as bonuses in exploration. Nevertheless, the state entropy will increase significantly if we inject noises in the state space. The entropy-based exploration will be misled by the noises. 
Previous approaches also quantify the epistemic uncertainty of dynamics through Bayesian network [21], bootstrapped Q-functions [37, 8], ensemble dynamics [40], and Stein variational inference [43] to tackle noisy environments. However, they typically require either complicated optimization methods or large networks. In contrast, DB learns a dynamics-relevant representation and encourages exploration by directly accessing the information gain of new transitions via DB-bonus. Another closely related line of studies uses the mutual information to promote exploration in RL. Novelty Search (NS) [53] proposes to learn a representation through IB. Curiosity Bottleneck (CB) [26] also performs exploration based on IB by measuring the task-relevant novelty. However, both NS and CB require extrinsic rewards to learn a value function and are not applicable for selfsupervised exploration. Moreover, NS contains additional k-nearest-neighbor to generate intrinsic reward and representation loss to constrain the distance of consecutive states, which are costly for computation. In contrast, our DB model handles self-supervised exploration without accessing extrinsic rewards. EMI [25] learns a representation by maximizing the mutual information in the forward dynamics and the inverse dynamics , which is different from the IB principle used in our method. In addition, we aim to perform robust exploration to overcome the white-noise problem, while EMI does not have an explicit mechanism to address the noise. Our work is also related to representation learning in RL. DrQ [60], RAD [27] and CURL [49] learn the state representation by data augmentation and contrastive learning [14, 20, 36] to improve the data-efficiency of DRL. Deep InfoMax [33] and Self-Predictive Representation (SPR) [46] learn the contrastive and predictive representations of dynamics, respectively, and utilize such representations as auxiliary losses for policy optimization. However, none of these existing approaches extracts information that benefits exploration. In contrast, we show that the dynamics-relevant representation learned by DB can be utilized for efficient exploration. 3 The Dynamic Bottleneck In this section, we introduce the objective function and architecture of the DB model. We consider an MDP that can be described by a tuple (O,A,P, r, γ), which consists of the observation space O, the action space A, the transition dynamics P, the reward function r, and the discount factor γ ∈ (0, 1). At each time step, an agent decides to perform an action at ∈ A after observing ot ∈ O, and then the observation transits to ot+1 with a reward rt received. In this paper, we use upper letters, such as Ot, to denote random variables and the corresponding lower case letter, such as ot, to represent their corresponding realizations. We first briefly introduce the IB principle [55]. In supervised setting that aims to learn a representation Z of a given input source X with the target source Y , IB maximizes the mutual information between Z and Y (i.e. max I(Z;Y )) and restricts the complexity of Z by using the constrain as I(Z;X) < Ic. Combining the two terms, the objective of IB is equal to max I(Z;Y )− αI(Z;X) with the introduction of a Lagrange multiplier. DB follows the IB principle [55] to learn dynamics-relevant representation. The input variable of the DB model is a tuple (Ot, At) that contains the current observation and action, and the target is the next observation Ot+1. We denote by St and St+1 the encoding of observations Ot and Ot+1. 
The goal of the DB model is to obtain a compressed latent representation Z_t of (S_t, A_t) that preserves only the information relevant to S_{t+1}. Specifically, we use f^S_o and f^S_m as the encoders of the two consecutive observations o_t and o_{t+1}, respectively. We parameterize the dynamics-relevant representation z_t by a Gaussian distribution with parameter φ, which takes (s_t, a_t) as input. We summarize the DB model as follows,

s_t = f^S_o(o_t; θ_o),   s_{t+1} = f^S_m(o_{t+1}; θ_m),   z_t ∼ g^Z(s_t, a_t; φ).   (1)

Following the IB principle, the objective of the DB model seeks to maximize the mutual information I(Z_t; S_{t+1}) while minimizing the mutual information I([S_t, A_t]; Z_t). To this end, we propose the DB objective by following the IB Lagrangian [55], which takes the form of

min  −I(Z_t; S_{t+1}) + α_1 I([S_t, A_t]; Z_t).   (2)

Here α_1 is a Lagrange multiplier that controls the amount of information about the next state preserved in Z_t. Fig. 1 illustrates the DB objective. We minimize I([S_t, A_t]; Z_t) and regard it as a regularizer in the representation learning, while the representation itself is learned by maximizing the mutual information I(Z_t; S_{t+1}). Maximizing I(Z_t; S_{t+1}) ensures that we do not discard useful information from (S_t, A_t). In DB, the mutual information is estimated by several variational bounds parameterized by neural networks to enable differentiable and tractable computation. In what follows, we propose a lower bound of (2), which we optimize to train the DB model.

3.1 Maximizing the Lower Bound of I(Z_t; S_{t+1})

As directly maximizing I(Z_t; S_{t+1}) is intractable, we propose to optimize a predictive objective, which is a lower bound of I(Z_t; S_{t+1}) [2]. It holds that

I(Z_t; S_{t+1}) = E_{p(z_t, s_{t+1})}[ log ( p(s_{t+1}|z_t) / p(s_{t+1}) ) ]
              = E[ log ( q(s_{t+1}|z_t; ψ) / p(s_{t+1}) ) ] + E_{p(z_t)}[ D_KL( p(s_{t+1}|z_t) ‖ q(s_{t+1}|z_t; ψ) ) ],   (3)

where p(s_{t+1}|z_t) is an intractable conditional distribution and q(s_{t+1}|z_t; ψ) is a tractable variational decoder with parameter ψ. By the non-negativity of the KL-divergence, we obtain the following lower bound,

I(Z_t; S_{t+1}) ≥ E_{p(z_t, s_{t+1})}[ log q(s_{t+1}|z_t; ψ) ] + H(S_{t+1}),

where H(·) is the entropy. Since H(S_{t+1}) is irrelevant to the parameter ψ, maximizing I(Z_t; S_{t+1}) is equivalent to maximizing the following lower bound,

I_pred := E_{p(z_t, s_{t+1})}[ log q(s_{t+1}|z_t; ψ) ].   (4)

I_pred can be interpreted as the log-likelihood of the next-state encoding s_{t+1} given the dynamics-relevant representation z_t. In practice, we parameterize the prediction head q(s_{t+1}|z_t; ψ) by a neural network that outputs a diagonal Gaussian distribution. Since s_t, s_{t+1} and z_t are all low-dimensional vectors rather than raw image pixels, optimizing I_pred is computationally efficient.

Momentum Encoder. To encode the consecutive observations o_t and o_{t+1}, we adopt the Siamese architecture [10], which uses the same network structure for the two encoders. Nevertheless, we observe that if we train both encoders by directly maximizing I_pred, the Siamese architecture tends to converge to a collapsed solution; that is, the generated encodings appear to be uninformative constants. A simple fact is that if both encoders generate zero vectors, predicting zeros conditioned on z_t (or any variable) is a trivial solution. To address this issue, we update the parameter θ_o of f^S_o in (1) by directly optimizing I_pred, and we update the parameter θ_m of f^S_m by a momentum moving average of θ_o, which takes the form θ_m ← τθ_m + (1 − τ)θ_o. In the sequel, we call f^S_o the online encoder and f^S_m the momentum encoder, respectively. Similar techniques are also adopted in previous studies [18, 46] to avoid mode collapse.
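To make §3.1 concrete, below is a minimal PyTorch-style sketch of a Gaussian head that can serve as g^Z and as the prediction head q(·|z_t; ψ), a Monte-Carlo estimate of I_pred in (4), and the momentum update of the encoder parameters. All module sizes and names here are illustrative assumptions for exposition, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal

class GaussianHead(nn.Module):
    # A small MLP that parameterizes a diagonal Gaussian; used both for g^Z(s_t, a_t; phi)
    # and for the prediction head q(s_{t+1} | z_t; psi).
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * out_dim))

    def forward(self, x):
        mu, log_std = self.net(x).chunk(2, dim=-1)
        return Normal(mu, F.softplus(log_std) + 1e-5)

# Illustrative dimensions of the observation encoding, action, and latent representation.
s_dim, a_dim, z_dim = 64, 8, 128
g_z   = GaussianHead(s_dim + a_dim, z_dim)   # g^Z(s_t, a_t; phi)
q_psi = GaussianHead(z_dim, s_dim)           # q(s_{t+1} | z_t; psi)

def i_pred(s, a, s_next):
    # Monte-Carlo estimate of the lower bound I_pred in (4).
    z = g_z(torch.cat([s, a], dim=-1)).rsample()      # z_t ~ g^Z(s_t, a_t; phi)
    return q_psi(z).log_prob(s_next).sum(-1).mean()   # E[log q(s_{t+1} | z_t; psi)]

@torch.no_grad()
def momentum_update(online, momentum, tau=0.99):
    # theta_m <- tau * theta_m + (1 - tau) * theta_o, as used for the momentum encoder f^S_m.
    for p_m, p_o in zip(momentum.parameters(), online.parameters()):
        p_m.data.mul_(tau).add_((1.0 - tau) * p_o.data)

The same momentum_update helper also applies to the momentum projector introduced in the next subsection.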
3.2 Contrastive Objective for Maximizing I(Z_t; S_{t+1})

In addition to the lower bound of I(Z_t; S_{t+1}) in §3.1, we also investigate maximizing the mutual information by contrastive learning (CL) [36]. CL classifies positive and negative samples in the learned representation space. An advantage of adopting CL is that training with negative samples plays the role of a regularizer, which avoids collapsed solutions. Moreover, the contrastive objective yields a variational lower bound of the mutual information I(Z_t; S_{t+1}). To see this, note that by Bayes' rule we have

I(Z_t; S_{t+1}) ≥ E_{p(z_t, s_{t+1})} E_{S^-}[ log ( exp(h(z_t, s_{t+1})) / ∑_{s_j ∈ S^- ∪ {s_{t+1}}} exp(h(z_t, s_j)) ) ] =: I_nce.   (5)

Here h is a score function that assigns high scores to positive pairs and low scores to negative pairs. We refer to Appendix A for a detailed proof of (5). The right-hand side of (5) is known as the InfoNCE objective [36]. The positive samples are obtained by directly sampling transitions (s, a, s′). In contrast, a negative sample is obtained by first sampling a state-action pair (s, a), then sampling a state s̃ independently, and concatenating them to form a tuple (s, a, s̃); negative samples thus do not follow the transition dynamics. In practice, we collect negative samples by sampling observation encodings randomly from the batch. We remark that, compared with methods that require data augmentation to construct negative samples [20, 49], DB utilizes a simple scheme that obtains positive and negative samples from on-policy experiences.

In (5), we adopt the standard bilinear function as the score function h, defined as

h(z_t, s_{t+1}) = f^P_o(q̄(z_t; ψ))^⊤ W f^P_m(s_{t+1}),   (6)

where f^P_o(·; ϕ_o) projects the mean of the next-state prediction q(s_{t+1}|z_t; ψ), i.e., q̄(·; ψ), and f^P_m(·; ϕ_m) projects s_{t+1}, both into a latent space in which the contrastive loss I_nce in (5) is applied, and W is the parameter of the score function. Similar to the observation encoders and MoCo-based architectures [49, 20, 15], we adopt an online projector f^P_o and a momentum projector f^P_m for z_t and s_{t+1}, respectively. The momentum projector is updated by ϕ_m ← τϕ_m + (1 − τ)ϕ_o.

3.3 Minimizing the Upper Bound of I([S_t, A_t]; Z_t)

We minimize the mutual information I([S_t, A_t]; Z_t) by minimizing a tractable upper bound. To this end, we introduce a variational approximation q(z_t) to the intractable marginal p(z_t) = ∫ p(s_t, a_t) p(z_t|s_t, a_t) ds_t da_t. Specifically, the following upper bound of I([S_t, A_t]; Z_t) holds,

I([S_t, A_t]; Z_t) = E_{p(s_t, a_t, z_t)}[ log ( p(z_t|s_t, a_t) / p(z_t) ) ]
                  = E_{p(s_t, a_t, z_t)}[ log ( p(z_t|s_t, a_t) / q(z_t) ) ] − D_KL[ p(z_t) ‖ q(z_t) ]
                  ≤ E_{p(s_t, a_t)}[ D_KL[ p(z_t|s_t, a_t) ‖ q(z_t) ] ] =: I_upper,   (7)

where the inequality follows from the non-negativity of the KL divergence, and q(z_t) is an approximation of the marginal distribution of Z_t. We follow Alemi et al. [2] and use a standard spherical Gaussian q(z_t) = N(0, I) as the approximation. The expectation in I_upper is estimated by sampling from on-policy experiences.

3.4 The Loss Function and Architecture

The final loss for training the DB model is a combination of the upper and lower bounds established in the previous sections,

min_{θ_o, φ, ψ, ϕ_o, W}  L_DB = α_1 I_upper − α_2 I_pred − α_3 I_nce,   (8)

where α_1, α_2 and α_3 are hyper-parameters. As we show in the ablation study (§5), all three components of the loss play an important role in learning dynamics-relevant representations.
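Continuing the sketch above (and reusing its g_z, q_psi, s_dim, and imports), the following shows one hedged way the three terms of (8) might be combined; the projector sizes, the bilinear parameter W, and the loss coefficients are illustrative assumptions, not the values used in our experiments.

proj_dim = 64
proj_o, proj_m = nn.Linear(s_dim, proj_dim), nn.Linear(s_dim, proj_dim)  # f^P_o, f^P_m (the latter updated via momentum_update)
W = nn.Parameter(0.01 * torch.randn(proj_dim, proj_dim))                 # bilinear score parameter in (6)

def i_upper(s, a):
    # Upper bound (7): KL between g^Z(.|s_t, a_t; phi) and the spherical Gaussian prior q(z_t) = N(0, I).
    post = g_z(torch.cat([s, a], dim=-1))
    prior = Normal(torch.zeros_like(post.loc), torch.ones_like(post.scale))
    return torch.distributions.kl_divergence(post, prior).sum(-1).mean()

def i_nce(s, a, s_next):
    # InfoNCE bound (5) with the bilinear score (6); the other entries of the batch act as negatives.
    z = g_z(torch.cat([s, a], dim=-1)).rsample()
    u = proj_o(q_psi(z).mean)          # f^P_o(q_bar(z_t; psi))
    v = proj_m(s_next)                 # f^P_m(s_{t+1})
    logits = u @ W @ v.t()             # logits[i, j] = h(z_i, s_j); the diagonal holds positive pairs
    labels = torch.arange(logits.size(0), device=logits.device)
    return -F.cross_entropy(logits, labels)

def db_loss(s, a, s_next, a1=0.1, a2=1.0, a3=1.0):
    # L_DB = a1 * I_upper - a2 * I_pred - a3 * I_nce, as in (8); the coefficients are placeholders.
    return a1 * i_upper(s, a) - a2 * i_pred(s, a, s_next) - a3 * i_nce(s, a, s_next)

Minimizing db_loss pushes the posterior toward the prior (compression) while keeping z_t predictive of s_{t+1} (sufficiency), which is exactly the trade-off expressed in (2).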
We illustrate the architecture of the DB model in Fig. 2. In practice, we minimize L_DB in (8) by gradient descent, which iteratively updates the parameters of f^S_o, g^Z, q_ψ, W and f^P_o. Meanwhile, we use an exponential moving average to update the parameters of f^S_m and f^P_m to avoid collapsed solutions. We refer to Appendix B for the pseudocode of training the DB model.

4 Exploration with DB-Bonus

We are now ready to introduce the DB-bonus r^db for exploration. In this section, we first present the DB-bonus for self-supervised exploration and establish its theoretical connections to provably efficient bonus functions. We then present the empirical estimation of the DB-bonus and the policy optimization algorithm that utilizes it.

In the sequel, we assume that the learned parameter Θ of the DB model follows a Bayesian posterior distribution given the training dataset D_m = {(s^i_t, a^i_t, s^i_{t+1})}_{i ∈ [0, m]}, which is a collection of past experiences from the m episodes performed by the agent to train the DB model. We aim to estimate the following conceptual reward, defined by the mutual information between the parameter of the DB model and the transition dynamics given the training dataset,

r^db(s_t, a_t) := I(Θ; (s_t, a_t, S_{t+1}) | D_m)^{1/2} = [ H((s_t, a_t, S_{t+1}) | D_m) − H((s_t, a_t, S_{t+1}) | Θ, D_m) ]^{1/2}.   (9)

Intuitively, the DB-bonus defined in (9) encourages the agent to explore transitions that are maximally informative for improving the DB model.

4.1 Theoretical Analysis

We show that the DB-bonus defined in (9) enjoys desirable theoretical properties, and we establish theoretical connections between r^db and bonuses based on optimism in the face of uncertainty [4, 23], which incorporates UCB into value functions in both tabular [5, 22, 17] and linear MDPs [24, 13].

Connection to the UCB-bonus in linear MDPs. In linear MDPs, the transition kernel and reward function are assumed to be linear. In such a setting, LSVI-UCB [24] provably attains a near-optimal worst-case regret; we refer to Appendix C.1 for details. The idea of LSVI-UCB is to use an optimistic Q-value, obtained by adding a UCB-bonus r^ucb [1] to the estimated Q-value. The UCB-bonus is defined as

r^ucb_t = β · [ η(s_t, a_t)^⊤ Λ_t^{-1} η(s_t, a_t) ]^{1/2},

where β is a constant, Λ_t = ∑_{i=0}^{m} η(s^i_t, a^i_t) η(s^i_t, a^i_t)^⊤ + λ · I is the Gram matrix, and m is the index of the current episode. The UCB-bonus measures the epistemic uncertainty of a state-action pair and is provably efficient [24]. For linear MDPs, we consider the representation z ∈ R^c as the mean of the posterior g^Z from the DB model, and set z_t to be a linear function of the state-action encoding, i.e., z_t = W_t η(s_t, a_t), parameterized by W_t ∈ R^{c×d}. The following theorem then establishes a connection between the DB-bonus r^db and the UCB-bonus r^ucb.

Theorem 1. In linear MDPs, for a tuning parameter β_0 > 0, it holds that

β_0/√2 · r^ucb_t ≤ I(W_t; (s_t, a_t, S_{t+1}) | D_m)^{1/2} ≤ β_0 · r^ucb_t,   (10)

where I(W_t; (s_t, a_t, S_{t+1}) | D_m)^{1/2} is the DB-bonus r^db(s_t, a_t) under the linear MDP setting.

In addition, using r^db as the bonus leads to the same regret as LSVI-UCB by following a proof similar to that of Jin et al. [24]. We refer to Appendix C.2 for the problem setup and the detailed proof. We remark that Theorem 1 is an approximate derivation because it only considers the predictive objective I_pred in (8). Nevertheless, introducing the contrastive objective I_nce is important in the training of the DB model, as it prevents the mode collapse issue.
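For concreteness, here is a tiny numerical sketch of the UCB-bonus that Theorem 1 relates to r^db; the feature map η, its dimension, and the constants are illustrative placeholders rather than quantities from our experiments.

import numpy as np

def ucb_bonus(eta_hist, eta, beta=1.0, lam=1.0):
    # r^ucb = beta * sqrt(eta^T Lambda^{-1} eta), where Lambda is the regularized Gram matrix
    # built from previously visited state-action features (eta_hist has shape m x d).
    d = eta_hist.shape[1]
    gram = eta_hist.T @ eta_hist + lam * np.eye(d)      # Lambda_t
    return beta * np.sqrt(eta @ np.linalg.solve(gram, eta))

# A frequently visited feature direction receives a small bonus,
# while a rarely visited direction receives a large one.
rng = np.random.default_rng(0)
visited = np.tile([1.0, 0.0], (100, 1)) + 0.01 * rng.normal(size=(100, 2))
print(ucb_bonus(visited, np.array([1.0, 0.0])))   # small: well-covered direction
print(ucb_bonus(visited, np.array([0.0, 1.0])))   # large: rarely-covered direction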
Theorem 1 shows that the DB-bonus provides an instantiation of the UCB-bonus in DRL, which enables us to measure the epistemic uncertainty of high-dimensional states and actions without the linear MDP assumption.

Connection to the visiting count in tabular MDPs. The following theorem establishes a connection between the DB-bonus and the count-based bonus r^count(s_t, a_t) = β / √(N_{s_t,a_t} + λ) in tabular MDPs.

Theorem 2. In tabular MDPs, it holds for the DB-bonus r^db(s_t, a_t) and the count-based intrinsic reward r^count(s_t, a_t) that

r^db(s_t, a_t) ≈ √(|S|/2) / √(N_{s_t,a_t} + λ) = β_0 · r^count(s_t, a_t)   (11)

when N_{s_t,a_t} is large, where λ > 0 is a tuning parameter and |S| is the number of states in the tabular setting.

We refer to Appendix C.3 for a detailed proof. As a result, the DB-bonus can also be regarded as a count-based intrinsic reward in the space of dynamics-relevant representations.

4.2 Empirical Estimation

Estimating such a bonus under our DB model faces several challenges. (i) Estimating the bonus defined in (9) requires us to parameterize the representation under a Bayesian learning framework, whereas our DB model is parameterized by non-Bayesian neural networks. (ii) Estimating the DB-bonus defined in (9) requires computing the mutual information between the unknown transitions and the estimated model, which is hard in general as we do not have access to such transitions. To address these challenges, we estimate a lower bound of the DB-bonus, which is easily implementable and achieves reasonable performance empirically. Specifically, we consider using r^db_l(s_t, a_t) as a lower bound of the information gain in (9),

r^db(s_t, a_t) ≥ [ H( g(s_t, a_t, S_{t+1}) | D_m ) − H( g(s_t, a_t, S_{t+1}) | Θ, D_m ) ]^{1/2} =: r^db_l(s_t, a_t),   (12)

which holds for any mapping g according to the Data Processing Inequality (DPI). DPI is an information-theoretic result stating that 'post-processing' cannot increase information. Since g(s_t, a_t, S_{t+1}) is a post-processing of (s_t, a_t, S_{t+1}), we have I(Θ; (s_t, a_t, S_{t+1})) ≥ I(Θ; g(s_t, a_t, S_{t+1})), where g is a neural network in practice. In our model, we adopt the following mapping,

g(s_t, a_t, S_{t+1}) | Θ, D_m = g^Z(s_t, a_t; φ),   (13)

where g^Z is the representation distribution of DB and φ is a part of the total parameters Θ. Intuitively, since g^Z is trained by the IB principle to capture the information of transitions, adopting the mapping g^Z in (12) yields a reasonable approximation of the DB-bonus. It further holds that

r^db_l(s_t, a_t) = [ H( g_margin ) − H( g^Z(s_t, a_t; φ) ) ]^{1/2} = E_Θ[ D_KL( g^Z(z_t | s_t, a_t; φ) ‖ g_margin ) ]^{1/2},   (14)

where we define g_margin = g(s_t, a_t, S_{t+1}) | D_m as the marginal of the encodings over the posterior of the parameters Θ of the DB model. In practice, since g_margin is intractable, we approximate it with a standard Gaussian distribution. We remark that this approximation is motivated by the training of the DB model, which drives the marginal of the representation g^Z toward N(0, I) by minimizing I_upper in (7). The approximation leads to a tractable estimate and stable empirical performance. In addition, since we do not train the DB model in a Bayesian manner, we replace the expectation over the posterior of Θ in (14) with the corresponding point estimate, namely the parameters Θ of the neural networks trained with the DB loss on the dataset D_m. To summarize, we utilize the following approximation of the DB-bonus r^db proposed in (9),

r̂^db_l(s_t, a_t) = D_KL[ g^Z(·|s_t, a_t; φ) ‖ N(0, I) ]^{1/2} ≈ r^db_l(s_t, a_t).   (15)
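A minimal sketch of how (15) might be computed with the Gaussian head g_z from the earlier snippets; the closed-form KL to N(0, I) is standard, while any reward scaling or clipping is left as an implementation choice.

def db_bonus(s, a):
    # r_hat^db_l(s_t, a_t) = sqrt( KL( g^Z(.|s_t, a_t; phi) || N(0, I) ) ), as in (15).
    post = g_z(torch.cat([s, a], dim=-1))
    prior = Normal(torch.zeros_like(post.loc), torch.ones_like(post.scale))
    kl = torch.distributions.kl_divergence(post, prior).sum(-1)   # one KL value per transition
    return kl.sqrt()                                               # intrinsic reward for each (s_t, a_t)

Each collected transition can then be assigned this value as its intrinsic reward before the PPO update in Algorithm 1.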
Since DB is trained by the IB principle, which filters out dynamics-irrelevant information, utilizing the bonus defined in (15) allows the agent to conduct robust exploration in noisy environments. We summarize the overall RL algorithm with self-supervised exploration induced by the DB-bonus in Algorithm 1, which we refer to as Self-Supervised Exploration with DB-bonus (SSE-DB). For the RL implementation, we adopt Proximal Policy Optimization (PPO) [45] with generalized advantage estimation [44] and the normalization schemes from Burda et al. [11]. We refer to Appendix D for the implementation details. The code is available at https://github.com/Baichenjia/DB.

Algorithm 1 SSE-DB
1: Initialize the DB model and the actor-critic network
2: for episode i = 1 to M do
3:     for timestep t = 0 to T − 1 do
4:         Obtain an action from the actor, a_t = π(s_t), then execute a_t and observe the next state s_{t+1};
5:         Add (s_t, a_t, s_{t+1}) to the on-policy experiences;
6:         Compute the DB-bonus r̂^db_l of (s_t, a_t) by (15);
7:     end for
8:     Update the actor and critic by PPO with the collected on-policy experiences as input;
9:     Update the DB model by gradient descent on (8) with the collected on-policy experiences;
10: end for

[Figure 3: Evaluation curves on 18 Atari games (Alien, Asteroids, BankHeist, Boxing, Breakout, Centipede, CrazyClimber, Gopher, Gravitar, Kangaroo, KungFuMaster, MsPacman, Seaquest, Solaris, Tennis, TimePilot, UpNDown, WizardOfWor). x-axis: frames (millions); y-axis: extrinsic reward per episode; curves: SSE-DB (ours), ICM, Disagreement, CB, and random. The different methods are trained with different intrinsic rewards; the extrinsic rewards are only used to measure performance. Each method was run with three random seeds.]

5 Experiments

We evaluate SSE-DB on Atari games and compare the following methods. (i) SSE-DB: the proposed method in Alg. 1. (ii) Intrinsic Curiosity Model (ICM) [39]: ICM uses an inverse dynamics model to extract features related to the actions, and adopts the prediction error of the dynamics as the intrinsic reward for exploration. (iii) Disagreement [40]: this method captures epistemic uncertainty via the disagreement among predictions from an ensemble of dynamics models; it performs competitively with ICM and RND [12] and is robust to white-noise. (iv) Curiosity Bottleneck (CB) [26]: CB quantifies the compressiveness of an observation with respect to its representation and uses it as the bonus. CB was originally proposed for exploration with extrinsic rewards; we adapt it for self-supervised exploration by setting the extrinsic reward to zero. We compare the model complexity of all methods in Appendix D. Other methods, including Novelty Search [53] and Contingency-aware exploration [16], would also be worth comparing. However, we find Novelty Search ineffective in our implementation since its detailed hyper-parameters and empirical results on Atari are not available.
Contingency-aware exploration is related to DB, but its attention module is relatively complicated and the code is not publicly available.

5.1 The Main Results

We evaluate all methods on Atari games with high-dimensional observations. The 18 selected games are frequently used in previous work on efficient exploration. The overall results are provided in Fig. 3. We highlight that in our experiments the agents are trained without accessing the extrinsic rewards; the extrinsic rewards are ONLY utilized to evaluate the performance of the policies obtained from self-supervised exploration. Our experiments show that SSE-DB performs the best in 15 of the 18 tasks, suggesting that dynamics-relevant features together with the DB-bonus help the agent explore states with high extrinsic rewards.

In addition, since pure exploration without extrinsic rewards is very difficult in most tasks, a random baseline is required to show whether the exploration methods learn meaningful behaviors. We adopt the random scores from DQN [35] and show the comparison in the figure. In Solaris, Centipede and TimePilot, our method obtains scores similar to the random policy, which suggests that relying solely on intrinsic rewards is insufficient to solve these tasks. We also observe that SSE-DB is suboptimal in Tennis. A possible explanation is that in Tennis, prediction-error-based methods such as ICM can capture additional information in the intrinsic rewards: the prediction error becomes higher when the ball moves faster or when the agent hits the ball in a tricky direction, and such methods naturally benefit from this property of the game. In contrast, SSE-DB encourages exploration based on the information gain from learning the dynamics-relevant representation, which may not capture such critical events in Tennis.

5.2 Robustness in the Presence of Noises

Observation Noises. To analyze the robustness of SSE-DB to observation noise, an important evaluation metric is its performance in the presence of dynamics-irrelevant information. A particularly challenging distractor is white-noise [11, 26], which injects random task-irrelevant patterns into the observations. In such a scenario, a frequently visited state injected with an unseen noise pattern may be mistakenly assigned a high intrinsic reward by curiosity- or pseudo-count-based methods. We use two types of distractors for the observations of Atari games (a sketch of how such distractors can be injected is given at the end of this discussion): (1) the random-box noise distractor, which places boxes filled with random Gaussian noise over the raw pixels, and (2) the pixel-level noise distractor, which adds pixel-wise Gaussian noise to the observations. Fig. 4 shows examples of the two types of distractors.

In the sequel, we discuss results for the random-box noise distractor on selected Atari games, which we find sufficiently representative, and defer the complete report to Appendices E.1 and E.2. Fig. 5(a) shows the performance of the compared methods on Alien, Breakout and TimePilot with and without noise. We observe that SSE-DB outperforms ICM on Alien and TimePilot with random-box noise. Nevertheless, in Breakout, both methods fail to learn informative policies. A possible explanation is that, in Breakout, the ball is easily masked by the box-shaped noise (see the middle of Fig. 4). The random-box noise therefore buries critical transition information about the ball, which hinders all the methods from extracting dynamics-relevant information and leads to the failures on Breakout with random-box noise shown in Fig. 5(a).
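As promised above, a hedged sketch of how the two distractors might be injected as a Gym observation wrapper; the box count, box size, and noise scale are illustrative choices, not the exact settings of our experiments.

import gym
import numpy as np

class NoisyObsWrapper(gym.ObservationWrapper):
    # Adds either random-box or pixel-level Gaussian noise to image observations (illustrative).
    def __init__(self, env, mode="box", num_boxes=8, box_size=12, sigma=0.1):
        super().__init__(env)
        self.mode, self.num_boxes, self.box_size, self.sigma = mode, num_boxes, box_size, sigma

    def observation(self, obs):
        noisy = obs.astype(np.float32)
        if self.mode == "pixel":
            # (2) pixel-level distractor: pixel-wise Gaussian noise over the whole frame
            noisy += 255.0 * self.sigma * np.random.randn(*noisy.shape)
        else:
            # (1) random-box distractor: boxes filled with Gaussian noise placed over the raw pixels
            h, w = noisy.shape[0], noisy.shape[1]
            for _ in range(self.num_boxes):
                y = np.random.randint(0, h - self.box_size)
                x = np.random.randint(0, w - self.box_size)
                patch = 0.5 + self.sigma * np.random.randn(self.box_size, self.box_size, *noisy.shape[2:])
                noisy[y:y + self.box_size, x:x + self.box_size] = 255.0 * np.clip(patch, 0.0, 1.0)
        return np.clip(noisy, 0, 255).astype(obs.dtype)

Such a wrapper would be applied to the Atari environment before frame stacking, so that every compared method sees the same distorted observations.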
Action Noise. In addition to observation noise, noise in the actions also raises challenges for learning the transition dynamics. To further study the robustness of SSE-DB, we conduct experiments on Atari games with sticky actions [32, 40]: at each time step, the agent executes the previous action instead of the action output by the current policy with probability 0.25. We illustrate results on three selected Atari games, i.e., Boxing, Gravitar and MsPacman, in Fig. 5(b) and defer the complete results to Appendix E.3. Our experiments show that SSE-DB is robust to action noise, whereas ICM suffers a significant performance drop.

5.3 Visualization and Ablation Study

Visualization of the Learned Representations. To understand the latent representation learned by the DB model, we visualize the learned Z with t-SNE [31] plots, which project the 128-dimensional z-vectors into two dimensions. We compare the representations learned by SSE-DB and ICM under random-box noise, and illustrate the learned representations for MsPacman in Fig. 6. According to the visualization, the representations learned by DB tend to align temporally consecutive movements along the same curve. Moreover, each segment of a curve corresponds to a semantic component of the trajectory, such as eating pellets or avoiding ghosts, and the segments end at critical states, including the death and rebirth of the agent and the ghosts. The visualization indicates that DB captures the dynamics-relevant information well in the learned representations. In contrast, such temporally consecutive patterns are missing in the representations learned by ICM.

Visualization of the DB-bonus. We provide a visualization of the DB-bonus in Appendix E.5. The results show that the DB-bonus effectively encourages the agent to explore informative transitions.

Ablation Study. The DB training loss consists of multiple components, including I_pred, I_nce, I_upper, and the momentum observation encoder. To analyze the importance of these components, we conduct an ablation study by removing each of them in turn and evaluating the DB model accordingly. The ablation study suggests that all the components are crucial for learning effective dynamics-relevant representations. In addition, we observe that I_upper is particularly important in environments with dynamics-irrelevant noise. Please refer to Appendix E.4 for details.

6 Conclusion

In this paper, we introduce the Dynamic Bottleneck (DB) model, which learns dynamics-relevant representations based on the IB principle, and we further propose the DB-bonus for efficient exploration. We establish theoretical connections between the proposed DB-bonus and provably efficient bonuses. Our experiments show that SSE-DB outperforms several strong baselines for self-supervised exploration in stochastic environments. Moreover, we observe that DB learns well-structured representations and that the DB-bonus characterizes informative transitions for exploration. For future work, we wish to combine the DB representation with effective exploration methods such as BeBold [61] and NGU [6] to enhance their robustness in stochastic environments.
Acknowledgements

The authors thank Tencent Robotics X and the Vector Institute for providing computation resources. Part of the work was done during an internship at the Tencent Robotics X lab. The authors also thank the anonymous reviewers, whose invaluable suggestions have helped us improve the paper.
1. What is the focus and contribution of the paper on dynamic bottleneck models? 2. What are the strengths of the proposed approach, particularly in combating noise in observations? 3. What are the weaknesses of the paper regarding its examples and experiments? 4. Do you have any concerns about the ability of the proposed method to handle disturbances with dynamics? 5. Are there any typos or grammatical errors in the review that should be addressed?
Summary Of The Paper Review
Summary Of The Paper: In this paper, the authors introduced a Dynamic Bottleneck (DB) model that learns dynamics-relevant representations based on the Information Bottleneck principle. They further proposed a DB-bonus for efficient exploration and established theoretical connections between the proposed DB-bonus and provably efficient bonuses. The experiments show that the proposed method outperforms several strong baselines in stochastic environments for self-supervised exploration.

Review: Pros: The information-based DB method is novel in combating noise in observations. The variational methods help scale the method up to solve high-dimensional problems. The authors show the link of the DB-bonus to two provably efficient cases: linear MDPs and tabular MDPs. The ablation studies are sufficient and unveil interesting patterns in the latent space.

Cons: I am a little bit confused about the definition of "noisy states". In the introduction, the authors give an example: "For example, in autonomous driving tasks, the states captured by the camera may contain irrelevant objects, such as clouds, birds, and aircraft." However, most of these objects may have their own dynamics; it is just that those dynamics are irrelevant to the rewards. Could the proposed method handle disturbances that have their own dynamics? If not, I suggest changing this example and making this clear in the paper. If so, then maybe add experiments for this case.

Typos: The "log" operators are missing in Eq. 7. There are some obvious grammar errors.
Previous approaches also quantify the epistemic uncertainty of dynamics through Bayesian network [21], bootstrapped Q-functions [37, 8], ensemble dynamics [40], and Stein variational inference [43] to tackle noisy environments. However, they typically require either complicated optimization methods or large networks. In contrast, DB learns a dynamics-relevant representation and encourages exploration by directly accessing the information gain of new transitions via DB-bonus. Another closely related line of studies uses the mutual information to promote exploration in RL. Novelty Search (NS) [53] proposes to learn a representation through IB. Curiosity Bottleneck (CB) [26] also performs exploration based on IB by measuring the task-relevant novelty. However, both NS and CB require extrinsic rewards to learn a value function and are not applicable for selfsupervised exploration. Moreover, NS contains additional k-nearest-neighbor to generate intrinsic reward and representation loss to constrain the distance of consecutive states, which are costly for computation. In contrast, our DB model handles self-supervised exploration without accessing extrinsic rewards. EMI [25] learns a representation by maximizing the mutual information in the forward dynamics and the inverse dynamics , which is different from the IB principle used in our method. In addition, we aim to perform robust exploration to overcome the white-noise problem, while EMI does not have an explicit mechanism to address the noise. Our work is also related to representation learning in RL. DrQ [60], RAD [27] and CURL [49] learn the state representation by data augmentation and contrastive learning [14, 20, 36] to improve the data-efficiency of DRL. Deep InfoMax [33] and Self-Predictive Representation (SPR) [46] learn the contrastive and predictive representations of dynamics, respectively, and utilize such representations as auxiliary losses for policy optimization. However, none of these existing approaches extracts information that benefits exploration. In contrast, we show that the dynamics-relevant representation learned by DB can be utilized for efficient exploration. 3 The Dynamic Bottleneck In this section, we introduce the objective function and architecture of the DB model. We consider an MDP that can be described by a tuple (O,A,P, r, γ), which consists of the observation space O, the action space A, the transition dynamics P, the reward function r, and the discount factor γ ∈ (0, 1). At each time step, an agent decides to perform an action at ∈ A after observing ot ∈ O, and then the observation transits to ot+1 with a reward rt received. In this paper, we use upper letters, such as Ot, to denote random variables and the corresponding lower case letter, such as ot, to represent their corresponding realizations. We first briefly introduce the IB principle [55]. In supervised setting that aims to learn a representation Z of a given input source X with the target source Y , IB maximizes the mutual information between Z and Y (i.e. max I(Z;Y )) and restricts the complexity of Z by using the constrain as I(Z;X) < Ic. Combining the two terms, the objective of IB is equal to max I(Z;Y )− αI(Z;X) with the introduction of a Lagrange multiplier. DB follows the IB principle [55] to learn dynamics-relevant representation. The input variable of the DB model is a tuple (Ot, At) that contains the current observation and action, and the target is the next observation Ot+1. We denote by St and St+1 the encoding of observations Ot and Ot+1. 
The goal of the DB model is to obtain a compressed latent representation Zt of (St, At), that preserves the information that is relevant to St+1 only. Specifically, we use fSo and f S m as the encoders of two consecutive observations ot and ot+1, respectively. We parameterize the dynamics-relevant representation zt by a Gaussian distribution with parameter φ, and it takes (st, at) as input. We summarize the DB model as follows, st = f S o (ot; θo), st+1 = f S m(ot+1; θm), zt ∼ gZ(st, at;φ). (1) Following the IB principle, the objective of the DB model seeks to maximize the mutual information I(Zt;St+1) while minimizing the mutual information I([St, At];Zt). To this end, we propose the DB objective by following the IB Lagrangian [55], which takes the form of min−I(Zt;St+1) + α1I([St, At];Zt). (2) Here α1 is a Lagrange multiplier that quantifies the amount of information about the next state preserved in Zt. Fig. 1 illustrates the DB objective. We minimize I([St, At];Zt) and consider it as a regularizer in the representation learning. Then the representation learning is done by maximizing the mutual information I(Zt, St+1). Maximizing I(Zt, St+1) ensures that we do not discard useful information from (St, At). In DB, the mutual information is estimated by several variational bounds parameterized by neural networks to enable differentiable and tractable computations. In what follows, we propose a lower bound of (2), which we optimize to train the DB model. 3.1 Maximizing the lower bound of I(Zt;St+1) As directly maximizing I(Zt;St+1) is intractable, we propose to optimize a predictive objective, which is a lower bound of I(Zt;St+1) [2]. It holds that I(Zt;St+1) = Ep(zt,st+1) [ log p(st+1|zt) p(st+1) ] = E [ log q(st+1|zt;ψ) p(st+1) ] +DKL[p(st+1|zt)‖q(st+1|zt;ψ)], (3) where p(st+1|zt) is an intractable conditional distribution and q(st+1|zt;ψ) is a tractable variational decoder with parameter ψ. By the non-negativity of the KL-divergence, we obtain the following lower bound, I(Zt;St+1) ≥ Ep(zt,st+1)[log q(st+1|zt;ψ)] +H(St+1), whereH(·) is the entropy. SinceH(St+1) is irrelevant to the parameter ψ, maximizing I(Zt;St+1) is equivalent to maximizing the following lower bound, Ipred , Ep(zt,st+1)[log q(st+1|zt;ψ)]. (4) Ipred can be interpreted as the log-likelihood of the next-state encoding st+1 given the dynamicsrelevant representation zt. In practice, we parameterize the prediction head q(st+1|zt;ψ) by a neural network that outputs a diagonal Gaussian random variable. Since st, st+1 and zt are all low-dimensional vectors instead of raw image pixels, optimizing Ipred is computationally efficient. Momentum Encoder To encode the consecutive observations ot and ot+1, we adopt the Siamese architecture [10] that uses the same neural network structures for the two encoders. Nevertheless, we observe that if we train both the encoders by directly maximizing Ipred, the Siamese architecture tends to converge to a collapsed solution. That is, the generated encodings appears to be uninformative constants. A simple fact is that if both the encoders generate zero vectors, predicting zeros conditioning on zt (or any variables) is a trivial solution. To address such issue, we update the parameter θo of fSo in (1) by directly optimizing Ipred. Meanwhile, we update the parameter θm of fSm by a momentum moving average of θo, which takes the form of θm ← τθm + (1 − τ)θo. In the sequel, we call fSo the online encoder and f S m the momentum encoder, respectively. 
Similar techniques is also adopted in previous study [18, 46] to avoid the mode collapse. 3.2 Contrastive Objective for Maximizing I(Zt;St+1) In addition to the lower bound of I(Zt;St+1) in §3.1, we also investigate the approach of maximizing the mutual information by contrastive learning (CL) [36]. CL classifies positive samples and negative samples in the learned representation space. An advantage of adopting CL is that training with negative samples plays the role of regularizer, which avoids collapsed solutions. Moreover, the contrastive objective yields a variational lower bound of the mutual information I(Z;St+1). To see such a fact, note that by the Bayes rule, we have I(Zt;St+1) ≥ Ep(zt,st+1)ES− [ log exp(h(zt, st+1))∑ sj∈S−∪st+1 exp(h(zt, sj)) ] , Ince. (5) Here h is a score function which assigns high scores to positive pairs and low score to negative pairs. We refer to Appendix A for a detailed proof of (5). The right-hand side of (5) is known as the InfoNCE objective [36]. The positive samples are obtained by directly sampling the transitions (s, a, s′). In contrast, the negative samples are obtained by first sampling a state-action pair (s, a), and then sampling a state s̃ independently. Then a negative sample is obtained by concatenating them together to form a tuple (s, a, s̃). The negative samples do not follow the transition dynamics. In practice, we collect the negative sample by sampling observation encodings randomly from the batch. We remark that comparing with methods that require data augmentation to construct negative samples [20, 49], DB utilizes a simple scheme to obtain positive and negative samples from on-policy experiences. In (5), we adopt the standard bilinear function as the score function h, which is defined as follows, h(zt, st+1) = f P o (q̄(zt;ψ)) >WfPm(st+1), (6) where fPo (·;ϕo) and fPm(·;ϕm) project st+1 and the mean value of next-state prediction q(st+1|zt;ϕ), i.e., q̄(·;ψ), to a latent space to apply the contrastive loss Ince in (5), andW is the parameter of the score function. Similar to the observation encoder and MoCo-based architectures [49, 20, 15], we also adopt an online projector fPo and a momentum projector f P m for zt and st+1, respectively. The momentum projector is updated by ϕm ← τϕm + (1− τ)ϕo. 3.3 Minimizing the Upper Bound of I([St, At];Zt) We minimize the mutual information I([St, At];Zt) through minimizing a tractable upper bound of the mutual information. To this end, we introduce a variational approximation q(zt) to the intractable marginal p(zt) = ∫ p(st, at)p(zt|st, at)dstat. Specifically, the following upper-bound of I([St, At];Zt) holds, I([St, At];Zt) = Ep(st,at) [p(zt|st, at) p(zt) ] = Ep(st,at) [p(zt|st, at) q(zt) ] −DKL [ p(zt)‖q(zt) ] ≤ Ep(st,at) [ DKL[p(zt|st, at)‖q(zt)] ] , Iupper, (7) where the inequality follows from the non-negativity of the KL divergence, and q(zt) is an approximation of the marginal distribution of Zt. We follow Alemi et al. [2] and use a standard spherical Gaussian distribution q(zt) = N (0, I) as the approximation. The expectation of Iupper is estimated by sampling from on-policy experiences. 3.4 The Loss Function and Architecture The final loss for training the DB model is a combination of the upper and lower bounds established in previous sections, min θo,φ,ψ,ϕo,W LDB = α1Iupper − α2Ipred − α3Ince, (8) where α1, α2 and α3 are hyper-parameters. As we show in the ablation study (§5), all of the three components in the loss plays an important role in learning dynamics-relevant representations. 
We illustrate the architecture of the DB model in Fig. 2. In practice, we minimize LDB in (8) by gradient descent, which iteratively updates the parameters of fSo , g Z , qψ,W and fPo . Meanwhile, we adopt exponential moving average to update the parameters of fSm and f P m to avoid collapsed solutions. We refer to Appendix B for the pseudocode of training DB model. 4 Exploration with DB-Bonus We are now ready to introduce the DB-bonus rdb for exploration. In this section, we first present the DB-bonus for self-supervised exploration. We establish the theoretical connections between the DB-bonus and provably efficient bonus functions. We further present the empirical estimation of DB-bonus and the policy optimization algorithm that utilizes the DB-bonus. In the sequel, we assume that the learned parameter Θ of the DB model follows a Bayesian posterior distribution given the training dataset Dm = {(sit, ait, sit+1)}i∈[0,m], which is a collection of past experiences from m episodes performed by the agent to train the DB model. We aim to estimate the following conceptual reward, which is defined by the mutual information between the parameter of the DB model and the transition dynamics given the training dataset, rdb(st, at) , I ( Θ; (st, at, St+1)|Dm )1/2 = [ H ( (st, at, St+1)|Dm ) −H ( (st, at, St+1)|Θ,Dm )]1/2 . (9) Intuitively, DB-bonus defined in (9) encourages the agent to explore transitions that are maximally informative to the improvement of the DB model. 4.1 Theoretical Analysis We show that the DB-bonus defined in (9) enjoys well theoretical properties, and establish theoretical connections between rdb and bonuses based on the optimism in the face of uncertainty [4, 23], which incorporates UCB into value functions in both tabular [5, 22, 17] and linear MDPs [24, 13]. Connection to UCB-bonus in linear MDPs In linear MDPs, the transition kernel and reward function are assumed to be linear. In such a setting, LSVI-UCB [24] provably attains a near-optimal worst-case regret, and we refer to Appendix C.1 for the details. The idea of LSVI-UCB is using an optimistic Q-value, which is obtained by adding an UCB-bonus rucb [1] to the estimation of the Q-value. The UCB-bonus is defined as rucbt = β · [ η(st, at) >Λ−1t η(st, at) ]1/2 , where β is a constant, Λt = ∑m i=0 η(x i t, a i t)η(x i t, a i t) > + λ · I is the Gram matrix, and m is the index of the current episode. The UCB-bonus measures the epistemic uncertainty of the state-action and is provably efficient [24]. For linear MDPs, we consider representation z ∈ Rc as the mean of the posterior gZ from the DB model, and set zt to be a linear function of the state-action encoding, i.e., zt = Wtη(st, at) parameterized by Wt ∈ Rc×d. Then, the following theorem establishes a connection between the DB-bonus rdb and the UCB-bonus rucb. Theorem 1. In linear MDPs, for tuning parameter β0 > 0, it holds that β0/ √ 2 · rucbt ≤ I(Wt; (st, at, St+1)|Dm) 1/2 ≤ β0 · rucbt , (10) where I(Wt; (st, at, St+1)|Dm)1/2 is the DB-bonus rdb(st, at) under the linear MDP setting. In addition, using rdb as bonus leads to the same regret as LSVI-UCB by following a similar proof to Jin et al. [24]. We refer to Appendix C.2 for the problem setup and the detailed proofs. We remark that Theorem 1 is an approximate derivation because we only consider the predictive objective Ipred in (8) in Theorem 1. Nevertheless, introducing the contrastive objective Ince is important in the training of the DB model as it prevents the mode collapse issue. 
Theorem 1 shows that the DB-bonus provides an instantiation of the UCB-bonus in DRL, which enables us to measure the epistemic uncertainty of high-dimensional states and actions without the linear MDP assumption. Connection to visiting count in tabular MDP The following theorem establishes connections between DB-bonus and the count-based bonus rcount(st, at) = β√ Nst,at+λ in tabular MDPs. Theorem 2. In tabular MDPs, it holds for the DB-bonus rdb(st, at) and the count-based intrinsic reward rcount(st, at) that, rdb(st, at) ≈ √ |S|/2√ Nst,at + λ = β0 · rcount(st, at), (11) when Nst,at is large, where λ > 0 is a tuning parameter, |S| is the number of states in tabular setting. We refer to Appendix C.3 for a detailed proofs. As a result, DB-bonus can also be considered as a count-based intrinsic reward in the space of dynamics-relevant representations. 4.2 Empirical Estimation To estimate such a bonus under our DB model, we face several challenges. (i) Firstly, estimating the bonus defined in (9) requires us to parameterize representation under a Bayesian learning framework, whereas our DB model is parameterized by non-Bayesian neural networks. (ii) Secondly, estimating the DB-bonus defined in (9) requires us to compute the mutual information between the unknown transitions and the estimated model, which is in general hard as we do not have access to such transitions in general. To address such challenges, we estimate a lower bound of the DB-bonus, which is easily implementable and achieves reasonable performance empirically. Specifically, we consider to use rdbl (st, at) as the lower bound of the information gain in (9), rdb(st, at) ≥ [ H ( g(st, at, St+1)|Dm ) −H ( g(st, at, St+1)|Θ,Dm )]1/2 , rdbl (st, at), (12) which holds for any mapping g according to Data Processing Inequality (DPI). DPI is an information theoretic concept that can be understood as ‘post-processing’ cannot increase information. Since g(st, at, St+1) is a post-processing of (st, at, St+1), we have I(Θ; (st, at, St+1)) > I(Θ; g(st, at, St+1)), where g is a neural network in practice. In our model, we adopt the following mapping, g(st, at, St+1)|Θ,Dm = gZ(st, at;φ), (13) where gZ is the representation distribution of DB, and φ constitutes a part of parameters of the total parameters Θ. Intuitively, since gZ is trained by IB principle to capture information of transitions, adopting the mapping gZ to (12) yields a reasonable approximation of the DB-bonus. It further holds rdbl (st, at) = [ H ( gmargin ) −H ( gZ(st, at;φ) )]1/2 = EΘDKL [ gZ(zt|st, at;φ)‖gmargin ]1/2 , (14) where we define gmargin = g(st, at, St+1)|Dm as the marginal of the encodings over the posterior of the parameters Θ of the DB model. In practice, since gmargin is intractable, we approximate gmargin with standard Gaussian distribution. We remark that such approximation is motivated by the training of DB model, which drives the marginal of representation gZ toward N (0, I) through minimizing Iupper in (7). Such approximation leads to a tractable estimation and stable empirical performances. In addition, since we do not train the DB model with Bayesian approach, we replace the expectation over posterior Θ in (14) by the corresponding point estimation, namely the parameter Θ of the neural networks trained with DB model on the dataset Dm. To summarize, we utilize the following approximation of the DB-bonus rdb proposed in (9), r̂dbl (st, at) = DKL [ gZ(·|st, at;φ) ‖ N (0, I) ]1/2 ≈ rdbl (st, at). 
(15) Since DB is trained by IB principle, which filters out the dynamics-irrelevant information, utilizing the bonus defined in (15) allows the agent to conduct robust exploration in noisy environments. We summarize the the overall RL algorithm with self-supervised exploration induced by the DB-bonus in Algorithm 1, which we refer to as Self-Supervised Exploration with DB-bonus (SSE-DB). For the RL implementation, we adopt Proximal Policy Optimization (PPO) [45] with generalized advantage estimation [44] and the normalization schemes from Burda et al. [11]. We refer to Appendix D for the implementation details. The codes are available at https://github.com/Baichenjia/DB. Algorithm 1 SSE-DB 1: Initialize: The DB model and the actor-critic network 2: for episode i = 1 to M do 3: for timestep i = 0 to T − 1 do 4: Obtain action from the actor at = π(st), then execute at and observe the state st+1; 5: Add (st, at, st+1) into the on-policy experiences; 6: Obtain the DB-bonus r̂dbl of (st, at) by (15); 7: end for 8: Update the actor and critic by PPO with the collected on-policy experiences as the input; 9: Update DB by gradient descent based on (8) with the collected on-policy experiences; 10: end for 100 200 300 400 500 Alien 200 400 600 800 Asteroids 0 200 400 600 800 BankHeist −60 −40 −20 0 20 40 60 Boxing 0 50 100 150 200 250 300 Breakout 1000 1500 2000 2500 3000 3500 Centipede 5000 10000 15000 20000 25000 30000 35000 CrazyClimber 2000 4000 6000 8000 Gopher 0 50 100 150 200 250 300 350 Gravitar 100 200 300 400 500 600 700 Kangaroo 0 5000 10000 15000 20000 KungFuMaster 200 300 400 500 600 MsPacman 0 100 200 300 400 0 100 200 300 400 500 600 700 Seaquest 0 100 200 300 400 250 500 750 1000 1250 1500 1750 Solaris 0 100 200 300 400 −25 −20 −15 −10 −5 0 Tennis 0 100 200 300 400 1000 1500 2000 2500 3000 3500 4000 TimePilot 0 100 200 300 400 2500 5000 7500 10000 12500 15000 17500 UpNDown 0 100 200 300 400 400 600 800 1000 1200 1400 1600 1800 WizardOfWor Frames (millions) Ex tri ns ic Re wa rd p er E pi so de SSE-DB(ours) ICM Disagreement CB random Figure 3: The evaluation curve in Atari games. The different methods are trained with different intrinsic rewards. The extrinsic rewards are only used to measure the performance. Each method was run with three random seeds. 5 Experiments We evaluate SSE-DB on Atari games. We conduct experiments to compare the following methods. (i) SSE-DB. The proposed method in Alg. 1. (ii) Intrinsic Curiosity Model (ICM) [39]. ICM uses an inverse dynamics model to extract features related to the actions. ICM further adopts the prediction error of dynamics as the intrinsic reward for exploration. (iii) Disagreement [40]. This method captures epistemic uncertainty by the disagreement among predictions from an ensemble of dynamics models. Disagreement performs competitive to ICM and RND [12]. Also, this method is robust to white-noise. (iv) Curiosity Bottleneck (CB) [26]. CB quantifies the compressiveness of observation with respect to the representation as the bonus. CB is originally proposed for exploration with extrinsic rewards. We adapt CB for self-supervised exploration by setting the extrinsic reward zero. We compare the model complexity of all the methods in Appendix D. Other methods including Novelty Search [53] and Contingency-aware exploration [16] are also deserve to compare. However, we find Novelty Search ineffective in our implementation since the detailed hyper-parameters and empirical results in Atari are not available. 
Contingency-aware exploration is related to DB while the attention module is relatively complicated and the code is not achievable. 5.1 The Main Results We evaluate all methods on Atari games with high-dimensional observations. The selected 18 games are frequently used in previous approaches for efficient exploration. The overall results are provided in Fig. 3. We highlight that in our experiments, the agents are trained without accessing the extrinsic rewards. The extrinsic rewards are ONLY utilized to evaluate the performance of the policies obtained from self-supervised exploration. Our experiments show that SSE-DB performs the best in 15 of 18 tasks, suggesting that dynamics-relevant feature together with DB-bonus helps the exploration of states with high extrinsic rewards. In addition, since pure exploration without extrinsic rewards is very difficult in most tasks, a random baseline is required to show whether the exploration methods learn meaningful behaviors. We adopt the random score from DQN [35] and show the comparison in the figure. In Solaris, Centipede and TimePilot, our method obtains similar scores to random policy, which suggests that relying solely on intrinsic rewards is insufficient to solve these tasks. We also observe that SSE-DB is suboptimal in Tennis. A possible explanation is that for Tennis, the prediction error based methods, such as ICM, could capture additional information in the intrinsic rewards. For example, in Tennis, the prediction error becomes higher when the ball moves faster or when the agent hits the ball towards a tricky direction. The prediction-error based methods can naturally benefit from such nature of the game. In contrast, SSE-DB encourages exploration based on the information gain from learning the dynamics-relevant representation, which may not capture such critical events in Tennis. 5.2 Robustness in the Presence of Noises Observation Noises. To analyze the robustness of SSE-DB to observation noises, an important evaluation metric is the performance of SSE-DB in the presence of dynamics-irrelevant information. A particularly challenging distractor is the white-noise [11, 26], which incorporates random taskirrelevant patterns to the observations. In such a scenario, a frequently visited state by injecting an unseen noise pattern may be mistakenly assigned with a high intrinsic reward by curiosity or pseudo-count based methods. We use two types of distractors for the observations of Atari games, namely, (1) the random-box noise distractor, which places boxes filled with random Gaussian noise over the raw pixels, and (2) the pixellevel noise distractor, which adds pixel-wise Gaussian noise to the observations. Fig. 4 shows examples of the two types of distractors. In the sequel, we discuss results for the random-box noise distractor on selected Atari games, which we find sufficiently representative, and defer the complete report to Appendix E.1 and E.2. Fig. 5(a) shows the performance of the compared methods on Alien, Breakout and TimePilot with and without noises. We observe that SSE-DB outperforms ICM on Alien and TimePilot with random-box noises. Nevertheless, in Breakout, we observe that both the methods fail to learn informative policies. A possible explanation is that, in Breakout, the ball is easily masked by the box-shaped noise (i.e., middle of Fig. 4). 
Action Noise. In addition to observation noises, noise in actions also poses challenges for learning the transition dynamics. To further study the robustness of SSE-DB, we conduct experiments on Atari games with sticky actions [32, 40]. At each time step, the agent may execute the previous action instead of the action output by the current policy, with a probability of 0.25. We illustrate results on three selected Atari games, i.e., Boxing, Gravitar and MsPacman, in Fig. 5(b) and defer the complete results to Appendix E.3. Our experiments show that SSE-DB is robust to action noise, whereas ICM suffers a significant performance drop from the action noise.

5.3 Visualization and Ablation Study
Visualization of the Learned Representations. To understand the latent representation learned by the DB model, we visualize the learned Z with t-SNE [31] plots, which project each 128-dimensional z-vector to a 2-dimensional one through dimensionality reduction. We compare the representations learned by SSE-DB and ICM with random-box noise. We illustrate the learned representations of MsPacman in Fig. 6. According to the visualization, the representations learned by DB tend to align temporally consecutive movements on the same curve. Moreover, each segment of a curve corresponds to a semantic component of the trajectory, such as eating pellets or avoiding ghosts. The segments of a curve end at critical states, including the death and rebirth of the agent and the ghosts. The visualization indicates that DB captures the dynamics-relevant information well in the learned representations. In contrast, such temporally consecutive patterns are missing in the learned representation of ICM.

Visualization of the DB-bonus. We provide a visualization of the DB-bonus in Appendix E.5. The results show that the DB-bonus effectively encourages the agent to explore the informative transitions.

Ablation Study. The training of the DB loss consists of multiple components, including Ipred, Ince, Iupper, and the momentum observation encoder. To analyze the importance of the components, we conduct an ablation study by removing each of them respectively and evaluating the DB model correspondingly. The ablation study suggests that all of the components are crucial for learning effective dynamics-relevant representations. In addition, we observe that Iupper is particularly important in environments with dynamics-irrelevant noise. Please refer to Appendix E.4 for details.

6 Conclusion
In this paper, we introduce the Dynamic Bottleneck (DB) model, which learns dynamics-relevant representations based on the IB principle. Based on the DB model, we further propose the DB-bonus for efficient exploration. We establish theoretical connections between the proposed DB-bonus and provably efficient bonuses. Our experiments show that SSE-DB outperforms several strong baselines in stochastic environments for self-supervised exploration. Moreover, we observe that DB learns well-structured representations and that the DB-bonus characterizes informative transitions for exploration. For our future work, we wish to combine the DB representation with effective exploration methods, including BeBold [61] and NGU [6], to enhance their robustness in stochastic environments.
Acknowledgements The authors thank Tencent Robotics X and the Vector Institute for providing the computation resources. Part of the work was done during an internship at the Tencent Robotics X lab. The authors also thank the anonymous reviewers, whose invaluable suggestions have helped us to improve the paper.
1. What is the main idea of the paper and how does it contribute to exploration methods in reinforcement learning? 2. How does the proposed method utilize the information bottleneck principle and contrastive learning? 3. What are some potential concerns regarding the overlap between this work and previous studies on exploration using the information bottleneck principle? 4. Why minimize I([St, At]; Zt) instead of maximizing it, given that the goal is to acquire dynamics-relevant information and discard irrelevant information? 5. Would including experiments with rewards combined as DB-Bonus + extrinsic-reward strengthen the claim of the paper? 6. How do negative samples differ from positive ones in Equation 5, and what does "sampling observation encodings randomly from the batch" mean? 7. Is Theory 1 useful and relevant despite only holding without the contrastive objective? 8. Does SSE-DB perform better than other curiosity-driven methods in all tasks or only in certain environments? 9. What is the significance of the data processing inequality mentioned in line 222? 10. Could you clarify what \mu represents in line 191? 11. Should a different notation be used for variational approximation r(z_t) to avoid confusion with the reward function?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a new self-supervised exploration method for RL that encourages an agent to explore states which are more relevant in terms of information gain. To this aim, the Information-Bottleneck (IB) principle is utilized to learn a model that can obtain dynamics-relevant information and discard dynamics-irrelevant features. The paper evaluates the proposed technique in multiple Atari environments without accessing the extrinsic rewards. Review Overall, the main idea of the paper is interesting, and it is well explained and well written. In addition, the experiments in this paper support the main claim of the paper and show better performance than some of the curiosity-driven methods. While the idea of the paper seems promising, utilizing the information bottleneck for exploration has been well studied, and it seems this paper has considerable overlap with previous works such as [1] and [2]. The main difference that I see between this work and previous works is the use of contrastive learning. Is that fair? Or are there many more differences between them? It is not clear to me why I([St, At]; Zt) should be minimized rather than maximized. Since the main claim of the paper is to acquire dynamics-relevant information and discard dynamics-irrelevant information, why does minimizing I([St, At]; Zt) not result in discarding "relevant" information? I might be missing something here, but can the authors elaborate on this? It would have been interesting if the authors had included experiments in which reward = DB-bonus + extrinsic reward for a game like Montezuma's Revenge. The reason I mention this is that if combining the two works well, it would further support the claim of this paper (i.e., acquiring dynamics-relevant information and discarding irrelevant information). It is not clear how negative samples are collected for Eq. 5. What does "sampling observation encodings randomly from the batch" [line 143] mean? Don't we do the same thing for regular samples (i.e., positive ones)? Is it fair to say that Theorem 1 is not that useful and relevant, as it only holds without the contrastive objective? If so, it should not be claimed as a contribution of this paper. SSE-DB seems to be better in only 12-13 tasks, not 15 (it is not better in Tennis, CrazyClimber, Centipede, TimePilot, MsPacman). Refer to the results in Figure 3. Is that right? Minor comments: Explain the information-bottleneck principle in the paper or at least in the appendix; the paper should be self-contained. Explain what the "data processing inequality" in line 222 is. What is \mu in line 191? Use a different notation for the variational approximation r(z_t); it can be mistaken for the reward function. [1] Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty, Youngjin Kim, Wontae Nam, Hyunwoo Kim, Ji-Hoon Kim, Gunhee Kim [2] EMI: Exploration with Mutual Information, Hyoungseok Kim, Jaekyeom Kim, Yeonwoo Jeong, Sergey Levine, Hyun Oh Song
NIPS
1. How does the proposed method capture dynamics-relevant information, and how is it different from other related works? 2. Why are certain hyperparameters (α1, α2, α3) chosen, and how sensitive is the method to their selection? 3. Can you provide more intuitive explanations and supporting evidence for why the proposed method should be better than previous works? 4. How do recent baselines for self-supervised exploration, such as VISR, APT, Plan2Explore, APS, RE3, and ProtoRL, compare to the proposed method? 5. Are there any minor errors or typos in the paper that could be corrected?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an exploration method for reinforcement learning by introducing a Dynamic Bottleneck (DB) model that captures dynamics-relevant information from the states. To this end, they propose to maximize the mutual information between Z_t and S_{t+1} but minimize the mutual information between Z_t and (S_t, a_t) to compress the representation. The intrinsic reward is based on the learned DB model. The proposed method is evaluated on tasks from the Atari Learning Environment and on corrupted Atari tasks. Review The proposed method is not surprising, but it seems novel and reasonable, the writing is clear, and its effectiveness is supported by extensive experimental results. I like the paper in general, but I have some minor concerns that could be addressed easily, and I'm willing to increase the score when the concerns below are resolved: The positioning of the proposed method in paragraph 2 of the introduction is a bit questionable. For example, the representations learned by the inverse dynamics model that you mention under 'curiosity-driven explorations' should capture controllable features (relevant discussions are in [Pathak'17; Badia'20]), which in principle should not be affected by dynamics-irrelevant objects like birds. While this aspect of capturing dynamics-relevant information is a very important characteristic of the proposed method, and the method could be very good at it, it may not be unique compared to other related works. Could you elaborate on this and provide a more intuitive explanation and supporting evidence for why the proposed method should be better than the previous works? While I appreciate that the experimental results are extensive, important and more recent baselines for self-supervised exploration are missing, such as VISR [Hansen'20], APT [Liu'21a], and Plan2Explore [Sekar'20], which were all available at the time of submission (via conference, arXiv, OpenReview). As a note, much more recent works like APS [Liu'21b], RE3 [Seo'21], and ProtoRL [Yarats'21] are also relevant. How are the hyperparameters (α1, α2, α3) selected? Is the proposed method sensitive to the choice of these hyperparameters? Minor: line 38: discards -> discard; line 137: plays -> play; line 292: observer -> observe. [Pathak'17] Curiosity-driven Exploration by Self-supervised Prediction [Sekar'20] Planning to Explore via Self-Supervised World Models [Hansen'20] Fast Task Inference with Variational Intrinsic Successor Features [Liu'21a] Behavior From the Void: Unsupervised Active Pre-Training [Liu'21b] APS: Active Pretraining with Successor Features [Seo'21] State Entropy Maximization with Random Encoders for Efficient Exploration [Yarats'21] Reinforcement Learning with Prototypical Representations
NIPS
Title Dynamic Bottleneck for Robust Self-Supervised Exploration Abstract Exploration methods based on pseudo-count of transitions or curiosity of dynamics have achieved promising results in solving reinforcement learning with sparse rewards. However, such methods are usually sensitive to environmental dynamicsirrelevant information, e.g., white-noise. To handle such dynamics-irrelevant information, we propose a Dynamic Bottleneck (DB) model, which attains a dynamics-relevant representation based on the information-bottleneck principle. Based on the DB model, we further propose DB-bonus, which encourages the agent to explore state-action pairs with high information gain. We establish theoretical connections between the proposed DB-bonus, the upper confidence bound (UCB) for linear case, and the visiting count for tabular case. We evaluate the proposed method on Atari suits with dynamics-irrelevant noises. Our experiments show that exploration with DB bonus outperforms several state-of-the-art exploration methods in noisy environments. 1 Introduction The tradeoff between exploration and exploitation has long been a major challenge in reinforcement learning (RL) [35, 50, 58]. Generally, excessive exploitation of the experience suffers from the potential risk of being suboptimal, whereas excessive exploration of novel states hinders the improvement of the policy. A straightforward way to tackle the exploration-exploitation dilemma is to enhance exploration efficiency while keeping exploitation in pace. When the extrinsic rewards are dense, reward shaping is commonly adopted for efficient exploration. However, in many real-world applications such as autonomous driving [34], the extrinsic rewards are sparse, making efficient exploration a challenging task in developing practical RL algorithms. The situations become even worse when the extrinsic rewards are entirely unavailable. In such a scenario, the task of collecting informative trajectories from exploration is known as the self-supervised exploration [11]. An effective approach to self-supervised exploration is to design a dense intrinsic reward that motivates the agent to explore novel transitions. Previous attempts include count-based [9] and curiosity-driven [39] explorations. The count-based exploration builds a density model to measure the pseudo-count of state visitation and assign high intrinsic rewards to less frequently visited states. In contrast, the 35th Conference on Neural Information Processing Systems (NeurIPS 2021). curiosity-driven methods maintain a predictive model of the transitions and encourage the agent to visit transitions with high prediction errors. However, all these methods becomes unstable when the states are noisy, e.g., containing dynamics-irrelevant information. For example, in autonomous driving tasks, the states captured by the camera may contain irrelevant objects, such as clouds that behave similar to Brownian movement. Hence, if we measure the novelty of states or the curiosity of transitions through raw observed pixels, exploration are likely to be affected by the dynamics of these irrelevant objects. To encourage the agent to explore the most informative transitions of dynamics, we propose a Dynamic Bottleneck (DB) model, which generates a dynamics-relevant representation Zt of the current state-action pair (St, At) through the Information-Bottleneck (IB) principle [55]. The goal of training DB model is to acquire dynamics-relevant information and discard dynamics-irrelevant features simultaneously. 
To this end, we maximize the mutual-information I(Zt;St+1) between a latent representation Zt and the next state St+1 through maximizing its lower bound and using contrastive learning. Meanwhile, we minimize the mutual-information I([St, At];Zt) between the state-action pair and the corresponding representation to compress dynamics-irrelevant information. Based on our proposed DB model, we further construct a DB-bonus for exploration. DB-bonus measures the novelty of state-action pairs by their information gain with respect to the representation computed from the DB model. We show that the DB-bonus are closely related to the provably efficient UCB-bonus in linear Markov Decision Processes (MDPs) [1] and the visiting count in tabular MDPs [3, 22]. We further estimate the DB-bonus by the learned dynamics-relevant representation from the DB model. We highlight that exploration based on DB-bonus directly utilize the information gain of the transitions, which filters out dynamics-irrelevant noise. We conduct experiments on the Atari suit with dynamics-irrelevant noise injected. Results demonstrate that our proposed selfsupervised exploration with DB-bonus is robust to dynamics-irrelevant noise and outperforms several state-of-the-art exploration methods. 2 Related Work Our work is closely related to previous exploration algorithms that construct intrinsic rewards to quantify the novelty of states and transitions. Several early approaches directly define the pseudocount by certain statistics to measure the novelty of states [41, 30]; more recent methods utilize density model [9, 38] or hash map [52, 42] for state statistics. Nevertheless, these approaches are easily affected by dynamics-irrelevant information such as white-noise. The contingency awareness method [16] addresses such an issue by using an attentive model to locate the agent and computes the pseudocount based on regions around the agent. However, such an approach could ignore features that are distant from the agent but relevant to the transition dynamics. Another line of research measures the novelty through learning a dynamics model and then use the prediction error to generate an intrinsic reward. These methods are known as the curiosity-driven exploration algorithms. Similar to the pseudo-count based methods, curiosity-driven methods become unstable in the presence of noises, because the prediction model is likely to yield high error for stochastic inputs or targets. Some recent attempts improve the curiosity-driven approach by learning the inverse dynamics [39] and variational dynamics [7] to define curiosity, or utilizes the prediction error of a random network to construct intrinsic rewards [12]. However, without explicitly removing dynamics-irrelevant information, these methods are still vulnerable to noises in practice [11]. The entropy-based exploration uses state entropy as the intrinsic reward. VISR [19], APT [29] and APS [28] use unsupervised skill discovery for fast task adaptation. In the unsupervised stage, they use k-nearest-neighbor entropy estimator to measure the entropy of state, and then use it as the intrinsic reward. RE3 [47] and ProtoRL [59] use random encoder and prototypes to learn the representation and use state-entropy as bonuses in exploration. Nevertheless, the state entropy will increase significantly if we inject noises in the state space. The entropy-based exploration will be misled by the noises. 
Previous approaches also quantify the epistemic uncertainty of dynamics through Bayesian network [21], bootstrapped Q-functions [37, 8], ensemble dynamics [40], and Stein variational inference [43] to tackle noisy environments. However, they typically require either complicated optimization methods or large networks. In contrast, DB learns a dynamics-relevant representation and encourages exploration by directly accessing the information gain of new transitions via DB-bonus. Another closely related line of studies uses the mutual information to promote exploration in RL. Novelty Search (NS) [53] proposes to learn a representation through IB. Curiosity Bottleneck (CB) [26] also performs exploration based on IB by measuring the task-relevant novelty. However, both NS and CB require extrinsic rewards to learn a value function and are not applicable for selfsupervised exploration. Moreover, NS contains additional k-nearest-neighbor to generate intrinsic reward and representation loss to constrain the distance of consecutive states, which are costly for computation. In contrast, our DB model handles self-supervised exploration without accessing extrinsic rewards. EMI [25] learns a representation by maximizing the mutual information in the forward dynamics and the inverse dynamics , which is different from the IB principle used in our method. In addition, we aim to perform robust exploration to overcome the white-noise problem, while EMI does not have an explicit mechanism to address the noise. Our work is also related to representation learning in RL. DrQ [60], RAD [27] and CURL [49] learn the state representation by data augmentation and contrastive learning [14, 20, 36] to improve the data-efficiency of DRL. Deep InfoMax [33] and Self-Predictive Representation (SPR) [46] learn the contrastive and predictive representations of dynamics, respectively, and utilize such representations as auxiliary losses for policy optimization. However, none of these existing approaches extracts information that benefits exploration. In contrast, we show that the dynamics-relevant representation learned by DB can be utilized for efficient exploration. 3 The Dynamic Bottleneck In this section, we introduce the objective function and architecture of the DB model. We consider an MDP that can be described by a tuple (O,A,P, r, γ), which consists of the observation space O, the action space A, the transition dynamics P, the reward function r, and the discount factor γ ∈ (0, 1). At each time step, an agent decides to perform an action at ∈ A after observing ot ∈ O, and then the observation transits to ot+1 with a reward rt received. In this paper, we use upper letters, such as Ot, to denote random variables and the corresponding lower case letter, such as ot, to represent their corresponding realizations. We first briefly introduce the IB principle [55]. In supervised setting that aims to learn a representation Z of a given input source X with the target source Y , IB maximizes the mutual information between Z and Y (i.e. max I(Z;Y )) and restricts the complexity of Z by using the constrain as I(Z;X) < Ic. Combining the two terms, the objective of IB is equal to max I(Z;Y )− αI(Z;X) with the introduction of a Lagrange multiplier. DB follows the IB principle [55] to learn dynamics-relevant representation. The input variable of the DB model is a tuple (Ot, At) that contains the current observation and action, and the target is the next observation Ot+1. We denote by St and St+1 the encoding of observations Ot and Ot+1. 
The goal of the DB model is to obtain a compressed latent representation $Z_t$ of $(S_t, A_t)$ that preserves only the information relevant to $S_{t+1}$. Specifically, we use $f^S_o$ and $f^S_m$ as the encoders of the two consecutive observations $o_t$ and $o_{t+1}$, respectively. We parameterize the dynamics-relevant representation $z_t$ by a Gaussian distribution with parameter $\phi$, which takes $(s_t, a_t)$ as input. We summarize the DB model as follows,
\[
s_t = f^S_o(o_t; \theta_o), \qquad s_{t+1} = f^S_m(o_{t+1}; \theta_m), \qquad z_t \sim g^Z(s_t, a_t; \phi). \tag{1}
\]
Following the IB principle, the objective of the DB model seeks to maximize the mutual information $I(Z_t; S_{t+1})$ while minimizing the mutual information $I([S_t, A_t]; Z_t)$. To this end, we propose the DB objective by following the IB Lagrangian [55], which takes the form of
\[
\min\; -I(Z_t; S_{t+1}) + \alpha_1 I([S_t, A_t]; Z_t). \tag{2}
\]
Here $\alpha_1$ is a Lagrange multiplier that quantifies the amount of information about the next state preserved in $Z_t$. Fig. 1 illustrates the DB objective. We minimize $I([S_t, A_t]; Z_t)$ and consider it as a regularizer in the representation learning. The representation learning is then done by maximizing the mutual information $I(Z_t; S_{t+1})$, which ensures that we do not discard useful information from $(S_t, A_t)$. In DB, the mutual information is estimated by several variational bounds parameterized by neural networks to enable differentiable and tractable computation. In what follows, we propose a lower bound of (2), which we optimize to train the DB model. 3.1 Maximizing the lower bound of $I(Z_t; S_{t+1})$ As directly maximizing $I(Z_t; S_{t+1})$ is intractable, we propose to optimize a predictive objective, which is a lower bound of $I(Z_t; S_{t+1})$ [2]. It holds that
\[
I(Z_t; S_{t+1}) = \mathbb{E}_{p(z_t, s_{t+1})}\!\left[\log \frac{p(s_{t+1} \mid z_t)}{p(s_{t+1})}\right]
= \mathbb{E}\!\left[\log \frac{q(s_{t+1} \mid z_t; \psi)}{p(s_{t+1})}\right] + \mathbb{E}\big[D_{\mathrm{KL}}[p(s_{t+1} \mid z_t)\,\|\,q(s_{t+1} \mid z_t; \psi)]\big], \tag{3}
\]
where $p(s_{t+1} \mid z_t)$ is an intractable conditional distribution and $q(s_{t+1} \mid z_t; \psi)$ is a tractable variational decoder with parameter $\psi$. By the non-negativity of the KL divergence, we obtain the following lower bound, $I(Z_t; S_{t+1}) \geq \mathbb{E}_{p(z_t, s_{t+1})}[\log q(s_{t+1} \mid z_t; \psi)] + \mathcal{H}(S_{t+1})$, where $\mathcal{H}(\cdot)$ is the entropy. Since $\mathcal{H}(S_{t+1})$ is irrelevant to the parameter $\psi$, maximizing $I(Z_t; S_{t+1})$ is equivalent to maximizing the following lower bound,
\[
I_{\mathrm{pred}} \triangleq \mathbb{E}_{p(z_t, s_{t+1})}[\log q(s_{t+1} \mid z_t; \psi)]. \tag{4}
\]
$I_{\mathrm{pred}}$ can be interpreted as the log-likelihood of the next-state encoding $s_{t+1}$ given the dynamics-relevant representation $z_t$. In practice, we parameterize the prediction head $q(s_{t+1} \mid z_t; \psi)$ by a neural network that outputs a diagonal Gaussian random variable. Since $s_t$, $s_{t+1}$ and $z_t$ are all low-dimensional vectors instead of raw image pixels, optimizing $I_{\mathrm{pred}}$ is computationally efficient. Momentum Encoder To encode the consecutive observations $o_t$ and $o_{t+1}$, we adopt the Siamese architecture [10], which uses the same neural network structure for the two encoders. Nevertheless, we observe that if we train both encoders by directly maximizing $I_{\mathrm{pred}}$, the Siamese architecture tends to converge to a collapsed solution; that is, the generated encodings appear to be uninformative constants. A simple fact is that if both encoders generate zero vectors, predicting zeros conditioned on $z_t$ (or any variable) is a trivial solution. To address this issue, we update the parameter $\theta_o$ of $f^S_o$ in (1) by directly optimizing $I_{\mathrm{pred}}$. Meanwhile, we update the parameter $\theta_m$ of $f^S_m$ by a momentum moving average of $\theta_o$, which takes the form of $\theta_m \leftarrow \tau\theta_m + (1-\tau)\theta_o$. In the sequel, we call $f^S_o$ the online encoder and $f^S_m$ the momentum encoder, respectively.
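To make the encoders, the Gaussian representation head and the predictive term concrete, the following is a minimal PyTorch sketch; the layer sizes, the flattened-observation input, and all module names are illustrative assumptions rather than the authors' exact architecture (the actual encoders operate on image observations).

```python
# Minimal sketch (not the authors' exact architecture): online/momentum encoders,
# Gaussian representation head g^Z, momentum update, and the predictive term I_pred (Eq. 4).
import torch
import torch.nn as nn
import torch.distributions as td

class Encoder(nn.Module):
    """Maps a (flattened) observation o_t to an encoding s_t."""
    def __init__(self, obs_dim, feature_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feature_dim))
    def forward(self, obs):
        return self.net(obs)

class GaussianHead(nn.Module):
    """g^Z(s_t, a_t; phi): diagonal Gaussian over the representation z_t."""
    def __init__(self, feature_dim, act_dim, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feature_dim + act_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))
    def forward(self, s, a):
        mu, log_std = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return td.Normal(mu, log_std.clamp(-5, 2).exp())

class PredictionHead(nn.Module):
    """q(s_{t+1} | z_t; psi): diagonal Gaussian over the next-state encoding."""
    def __init__(self, latent_dim, feature_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * feature_dim))
    def forward(self, z):
        mu, log_std = self.net(z).chunk(2, dim=-1)
        return td.Normal(mu, log_std.clamp(-5, 2).exp())

@torch.no_grad()
def momentum_update(online, momentum, tau=0.99):
    """theta_m <- tau * theta_m + (1 - tau) * theta_o."""
    for p_m, p_o in zip(momentum.parameters(), online.parameters()):
        p_m.mul_(tau).add_((1.0 - tau) * p_o)

def i_pred(online_enc, momentum_enc, g_z, q_psi, obs, act, next_obs):
    """Monte-Carlo estimate of I_pred = E[log q(s_{t+1} | z_t; psi)]."""
    s_t = online_enc(obs)
    with torch.no_grad():                      # target encoding from the momentum encoder
        s_next = momentum_enc(next_obs)
    z_t = g_z(s_t, act).rsample()              # reparameterized sample of z_t
    return q_psi(z_t).log_prob(s_next).sum(-1).mean()
```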
Similar techniques are also adopted in previous studies [18, 46] to avoid mode collapse. 3.2 Contrastive Objective for Maximizing $I(Z_t; S_{t+1})$ In addition to the lower bound of $I(Z_t; S_{t+1})$ in §3.1, we also investigate the approach of maximizing the mutual information by contrastive learning (CL) [36]. CL classifies positive samples and negative samples in the learned representation space. An advantage of adopting CL is that training with negative samples plays the role of a regularizer, which avoids collapsed solutions. Moreover, the contrastive objective yields a variational lower bound of the mutual information $I(Z_t; S_{t+1})$. To see this, note that by the Bayes rule, we have
\[
I(Z_t; S_{t+1}) \geq \mathbb{E}_{p(z_t, s_{t+1})}\mathbb{E}_{S^-}\!\left[\log \frac{\exp(h(z_t, s_{t+1}))}{\sum_{s_j \in S^- \cup \{s_{t+1}\}} \exp(h(z_t, s_j))}\right] \triangleq I_{\mathrm{nce}}. \tag{5}
\]
Here $h$ is a score function which assigns high scores to positive pairs and low scores to negative pairs. We refer to Appendix A for a detailed proof of (5). The right-hand side of (5) is known as the InfoNCE objective [36]. The positive samples are obtained by directly sampling the transitions $(s, a, s')$. In contrast, the negative samples are obtained by first sampling a state–action pair $(s, a)$ and then sampling a state $\tilde{s}$ independently; a negative sample is then obtained by concatenating them to form a tuple $(s, a, \tilde{s})$. The negative samples do not follow the transition dynamics. In practice, we collect the negative samples by sampling observation encodings randomly from the batch. We remark that, compared with methods that require data augmentation to construct negative samples [20, 49], DB utilizes a simple scheme to obtain positive and negative samples from on-policy experiences. In (5), we adopt the standard bilinear function as the score function $h$, which is defined as follows,
\[
h(z_t, s_{t+1}) = f^P_o(\bar{q}(z_t; \psi))^\top W f^P_m(s_{t+1}), \tag{6}
\]
where $f^P_o(\cdot; \varphi_o)$ and $f^P_m(\cdot; \varphi_m)$ project the mean of the next-state prediction $q(s_{t+1} \mid z_t; \psi)$, i.e., $\bar{q}(\cdot; \psi)$, and $s_{t+1}$, respectively, to a latent space in which the contrastive loss $I_{\mathrm{nce}}$ in (5) is applied, and $W$ is the parameter of the score function. Similar to the observation encoders and MoCo-based architectures [49, 20, 15], we also adopt an online projector $f^P_o$ and a momentum projector $f^P_m$ for $z_t$ and $s_{t+1}$, respectively. The momentum projector is updated by $\varphi_m \leftarrow \tau\varphi_m + (1-\tau)\varphi_o$. 3.3 Minimizing the Upper Bound of $I([S_t, A_t]; Z_t)$ We minimize the mutual information $I([S_t, A_t]; Z_t)$ through a tractable upper bound. To this end, we introduce a variational approximation $q(z_t)$ to the intractable marginal $p(z_t) = \int p(s_t, a_t)\, p(z_t \mid s_t, a_t)\, \mathrm{d}s_t\, \mathrm{d}a_t$. Specifically, the following upper bound of $I([S_t, A_t]; Z_t)$ holds,
\[
I([S_t, A_t]; Z_t) = \mathbb{E}_{p(s_t, a_t, z_t)}\!\left[\log \frac{p(z_t \mid s_t, a_t)}{p(z_t)}\right]
= \mathbb{E}\!\left[\log \frac{p(z_t \mid s_t, a_t)}{q(z_t)}\right] - D_{\mathrm{KL}}\big[p(z_t)\,\|\,q(z_t)\big]
\leq \mathbb{E}_{p(s_t, a_t)}\big[D_{\mathrm{KL}}[p(z_t \mid s_t, a_t)\,\|\,q(z_t)]\big] \triangleq I_{\mathrm{upper}}, \tag{7}
\]
where the inequality follows from the non-negativity of the KL divergence, and $q(z_t)$ is an approximation of the marginal distribution of $Z_t$. We follow Alemi et al. [2] and use a standard spherical Gaussian distribution $q(z_t) = \mathcal{N}(0, I)$ as the approximation. The expectation in $I_{\mathrm{upper}}$ is estimated by sampling from on-policy experiences. 3.4 The Loss Function and Architecture The final loss for training the DB model is a combination of the upper and lower bounds established in the previous sections,
\[
\min_{\theta_o, \phi, \psi, \varphi_o, W}\; \mathcal{L}_{\mathrm{DB}} = \alpha_1 I_{\mathrm{upper}} - \alpha_2 I_{\mathrm{pred}} - \alpha_3 I_{\mathrm{nce}}, \tag{8}
\]
where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are hyper-parameters. As we show in the ablation study (§5), all three components of the loss play an important role in learning dynamics-relevant representations.
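Continuing the sketch above, the contrastive term, the upper bound and the combined loss of (8) can be written as follows; the in-batch negatives, the projector sizes, and the coefficient values are illustrative assumptions, not the authors' exact settings.

```python
# Minimal sketch of the contrastive term I_nce (Eq. 5-6), the upper bound I_upper (Eq. 7),
# and the combined DB loss (Eq. 8). Projector sizes and in-batch negatives are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as td

class Projector(nn.Module):
    def __init__(self, in_dim, proj_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, proj_dim))
    def forward(self, x):
        return self.net(x)

def i_nce(pred_mean, s_next, proj_online, proj_momentum, W):
    """InfoNCE lower bound with the bilinear score h = f_o^P(q_bar(z))^T W f_m^P(s').
    Negatives are the other next-state encodings in the batch."""
    u = proj_online(pred_mean)                     # [B, d]
    with torch.no_grad():
        v = proj_momentum(s_next)                  # [B, d], momentum branch is not updated by gradients
    logits = u @ W @ v.t()                         # [B, B]; diagonal entries are the positive pairs
    labels = torch.arange(u.size(0), device=u.device)
    return -F.cross_entropy(logits, labels)        # equals I_nce up to an additive constant

def i_upper(z_dist):
    """Upper bound: E[KL(g^Z(.|s,a) || N(0, I))] for a diagonal Gaussian z-distribution."""
    prior = td.Normal(torch.zeros_like(z_dist.loc), torch.ones_like(z_dist.scale))
    return td.kl_divergence(z_dist, prior).sum(-1).mean()

def db_loss(z_dist, pred_dist, s_next, proj_o, proj_m, W,
            alpha1=1e-3, alpha2=1.0, alpha3=1.0):    # placeholder coefficients
    """L_DB = alpha1 * I_upper - alpha2 * I_pred - alpha3 * I_nce (Eq. 8)."""
    pred_term = pred_dist.log_prob(s_next).sum(-1).mean()         # I_pred
    nce_term = i_nce(pred_dist.mean, s_next, proj_o, proj_m, W)   # I_nce
    return alpha1 * i_upper(z_dist) - alpha2 * pred_term - alpha3 * nce_term
```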
We illustrate the architecture of the DB model in Fig. 2. In practice, we minimize $\mathcal{L}_{\mathrm{DB}}$ in (8) by gradient descent, which iteratively updates the parameters of $f^S_o$, $g^Z$, $q_\psi$, $W$ and $f^P_o$. Meanwhile, we adopt an exponential moving average to update the parameters of $f^S_m$ and $f^P_m$ to avoid collapsed solutions. We refer to Appendix B for the pseudocode of training the DB model. 4 Exploration with DB-Bonus We are now ready to introduce the DB-bonus $r^{\mathrm{db}}$ for exploration. In this section, we first present the DB-bonus for self-supervised exploration. We then establish theoretical connections between the DB-bonus and provably efficient bonus functions. We further present the empirical estimation of the DB-bonus and the policy optimization algorithm that utilizes it. In the sequel, we assume that the learned parameter $\Theta$ of the DB model follows a Bayesian posterior distribution given the training dataset $\mathcal{D}_m = \{(s^i_t, a^i_t, s^i_{t+1})\}_{i \in [0, m]}$, which is a collection of past experiences from $m$ episodes performed by the agent to train the DB model. We aim to estimate the following conceptual reward, defined by the mutual information between the parameter of the DB model and the transition dynamics given the training dataset,
\[
r^{\mathrm{db}}(s_t, a_t) \triangleq I\big(\Theta; (s_t, a_t, S_{t+1}) \mid \mathcal{D}_m\big)^{1/2}
= \Big[\mathcal{H}\big((s_t, a_t, S_{t+1}) \mid \mathcal{D}_m\big) - \mathcal{H}\big((s_t, a_t, S_{t+1}) \mid \Theta, \mathcal{D}_m\big)\Big]^{1/2}. \tag{9}
\]
Intuitively, the DB-bonus defined in (9) encourages the agent to explore transitions that are maximally informative for the improvement of the DB model. 4.1 Theoretical Analysis We show that the DB-bonus defined in (9) enjoys desirable theoretical properties, and we establish theoretical connections between $r^{\mathrm{db}}$ and bonuses based on optimism in the face of uncertainty [4, 23], which incorporate upper confidence bounds (UCB) into value functions in both tabular [5, 22, 17] and linear MDPs [24, 13]. Connection to UCB-bonus in linear MDPs In linear MDPs, the transition kernel and reward function are assumed to be linear. In such a setting, LSVI-UCB [24] provably attains a near-optimal worst-case regret; we refer to Appendix C.1 for the details. The idea of LSVI-UCB is to use an optimistic Q-value, obtained by adding a UCB-bonus $r^{\mathrm{ucb}}$ [1] to the estimate of the Q-value. The UCB-bonus is defined as
\[
r^{\mathrm{ucb}}_t = \beta \cdot \big[\eta(s_t, a_t)^\top \Lambda_t^{-1} \eta(s_t, a_t)\big]^{1/2},
\]
where $\beta$ is a constant, $\Lambda_t = \sum_{i=0}^{m} \eta(s^i_t, a^i_t)\eta(s^i_t, a^i_t)^\top + \lambda \cdot I$ is the Gram matrix, and $m$ is the index of the current episode. The UCB-bonus measures the epistemic uncertainty of the state–action pair and is provably efficient [24]. For linear MDPs, we consider the representation $z \in \mathbb{R}^c$ given by the mean of the posterior $g^Z$ from the DB model, and set $z_t$ to be a linear function of the state–action encoding, i.e., $z_t = W_t\eta(s_t, a_t)$, parameterized by $W_t \in \mathbb{R}^{c \times d}$. Then, the following theorem establishes a connection between the DB-bonus $r^{\mathrm{db}}$ and the UCB-bonus $r^{\mathrm{ucb}}$. Theorem 1. In linear MDPs, for a tuning parameter $\beta_0 > 0$, it holds that
\[
\beta_0/\sqrt{2} \cdot r^{\mathrm{ucb}}_t \;\leq\; I\big(W_t; (s_t, a_t, S_{t+1}) \mid \mathcal{D}_m\big)^{1/2} \;\leq\; \beta_0 \cdot r^{\mathrm{ucb}}_t, \tag{10}
\]
where $I(W_t; (s_t, a_t, S_{t+1}) \mid \mathcal{D}_m)^{1/2}$ is the DB-bonus $r^{\mathrm{db}}(s_t, a_t)$ under the linear MDP setting. In addition, using $r^{\mathrm{db}}$ as a bonus leads to the same regret as LSVI-UCB by following a proof similar to that of Jin et al. [24]. We refer to Appendix C.2 for the problem setup and the detailed proof. We remark that Theorem 1 is an approximate derivation because it only considers the predictive objective $I_{\mathrm{pred}}$ from (8). Nevertheless, introducing the contrastive objective $I_{\mathrm{nce}}$ is important in the training of the DB model as it prevents the mode collapse issue.
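For intuition about the UCB-bonus that Theorem 1 relates the DB-bonus to, the following is a small NumPy illustration of $r^{\mathrm{ucb}}$ in a linear MDP; the feature map and visitation data are synthetic placeholders and are not part of the DB implementation.

```python
# Illustrative sketch of the UCB-bonus r^ucb = beta * sqrt(eta^T Lambda^{-1} eta) in a linear MDP.
# The features of previously visited state-action pairs are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, m, beta, lam = 8, 500, 1.0, 1.0

# eta(s^i, a^i): visited features, concentrated on the first half of the coordinates
# so that the remaining directions are rarely explored.
visited = rng.normal(size=(m, d))
visited[:, d // 2:] *= 0.05

gram = visited.T @ visited + lam * np.eye(d)   # Lambda_t
gram_inv = np.linalg.inv(gram)

def ucb_bonus(eta):
    """UCB-bonus for a new state-action pair with feature vector eta."""
    return beta * np.sqrt(eta @ gram_inv @ eta)

well_explored = np.eye(d)[0]       # direction frequently covered by the visited data
rarely_explored = np.eye(d)[-1]    # direction almost absent from the visited data
print(ucb_bonus(well_explored))    # small bonus
print(ucb_bonus(rarely_explored))  # much larger bonus
```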
Theorem 1 shows that the DB-bonus provides an instantiation of the UCB-bonus in DRL, which enables us to measure the epistemic uncertainty of high-dimensional states and actions without the linear MDP assumption. Connection to visitation counts in tabular MDPs The following theorem establishes a connection between the DB-bonus and the count-based bonus $r^{\mathrm{count}}(s_t, a_t) = \beta/\sqrt{N_{s_t, a_t} + \lambda}$ in tabular MDPs. Theorem 2. In tabular MDPs, it holds for the DB-bonus $r^{\mathrm{db}}(s_t, a_t)$ and the count-based intrinsic reward $r^{\mathrm{count}}(s_t, a_t)$ that
\[
r^{\mathrm{db}}(s_t, a_t) \approx \frac{\sqrt{|\mathcal{S}|/2}}{\sqrt{N_{s_t, a_t} + \lambda}} = \beta_0 \cdot r^{\mathrm{count}}(s_t, a_t) \tag{11}
\]
when $N_{s_t, a_t}$ is large, where $\lambda > 0$ is a tuning parameter and $|\mathcal{S}|$ is the number of states in the tabular setting. We refer to Appendix C.3 for a detailed proof. As a result, the DB-bonus can also be considered a count-based intrinsic reward in the space of dynamics-relevant representations. 4.2 Empirical Estimation To estimate such a bonus under our DB model, we face several challenges. (i) First, estimating the bonus defined in (9) requires us to parameterize the representation under a Bayesian learning framework, whereas our DB model is parameterized by non-Bayesian neural networks. (ii) Second, estimating the DB-bonus defined in (9) requires us to compute the mutual information between the unknown transitions and the estimated model, which is hard in general as we do not have access to such transitions. To address these challenges, we estimate a lower bound of the DB-bonus, which is easily implementable and achieves reasonable performance empirically. Specifically, we use $r^{\mathrm{db}}_l(s_t, a_t)$ as a lower bound of the information gain in (9),
\[
r^{\mathrm{db}}(s_t, a_t) \geq \Big[\mathcal{H}\big(g(s_t, a_t, S_{t+1}) \mid \mathcal{D}_m\big) - \mathcal{H}\big(g(s_t, a_t, S_{t+1}) \mid \Theta, \mathcal{D}_m\big)\Big]^{1/2} \triangleq r^{\mathrm{db}}_l(s_t, a_t), \tag{12}
\]
which holds for any mapping $g$ according to the Data Processing Inequality (DPI). DPI is an information-theoretic concept that can be understood as "post-processing cannot increase information." Since $g(s_t, a_t, S_{t+1})$ is a post-processing of $(s_t, a_t, S_{t+1})$, we have $I(\Theta; (s_t, a_t, S_{t+1})) \geq I(\Theta; g(s_t, a_t, S_{t+1}))$, where $g$ is a neural network in practice. In our model, we adopt the following mapping,
\[
g(s_t, a_t, S_{t+1}) \mid \Theta, \mathcal{D}_m = g^Z(s_t, a_t; \phi), \tag{13}
\]
where $g^Z$ is the representation distribution of DB and $\phi$ is a subset of the full parameter set $\Theta$. Intuitively, since $g^Z$ is trained by the IB principle to capture the information of transitions, adopting the mapping $g^Z$ in (12) yields a reasonable approximation of the DB-bonus. It further holds that
\[
r^{\mathrm{db}}_l(s_t, a_t) = \Big[\mathcal{H}\big(g_{\mathrm{margin}}\big) - \mathcal{H}\big(g^Z(s_t, a_t; \phi)\big)\Big]^{1/2}
= \mathbb{E}_\Theta\Big[D_{\mathrm{KL}}\big[g^Z(z_t \mid s_t, a_t; \phi)\,\|\,g_{\mathrm{margin}}\big]\Big]^{1/2}, \tag{14}
\]
where we define $g_{\mathrm{margin}} = g(s_t, a_t, S_{t+1}) \mid \mathcal{D}_m$ as the marginal of the encodings over the posterior of the parameters $\Theta$ of the DB model. In practice, since $g_{\mathrm{margin}}$ is intractable, we approximate it with a standard Gaussian distribution. We remark that such an approximation is motivated by the training of the DB model, which drives the marginal of the representation $g^Z$ toward $\mathcal{N}(0, I)$ through minimizing $I_{\mathrm{upper}}$ in (7). Such an approximation leads to tractable estimation and stable empirical performance. In addition, since we do not train the DB model with a Bayesian approach, we replace the expectation over the posterior of $\Theta$ in (14) by the corresponding point estimate, namely the parameters $\Theta$ of the neural networks trained with the DB model on the dataset $\mathcal{D}_m$. To summarize, we utilize the following approximation of the DB-bonus $r^{\mathrm{db}}$ proposed in (9),
\[
\hat{r}^{\mathrm{db}}_l(s_t, a_t) = D_{\mathrm{KL}}\big[g^Z(\cdot \mid s_t, a_t; \phi)\,\|\,\mathcal{N}(0, I)\big]^{1/2} \approx r^{\mathrm{db}}_l(s_t, a_t). \tag{15}
\]
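The approximation in (15) has a closed form when $g^Z$ outputs a diagonal Gaussian; the following is a minimal sketch of the bonus computation, where the batch of Gaussian parameters is a placeholder input.

```python
# Sketch of the approximate DB-bonus of Eq. (15): the square root of
# KL( N(mu, diag(sigma^2)) || N(0, I) ), computed in closed form per state-action pair.
import torch

def db_bonus(mu, sigma):
    """mu, sigma: [batch, latent_dim] mean and std of g^Z(.|s_t, a_t; phi)."""
    var = sigma.pow(2)
    kl = 0.5 * (var + mu.pow(2) - 1.0 - var.log()).sum(dim=-1)
    return kl.clamp(min=0.0).sqrt()

# Example: representations far from the N(0, I) marginal receive a larger bonus.
mu = torch.tensor([[0.0, 0.0], [2.0, -2.0]])
sigma = torch.tensor([[1.0, 1.0], [0.5, 0.5]])
print(db_bonus(mu, sigma))   # first row ~0, second row clearly positive
```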
Since DB is trained by the IB principle, which filters out dynamics-irrelevant information, utilizing the bonus defined in (15) allows the agent to conduct robust exploration in noisy environments. We summarize the overall RL algorithm with self-supervised exploration induced by the DB-bonus in Algorithm 1, which we refer to as Self-Supervised Exploration with DB-bonus (SSE-DB). For the RL implementation, we adopt Proximal Policy Optimization (PPO) [45] with generalized advantage estimation [44] and the normalization schemes from Burda et al. [11]. We refer to Appendix D for the implementation details. The code is available at https://github.com/Baichenjia/DB.

Algorithm 1 SSE-DB
1: Initialize: the DB model and the actor–critic network
2: for episode i = 1 to M do
3:   for timestep t = 0 to T − 1 do
4:     Obtain an action from the actor a_t = π(s_t), then execute a_t and observe the state s_{t+1};
5:     Add (s_t, a_t, s_{t+1}) to the on-policy experiences;
6:     Obtain the DB-bonus r̂^db_l of (s_t, a_t) by (15);
7:   end for
8:   Update the actor and critic by PPO with the collected on-policy experiences as input;
9:   Update DB by gradient descent based on (8) with the collected on-policy experiences;
10: end for

[Figure 3: Evaluation curves on 18 Atari games (Alien, Asteroids, BankHeist, Boxing, Breakout, Centipede, CrazyClimber, Gopher, Gravitar, Kangaroo, KungFuMaster, MsPacman, Seaquest, Solaris, Tennis, TimePilot, UpNDown, WizardOfWor), with frames (millions) on the x-axis and extrinsic reward per episode on the y-axis, comparing SSE-DB (ours), ICM, Disagreement, CB, and a random baseline. The methods are trained with different intrinsic rewards; extrinsic rewards are only used to measure performance. Each method was run with three random seeds.]

5 Experiments We evaluate SSE-DB on Atari games. We conduct experiments to compare the following methods. (i) SSE-DB: the proposed method in Alg. 1. (ii) Intrinsic Curiosity Model (ICM) [39]: ICM uses an inverse dynamics model to extract features related to the actions and adopts the prediction error of the dynamics as the intrinsic reward for exploration. (iii) Disagreement [40]: this method captures epistemic uncertainty through the disagreement among predictions from an ensemble of dynamics models; Disagreement performs competitively with ICM and RND [12] and is robust to white noise. (iv) Curiosity Bottleneck (CB) [26]: CB quantifies the compressiveness of an observation with respect to the representation and uses it as the bonus. CB was originally proposed for exploration with extrinsic rewards; we adapt CB to self-supervised exploration by setting the extrinsic reward to zero. We compare the model complexity of all the methods in Appendix D. Other methods, including Novelty Search [53] and Contingency-aware exploration [16], would also be worth comparing against. However, we find Novelty Search ineffective in our implementation since the detailed hyper-parameters and empirical results on Atari are not available.
Contingency-aware exploration is related to DB, but its attention module is relatively complicated and its code is not available. 5.1 The Main Results We evaluate all methods on Atari games with high-dimensional observations. The selected 18 games are frequently used in previous approaches for efficient exploration. The overall results are provided in Fig. 3. We highlight that in our experiments, the agents are trained without accessing the extrinsic rewards; the extrinsic rewards are ONLY utilized to evaluate the performance of the policies obtained from self-supervised exploration. Our experiments show that SSE-DB performs the best in 15 of the 18 tasks, suggesting that dynamics-relevant features together with the DB-bonus help the agent explore states with high extrinsic rewards. In addition, since pure exploration without extrinsic rewards is very difficult in most tasks, a random baseline is required to show whether the exploration methods learn meaningful behaviors. We adopt the random scores from DQN [35] and show the comparison in the figure. In Solaris, Centipede and TimePilot, our method obtains scores similar to the random policy, which suggests that relying solely on intrinsic rewards is insufficient to solve these tasks. We also observe that SSE-DB is suboptimal in Tennis. A possible explanation is that, for Tennis, prediction-error-based methods such as ICM could capture additional information in the intrinsic rewards. For example, in Tennis the prediction error becomes higher when the ball moves faster or when the agent hits the ball in a tricky direction, and prediction-error-based methods can naturally benefit from this aspect of the game. In contrast, SSE-DB encourages exploration based on the information gain from learning the dynamics-relevant representation, which may not capture such critical events in Tennis. 5.2 Robustness in the Presence of Noise Observation Noise. To analyze the robustness of SSE-DB to observation noise, an important evaluation metric is the performance of SSE-DB in the presence of dynamics-irrelevant information. A particularly challenging distractor is white noise [11, 26], which adds random task-irrelevant patterns to the observations. In such a scenario, a frequently visited state injected with an unseen noise pattern may be mistakenly assigned a high intrinsic reward by curiosity- or pseudo-count-based methods. We use two types of distractors for the observations of Atari games, namely, (1) the random-box noise distractor, which places boxes filled with random Gaussian noise over the raw pixels, and (2) the pixel-level noise distractor, which adds pixel-wise Gaussian noise to the observations. Fig. 4 shows examples of the two types of distractors. In the sequel, we discuss results for the random-box noise distractor on selected Atari games, which we find sufficiently representative, and defer the complete report to Appendices E.1 and E.2. Fig. 5(a) shows the performance of the compared methods on Alien, Breakout and TimePilot with and without noise. We observe that SSE-DB outperforms ICM on Alien and TimePilot with random-box noise. Nevertheless, in Breakout, we observe that both methods fail to learn informative policies. A possible explanation is that, in Breakout, the ball is easily masked by the box-shaped noise (i.e., middle of Fig. 4).
The random-box noise therefore buries critical transition information about the ball, which prevents all the baselines from extracting dynamics-relevant information and leads to failure on Breakout with random-box noise, as shown in Fig. 5(a). Action Noise. In addition to observation noise, noise in the actions also poses challenges for learning the transition dynamics. To further study the robustness of SSE-DB, we conduct experiments on Atari games with sticky actions [32, 40]: at each time step, the agent executes the previous action instead of the action output by the current policy with probability 0.25. We illustrate results on three selected Atari games, i.e., Boxing, Gravitar and MsPacman, in Fig. 5(b) and defer the complete results to Appendix E.3. Our experiments show that SSE-DB is robust to action noise, whereas ICM suffers a significant performance drop. 5.3 Visualization and Ablation Study Visualization of the Learned Representations. To understand the latent representation learned by the DB model, we visualize the learned Z with t-SNE [31] plots, which project the 128-dimensional z-vectors to two dimensions. We compare the representations learned by SSE-DB and ICM with random-box noise and illustrate the learned representations of MsPacman in Fig. 6. According to the visualization, the representations learned by DB tend to align temporally consecutive movements along the same curve. Moreover, each segment of a curve corresponds to a semantic component of the trajectory, such as eating pellets or avoiding ghosts, and the segments of a curve end at critical states, such as the death and rebirth of the agent and the ghosts. The visualization indicates that DB captures the dynamics-relevant information well in the learned representations. In contrast, such temporally consecutive patterns are missing in the representations learned by ICM. Visualization of the DB-bonus. We provide a visualization of the DB-bonus in Appendix E.5. The results show that the DB-bonus effectively encourages the agent to explore informative transitions. Ablation Study. The DB training loss consists of multiple components, including $I_{\mathrm{pred}}$, $I_{\mathrm{nce}}$, $I_{\mathrm{upper}}$, and the momentum observation encoder. To analyze the importance of these components, we conduct an ablation study by removing each of them in turn and evaluating the resulting DB model. The ablation study suggests that all the components are crucial for learning effective dynamics-relevant representations. In addition, we observe that $I_{\mathrm{upper}}$ is particularly important in environments with dynamics-irrelevant noise. Please refer to Appendix E.4 for details. 6 Conclusion In this paper, we introduce the Dynamic Bottleneck model, which learns dynamics-relevant representations based on the IB principle. Building on the DB model, we further propose the DB-bonus for efficient exploration. We establish theoretical connections between the proposed DB-bonus and provably efficient bonuses. Our experiments show that SSE-DB outperforms several strong baselines for self-supervised exploration in stochastic environments. Moreover, we observe that DB learns well-structured representations and that the DB-bonus characterizes informative transitions for exploration. For future work, we wish to combine the DB representation with effective exploration methods, including BeBold [61] and NGU [6], to enhance their robustness in stochastic environments.
Acknowledgements The authors thank Tencent Robotics X and the Vector Institute for providing computation resources. Part of the work was done during an internship at the Tencent Robotics X lab. The authors also thank the anonymous reviewers, whose invaluable suggestions have helped us improve the paper.
1. What is the focus and contribution of the paper regarding exploration in Reinforcement Learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its empirical results and theoretical connections? 3. Do you have any concerns about the novelty of the approach, especially compared to prior works such as CB? 4. How does the reviewer assess the clarity and completeness of the paper's content, including the supplementary materials? 5. Are there any minor questions or concerns regarding specific aspects of the paper, such as the experiment in the presence of noise?
Summary Of The Paper Review
Summary Of The Paper The authors introduce the dynamics bottleneck which uses the information bottleneck to discover the novelty of state-action pairs to introduce a reward bonus, used for exploration in RL tasks. The authors evaluate their method on top of PPO in the Atari domain. Review Strengths: Overall, the empirical results look good compared to related approaches. Theoretical results drawing the connection between the proposed approach and different bonuses in the exploration literature shows this approach is reasonable. I thought the supplementary was very complete, reproducibility of the paper is very high. Supplementary includes an ablation study. Weaknesses: For a paper that aims to tackle the task of exploration, I thought the choice of environments was a bit lacking in specifically hard exploration tasks such as Montezuma's revenge or modified sparse reward environments. For example [1] lists (Gravitar, Montezuma’s Revenge, Pitfall!, Private Eye, Solaris, and Venture) as hard exploration tasks. Out of these 6, the 2 in tested in this paper (Solaris and Gravitar) have mostly flat learning curves. Compared to the learning curves from Dopamine [2], the performance of SSE-DB doesn't seem to be much better than the randomly initialized policy on these tasks. According to the PPO paper [3], on Gravitar (Solaris results were not included in this paper), vanilla PPO can achieve a performance of 500-750+ after 40M frames, which is than SSE-DB in fewer frames. (Edit: not a valid concern) Clarity on novelty. One thing that wasn't too clear to me after the reading the paper is what of the approach was novel and what was based on prior work. For example 3.1 and 3.3 are also described in the CB paper [4] (with references to prior work), so it isn't clear to me if there is novelty here, or if the novelty was limited to viewing the information bottleneck over the dynamics. A similar concern for 3.2 as (as mentioned in the related work) there has been literature on contrastive learning in RL. Minor: In the section 5.2 on the experiment in the presence of noise, is SSE-DB was only using the learned reward bonus, or is the learned representation used in some more meaningful way? References: [1] Burda, Yuri, et al. "Exploration by random network distillation." 2018. [2] Castro, Pablo Samuel, et al. "Dopamine: A research framework for deep reinforcement learning." 2018. [3] Schulman, John, et al. "Proximal policy optimization algorithms." 2017. [4] Kim, Youngjin, et al. "Curiosity-bottleneck: Exploration by distilling task-specific novelty." 2019.
NIPS
Title On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations Abstract KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks. 1 Introduction Reinforcement learning (RL) [15, 24, 46, 47] is a powerful paradigm for learning complex behaviors. Unfortunately, many modern reinforcement learning algorithms require agents to carry out millions of interactions with their environment to learn desirable behaviors, making them of limited use for a wide range of practical applications that cannot be simulated [8, 28]. This limitation has motivated the study of algorithms that can incorporate pre-collected offline data into the training process either fully offline or with online exploration to improve sample efficiency, performance, and reliability [2, 6, 16, 23, 52, 53]. An important and well-motivated subset of these methods consists of approaches for efficiently incorporating expert demonstrations into the learning process [5, 11, 18, 42]. Reinforcement learning with Kullback-Leibler (KL) regularization is a particularly successful approach for doing so [3, 27, 29, 31, 44, 51]. In KL-regularized reinforcement learning, the standard reinforcement learning objective is augmented by a Kullback-Leibler divergence term that penalizes dissimilarity between the online policy and a behavioral reference policy derived from expert demonstrations. The resulting regularized objective pulls the agent’s online policy towards the behavioral reference policy while also allowing it to improve upon the behavioral reference policy by exploring and interacting with the environment. Recent advances that leverage explicit or implicit KL-regularized objectives, such as BRAC [51], ABM [44], and AWAC [27], have shown that KLregularized reinforcement learning from expert demonstrations is able to significantly improve the sample efficiency of online training and reliably solve challenging environments previously unsolved by standard deep reinforcement learning algorithms. ∗Equal contribution. † Corresponding author: tim.rudner@cs.ox.ac.uk. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Contributions. In this paper, we show that despite some empirical success, KL-regularized reinforcement learning from expert demonstrations can suffer from previously unrecognized pathologies that lead to instability and sub-optimality in online learning. To summarize, our core contributions are as follows: • We illustrate empirically that commonly used classes of parametric behavioral policies experi- ence a collapse in predictive variance about states away from the expert demonstrations. 
• We demonstrate theoretically and empirically that KL-regularized reinforcement learning algorithms can suffer from pathological training dynamics in online learning when regularized against behavioral policies that exhibit such a collapse in predictive variance. • We show that the pathology can be remedied by non-parametric behavioral policies, whose predictive variances are well-calibrated and guaranteed not to collapse about previously unseen states, and that fixing the pathology results in online policies that significantly outperform state-of-the-art approaches on a range of challenging locomotion and dexterous hand manipulation tasks. The left panel of Figure 1 shows an example of the collapse in predictive variance away from the expert trajectories in parametric behavioral policies. In contrast, the right panel of Figure 1 shows the predictive variance of a non-parametric behavioral policy, which—unlike in the case of the parametric policy—increases off the expert trajectories. By avoiding the pathology, we obtain a stable and reliable approach to sample-efficient reinforcement learning, applicable to a wide range of reinforcement learning algorithms that leverage KL-regularized objectives. (Code and visualizations of our results can be found at https://sites.google.com/view/nppac.) 2 Background We consider the standard reinforcement learning setting where an agent interacts with a discounted Markov Decision Process (MDP) [46] given by a 5-tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $p(\cdot \mid s_t, a_t)$ are the transition dynamics, $r(s_t, a_t)$ is the reward function, and $\gamma$ is a discount factor. $\rho_\pi(\tau_t)$ denotes the state–action trajectory distribution from time $t$ induced by a policy $\pi(\cdot \mid s_t)$. The discounted return from time step $t$ is given by $R(\tau_t) = \sum_{k=t}^{\infty} \gamma^k r(s_k, a_k)$ for $t \in \mathbb{N}_0$. The standard reinforcement learning objective to be maximized is the expected discounted return $J_\pi(\tau_0) = \mathbb{E}_{\rho_\pi(\tau_0)}[R(\tau_0)]$ under the policy trajectory distribution. 2.1 Improving and Accelerating Online Training via Behavioral Cloning We consider settings where we have a set of expert demonstrations without reward, $\mathcal{D}_0 = \{(s_n, a_n)\}_{n=1}^{N} = \{\bar{S}, \bar{A}\}$, which we would like to use to speed up and improve online learning [5, 42]. A standard approach for turning expert trajectories into a policy is behavioral cloning [1, 4], which involves learning a mapping from states in the expert demonstrations to their corresponding actions, that is, $\pi_0 : \mathcal{S} \to \mathcal{A}$. As such, behavioral cloning does not assume or require access to a reward function and only involves learning a mapping from states to actions in a supervised fashion. Since expert demonstrations are costly to obtain and often only available in small numbers, behavioral cloning alone is typically insufficient for agents to learn good policies in complex environments and has to be complemented by a method that enables the learner to build on the cloned behavior by interacting with the environment. A particularly successful and popular class of algorithms for incorporating behavioral policies into online training is KL-regularized reinforcement learning [10, 37, 43, 48]. 2.2 KL-Regularized Objectives in Reinforcement Learning KL-regularized reinforcement learning modifies the standard reinforcement learning objective by augmenting the return with a negative KL divergence term from the learned policy $\pi$ to a reference policy $\pi_0$, given a temperature parameter $\alpha$.
The resulting discounted return from time step $t \in \mathbb{N}_0$ is then given by
\[
\tilde{R}(\tau_t) = \sum_{k=t}^{\infty} \gamma^k \big[ r(s_k, a_k) - \alpha D_{\mathrm{KL}}(\pi(\cdot \mid s_k)\,\|\,\pi_0(\cdot \mid s_k)) \big] \tag{1}
\]
and the reinforcement learning objective becomes $\tilde{J}_\pi(\tau_0) = \mathbb{E}_{\rho_\pi(\tau_0)}[\tilde{R}(\tau_0)]$. When the reference policy $\pi_0$ is given by a uniform distribution, we recover the entropy-regularized reinforcement learning objective used in Soft Actor–Critic (SAC) [13] up to an additive constant. Under a uniform reference policy $\pi_0$, the resulting objective encourages exploration while also choosing high-reward actions. In contrast, when $\pi_0$ is non-uniform, the agent is discouraged from exploring areas of the state space $\mathcal{S}$ where the variance of $\pi_0(\cdot \mid s)$ is low (i.e., more certain) and encouraged to explore areas of the state space where the variance of $\pi_0(\cdot \mid s)$ is high. The KL-regularized reinforcement learning objective can be optimized via policy-gradient and actor–critic algorithms. 2.3 KL-Regularized Actor–Critic An optimal policy $\pi$ that maximizes the expected KL-augmented discounted return $\tilde{J}_\pi$ can be learned by directly optimizing the policy gradient $\nabla_\pi \tilde{J}_\pi$. However, this policy gradient estimator exhibits high variance, which can lead to unstable learning. Actor–critic algorithms [7, 17, 32, 38] attempt to reduce this variance by making use of the state value function $V^\pi(s_t) = \mathbb{E}_{\rho_\pi(\tau_t)}[\tilde{R}(\tau_t) \mid s_t]$ or the state–action value function $Q^\pi(s_t, a_t) = \mathbb{E}_{\rho_\pi(\tau_t)}[\tilde{R}(\tau_t) \mid s_t, a_t]$ to stabilize training. Given a reference policy $\pi_0(a_t \mid s_t)$, the state value function can be shown to satisfy the modified Bellman equation
\[
V^\pi(s_t) \doteq \mathbb{E}_{a_t \sim \pi(\cdot \mid s_t)}[Q^\pi(s_t, a_t)] - \alpha D_{\mathrm{KL}}\big(\pi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t)\big)
\]
with a recursively defined Q-function $Q^\pi(s_t, a_t) \doteq r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1} \sim p(\cdot \mid s_t, a_t)}[V^\pi(s_{t+1})]$. Instead of directly optimizing the objective function $\tilde{J}_\pi$ via the policy gradient, actor–critic methods alternate between policy evaluation and policy improvement [7, 13]: Policy Evaluation. During the policy evaluation step, $Q^\pi_\theta(s, a)$, parameterized by parameters $\theta$, is trained by minimizing the Bellman residual
\[
J_Q(\theta) \doteq \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}\Big[\big(Q_\theta(s_t, a_t) - (r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1} \sim p(\cdot \mid s_t, a_t)}[V_{\bar{\theta}}(s_{t+1})])\big)^2\Big], \tag{2}
\]
where $\mathcal{D}$ is a replay buffer and $\bar{\theta}$ is a stabilizing moving average of parameters. Policy Improvement. In the policy improvement step, the policy $\pi_\phi$, parameterized by parameters $\phi$, is updated towards the exponential of the KL-augmented Q-function,
\[
J_\pi(\phi) \doteq \mathbb{E}_{s_t \sim \mathcal{D}}\big[\alpha D_{\mathrm{KL}}(\pi_\phi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t))\big] - \mathbb{E}_{s_t \sim \mathcal{D}}\big[\mathbb{E}_{a_t \sim \pi_\phi(\cdot \mid s_t)}[Q_\theta(s_t, a_t)]\big], \tag{3}
\]
with states sampled from a replay buffer $\mathcal{D}$ and actions sampled from the parameterized online policy $\pi_\phi$. The following sections will focus on the policy improvement objective and how certain types of reference policies can lead to pathologies when optimizing $J_\pi(\phi)$ with respect to $\phi$. 3 Identifying the Pathology In this section, we investigate the effect of KL regularization on the training dynamics. To do so, we first consider the properties of the KL divergence to identify a potential failure mode for KL-regularized reinforcement learning. Next, we consider parametric Gaussian behavioral reference policies commonly used in practice for continuous control tasks [13, 51] and show that for Gaussian behavioral reference policies with small predictive variance, the policy improvement objective suffers from exploding gradients with respect to the policy parameters $\phi$. We confirm that this failure occurs empirically and demonstrate that it results in slow, unstable, and suboptimal online learning.
Lastly, we show that various regularization techniques used for estimating behavioral policies are unable to prevent this failure and also lead to suboptimal online policies. 3.1 When Are KL-Regularized Reinforcement Learning Objectives Meaningful? We start by considering the properties of the KL divergence and discuss how these properties can lead to potential failure modes in KL-regularized objectives. A well-known property of KL-regularized objectives in the variational inference literature is the occurrence of singularities when the support of one distribution is not contained in the support of the other. To illustrate this problem, we consider the case of Gaussian behavioral and online policies commonly used in practice. Mathematically, the KL divergence between two full Gaussian distributions is always finite and well-defined. Hence, we might hope KL-regularized reinforcement learning with Gaussian behavioral and online policies to be unaffected by the failure mode described above. However, the support of a Gaussian online policy $\pi_\phi(\cdot \mid s_t)$ will not be contained in the support of a behavioral reference policy $\pi_0(\cdot \mid s_t)$ as the predictive variance $\sigma_0^2(s_t)$ tends to zero, and hence $D_{\mathrm{KL}}(\pi_\phi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t)) \to \infty$ as $\sigma_0^2(s_t) \to 0$. In other words, as the variance of a behavioral reference policy tends to zero and the behavioral distribution becomes degenerate, the KL divergence blows up to infinity [25]. While in practice a Gaussian behavioral policy would not operate in the limit of zero variance, the functional form of the KL divergence between (univariate) Gaussians,
\[
D_{\mathrm{KL}}(\pi_\phi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t)) \propto \log \frac{\sigma_0(s_t)}{\sigma_\phi(s_t)} + \frac{\sigma_\phi^2(s_t) + (\mu_\phi(s_t) - \mu_0(s_t))^2}{2\sigma_0^2(s_t)},
\]
implies a continuous, quadratic increase in the magnitude of the divergence as $\sigma_0(s_t)$ decreases, further exacerbated by a large difference in predictive means, $|\mu_\phi(s_t) - \mu_0(s_t)|$. As a result, for Gaussian behavioral reference policies $\pi_0(\cdot \mid s_t)$ that assign very low probability to sets of points in sample space far away from the distribution's mean $\mu_0(s_t)$, computing the KL divergence can result in divergence values so large as to cause numerical instabilities and arithmetic overflow. Hence, even for a suitably chosen behavioral reference policy class, vanishingly small behavioral reference policy predictive variances can cause the KL divergence to 'blow up' and cause numerical issues at evaluation points far away from states in the expert demonstrations. One way to address this failure mode may be to lower-bound the output of the variance network (e.g., by adding a small constant bias). However, placing a floor on the predictive variance of the behavioral reference policy is not sufficient to encourage effective learning. While it would prevent the KL divergence from blowing up, it would also lead to poor gradient signals, as well-calibrated predictive variance estimates that increase on states far away from the expert trajectories are necessary to keep the KL penalty from pulling the predictive mean of the online policy towards poor behavioral reference policy predictive means on states off the expert trajectories. Another possible solution could be to use heavy-tailed behavioral reference policy distributions, for example Laplace distributions, to avoid pathological training dynamics. However, in Appendix B.3 we show that Laplace behavioral reference policies also suffer from pathological training dynamics, albeit less severely.
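To make this scaling concrete, the following tiny numerical check evaluates the closed-form Gaussian KL divergence as $\sigma_0$ shrinks; the mean gap and the online policy's standard deviation are arbitrary illustrative values, not quantities from the experiments.

```python
# Numerical illustration: KL( N(mu_phi, sigma_phi^2) || N(mu_0, sigma_0^2) ) blows up
# roughly as 1/sigma_0^2 when the behavioral reference policy's std sigma_0 -> 0.
import math

def kl_gauss(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL divergence between two univariate Gaussians."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2) - 0.5)

mu_online, sigma_online, mu_prior = 0.5, 0.3, 0.0   # arbitrary illustrative values
for sigma_prior in (1.0, 0.1, 0.01, 0.001):
    print(sigma_prior, kl_gauss(mu_online, sigma_online, mu_prior, sigma_prior))
# The divergence grows quadratically in 1/sigma_prior once sigma_prior is small,
# which is the source of the numerical instabilities discussed above.
```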
In the following sections, we explain how an explosion in $D_{\mathrm{KL}}(\pi_\phi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t))$ caused by small $\sigma_0^2(s_t)$ affects the gradients of $J_\pi(\phi)$ in KL-regularized RL and discuss how and why $\sigma_0^2(s_t)$ may tend to zero in practice. 3.2 Exploding Gradients in KL-Regularized Reinforcement Learning Objectives To understand how small predictive variances in behavioral reference policies can affect—and possibly destabilize—online training in KL-regularized RL, we consider the contribution of the behavioral reference policy's variance to the gradient of the policy objective in Equation (3). Compared to entropy-regularized actor–critic methods (SAC, Haarnoja et al. [13]), which implicitly regularize against a uniform policy, the gradient estimator $\hat{\nabla}_\phi J_\pi(\phi)$ in KL-regularized RL gains an extra scaling term $\nabla_{a_t} \log \pi_0(a_t \mid s_t)$, the gradient of the prior log-density evaluated at actions $a_t \sim \pi_\phi(\cdot \mid s_t)$: Proposition 1 (Exploding Gradients in KL-Regularized RL). Let $\pi_0(\cdot \mid s)$ be a Gaussian behavioral reference policy with mean $\mu_0(s_t)$ and variance $\sigma_0^2(s_t)$, and let $\pi_\phi(\cdot \mid s)$ be an online policy with reparameterization $a_t = f_\phi(\epsilon_t; s_t)$ and random vector $\epsilon_t$. The gradient of the policy loss with respect to the online policy's parameters $\phi$ is then given by
\[
\hat{\nabla}_\phi J_\pi(\phi) = \big( \alpha \nabla_{a_t} \log \pi_\phi(a_t \mid s_t) - \alpha \nabla_{a_t} \log \pi_0(a_t \mid s_t) - \nabla_{a_t} Q(s_t, a_t) \big) \nabla_\phi f_\phi(\epsilon_t; s_t) + \alpha \nabla_\phi \log \pi_\phi(a_t \mid s_t) \tag{4}
\]
with $\nabla_{a_t} \log \pi_0(a_t \mid s_t) = -\frac{a_t - \mu_0(s_t)}{\sigma_0^2(s_t)}$. For fixed $|a_t - \mu_0(s_t)|$, $\nabla_{a_t} \log \pi_0(a_t \mid s_t)$ grows as $\mathcal{O}(\sigma_0^{-2}(s_t))$; thus, $|\hat{\nabla}_\phi J_\pi(\phi)| \to \infty$ as $\sigma_0^2(s_t) \to 0$ whenever $\nabla_\phi f_\phi(\epsilon_t; s_t) \neq 0$. Proof. See Appendix A.1. This result formalizes the intuition presented in Section 3.1 that a behavioral reference policy with a sufficiently small predictive variance may cause KL-regularized reinforcement learning to suffer from pathological training dynamics in gradient-based optimization. The smaller the behavioral reference policy's predictive variance, the more sensitive the policy objective's gradients will be to differences in the means of the online and behavioral reference policies. As a result, for behavioral reference policies with small predictive variance, the KL divergence will heavily penalize online policies whose predictive means diverge from the predictive means of the behavioral policy—even in regions of the state space away from the expert trajectory where the behavioral policy's mean prediction is poor. 3.3 Predictive Uncertainty Collapse Under Parametric Policies The most commonly used method for estimating behavioral policies is maximum likelihood estimation (MLE) [44, 51], where we seek $\pi_0 \doteq \pi_{\psi^\star}$ with $\psi^\star \doteq \arg\max_\psi \{\mathbb{E}_{(s,a) \sim \mathcal{D}_0}[\log \pi_\psi(a \mid s)]\}$ for a parametric behavioral policy $\pi_\psi$. In practice, $\pi_\psi$ is often assumed to be Gaussian, $\pi_\psi(\cdot \mid s) = \mathcal{N}(\mu_\psi(s), \sigma_\psi^2(s))$, with $\mu_\psi(s)$ and $\sigma_\psi^2(s)$ parameterized by a neural network. While maximizing the likelihood of the expert trajectories under the behavioral policy is a sensible choice for behavioral cloning, the limited capacity of the neural network parameterization can produce unwanted behaviors in the resulting policy. The maximum likelihood objective ensures that the behavioral policy's predictive mean reflects the expert's actions and the predictive variance the (aleatoric) uncertainty inherent in the expert trajectories. However, the maximum likelihood objective encourages parametric policies to use their model capacity toward fitting the expert demonstrations and reflecting the aleatoric uncertainty in the data.
As a result, for states off the expert trajectories, the policy can become degenerate and collapse to point predictions instead of providing meaningful predictive variance estimates that reflect that the behavioral policy ought to be highly uncertain about its predictions in previously unseen regions of the state space. Similar behaviors are well known in parametric probabilistic models and well documented in the approximate Bayesian inference literature [33, 39].

[Figure 2: Validation predictive variance $\sigma^2_\psi(s)$ and validation log-likelihood $\log \pi_\psi(\bar{A} \mid \bar{S})$ over training epochs; the predictive variance consistently decreases during training.]

As shown in Proposition 1, such a collapse in predictive variance can result in pathological training dynamics in KL-regularized online learning—steering the online policy towards suboptimal trajectories in regions of the state space far away from the expert demonstrations and deteriorating performance. Effect of regularization on uncertainty collapse. To prevent a collapse in the behavioral policy's predictive variance, prior work proposed adding entropy or Tikhonov regularization to the MLE objective [51]. However, doing so does not succeed in preventing a collapse in predictive variance off the expert demonstration trajectories, as we show in Appendix A.3. Deep ensembles [20], whose predictive mean and variance are computed from the predictive means and variances of multiple Gaussian neural networks, are a widely used method for uncertainty quantification in regression settings. However, model ensembling can be costly and unreliable, as it requires training multiple neural networks from scratch and does not guarantee well-calibrated uncertainty estimates [39, 49]. We provide visualizations in Appendix B.5 which show that ensembling multiple neural network policies does not fully prevent a collapse in predictive variance. 3.4 Empirical Confirmation of Uncertainty Collapse To confirm Proposition 1 empirically and assess the effect of the collapse in predictive variance on the performance of KL-regularized RL, we perform an ablation study where we fix the predictive mean function of a behavioral policy to a mean function that attains 60% of the optimal performance and vary the magnitude of the policy's predictive variance. Specifically, we set the behavioral policy's predictive variance to different constant values in the set $\{1 \times 10^{-3}, 5 \times 10^{-3}, 1 \times 10^{-2}\}$ (following a similar implementation in Nair et al. [27]).³ The results of this experiment are shown in Figure 3, which shows the average returns, the KL divergence, and the average absolute gradients of the policy loss over training. The plots confirm that as the predictive variance of the offline behavioral policy tends to zero, the KL terms and average policy gradient magnitude explode as implied by Proposition 1, leading to unstable training and a collapse or dampening in average returns. In other words, even for behavioral policies with accurate predictive means, smaller predictive variances slow down or even entirely prevent learning good behavioral policies. This observation confirms that the pathology identified in Proposition 1 occurs in practice and that it can have a significant impact on KL-regularized RL from expert demonstrations, calling into question the usefulness of KL regularization as a means for accelerating and improving online training. In Appendix B.1, we show that an analogous relationship exists for the gradients of the Q-function loss.
³We attempted to use smaller values, but the gradients grew too large and caused arithmetic overflow. 4 Fixing the Pathology In order to address the collapse in predictive uncertainty for behavioral policies parameterized by a neural network trained via MLE, we specify a non-parametric behavioral policy whose predictive variance is guaranteed not to collapse about previously unseen states. Noting that KL-regularized RL with a behavioral policy can be viewed as approximate Bayesian inference with an empirical prior policy [13, 21, 40], we propose Non-Parametric Prior Actor–Critic (N-PPAC), an off-policy temporal difference algorithm for improved, accelerated, and stable online learning with behavioral policies. 4.1 Non-Parametric Gaussian Process Behavioral Policies Gaussian processes (GPs) [36] are models over functions defined by a mean $m(\cdot)$ and covariance function $k(\cdot, \cdot)$. When defined in terms of a non-parametric covariance function, that is, a covariance function constructed from infinitely many basis functions, we obtain a non-degenerate GP, which has sufficient capacity to prevent a collapse in predictive uncertainty away from the training data. Unlike parametric models, whose capacity is limited by their parameterization, a non-parametric model's capacity increases with the amount of training data. Considering a non-parametric GP behavioral policy, $\pi_0(\cdot \mid s)$, with
\[
A \mid s \sim \pi_0(\cdot \mid s) = \mathcal{GP}\big(m(s), k(s, s')\big), \tag{5}
\]
we can obtain a non-degenerate posterior distribution over actions conditioned on the offline data $\mathcal{D}_0 = \{\bar{S}, \bar{A}\}$, with actions sampled according to
\[
A \mid s, \mathcal{D}_0 \sim \pi_0(\cdot \mid s, \mathcal{D}_0) = \mathcal{GP}\big(\mu_0(s), \Sigma_0(s, s')\big), \tag{6}
\]
with $\mu_0(s) = m(s) + k(s, \bar{S})k(\bar{S}, \bar{S})^{-1}(\bar{A} - m(\bar{S}))$ and $\Sigma_0(s, s') = k(s, s') - k(s, \bar{S})k(\bar{S}, \bar{S})^{-1}k(\bar{S}, s')$. To obtain this posterior distribution, we perform exact Bayesian inference, which naively scales as $\mathcal{O}(N^3)$ in the number of training points $N$, but Wang et al. [50] show that exact inference in GP regression can be scaled to $N > 1{,}000{,}000$. Since expert demonstrations usually contain fewer than 100k datapoints, non-parametric GP behavioral policies are applicable to a wide array of real-world tasks. For an empirical evaluation of the time complexity of using a GP prior, see Section 5.5. Figure 1 confirms that the non-parametric GP's predictive variance is well-calibrated: it is small in magnitude in regions of the state space near the expert trajectories and large in magnitude in other regions of the state space. While actor–critic algorithms like SAC implicitly use a uniform prior to explore the state space, using a behavioral policy with a well-calibrated predictive variance has the benefit that in regions of the state space close to the expert demonstrations the online policy learns to match the expert, while elsewhere the predictive variance increases and encourages exploration. Algorithmic details. In our experiments, we use a KL-regularized objective with a standard actor–critic implementation and Double DQN [14]. Pseudocode is provided in Appendix C.1. 5 Empirical Evaluation We carry out a comparative empirical evaluation of our proposed approach vis-à-vis related methods that integrate offline data into online training. We provide a detailed description of the algorithms we compare against in Appendix A.4. We perform experiments on the MuJoCo benchmark suite and the substantially more challenging dexterous hand manipulation suite with sparse rewards.
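Before turning to the results, the following is a minimal sketch of the non-parametric GP behavioral reference policy from Section 4.1: exact GP regression with a squared-exponential kernel and a zero prior mean, fit to expert state–action pairs. The kernel hyperparameters, the one-dimensional action, and the toy data are placeholders; the naive matrix inversion shown here is the O(N³) baseline rather than the scalable exact-GP machinery referenced above.

```python
# Minimal sketch of a non-parametric GP behavioral reference policy (Section 4.1):
# exact GP regression with an RBF kernel and zero prior mean, fit to expert demonstrations.
import numpy as np

def rbf(X1, X2, lengthscale=1.0, signal_var=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

class GPBehavioralPolicy:
    def __init__(self, states, actions, noise_var=1e-4):
        self.S, self.A = states, actions                       # expert demonstrations (S_bar, A_bar)
        K = rbf(states, states) + noise_var * np.eye(len(states))
        self.K_inv = np.linalg.inv(K)                          # naive O(N^3) inversion

    def predict(self, s):
        """Posterior mean and variance of the action at query states s (cf. Eq. 6)."""
        k_star = rbf(s, self.S)                                # k(s, S_bar)
        mean = k_star @ self.K_inv @ self.A                    # mu_0(s)
        var = rbf(s, s).diagonal() - np.einsum('ij,jk,ik->i', k_star, self.K_inv, k_star)
        return mean, np.maximum(var, 1e-12)                    # Sigma_0(s, s)

# Toy example: the predictive variance stays small near the expert states and
# grows for states far away from them, unlike an MLE-trained parametric policy.
rng = np.random.default_rng(0)
expert_states = rng.uniform(-1, 1, size=(50, 3))
expert_actions = np.sin(expert_states.sum(-1, keepdims=True))
policy = GPBehavioralPolicy(expert_states, expert_actions)
near, far = expert_states[:1], np.full((1, 3), 5.0)
print(policy.predict(near)[1], policy.predict(far)[1])
```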
We show that KL-regularized RL with a non-parametric behavioral reference policy can rapidly learn to solve difficult high-dimensional continuous control problems given only a small set of expert demonstrations and (often significantly) outperforms state-of-the-art methods, including ones that use offline reward information—which our approach does not require. Furthermore, we demonstrate that the GP behavioral policy’s predictive variance is crucial for KL-regularized objectives to learn good online policies from expert demonstrations. Finally, we perform ablation studies that illustrate that non-parametric GP behavioral reference policies also outperform parametric behavioral reference policies with improved uncertainty quantification, such as deep ensembles and Bayesian neural networks (BNNs) with Monte Carlo dropout, and that the difference between non-parametric and parametric models is exacerbated the fewer expert demonstrations are available. We use the expert data from Nair et al. [27], every experiment uses six random seeds, and we use a fixed KL-temperature for each environment class. For further implementation details, see Appendix C.2. 5.1 Environments MuJoCo locomotion tasks. We evaluate N-PPAC on three representative tasks: “Ant-v2”, “HalfCheetah-v2”, and “Walker2d-v2”. For each task, we use 15 demonstration trajectories collected by a pre-trained expert, each containing 1,000 steps. The behavioral policy is specified as the posterior distribution of a GP with a squared exponential kernel, which is well-suited for modeling smooth functions. Dexterous hand manipulation tasks. Real-world robot learning is a setting where human demonstration data is readily available, and many deep RL approaches fail to learn efficiently. We study this setting in a suite of challenging dexterous manipulation tasks [35] using a 28-DoF five-fingered simulated ADROIT hand. The tasks simulate challenges common to real-world settings with highdimensional action spaces, complex physics, and a large number of intermittent contact forces. We consider two tasks in particular: in-hand rotation of a pen to match a target and opening a door by unlatching and pulling a handle. We use binary rewards for task completion, which is significantly more challenging than the original setting considered in Rajeswaran et al. [35]. 25 expert demonstrations were provided for each task, each consisting of 200 environment steps which are not fully optimal but do successfully solve the task. The behavioral policy is specified as the posterior distribution of a GP with a Matérn kernel, which is more suitable for modeling non-smooth data. 5.2 Results On MuJoCo environments, KL-regularized RL with a non-parametric behavioral policy consistently outperforms all related methods across all three tasks, successfully accelerating learning from offline data, as shown in Figure 4. Most notably, it outperforms methods such as AWAC [27]—the previous state-of-the-art—which attempts to eschew the problem of learning behavioral policies but instead uses an implicit constraint. Our approach, N-PPAC, exhibits an increase in stability and higher returns compared to comparable methods such as ABM and BRAC that explicitly regularize the online policy against a parametric behavioral policy and plateau at suboptimal performance levels as they are being forced to copy poor actions from the behavioral policy away from the expert data. In contrast, using a non-parametric behavioral policy allows us to avoid such undesirable behavior. 
On dexterous hand manipulation environments, KL-regularized RL with a non-parametric behavioral policy performs on par with or outperforms all related methods on both tasks, as shown in Figure 5. Most notably, on the door opening task, it achieves a stable success rate of 90% within only 100,000 environment interactions. For comparison, AWAC requires 4× as many environment interactions to achieve the same performance and is significantly less stable, while most other methods fail to learn any meaningful behaviors. Alternative divergence metrics underperform KL-regularization. KL-regularized RL with a non-parametric behavioral policy consistently outperforms methods that use alternative divergence metrics, as shown in the bottom plots of Figures 4 and 5. 5.3 Can the Pathology Be Fixed by Improved Parametric Uncertainty Quantification? We consider the challenging "door-binary-v0" environment for this ablation study. Parametric uncertainty quantification is insufficient. Figure 5 shows that parametric variance functions result in online policies that only achieve success rates of up to 20% and eventually deteriorate, whereas the non-parametric variance yields an online policy that achieves a success rate of nearly 100%. This finding shows that commonly used uncertainty quantification methods, such as deep ensembles or BNNs with Monte Carlo dropout, do not generate sufficiently well-calibrated uncertainty estimates to remedy the pathology, and better methods may be needed [9, 39, 41]. Lower-bounding the predictive variance does not remedy the pathology. The predictive variance of all MLE-based and ensemble behavioral reference policies in all experiments is bounded away from zero at a minimum value of ≈ 10⁻². Hence, setting a floor on the variance is not sufficient to prevent pathological training dynamics. This result further demonstrates the importance of accurate predictive variance estimation in allowing the online policy to match expert actions in regions of the state space with low behavioral policy predictive variance and explore elsewhere. 5.4 Can a Single Expert Demonstration Be Sufficient to Accelerate Online Training? To assess the usefulness of non-parametric behavioral reference policies in settings where only a few expert demonstrations are available, we investigate whether the difference in performance between online policies trained with non-parametric and parametric behavioral reference policies, respectively, is exacerbated the fewer expert demonstrations are available. To answer this question, we consider the "HalfCheetah-v2" environment and compare online policies trained with different behavioral reference policies—non-parametric GPs, deep ensembles, and BNNs with Monte Carlo dropout—estimated either from 15 expert demonstrations (i.e., 15 state–action trajectories, containing 15,000 samples) or from a single expert demonstration (i.e., a single state–action trajectory, containing 1,000 samples). A single expert demonstration is sufficient for non-parametric behavioral reference policies. Figure 7 shows the returns for online policies trained with behavioral reference policies estimated from the full dataset (top plot) and from only a single expert state–action trajectory (bottom plot). On the full dataset, we find that all three methods are competitive and improve on the prior state-of-the-art but that the GP behavioral policy leads to the highest return.
Remarkably, non-parametric GP behavioral policies perform just as well with only a single expert demonstration as with all 15 (i.e., with 1,000 data points, instead of 15,000 data points). These results further emphasize the usefulness of non-parametric behavioral policies when accelerating online training with expert demonstrations—even when only very few expert demonstrations are available. 5.5 Are Non-Parametric GP Behavioral Reference Policies Too Computationally Expensive? Table 1 presents the time complexity of KL-regularized RL under non-parametric GP and parametric neural network behavioral reference policies, as measured by the average time elapsed per epoch on the “door-binary-v0” and “HalfCheetah-v2” environments. One epoch of online training on “door-binary-v0” and “HalfCheetah-v2” requires computing the KL divergence over 1,000 mini-batches of size 256 and 1,024, respectively. The time complexity of evaluating the log-density of a GP behavioral reference policy—needed for computing gradients of the KL divergence during online training—scales quadratically in the number of training data points and linearly in the dimensionality of the state and action space, respectively. As can be seen in Table 1, non-parametric GP behavioral reference policies only lead to a modest increase in the time needed to complete one epoch of training while resulting in significantly improved performance, as shown in Figures 4 and 5. 6 Conclusion We identified a previously unrecognized pathology in KL-regularized RL from expert demonstrations and showed that this pathology can significantly impede and even entirely prevent online learning. To remedy the pathology, we proposed the use of non-parametric behavioral reference policies, which we showed can significantly accelerate and improve online learning and yield online policies that (often significantly) outperform current state-of-the-art methods on challenging continuous control tasks. We hope that this work will encourage further research into better model classes for deep reinforcement learning algorithms, including and especially for reinforcement learning from image inputs. Acknowledgments and Disclosure of Funding We thank Ashvin Nair for sharing his code and results, as well as for providing helpful insights about the dexterous hand manipulation suite. We also thank Clare Lyle, Charline Le Lan, and Angelos Filos for detailed feedback on an early draft of this paper, Avi Singh for early discussions about behavioral cloning in entropy-regularized RL, and Tim Pearce for a useful discussion on the role of good models in RL. TGJR and CL are funded by the Engineering and Physical Sciences Research Council (EPSRC). TGJR is also funded by the Rhodes Trust and by a Qualcomm Innovation Fellowship. We gratefully acknowledge donations of computing resources by the Alan Turing Institute.
1. What is the focus of the paper regarding KL-regularized reinforcement learning? 2. What are the strengths of the proposed approach, particularly in addressing the variance collapse issue? 3. What are the weaknesses of the paper, especially in terms of comparisons with other methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the experimental setup or comparisons?
Summary Of The Paper Review
Summary Of The Paper This paper studies the problem of KL-regularized reinforcement learning for locomotion tasks, where the KL regularization penalizes deviations from expert demonstrations. This work makes the observation that fitting expert demonstrations to a certain class of conditional neural density models (e.g., neural networks that output the parameters of a Gaussian distribution) results in policies whose variance collapses for states that are different enough from the expert data. The paper argues that this collapse causes instabilities in learning KL-regularized policies: for Gaussian policies, the KL penalty grows quadratically to infinity as the expert policy variance goes to 0. To address this issue, the paper proposes to instead compute the KL penalty by fitting expert demonstrations to models that do not suffer from this collapse, in particular Gaussian Process regression models. Gaussian Process models result in variance that increases (depending on the choice of kernel) for states that are far enough from the expert data. The paper provides some analysis on why the variance collapse affects the RL optimization process, experiments that aim to show an empirical relationship between expert policy variance, KL penalty and magnitude of the policy gradients, and a comparison between various methods that use KL regularization with an expert policy. Review As summarized above, this paper presents a method for KL regularized RL, in which the KL penalty computation uses a behavioural policy prior fit from expert data. The proposed method uses Gaussian Process models for fitting the expert data. The paper is clearly written, proposes a simple approach and provides empirical evidence of its usefulness. It is notable that, in the experiments presented in the paper, the resulting learned policies produce visually smooth behaviours when regularized with a smooth prior (even if the policy class is not smooth). The paper analyses the problem of variance collapse in KL regularized RL, showing the relationship between variance collapse (which is empirically observed) and optimization instabilities due to exploding gradients. Some things to improve: The paper limits its comparison to one between conditional Gaussian models with ReLU networks and GP models with smooth stationary kernels. While this is enough for the main point of the paper (addressing the variance collapse), it doesn't necessarily mean that the variance of parametric models will invariably collapse, particularly in the case where modelling uncertainty is introduced in the parametric model fitting (e.g. the comparison with ensembles). The paper does not need to answer how parametric models could be made to work in this setting, but the main text should be explicit about what is specifically being compared. A useful comparison that is missing in this story is the effect of expert dataset size on the performance of the algorithms. Is it always better to use GP regression for fitting the prior as the expert dataset size changes? What kind of ensemble was used for the experiments (bootstrap, different random seeds/initialization)? Since the paper introduces a comparison with ensembles, another useful comparison of the method would be against an ensemble model, as you let the size of the ensemble grow. In the context of the discussion about ensembles, it would be helpful to include a discussion about other models that could be applied in this setting (e.g., Bayesian neural networks) and their limitations.
A comment on the robotics experiments: in general, human expert demonstrations are not readily available for robot tasks. Collecting human expert data is still expensive. Perhaps the authors meant that there exist publicly available datasets for learning from demonstrations for robot manipulation tasks.
NIPS
Title On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations Abstract KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks. 1 Introduction Reinforcement learning (RL) [15, 24, 46, 47] is a powerful paradigm for learning complex behaviors. Unfortunately, many modern reinforcement learning algorithms require agents to carry out millions of interactions with their environment to learn desirable behaviors, making them of limited use for a wide range of practical applications that cannot be simulated [8, 28]. This limitation has motivated the study of algorithms that can incorporate pre-collected offline data into the training process either fully offline or with online exploration to improve sample efficiency, performance, and reliability [2, 6, 16, 23, 52, 53]. An important and well-motivated subset of these methods consists of approaches for efficiently incorporating expert demonstrations into the learning process [5, 11, 18, 42]. Reinforcement learning with Kullback-Leibler (KL) regularization is a particularly successful approach for doing so [3, 27, 29, 31, 44, 51]. In KL-regularized reinforcement learning, the standard reinforcement learning objective is augmented by a Kullback-Leibler divergence term that penalizes dissimilarity between the online policy and a behavioral reference policy derived from expert demonstrations. The resulting regularized objective pulls the agent’s online policy towards the behavioral reference policy while also allowing it to improve upon the behavioral reference policy by exploring and interacting with the environment. Recent advances that leverage explicit or implicit KL-regularized objectives, such as BRAC [51], ABM [44], and AWAC [27], have shown that KL-regularized reinforcement learning from expert demonstrations is able to significantly improve the sample efficiency of online training and reliably solve challenging environments previously unsolved by standard deep reinforcement learning algorithms. ∗Equal contribution. † Corresponding author: tim.rudner@cs.ox.ac.uk. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Contributions. In this paper, we show that despite some empirical success, KL-regularized reinforcement learning from expert demonstrations can suffer from previously unrecognized pathologies that lead to instability and sub-optimality in online learning. To summarize, our core contributions are as follows: • We illustrate empirically that commonly used classes of parametric behavioral policies experience a collapse in predictive variance about states away from the expert demonstrations.
• We demonstrate theoretically and empirically that KL-regularized reinforcement learning algorithms can suffer from pathological training dynamics in online learning when regularized against behavioral policies that exhibit such a collapse in predictive variance. • We show that the pathology can be remedied by non-parametric behavioral policies, whose predictive variances are well-calibrated and guaranteed not to collapse about previously unseen states, and that fixing the pathology results in online policies that significantly outperform state-of-the-art approaches on a range of challenging locomotion and dexterous hand manipulation tasks. The left panel of Figure 1 shows an example of the collapse in predictive variance away from the expert trajectories in parametric behavioral policies. In contrast, the right panel of Figure 1 shows the predictive variance of a non-parametric behavioral policy, which—unlike in the case of the parametric policy—increases off the expert trajectories. By avoiding the pathology, we obtain a stable and reliable approach to sample-efficient reinforcement learning, applicable to a wide range of reinforcement learning algorithms that leverage KL-regularized objectives.2 2 Background We consider the standard reinforcement learning setting where an agent interacts with a discounted Markov Decision Process (MDP) [46] given by a 5-tuple (S, A, p, r, γ), where S and A are the state and action spaces, p(· | st, at) are the transition dynamics, r(st, at) is the reward function, and γ is a discount factor. ρπ(τt) denotes the state–action trajectory distribution from time t induced by a policy π(· | st). The discounted return from time step t is given by R(τt) = ∑_{k=t}^{∞} γ^k r(sk, ak) for t ∈ N0. The standard reinforcement learning objective to be maximized is the expected discounted return Jπ(τ0) = E_{ρπ(τ0)}[R(τ0)] under the policy trajectory distribution. 2.1 Improving and Accelerating Online Training via Behavioral Cloning We consider settings where we have a set of expert demonstrations without reward, D0 = {(sn, an)}_{n=1}^{N} = {S̄, Ā}, which we would like to use to speed up and improve online learning [5, 42]. 2Code and visualizations of our results can be found at https://sites.google.com/view/nppac. A standard approach for turning expert trajectories into a policy is behavioral cloning [1, 4], which involves learning a mapping from states in the expert demonstrations to their corresponding actions, that is, π0 : S → A. As such, behavioral cloning does not assume or require access to a reward function and only involves learning a mapping from states to actions in a supervised fashion. Since expert demonstrations are costly to obtain and often only available in small numbers, behavioral cloning alone is typically insufficient for agents to learn good policies in complex environments and has to be complemented by a method that enables the learner to build on the cloned behavior by interacting with the environment. A particularly successful and popular class of algorithms used for incorporating behavioral policies into online training is KL-regularized reinforcement learning [10, 37, 43, 48]. 2.2 KL-Regularized Objectives in Reinforcement Learning KL-regularized reinforcement learning modifies the standard reinforcement learning objective by augmenting the return with a negative KL divergence term from the learned policy π to a reference policy π0, given a temperature parameter α.
The resulting discounted return from time step t ∈ N0 is then given by R̃(τt) = ∑_{k=t}^{∞} γ^k [ r(sk, ak) − α DKL(π(· | sk) ‖ π0(· | sk)) ] (1) and the reinforcement learning objective becomes J̃π(τ0) = E_{ρπ(τ0)}[R̃(τ0)]. When the reference policy π0 is given by a uniform distribution, we recover the entropy-regularized reinforcement learning objective used in Soft Actor–Critic (SAC) [13] up to an additive constant. Under a uniform reference policy π0, the resulting objective encourages exploration, while also choosing high-reward actions. In contrast, when π0 is non-uniform, the agent is discouraged from exploring areas of the state space S where the variance of π0(· | s) is low (i.e., more certain) and encouraged to explore areas of the state space where the variance of π0(· | s) is high. The KL-regularized reinforcement learning objective can be optimized via policy–gradient and actor–critic algorithms. 2.3 KL-Regularized Actor–Critic An optimal policy π that maximizes the expected KL-augmented discounted return J̃π can be learned by directly optimizing the policy gradient ∇πJ̃π. However, this policy gradient estimator exhibits high variance, which can lead to unstable learning. Actor–critic algorithms [7, 17, 32, 38] attempt to reduce this variance by making use of the state value function V^π(st) = E_{ρπ(τt)}[R̃(τt) | st] or the state–action value function Q^π(st, at) = E_{ρπ(τt)}[R̃(τt) | st, at] to stabilize training. Given a reference policy π0(at | st), the state value function can be shown to satisfy the modified Bellman equation V^π(st) ≐ E_{at∼π(·|st)}[Q^π(st, at)] − α DKL(π(· | st) ‖ π0(· | st)) with a recursively defined Q-function Q^π(st, at) ≐ r(st, at) + γ E_{st+1∼p(·|st,at)}[V^π(st+1)]. Instead of directly optimizing the objective function J̃π via the policy gradient, actor–critic methods alternate between policy evaluation and policy improvement [7, 13]: Policy Evaluation. During the policy evaluation step, Q^π_θ(s, a), parameterized by parameters θ, is trained by minimizing the Bellman residual JQ(θ) ≐ E_{(st,at)∼D}[ (Qθ(st, at) − (r(st, at) + γ E_{st+1∼p(·|st,at)}[Vθ̄(st+1)]))² ], (2) where D is a replay buffer and θ̄ is a stabilizing moving average of parameters. Policy Improvement. In the policy improvement step, the policy πφ, parameterized by parameters φ, is updated towards the exponential of the KL-augmented Q-function, Jπ(φ) ≐ E_{st∼D}[α DKL(πφ(· | st) ‖ π0(· | st))] − E_{st∼D}[ E_{at∼πφ(·|st)}[Qθ(st, at)] ], (3) with states sampled from a replay buffer D and actions sampled from the parameterized online policy πφ. The following sections will focus on the policy improvement objective and how certain types of reference policies can lead to pathologies when optimizing Jπ(φ) with respect to φ. 3 Identifying the Pathology In this section, we investigate the effect of KL-regularization on the training dynamics. To do so, we first consider the properties of the KL divergence to identify a potential failure mode for KL-regularized reinforcement learning. Next, we consider parametric Gaussian behavioral reference policies commonly used in practice for continuous control tasks [13, 51] and show that for Gaussian behavioral reference policies with small predictive variance, the policy improvement objective suffers from exploding gradients with respect to the policy parameters φ. We confirm that this failure occurs empirically and demonstrate that it results in slow, unstable, and suboptimal online learning.
Lastly, we show that various regularization techniques used for estimating behavioral policies are unable to prevent this failure and also lead to suboptimal online policies. 3.1 When Are KL-Regularized Reinforcement Learning Objectives Meaningful? We start by considering the properties of the KL divergence and discuss how these properties can lead to potential failure modes in KL-regularized objectives. A well-known property of KL-regularized objectives in the variational inference literature is the occurrence of singularities when the support of one distribution is not contained in the support of the other. To illustrate this problem, we consider the case of Gaussian behavioral and online policies commonly used in practice. Mathematically, the KL divergence between two full Gaussian distributions is always finite and well-defined. Hence, we might hope KL-regularized reinforcement learning with Gaussian behavioral and online policies to be unaffected by the failure mode described above. However, the support of a Gaussian online policy πφ(· | st) will not be contained in the support of a behavioral reference policy π0(· | st) as the predictive variance σ0²(st) tends to zero, and hence DKL(πφ(· | st) ‖ π0(· | st)) → ∞ as σ0²(st) → 0. In other words, as the variance of a behavioral reference policy tends to zero and the behavioral distribution becomes degenerate, the KL divergence blows up to infinity [25]. While, in practice, a Gaussian behavioral policy would not operate in the limit of zero variance, the functional form of the KL divergence between (univariate) Gaussians, DKL(πφ(· | st) ‖ π0(· | st)) ∝ log(σ0(st)/σφ(st)) + (σφ²(st) + (µφ(st) − µ0(st))²) / (2σ0²(st)), implies a continuous, quadratic increase in the magnitude of the divergence as σ0(st) decreases, further exacerbated by a large difference in predictive means, |µφ(st) − µ0(st)|. As a result, for Gaussian behavioral reference policies π0(· | st) that assign very low probability to sets of points in sample space far away from the distribution’s mean µ0(st), computing the KL divergence can result in divergence values so large as to cause numerical instabilities and arithmetic overflow. Hence, even for a suitably chosen behavioral reference policy class, vanishingly small behavioral reference policy predictive variances can cause the KL divergence to ‘blow up’ and cause numerical issues at evaluation points far away from states in the expert demonstrations. One way to address this failure mode may be to lower-bound the output of the variance network (e.g., by adding a small constant bias). However, placing a floor on the predictive variance of the behavioral reference policy is not sufficient to encourage effective learning. While it would prevent the KL divergence from blowing up, it would also lead to poor gradient signals, as well-calibrated predictive variance estimates that increase on states far away from the expert trajectories are necessary to keep the KL penalty from pulling the predictive mean of the online policy towards poor behavioral reference policy predictive means on states off the expert trajectories. Another possible solution could be to use heavy-tailed behavioral reference policy distributions, for example, Laplace distributions, to avoid pathological training dynamics. However, in Appendix B.3 we show that Laplace behavioral reference policies also suffer from pathological training dynamics, albeit less severely.
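To make the scale of this blow-up concrete, the short sketch below (our own illustration, not the paper's code; plain NumPy, univariate Gaussians with hypothetical values) evaluates the closed-form KL divergence above as the behavioral reference policy's standard deviation σ0 shrinks while the online policy is held fixed.

```python
import numpy as np

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between univariate Gaussians q = N(mu_q, sigma_q^2) and p = N(mu_p, sigma_p^2)."""
    return (np.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)

# Fixed online policy pi_phi and a modest error in the behavioral mean (illustrative values).
mu_phi, sigma_phi = 0.5, 0.3   # online policy
mu_0 = 0.0                     # behavioral reference mean

for sigma_0 in [1.0, 1e-1, 1e-2, 1e-3]:
    kl = kl_gaussian(mu_phi, sigma_phi, mu_0, sigma_0)
    print(f"sigma_0 = {sigma_0:>6}: KL(pi_phi || pi_0) = {kl:,.1f}")
# The KL term grows roughly as 1/(2 * sigma_0^2), reaching ~1.7e5 at sigma_0 = 1e-3,
# which is the kind of magnitude that drives the numerical instabilities described above.
```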
In the following sections, we explain how an explosion in DKL(πφ(· | st) ‖ π0(· | st)) caused by small σ0²(st) affects the gradients of Jπ(φ) in KL-regularized RL and discuss how and why σ0²(st) may tend to zero in practice. 3.2 Exploding Gradients in KL-Regularized Reinforcement Learning Objectives To understand how small predictive variances in behavioral reference policies can affect—and possibly destabilize—online training in KL-regularized RL, we consider the contribution of the behavioral reference policy’s variance to the gradient of the policy objective in Equation (3). Compared to entropy-regularized actor–critic methods (SAC, Haarnoja et al. [13]), which implicitly regularize against a uniform policy, the gradient estimator ∇̂φJπ(φ) in KL-regularized RL gains an extra scaling term ∇at log π0(at | st), the gradient of the prior log-density evaluated at actions at ∼ πφ(· | st): Proposition 1 (Exploding Gradients in KL-Regularized RL). Let π0(· | s) be a Gaussian behavioral reference policy with mean µ0(st) and variance σ0²(st), and let πφ(· | s) be an online policy with reparameterization at = fφ(εt; st) and random vector εt. The gradient of the policy loss with respect to the online policy’s parameters φ is then given by ∇̂φJπ(φ) = ( α ∇at log πφ(at | st) − α ∇at log π0(at | st) − ∇at Q(st, at) ) ∇φ fφ(εt; st) + α ∇φ log πφ(at | st) (4) with ∇at log π0(at | st) = −(at − µ0(st)) / σ0²(st). For fixed |at − µ0(st)|, ∇at log π0(at | st) grows as O(σ0⁻²(st)); thus, |∇̂φJπ(φ)| → ∞ as σ0²(st) → 0 whenever ∇φ fφ(εt; st) ≠ 0. Proof. See Appendix A.1. This result formalizes the intuition presented in Section 3.1 that a behavioral reference policy with a sufficiently small predictive variance may cause KL-regularized reinforcement learning to suffer from pathological training dynamics in gradient-based optimization. The smaller the behavioral reference policy’s predictive variance, the more sensitive the policy objective’s gradients will be to differences in the means of the online and behavioral reference policies. As a result, for behavioral reference policies with small predictive variance, the KL divergence will heavily penalize online policies whose predictive means diverge from the predictive means of the behavioral policy—even in regions of the state space away from the expert trajectory where the behavioral policy’s mean prediction is poor. 3.3 Predictive Uncertainty Collapse Under Parametric Policies The most commonly used method for estimating behavioral policies is maximum likelihood estimation (MLE) [44, 51], where we seek π0 ≐ πψ* with ψ* ≐ arg max_ψ { E_(s,a)∼D0 [log πψ(a | s)] } for a parametric behavioral policy πψ. In practice, πψ is often assumed to be Gaussian, πψ(· | s) = N(µψ(s), σψ²(s)), with µψ(s) and σψ²(s) parameterized by a neural network. While maximizing the likelihood of the expert trajectories under the behavioral policy is a sensible choice for behavioral cloning, the limited capacity of the neural network parameterization can produce unwanted behaviors in the resulting policy. The maximum likelihood objective ensures that the behavioral policy’s predictive mean reflects the expert’s actions and the predictive variance the (aleatoric) uncertainty inherent in the expert trajectories. However, the maximum likelihood objective encourages parametric policies to use their model capacity toward fitting the expert demonstrations and reflecting the aleatoric uncertainty in the data.
As a result, for states off the expert trajectories, the policy can become degenerate and collapse to point predictions instead of providing meaningful predictive variance estimates that reflect that the behavioral policy ought to be highly uncertain about its predictions in previously unseen regions of the state space. Similar behaviors are well-known in parametric probabilistic models and well-documented in the approximate Bayesian inference literature [33, 39]. [Figure: validation predictive variance σψ²(s) and validation log-likelihood log πψ(Ā | S̄) over training epochs; the behavioral policy’s predictive variance consistently decreases during training.] As shown in Proposition 1, such a collapse in predictive variance can result in pathological training dynamics in KL-regularized online learning—steering the online policy towards suboptimal trajectories in regions of the state space far away from the expert demonstrations and deteriorating performance. Effect of regularization on uncertainty collapse. To prevent a collapse in the behavioral policy’s predictive variance, prior work proposed adding entropy or Tikhonov regularization to the MLE objective [51]. However, doing so does not succeed in preventing a collapse in predictive variance off the expert demonstration trajectories, as we show in Appendix A.3. Deep ensembles [20], whose predictive mean and variance are computed from the predictive means and variances of multiple Gaussian neural networks, are a widely used method for uncertainty quantification in regression settings. However, model ensembling can be costly and unreliable, as it requires training multiple neural networks from scratch and does not guarantee well-calibrated uncertainty estimates [39, 49]. We provide visualizations in Appendix B.5 which show that ensembling multiple neural network policies does not fully prevent a collapse in predictive variance. 3.4 Empirical Confirmation of Uncertainty Collapse To confirm Proposition 1 empirically and assess the effect of the collapse in predictive variance on the performance of KL-regularized RL, we perform an ablation study where we fix the predictive mean function of a behavioral policy to a mean function that attains 60% of the optimal performance and vary the magnitude of the policy’s predictive variance. Specifically, we set the behavioral policy’s predictive variance to different constant values in the set {1×10⁻³, 5×10⁻³, 1×10⁻²} (following a similar implementation in Nair et al. [27]).3 The results of this experiment are shown in Figure 3, which shows the average returns, the KL divergence, and the average absolute gradients of the policy loss over training. The plots confirm that as the predictive variance of the offline behavioral policy tends to zero, the KL terms and average policy gradient magnitude explode as implied by Proposition 1, leading to unstable training and a collapse or dampening in average returns. In other words, even for behavioral policies with accurate predictive means, smaller predictive variances slow down or even entirely prevent learning good online policies. This observation confirms that the pathology identified in Proposition 1 occurs in practice and that it can have a significant impact on KL-regularized RL from expert demonstrations, calling into question the usefulness of KL regularization as a means for accelerating and improving online training. In Appendix B.1, we show that an analogous relationship exists for the gradients of the Q-function loss.
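As a rough, self-contained check of the gradient behavior described in Proposition 1 and Section 3.4 (our own sketch, not the paper's implementation), the snippet below differentiates a toy KL-regularized policy loss for a one-dimensional Gaussian online policy against a Gaussian behavioral policy with a fixed mean and a constant predictive variance, and prints how the gradient magnitude grows as that variance is made smaller; the specific numbers are illustrative only, and the Q-term of Equation (3) is omitted for clarity.

```python
import torch

def policy_loss(phi_mean, phi_log_std, mu0, sigma0, alpha=1.0, n_samples=1024):
    """Monte Carlo estimate of alpha * KL(pi_phi || pi_0) for a toy 1-D Gaussian online policy."""
    eps = torch.randn(n_samples)
    std = phi_log_std.exp()
    actions = phi_mean + std * eps                   # reparameterized a_t = f_phi(eps_t; s_t)
    online = torch.distributions.Normal(phi_mean, std)
    prior = torch.distributions.Normal(mu0, sigma0)
    return alpha * (online.log_prob(actions) - prior.log_prob(actions)).mean()

mu0 = torch.tensor(0.0)
for sigma0 in [1e-1, 1e-2, 1e-3]:
    phi_mean = torch.tensor(0.5, requires_grad=True)  # online mean held off the behavioral mean
    phi_log_std = torch.tensor(-1.0, requires_grad=True)
    loss = policy_loss(phi_mean, phi_log_std, mu0, torch.tensor(sigma0))
    loss.backward()
    print(f"sigma0={sigma0:>6}: |dL/d mu_phi| = {phi_mean.grad.abs().item():.1e}")
# The gradient w.r.t. the online mean scales roughly as (mu_phi - mu0) / sigma0^2,
# mirroring the 1/sigma0^2 blow-up stated in Proposition 1.
```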
3We attempted to use smaller values, but the gradients grew too large and caused arithmetic overflow. 4 Fixing the Pathology In order to address the collapse in predictive uncertainty for behavioral policies parameterized by a neural network trained via MLE, we specify a non-parametric behavioral policy whose predictive variance is guaranteed not to collapse about previously unseen states. Noting that KL-regularized RL with a behavioral policy can be viewed as approximate Bayesian inference with an empirical prior policy [13, 21, 40], we propose Non-Parametric Prior Actor–Critic (N-PPAC), an off-policy temporal difference algorithm for improved, accelerated, and stable online learning with behavioral policies. 4.1 Non-Parametric Gaussian Process Behavioral Policies Gaussian processes (GPs) [36] are models over functions defined by a mean function m(·) and a covariance function k(·, ·). When defined in terms of a non-parametric covariance function, that is, a covariance function constructed from infinitely many basis functions, we obtain a non-degenerate GP, which has sufficient capacity to prevent a collapse in predictive uncertainty away from the training data. Unlike parametric models, whose capacity is limited by their parameterization, a non-parametric model’s capacity increases with the amount of training data. Considering a non-parametric GP behavioral policy, π0(· | s), with A | s ∼ π0(· | s) = GP(m(s), k(s, s′)), (5) we can obtain a non-degenerate posterior distribution over actions conditioned on the offline data D0 = {S̄, Ā} with actions sampled according to A | s, D0 ∼ π0(· | s, D0) = GP(µ0(s), Σ0(s, s′)), (6) with µ0(s) = m(s) + k(s, S̄) k(S̄, S̄)⁻¹ (Ā − m(S̄)) and Σ0(s, s′) = k(s, s′) − k(s, S̄) k(S̄, S̄)⁻¹ k(S̄, s′). To obtain this posterior distribution, we perform exact Bayesian inference, which naively scales as O(N³) in the number of training points N, but Wang et al. [50] show that exact inference in GP regression can be scaled to N > 1,000,000. Since expert demonstrations usually contain fewer than 100k data points, non-parametric GP behavioral policies are applicable to a wide array of real-world tasks. For an empirical evaluation of the time complexity of using a GP prior, see Section 5.5. Figure 1 confirms that the non-parametric GP’s predictive variance is well-calibrated: It is small in magnitude in regions of the state space near the expert trajectories and large in magnitude in other regions of the state space. While actor–critic algorithms like SAC implicitly use a uniform prior to explore the state space, using a behavioral policy with a well-calibrated predictive variance has the benefit that in regions of the state space close to the expert demonstrations the online policy learns to match the expert, while elsewhere the predictive variance increases and encourages exploration. Algorithmic details. In our experiments, we use a KL-regularized objective with a standard actor–critic implementation and Double DQN [14]. Pseudocode is provided in Appendix C.1. 5 Empirical Evaluation We carry out a comparative empirical evaluation of our proposed approach vis-à-vis related methods that integrate offline data into online training. We provide a detailed description of the algorithms we compare against in Appendix A.4. We perform experiments on the MuJoCo benchmark suite and the substantially more challenging dexterous hand manipulation suite with sparse rewards.
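For concreteness, the following sketch (ours, not the authors' code; it assumes a squared exponential kernel, a zero prior mean m(s) = 0, a single action dimension, and a small jitter/noise term, none of which are pinned down at this level of detail in the text) computes the posterior mean and variance of Equation (6) from a small set of expert state–action pairs, which is what is needed to evaluate the behavioral reference policy π0(· | s, D0) during online training.

```python
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared exponential kernel k(s, s') evaluated between two sets of states."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_behavioral_policy(S_bar, A_bar, noise=1e-4):
    """Returns a function mapping states to the posterior mean and variance of Eq. (6),
    assuming a zero prior mean and scalar actions."""
    K = sq_exp_kernel(S_bar, S_bar) + noise * np.eye(len(S_bar))
    K_inv = np.linalg.inv(K)
    def predict(S_new):
        K_star = sq_exp_kernel(S_new, S_bar)
        mean = K_star @ K_inv @ A_bar                                   # mu_0(s)
        cov = sq_exp_kernel(S_new, S_new) - K_star @ K_inv @ K_star.T   # Sigma_0(s, s')
        return mean, np.diag(cov)
    return predict

# Toy stand-in for expert data D0 = {S_bar, A_bar}: 50 pairs with 3-D states and scalar actions.
rng = np.random.default_rng(0)
S_bar = rng.normal(size=(50, 3))
A_bar = np.sin(S_bar.sum(axis=1))
policy = gp_behavioral_policy(S_bar, A_bar)

mean, var = policy(np.vstack([S_bar[:1], 10.0 + rng.normal(size=(1, 3))]))
print(var)  # small variance near the expert data, large variance far away from it
```

The second query point lies far from the expert states, so its posterior variance reverts to the prior variance instead of collapsing, which is exactly the property the paper relies on to keep the KL penalty well-behaved off the expert trajectories.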
We show that KL-regularized RL with a non-parametric behavioral reference policy can rapidly learn to solve difficult high-dimensional continuous control problems given only a small set of expert demonstrations and (often significantly) outperforms state-of-the-art methods, including ones that use offline reward information—which our approach does not require. Furthermore, we demonstrate that the GP behavioral policy’s predictive variance is crucial for KL-regularized objectives to learn good online policies from expert demonstrations. Finally, we perform ablation studies that illustrate that non-parametric GP behavioral reference policies also outperform parametric behavioral reference policies with improved uncertainty quantification, such as deep ensembles and Bayesian neural networks (BNNs) with Monte Carlo dropout, and that the difference between non-parametric and parametric models is exacerbated the fewer expert demonstrations are available. We use the expert data from Nair et al. [27], every experiment uses six random seeds, and we use a fixed KL-temperature for each environment class. For further implementation details, see Appendix C.2. 5.1 Environments MuJoCo locomotion tasks. We evaluate N-PPAC on three representative tasks: “Ant-v2”, “HalfCheetah-v2”, and “Walker2d-v2”. For each task, we use 15 demonstration trajectories collected by a pre-trained expert, each containing 1,000 steps. The behavioral policy is specified as the posterior distribution of a GP with a squared exponential kernel, which is well-suited for modeling smooth functions. Dexterous hand manipulation tasks. Real-world robot learning is a setting where human demonstration data is readily available, and many deep RL approaches fail to learn efficiently. We study this setting in a suite of challenging dexterous manipulation tasks [35] using a 28-DoF five-fingered simulated ADROIT hand. The tasks simulate challenges common to real-world settings with high-dimensional action spaces, complex physics, and a large number of intermittent contact forces. We consider two tasks in particular: in-hand rotation of a pen to match a target and opening a door by unlatching and pulling a handle. We use binary rewards for task completion, which is significantly more challenging than the original setting considered in Rajeswaran et al. [35]. 25 expert demonstrations were provided for each task, each consisting of 200 environment steps; the demonstrations are not fully optimal but do successfully solve the task. The behavioral policy is specified as the posterior distribution of a GP with a Matérn kernel, which is more suitable for modeling non-smooth data. 5.2 Results On MuJoCo environments, KL-regularized RL with a non-parametric behavioral policy consistently outperforms all related methods across all three tasks, successfully accelerating learning from offline data, as shown in Figure 4. Most notably, it outperforms methods such as AWAC [27]—the previous state-of-the-art—which eschews the problem of learning a behavioral policy by instead using an implicit constraint. Our approach, N-PPAC, exhibits an increase in stability and higher returns compared to comparable methods such as ABM and BRAC, which explicitly regularize the online policy against a parametric behavioral policy and plateau at suboptimal performance levels because they are forced to copy poor actions from the behavioral policy away from the expert data. In contrast, using a non-parametric behavioral policy allows us to avoid such undesirable behavior.
On dexterous hand manipulation environments, KL-regularized RL with a non-parametric behavioral policy performs on par with or outperforms all related methods on both tasks, as shown in Figure 5. Most notably, on the door opening task, it achieves a stable success rate of 90% within only 100,000 environment interactions. For comparison, AWAC requires 4× as many environment interactions to achieve the same performance and is significantly less stable, while most other methods fail to learn any meaningful behaviors. Alternative divergence metrics underperform KL-regularization. KL-regularized RL with a non-parametric behavioral policy consistently outperforms methods that use alternative divergence metrics, as shown in the bottom plots of Figures 4 and 5. 5.3 Can the Pathology Be Fixed by Improved Parametric Uncertainty Quantification? We investigate whether the pathology can be remedied by parametric behavioral reference policies with improved uncertainty quantification, comparing parametric and non-parametric predictive variance functions in terms of the resulting online policy success rates. We consider the challenging “door-binary-v0” environment for this ablation study. Parametric uncertainty quantification is insufficient. Figure 5 shows that parametric variance functions result in online policies that only achieve success rates of up to 20% and eventually deteriorate, whereas the non-parametric variance yields an online policy that achieves a success rate of nearly 100%. This finding shows that commonly used uncertainty quantification methods, such as deep ensembles or BNNs with Monte Carlo dropout, do not generate sufficiently well-calibrated uncertainty estimates to remedy the pathology, and better methods may be needed [9, 39, 41]. Lower-bounding the predictive variance does not remedy the pathology. The predictive variance of all MLE-based and ensemble behavioral reference policies in all experiments is bounded away from zero at a minimum value of ≈ 10⁻². Hence, setting a floor on the variance is not sufficient to prevent pathological training dynamics. This result further demonstrates the importance of accurate predictive variance estimation in allowing the online policy to match expert actions in regions of the state space with low behavioral policy predictive variance and explore elsewhere. 5.4 Can a Single Expert Demonstration Be Sufficient to Accelerate Online Training? To assess the usefulness of non-parametric behavioral reference policies in settings where only a few expert demonstrations are available, we investigate whether the difference in performance between online policies trained with non-parametric and parametric behavioral reference policies, respectively, is exacerbated the fewer expert demonstrations are available. To answer this question, we consider the “HalfCheetah-v2” environment and compare online policies trained with different behavioral reference policies—non-parametric GPs, deep ensembles, and BNNs with Monte Carlo dropout—estimated either from 15 expert demonstrations (i.e., 15 state–action trajectories, containing 15,000 samples) or from a single expert demonstration (i.e., a single state–action trajectory, containing 1,000 samples). A single expert demonstration is sufficient for non-parametric behavioral reference policies. Figure 7 shows the returns for online policies trained with behavioral reference policies estimated from the full dataset (top plot) and from only a single expert state–action trajectory (bottom plot). On the full dataset, we find that all three methods are competitive and improve on the prior state-of-the-art but that the GP behavioral policy leads to the highest return.
Remarkably, non-parametric GP behavioral policies perform just as well with only a single expert demonstration as with all 15 (i.e., with 1,000 data points, instead of 15,000 data points). These results further emphasize the usefulness of non-parametric behavioral policies when accelerating online training with expert demonstrations—even when only very few expert demonstrations are available. 5.5 Are Non-Parametric GP Behavioral Reference Policies Too Computationally Expensive? Table 1 presents the time complexity of KL-regularized RL under non-parametric GP and parametric neural network behavioral reference policies, as measured by the average time elapsed per epoch on the “door-binary-v0” and “HalfCheetah-v2” environments. One epoch of online training on “door-binary-v0” and “HalfCheetah-v2” requires computing the KL divergence over 1,000 mini-batches of size 256 and 1,024, respectively. The time complexity of evaluating the log-density of a GP behavioral reference policy—needed for computing gradients of the KL divergence during online training—scales quadratically in the number of training data points and linearly in the dimensionality of the state and action space, respectively. As can be seen in Table 1, non-parametric GP behavioral reference policies only lead to a modest increase in the time needed to complete one epoch of training while resulting in significantly improved performance, as shown in Figures 4 and 5. 6 Conclusion We identified a previously unrecognized pathology in KL-regularized RL from expert demonstrations and showed that this pathology can significantly impede and even entirely prevent online learning. To remedy the pathology, we proposed the use of non-parametric behavioral reference policies, which we showed can significantly accelerate and improve online learning and yield online policies that (often significantly) outperform current state-of-the-art methods on challenging continuous control tasks. We hope that this work will encourage further research into better model classes for deep reinforcement learning algorithms, including and especially for reinforcement learning from image inputs. Acknowledgments and Disclosure of Funding We thank Ashvin Nair for sharing his code and results, as well as for providing helpful insights about the dexterous hand manipulation suite. We also thank Clare Lyle, Charline Le Lan, and Angelos Filos for detailed feedback on an early draft of this paper, Avi Singh for early discussions about behavioral cloning in entropy-regularized RL, and Tim Pearce for a useful discussion on the role of good models in RL. TGJR and CL are funded by the Engineering and Physical Sciences Research Council (EPSRC). TGJR is also funded by the Rhodes Trust and by a Qualcomm Innovation Fellowship. We gratefully acknowledge donations of computing resources by the Alan Turing Institute.
1. What is the main contribution of the paper regarding reinforcement learning from expert demonstrations? 2. What are the strengths of the proposed approach compared to previous work? 3. Do you have any concerns or suggestions regarding the experimental results and comparisons? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any limitations or potential improvements regarding the proposed method?
Summary Of The Paper Review
Summary Of The Paper The authors identify a problem with KL-based reinforcement learning from expert demonstrations. Poor out-of-sample variance predictions of NN expert policies end up unreasonably penalizing RL exploration outside the expert data distribution, potentially causing numerical instability. To remedy this, they propose to instead use non-parametric expert policies based on a scalable GP approach. Review This seems like a solid incremental improvement. They thoroughly analyze an important problem in previous work and propose a fix. RL from expert demonstrations is a relevant niche with real-world applications. The paper is very well-written and the experimental results appear to improve on prior state-of-the-art. Some comments: L43: "without ad-hoc design choices" - Like what? This is both needlessly vague and ends up sounding a bit objectionable. L178-185: I don't believe it is the limited capacity from its parametric nature. Policies in RL are usually very simple and overparameterized. However, it is well-known that an NN trained with maximum likelihood will not predict out-of-distribution uncertainty well, simply because it hasn't been trained on that. Both the mean and variance predictions will be very poor outside the training distribution. It also sometimes goes to zero, but I'm not aware of a formal analysis of this, or convinced that it always happens. As you note in L200, ensembles are sometimes used for predictions, so they can't all always go to zero. Figure 6: Clarify which "Non-KL" methods are used in the bottom row; I assume it is AWAC etc.? It might also have been nice to compare against regular RL with pre-trained (BC) policies as a sanity check. Section 5.3: Shouldn't you compare compute time against some RL algorithm with expert demonstrations? I understand that if it doesn't reach the reward thresholds without extra tuning, it becomes a bit apples to oranges, but this isn't ideal either. It's unclear how the overhead of the GP policy compares to the NN policy. It might be relevant to compare on the more complex benchmarks as well, as the simulator is usually the bottleneck in non-trivial RL applications.
NIPS
Title On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations Abstract KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks. 1 Introduction Reinforcement learning (RL) [15, 24, 46, 47] is a powerful paradigm for learning complex behaviors. Unfortunately, many modern reinforcement learning algorithms require agents to carry out millions of interactions with their environment to learn desirable behaviors, making them of limited use for a wide range of practical applications that cannot be simulated [8, 28]. This limitation has motivated the study of algorithms that can incorporate pre-collected offline data into the training process either fully offline or with online exploration to improve sample efficiency, performance, and reliability [2, 6, 16, 23, 52, 53]. An important and well-motivated subset of these methods consists of approaches for efficiently incorporating expert demonstrations into the learning process [5, 11, 18, 42]. Reinforcement learning with Kullback-Leibler (KL) regularization is a particularly successful approach for doing so [3, 27, 29, 31, 44, 51]. In KL-regularized reinforcement learning, the standard reinforcement learning objective is augmented by a Kullback-Leibler divergence term that penalizes dissimilarity between the online policy and a behavioral reference policy derived from expert demonstrations. The resulting regularized objective pulls the agent’s online policy towards the behavioral reference policy while also allowing it to improve upon the behavioral reference policy by exploring and interacting with the environment. Recent advances that leverage explicit or implicit KL-regularized objectives, such as BRAC [51], ABM [44], and AWAC [27], have shown that KL-regularized reinforcement learning from expert demonstrations is able to significantly improve the sample efficiency of online training and reliably solve challenging environments previously unsolved by standard deep reinforcement learning algorithms. ∗Equal contribution. † Corresponding author: tim.rudner@cs.ox.ac.uk. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Contributions. In this paper, we show that despite some empirical success, KL-regularized reinforcement learning from expert demonstrations can suffer from previously unrecognized pathologies that lead to instability and sub-optimality in online learning. To summarize, our core contributions are as follows: • We illustrate empirically that commonly used classes of parametric behavioral policies experience a collapse in predictive variance about states away from the expert demonstrations.
• We demonstrate theoretically and empirically that KL-regularized reinforcement learning algorithms can suffer from pathological training dynamics in online learning when regularized against behavioral policies that exhibit such a collapse in predictive variance. • We show that the pathology can be remedied by non-parametric behavioral policies, whose predictive variances are well-calibrated and guaranteed not to collapse about previously unseen states, and that fixing the pathology results in online policies that significantly outperform state-of-the-art approaches on a range of challenging locomotion and dexterous hand manipulation tasks. The left panel of Figure 1 shows an example of the collapse in predictive variance away from the expert trajectories in parametric behavioral policies. In contrast, the right panel of Figure 1 shows the predictive variance of a non-parametric behavioral policy, which—unlike in the case of the parametric policy—increases off the expert trajectories. By avoiding the pathology, we obtain a stable and reliable approach to sample-efficient reinforcement learning, applicable to a wide range of reinforcement learning algorithms that leverage KL-regularized objectives.2 2 Background We consider the standard reinforcement learning setting where an agent interacts with a discounted Markov Decision Process (MDP) [46] given by a 5-tuple (S, A, p, r, γ), where S and A are the state and action spaces, p(· | st, at) are the transition dynamics, r(st, at) is the reward function, and γ is a discount factor. ρπ(τt) denotes the state–action trajectory distribution from time t induced by a policy π(· | st). The discounted return from time step t is given by R(τt) = ∑_{k=t}^{∞} γ^k r(sk, ak) for t ∈ N0. The standard reinforcement learning objective to be maximized is the expected discounted return Jπ(τ0) = E_{ρπ(τ0)}[R(τ0)] under the policy trajectory distribution. 2.1 Improving and Accelerating Online Training via Behavioral Cloning We consider settings where we have a set of expert demonstrations without reward, D0 = {(sn, an)}_{n=1}^{N} = {S̄, Ā}, which we would like to use to speed up and improve online learning [5, 42]. 2Code and visualizations of our results can be found at https://sites.google.com/view/nppac. A standard approach for turning expert trajectories into a policy is behavioral cloning [1, 4], which involves learning a mapping from states in the expert demonstrations to their corresponding actions, that is, π0 : S → A. As such, behavioral cloning does not assume or require access to a reward function and only involves learning a mapping from states to actions in a supervised fashion. Since expert demonstrations are costly to obtain and often only available in small numbers, behavioral cloning alone is typically insufficient for agents to learn good policies in complex environments and has to be complemented by a method that enables the learner to build on the cloned behavior by interacting with the environment. A particularly successful and popular class of algorithms used for incorporating behavioral policies into online training is KL-regularized reinforcement learning [10, 37, 43, 48]. 2.2 KL-Regularized Objectives in Reinforcement Learning KL-regularized reinforcement learning modifies the standard reinforcement learning objective by augmenting the return with a negative KL divergence term from the learned policy π to a reference policy π0, given a temperature parameter α.
The resulting discounted return from time step t ∈ N0 is then given by R̃(τt) = ∑_{k=t}^{∞} γ^k [ r(sk, ak) − α DKL(π(· | sk) ‖ π0(· | sk)) ] (1) and the reinforcement learning objective becomes J̃π(τ0) = E_{ρπ(τ0)}[R̃(τ0)]. When the reference policy π0 is given by a uniform distribution, we recover the entropy-regularized reinforcement learning objective used in Soft Actor–Critic (SAC) [13] up to an additive constant. Under a uniform reference policy π0, the resulting objective encourages exploration, while also choosing high-reward actions. In contrast, when π0 is non-uniform, the agent is discouraged from exploring areas of the state space S where the variance of π0(· | s) is low (i.e., more certain) and encouraged to explore areas of the state space where the variance of π0(· | s) is high. The KL-regularized reinforcement learning objective can be optimized via policy–gradient and actor–critic algorithms. 2.3 KL-Regularized Actor–Critic An optimal policy π that maximizes the expected KL-augmented discounted return J̃π can be learned by directly optimizing the policy gradient ∇πJ̃π. However, this policy gradient estimator exhibits high variance, which can lead to unstable learning. Actor–critic algorithms [7, 17, 32, 38] attempt to reduce this variance by making use of the state value function V^π(st) = E_{ρπ(τt)}[R̃(τt) | st] or the state–action value function Q^π(st, at) = E_{ρπ(τt)}[R̃(τt) | st, at] to stabilize training. Given a reference policy π0(at | st), the state value function can be shown to satisfy the modified Bellman equation V^π(st) ≐ E_{at∼π(·|st)}[Q^π(st, at)] − α DKL(π(· | st) ‖ π0(· | st)) with a recursively defined Q-function Q^π(st, at) ≐ r(st, at) + γ E_{st+1∼p(·|st,at)}[V^π(st+1)]. Instead of directly optimizing the objective function J̃π via the policy gradient, actor–critic methods alternate between policy evaluation and policy improvement [7, 13]: Policy Evaluation. During the policy evaluation step, Q^π_θ(s, a), parameterized by parameters θ, is trained by minimizing the Bellman residual JQ(θ) ≐ E_{(st,at)∼D}[ (Qθ(st, at) − (r(st, at) + γ E_{st+1∼p(·|st,at)}[Vθ̄(st+1)]))² ], (2) where D is a replay buffer and θ̄ is a stabilizing moving average of parameters. Policy Improvement. In the policy improvement step, the policy πφ, parameterized by parameters φ, is updated towards the exponential of the KL-augmented Q-function, Jπ(φ) ≐ E_{st∼D}[α DKL(πφ(· | st) ‖ π0(· | st))] − E_{st∼D}[ E_{at∼πφ(·|st)}[Qθ(st, at)] ], (3) with states sampled from a replay buffer D and actions sampled from the parameterized online policy πφ. The following sections will focus on the policy improvement objective and how certain types of reference policies can lead to pathologies when optimizing Jπ(φ) with respect to φ. 3 Identifying the Pathology In this section, we investigate the effect of KL-regularization on the training dynamics. To do so, we first consider the properties of the KL divergence to identify a potential failure mode for KL-regularized reinforcement learning. Next, we consider parametric Gaussian behavioral reference policies commonly used in practice for continuous control tasks [13, 51] and show that for Gaussian behavioral reference policies with small predictive variance, the policy improvement objective suffers from exploding gradients with respect to the policy parameters φ. We confirm that this failure occurs empirically and demonstrate that it results in slow, unstable, and suboptimal online learning.
Lastly, we show that various regularization techniques used for estimating behavioral policies are unable to prevent this failure and also lead to suboptimal online policies. 3.1 When Are KL-Regularized Reinforcement Learning Objectives Meaningful? We start by considering the properties of the KL divergence and discuss how these properties can lead to potential failure modes in KL-regularized objectives. A well-known property of KL-regularized objectives in the variational inference literature is the occurrence of singularities when the support of one distribution is not contained in the support of the other. To illustrate this problem, we consider the case of Gaussian behavioral and online policies commonly used in practice. Mathematically, the KL divergence between two full Gaussian distributions is always finite and well-defined. Hence, we might hope KL-regularized reinforcement learning with Gaussian behavioral and online policies to be unaffected by the failure mode described above. However, the support of a Gaussian online policy πφ(· | st) will not be contained in the support of a behavioral reference policy π0(· | st) as the predictive variance σ0²(st) tends to zero, and hence DKL(πφ(· | st) ‖ π0(· | st)) → ∞ as σ0²(st) → 0. In other words, as the variance of a behavioral reference policy tends to zero and the behavioral distribution becomes degenerate, the KL divergence blows up to infinity [25]. While, in practice, a Gaussian behavioral policy would not operate in the limit of zero variance, the functional form of the KL divergence between (univariate) Gaussians, DKL(πφ(· | st) ‖ π0(· | st)) ∝ log(σ0(st)/σφ(st)) + (σφ²(st) + (µφ(st) − µ0(st))²) / (2σ0²(st)), implies a continuous, quadratic increase in the magnitude of the divergence as σ0(st) decreases, further exacerbated by a large difference in predictive means, |µφ(st) − µ0(st)|. As a result, for Gaussian behavioral reference policies π0(· | st) that assign very low probability to sets of points in sample space far away from the distribution’s mean µ0(st), computing the KL divergence can result in divergence values so large as to cause numerical instabilities and arithmetic overflow. Hence, even for a suitably chosen behavioral reference policy class, vanishingly small behavioral reference policy predictive variances can cause the KL divergence to ‘blow up’ and cause numerical issues at evaluation points far away from states in the expert demonstrations. One way to address this failure mode may be to lower-bound the output of the variance network (e.g., by adding a small constant bias). However, placing a floor on the predictive variance of the behavioral reference policy is not sufficient to encourage effective learning. While it would prevent the KL divergence from blowing up, it would also lead to poor gradient signals, as well-calibrated predictive variance estimates that increase on states far away from the expert trajectories are necessary to keep the KL penalty from pulling the predictive mean of the online policy towards poor behavioral reference policy predictive means on states off the expert trajectories. Another possible solution could be to use heavy-tailed behavioral reference policy distributions, for example, Laplace distributions, to avoid pathological training dynamics. However, in Appendix B.3 we show that Laplace behavioral reference policies also suffer from pathological training dynamics, albeit less severely.
In the following sections, we explain how an explosion in D_KL(π_φ(· | s_t) ‖ π_0(· | s_t)) caused by small σ_0^2(s_t) affects the gradients of J_π(φ) in KL-regularized RL and discuss how and why σ_0^2(s_t) may tend to zero in practice.

3.2 Exploding Gradients in KL-Regularized Reinforcement Learning Objectives

To understand how small predictive variances in behavioral reference policies can affect—and possibly destabilize—online training in KL-regularized RL, we consider the contribution of the behavioral reference policy's variance to the gradient of the policy objective in Equation (3). Compared to entropy-regularized actor–critic methods (SAC, Haarnoja et al. [13]), which implicitly regularize against a uniform policy, the gradient estimator ∇̂_φ J_π(φ) in KL-regularized RL gains an extra scaling term ∇_{a_t} log π_0(a_t | s_t), the gradient of the prior log-density evaluated at actions a_t ∼ π_φ(· | s_t):

Proposition 1 (Exploding Gradients in KL-Regularized RL). Let π_0(· | s_t) be a Gaussian behavioral reference policy with mean μ_0(s_t) and variance σ_0^2(s_t), and let π_φ(· | s_t) be an online policy with reparameterization a_t = f_φ(ε_t; s_t) and random vector ε_t. The gradient of the policy loss with respect to the online policy's parameters φ is then given by

∇̂_φ J_π(φ) = ( α ∇_{a_t} log π_φ(a_t | s_t) − α ∇_{a_t} log π_0(a_t | s_t) − ∇_{a_t} Q(s_t, a_t) ) ∇_φ f_φ(ε_t; s_t) + α ∇_φ log π_φ(a_t | s_t)   (4)

with ∇_{a_t} log π_0(a_t | s_t) = −(a_t − μ_0(s_t)) / σ_0^2(s_t). For fixed |a_t − μ_0(s_t)|, ∇_{a_t} log π_0(a_t | s_t) grows as O(σ_0^{-2}(s_t)); thus, |∇̂_φ J_π(φ)| → ∞ as σ_0^2(s_t) → 0 whenever ∇_φ f_φ(ε_t; s_t) ≠ 0.

Proof. See Appendix A.1.

This result formalizes the intuition presented in Section 3.1 that a behavioral reference policy with a sufficiently small predictive variance may cause KL-regularized reinforcement learning to suffer from pathological training dynamics in gradient-based optimization. The smaller the behavioral reference policy's predictive variance, the more sensitive the policy objective's gradients will be to differences in the means of the online and behavioral reference policies. As a result, for behavioral reference policies with small predictive variance, the KL divergence will heavily penalize online policies whose predictive means diverge from the predictive means of the behavioral policy—even in regions of the state space away from the expert trajectory where the behavioral policy's mean prediction is poor.

3.3 Predictive Uncertainty Collapse Under Parametric Policies

The most commonly used method for estimating behavioral policies is maximum likelihood estimation (MLE) [44, 51], where we seek π_0 ≐ π_ψ⋆ with

ψ⋆ ≐ argmax_ψ E_{(s,a)∼D_0}[log π_ψ(a | s)]

for a parametric behavioral policy π_ψ. In practice, π_ψ is often assumed to be Gaussian, π_ψ(· | s) = N(μ_ψ(s), σ_ψ^2(s)), with μ_ψ(s) and σ_ψ^2(s) parameterized by a neural network. While maximizing the likelihood of the expert trajectories under the behavioral policy is a sensible choice for behavioral cloning, the limited capacity of the neural network parameterization can produce unwanted behaviors in the resulting policy. The maximum likelihood objective ensures that the behavioral policy's predictive mean reflects the expert's actions and that the predictive variance reflects the (aleatoric) uncertainty inherent in the expert trajectories. However, the maximum likelihood objective encourages parametric policies to use their model capacity toward fitting the expert demonstrations and reflecting the aleatoric uncertainty in the data.
As a result, for states off the expert trajectories, the policy can become degenerate and collapse to point predictions instead of providing meaningful predictive variance estimates that reflect that the behavioral policy ought to be highly uncertain about its predictions in previously unseen regions of the state space. Similar behaviors are well-known in parametric probabilistic models and well-documented in the approximate Bayesian inference literature [33, 39].

[Figure: validation predictive variance σ_ψ^2(s) and validation log-likelihood log π_ψ(Ā | S̄) over training epochs; the behavioral policy's predictive variance consistently decreases during training.]

As shown in Proposition 1, such a collapse in predictive variance can result in pathological training dynamics in KL-regularized online learning—steering the online policy towards suboptimal trajectories in regions of the state space far away from the expert demonstrations and deteriorating performance.

Effect of regularization on uncertainty collapse. To prevent a collapse in the behavioral policy's predictive variance, prior work proposed adding entropy or Tikhonov regularization to the MLE objective [51]. However, doing so does not succeed in preventing a collapse in predictive variance off the expert demonstration trajectories, as we show in Appendix A.3. Deep ensembles [20], whose predictive mean and variance are computed from the predictive means and variances of multiple Gaussian neural networks, are a widely used method for uncertainty quantification in regression settings. However, model ensembling can be costly and unreliable, as it requires training multiple neural networks from scratch and does not guarantee well-calibrated uncertainty estimates [39, 49]. We provide visualizations in Appendix B.5 which show that ensembling multiple neural network policies does not fully prevent a collapse in predictive variance.

3.4 Empirical Confirmation of Uncertainty Collapse

To confirm Proposition 1 empirically and assess the effect of the collapse in predictive variance on the performance of KL-regularized RL, we perform an ablation study where we fix the predictive mean function of a behavioral policy to a mean function that attains 60% of the optimal performance and vary the magnitude of the policy's predictive variance. Specifically, we set the behavioral policy's predictive variance to different constant values in the set {1×10^−3, 5×10^−3, 1×10^−2} (following a similar implementation in Nair et al. [27]).³

The results of this experiment are shown in Figure 3, which shows the average returns, the KL divergence, and the average absolute gradients of the policy loss over training. The plots confirm that as the predictive variance of the offline behavioral policy tends to zero, the KL terms and average policy gradient magnitudes explode as implied by Proposition 1, leading to unstable training and a collapse or dampening in average returns. In other words, even for behavioral policies with accurate predictive means, smaller predictive variances slow down or even entirely prevent learning good online policies. This observation confirms that the pathology identified in Proposition 1 occurs in practice and that it can have a significant impact on KL-regularized RL from expert demonstrations, calling into question the usefulness of KL regularization as a means for accelerating and improving online training. In Appendix B.1, we show that an analogous relationship exists for the gradients of the Q-function loss.
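For concreteness, the following is a minimal sketch of the MLE behavioral-cloning step described in Section 3.3: a heteroscedastic Gaussian policy fit to expert state–action pairs by minimizing the negative log-likelihood. This is our own illustration under assumed architecture and names (GaussianBCPolicy, mle_bc_loss), not the authors' code; note that nothing in this objective constrains the predictive variance on states outside the expert data, which is where the collapse discussed above occurs.

import torch
import torch.nn as nn

class GaussianBCPolicy(nn.Module):
    """Parametric behavioral policy pi_psi(a | s) = N(mu_psi(s), sigma_psi(s)^2)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, action_dim)
        self.log_std_head = nn.Linear(hidden, action_dim)

    def forward(self, states):
        h = self.body(states)
        # Clamping the log-std is a common stabilization trick (an assumption here, not from the paper).
        return self.mean_head(h), self.log_std_head(h).clamp(-10.0, 2.0)

def mle_bc_loss(policy, expert_states, expert_actions):
    mean, log_std = policy(expert_states)
    dist = torch.distributions.Normal(mean, log_std.exp())
    # Negative log-likelihood of expert actions under the parametric policy (summed over action dims).
    return -dist.log_prob(expert_actions).sum(dim=-1).mean()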
³We attempted to use smaller values, but the gradients grew too large and caused arithmetic overflow.

4 Fixing the Pathology

In order to address the collapse in predictive uncertainty for behavioral policies parameterized by a neural network trained via MLE, we specify a non-parametric behavioral policy whose predictive variance is guaranteed not to collapse about previously unseen states. Noting that KL-regularized RL with a behavioral policy can be viewed as approximate Bayesian inference with an empirical prior policy [13, 21, 40], we propose Non-Parametric Prior Actor–Critic (N-PPAC), an off-policy temporal difference algorithm for improved, accelerated, and stable online learning with behavioral policies.

4.1 Non-Parametric Gaussian Process Behavioral Policies

Gaussian processes (GPs) [36] are models over functions defined by a mean function m(·) and a covariance function k(·, ·). When defined in terms of a non-parametric covariance function, that is, a covariance function constructed from infinitely many basis functions, we obtain a non-degenerate GP, which has sufficient capacity to prevent a collapse in predictive uncertainty away from the training data. Unlike parametric models, whose capacity is limited by their parameterization, a non-parametric model's capacity increases with the amount of training data. Considering a non-parametric GP behavioral policy π_0(· | s), with

A | s ∼ π_0(· | s) = GP(m(s), k(s, s′)),   (5)

we can obtain a non-degenerate posterior distribution over actions conditioned on the offline data D_0 = {S̄, Ā}, with actions sampled according to

A | s, D_0 ∼ π_0(· | s, D_0) = GP(μ_0(s), Σ_0(s, s′)),   (6)

with posterior mean μ_0(s) = m(s) + k(s, S̄) k(S̄, S̄)^{-1} (Ā − m(S̄)) and posterior covariance Σ_0(s, s′) = k(s, s′) − k(s, S̄) k(S̄, S̄)^{-1} k(S̄, s′). To obtain this posterior distribution, we perform exact Bayesian inference, which naively scales as O(N^3) in the number of training points N, but Wang et al. [50] show that exact inference in GP regression can be scaled to N > 1,000,000. Since expert demonstrations usually contain fewer than 100k datapoints, non-parametric GP behavioral policies are applicable to a wide array of real-world tasks. For an empirical evaluation of the time complexity of using a GP prior, see Section 5.5.

Figure 1 confirms that the non-parametric GP's predictive variance is well-calibrated: it is small in magnitude in regions of the state space near the expert trajectories and large in magnitude in other regions of the state space. While actor–critic algorithms like SAC implicitly use a uniform prior to explore the state space, using a behavioral policy with a well-calibrated predictive variance has the benefit that in regions of the state space close to the expert demonstrations the online policy learns to match the expert, while elsewhere the predictive variance increases and encourages exploration.

Algorithmic details. In our experiments, we use a KL-regularized objective with a standard actor–critic implementation and Double DQN [14]. Pseudocode is provided in Appendix C.1.

5 Empirical Evaluation

We carry out a comparative empirical evaluation of our proposed approach vis-à-vis related methods that integrate offline data into online training. We provide a detailed description of the algorithms we compare against in Appendix A.4. We perform experiments on the MuJoCo benchmark suite and the substantially more challenging dexterous hand manipulation suite with sparse rewards.
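As a rough sketch of Equation (6), the per-state marginals of the GP posterior over actions can be computed as follows, treating each action dimension as an independent GP over states with a zero prior mean and a squared-exponential kernel. This is our own illustration under those assumptions (the kernel choice, jitter, and names are placeholders; the authors' implementation may differ and use the scaling techniques cited above).

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance k(x, x') = variance * exp(-||x - x'||^2 / (2 l^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_marginals(test_states, expert_states, expert_actions, jitter=1e-4):
    """Posterior mean and per-state marginal variance of the GP behavioral policy."""
    K = rbf_kernel(expert_states, expert_states) + jitter * np.eye(len(expert_states))  # jitter is an assumption
    K_star = rbf_kernel(test_states, expert_states)
    K_inv = np.linalg.inv(K)                      # naive O(N^3); fine for small demonstration sets
    mean = K_star @ K_inv @ expert_actions        # mu_0(s), one column per action dimension
    var = (rbf_kernel(test_states, test_states).diagonal()
           - np.einsum('ij,jk,ik->i', K_star, K_inv, K_star))
    return mean, var                              # the variance grows away from the expert states

These per-state means and variances define the Gaussian reference marginals π_0(· | s, D_0) that enter the KL term of the policy improvement objective in Equation (3). A minimal, hedged sketch of that actor update for diagonal Gaussian online policies (again illustrative; policy_net and critic are hypothetical modules) could look like:

import torch
from torch.distributions import Normal, kl_divergence

def kl_regularized_actor_loss(policy_net, critic, states, ref_mean, ref_std, alpha):
    mean, log_std = policy_net(states)                 # hypothetical online policy network
    online = Normal(mean, log_std.exp())
    reference = Normal(ref_mean, ref_std)              # GP posterior marginals from above
    kl = kl_divergence(online, reference).sum(dim=-1)  # analytic KL, summed over action dimensions
    actions = online.rsample()                         # reparameterized sample for the Q-term
    q_values = critic(states, actions).squeeze(-1)     # hypothetical critic network
    return (alpha * kl - q_values).mean()              # Equation (3): E[ alpha * KL - Q ]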
We show that KL-regularized RL with a non-parametric behavioral reference policy can rapidly learn to solve difficult high-dimensional continuous control problems given only a small set of expert demonstrations and (often significantly) outperforms state-of-the-art methods, including ones that use offline reward information—which our approach does not require. Furthermore, we demonstrate that the GP behavioral policy's predictive variance is crucial for KL-regularized objectives to learn good online policies from expert demonstrations. Finally, we perform ablation studies that illustrate that non-parametric GP behavioral reference policies also outperform parametric behavioral reference policies with improved uncertainty quantification, such as deep ensembles and Bayesian neural networks (BNNs) with Monte Carlo dropout, and that the difference between non-parametric and parametric models is exacerbated as fewer expert demonstrations become available. We use the expert data from Nair et al. [27], every experiment uses six random seeds, and we use a fixed KL temperature for each environment class. For further implementation details, see Appendix C.2.

5.1 Environments

MuJoCo locomotion tasks. We evaluate N-PPAC on three representative tasks: "Ant-v2", "HalfCheetah-v2", and "Walker2d-v2". For each task, we use 15 demonstration trajectories collected by a pre-trained expert, each containing 1,000 steps. The behavioral policy is specified as the posterior distribution of a GP with a squared exponential kernel, which is well-suited for modeling smooth functions.

Dexterous hand manipulation tasks. Real-world robot learning is a setting where human demonstration data is readily available, and many deep RL approaches fail to learn efficiently. We study this setting in a suite of challenging dexterous manipulation tasks [35] using a 28-DoF five-fingered simulated ADROIT hand. The tasks simulate challenges common to real-world settings with high-dimensional action spaces, complex physics, and a large number of intermittent contact forces. We consider two tasks in particular: in-hand rotation of a pen to match a target and opening a door by unlatching and pulling a handle. We use binary rewards for task completion, which is significantly more challenging than the original setting considered in Rajeswaran et al. [35]. 25 expert demonstrations were provided for each task, each consisting of 200 environment steps; the demonstrations are not fully optimal but do successfully solve the task. The behavioral policy is specified as the posterior distribution of a GP with a Matérn kernel, which is more suitable for modeling non-smooth data.

5.2 Results

On MuJoCo environments, KL-regularized RL with a non-parametric behavioral policy consistently outperforms all related methods across all three tasks, successfully accelerating learning from offline data, as shown in Figure 4. Most notably, it outperforms methods such as AWAC [27]—the previous state-of-the-art—which attempts to eschew the problem of learning behavioral policies but instead uses an implicit constraint. Our approach, N-PPAC, exhibits an increase in stability and higher returns compared to comparable methods such as ABM and BRAC, which explicitly regularize the online policy against a parametric behavioral policy and plateau at suboptimal performance levels because they are forced to copy poor actions from the behavioral policy away from the expert data. In contrast, using a non-parametric behavioral policy allows us to avoid such undesirable behavior.
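The kernel choices described in Section 5.1 above (a squared-exponential kernel for the smooth locomotion tasks, a Matérn kernel for the contact-rich manipulation tasks) can be instantiated, for example, with scikit-learn's GP regression tools. This is only an illustrative sketch, not the authors' implementation, and the hyperparameter values shown are placeholders.

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

# Squared-exponential (RBF) kernel: suited to smooth expert behavior (MuJoCo locomotion).
locomotion_prior = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)

# Matern kernel (nu=1.5): rougher sample paths, suited to non-smooth manipulation data.
manipulation_prior = GaussianProcessRegressor(kernel=Matern(length_scale=1.0, nu=1.5), alpha=1e-4)

# Fitting on expert (state, action) pairs yields per-state predictive means and standard
# deviations that can serve as the behavioral reference policy's Gaussian marginals, e.g.:
# mean, std = locomotion_prior.fit(expert_states, expert_actions).predict(states, return_std=True)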
On dexterous hand manipulation environments, KL-regularized RL with a non-parametric behavioral policy performs on par with or outperforms all related methods on both tasks, as shown in Figure 5. Most notably, on the door opening task, it achieves a stable success rate of 90% within only 100,000 environment interactions. For comparison, AWAC requires 4× as many environment interactions to achieve the same performance and is significantly less stable, while most other methods fail to learn any meaningful behaviors.

Alternative divergence metrics underperform KL-regularization. KL-regularized RL with a non-parametric behavioral policy consistently outperforms methods that use alternative divergence metrics, as shown in the bottom plots of Figures 4 and 5.

5.3 Can the Pathology Be Fixed by Improved Parametric Uncertainty Quantification?

We compare the effect of different behavioral reference policy uncertainty quantification methods on online policy success rates. We consider the challenging "door-binary-v0" environment for this ablation study.

Parametric uncertainty quantification is insufficient. Figure 5 shows that parametric variance functions result in online policies that only achieve success rates of up to 20% and eventually deteriorate, whereas the non-parametric variance yields an online policy that achieves a success rate of nearly 100%. This finding shows that commonly used uncertainty quantification methods, such as deep ensembles or BNNs with Monte Carlo dropout, do not generate sufficiently well-calibrated uncertainty estimates to remedy the pathology, and better methods may be needed [9, 39, 41].

Lower-bounding the predictive variance does not remedy the pathology. The predictive variance of all MLE-based and ensemble behavioral reference policies in all experiments is bounded away from zero at a minimum value of ≈ 10^−2. Hence, setting a floor on the variance is not sufficient to prevent pathological training dynamics. This result further demonstrates the importance of accurate predictive variance estimation in allowing the online policy to match expert actions in regions of the state space with low behavioral policy predictive variance and explore elsewhere.

5.4 Can a Single Expert Demonstration Be Sufficient to Accelerate Online Training?

To assess the usefulness of non-parametric behavioral reference policies in settings where only few expert demonstrations are available, we investigate whether the difference in performance between online policies trained with non-parametric and parametric behavioral reference policies, respectively, is exacerbated as fewer expert demonstrations become available. To answer this question, we consider the "HalfCheetah-v2" environment and compare online policies trained with different behavioral reference policies—non-parametric GPs, deep ensembles, and BNNs with Monte Carlo dropout—estimated either from 15 expert demonstrations (i.e., 15 state–action trajectories, containing 15,000 samples) or from a single expert demonstration (i.e., a single state–action trajectory, containing 1,000 samples).

A single expert demonstration is sufficient for non-parametric behavioral reference policies. Figure 7 shows the returns for online policies trained with behavioral reference policies estimated from the full dataset (top plot) and from only a single expert state–action trajectory (bottom plot). On the full dataset, we find that all three methods are competitive and improve on the prior state-of-the-art, but that the GP behavioral policy leads to the highest return.
Remarkably, non-parametric GP behavioral policies perform just as well with only a single expert demonstration as with all 15 (i.e., with 1,000 data points instead of 15,000 data points). These results further emphasize the usefulness of non-parametric behavioral policies when accelerating online training with expert demonstrations—even when only very few expert demonstrations are available.

5.5 Are Non-Parametric GP Behavioral Reference Policies Too Computationally Expensive?

Table 1 presents the time complexity of KL-regularized RL under non-parametric GP and parametric neural network behavioral reference policies, as measured by the average time elapsed per epoch on the "door-binary-v0" and "HalfCheetah-v2" environments. One epoch of online training on "door-binary-v0" and "HalfCheetah-v2" requires computing the KL divergence over 1,000 mini-batches of size 256 and 1,024, respectively. The time complexity of evaluating the log-density of a GP behavioral reference policy—needed for computing gradients of the KL divergence during online training—scales quadratically in the number of training data points and linearly in the dimensionality of the state and action space, respectively. As can be seen in Table 1, non-parametric GP behavioral reference policies only lead to a modest increase in the time needed to complete one epoch of training while resulting in significantly improved performance, as shown in Figures 4 and 5.

6 Conclusion

We identified a previously unrecognized pathology in KL-regularized RL from expert demonstrations and showed that this pathology can significantly impede and even entirely prevent online learning. To remedy the pathology, we proposed the use of non-parametric behavioral reference policies, which we showed can significantly accelerate and improve online learning and yield online policies that (often significantly) outperform current state-of-the-art methods on challenging continuous control tasks. We hope that this work will encourage further research into better model classes for deep reinforcement learning algorithms, including and especially for reinforcement learning from image inputs.

Acknowledgments and Disclosure of Funding

We thank Ashvin Nair for sharing his code and results, as well as for providing helpful insights about the dexterous hand manipulation suite. We also thank Clare Lyle, Charline Le Lan, and Angelos Filos for detailed feedback on an early draft of this paper, Avi Singh for early discussions about behavioral cloning in entropy-regularized RL, and Tim Pearce for a useful discussion on the role of good models in RL. TGJR and CL are funded by the Engineering and Physical Sciences Research Council (EPSRC). TGJR is also funded by the Rhodes Trust and by a Qualcomm Innovation Fellowship. We gratefully acknowledge donations of computing resources by the Alan Turing Institute.
1. What is the focus of the paper regarding KL-regularized RL?
2. What are the strengths of the proposed approach, particularly in terms of non-parametric policies?
3. Are there any concerns or limitations regarding the comparison with other methods, such as parametric policies regularized with MC-dropout?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions regarding the computational trade-offs of using non-parametric policies?
Summary Of The Paper Review
Summary Of The Paper
This paper presents a previously unrecognized pathology in KL-regularized RL with expert demonstrations. Specifically, the commonly used parametric behavioral policies suffer from a collapse of predictive variance at states far from the demonstrations. This collapse of variance hinders the algorithm's ability to learn effectively. The authors then propose to use non-parametric behavioral policies and demonstrate their effectiveness in several control tasks.

Review
Estimating predictive variance/uncertainty is an important topic that influences a broad range of ML research. Understanding and estimating the predictive variance in RL is particularly challenging due to the instability of deep RL. This work provides an insightful analysis of KL-regularized RL algorithms and of how the behavioral policy class influences the online learning process. The paper is well written and clear. The claims in the paper are supported with empirical experimental results, and the designed experiments successfully highlight the problem of collapsed predictive variance at states not in the demonstrations in KL-regularized RL. This observation is novel and inspires the authors to propose a class of new algorithms that use non-parametric policies, which show comparable or better performance than prior methods in several high-dimensional control tasks. It would be nice if the authors could also comment on how the non-parametric policy class would compare with parametric policies regularized with MC-dropout [1], since MC-dropout has also been considered as a way to do Bayesian approximation on the network weights, and is often compared with ensembles in related literature for estimating uncertainty of neural networks. In addition, the authors only commented on how the problem setting is friendly for non-parametric methods; it would be interesting to know what the actual computational trade-offs are, if any, for using non-parametric policies.

[1] Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (pp. 1050-1059). PMLR.
NIPS
Title On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations

Abstract KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.

1 Introduction

Reinforcement learning (RL) [15, 24, 46, 47] is a powerful paradigm for learning complex behaviors. Unfortunately, many modern reinforcement learning algorithms require agents to carry out millions of interactions with their environment to learn desirable behaviors, making them of limited use for a wide range of practical applications that cannot be simulated [8, 28]. This limitation has motivated the study of algorithms that can incorporate pre-collected offline data into the training process either fully offline or with online exploration to improve sample efficiency, performance, and reliability [2, 6, 16, 23, 52, 53]. An important and well-motivated subset of these methods consists of approaches for efficiently incorporating expert demonstrations into the learning process [5, 11, 18, 42]. Reinforcement learning with Kullback-Leibler (KL) regularization is a particularly successful approach for doing so [3, 27, 29, 31, 44, 51]. In KL-regularized reinforcement learning, the standard reinforcement learning objective is augmented by a Kullback-Leibler divergence term that penalizes dissimilarity between the online policy and a behavioral reference policy derived from expert demonstrations. The resulting regularized objective pulls the agent's online policy towards the behavioral reference policy while also allowing it to improve upon the behavioral reference policy by exploring and interacting with the environment. Recent advances that leverage explicit or implicit KL-regularized objectives, such as BRAC [51], ABM [44], and AWAC [27], have shown that KL-regularized reinforcement learning from expert demonstrations is able to significantly improve the sample efficiency of online training and reliably solve challenging environments previously unsolved by standard deep reinforcement learning algorithms.

∗Equal contribution. †Corresponding author: tim.rudner@cs.ox.ac.uk.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).

Contributions. In this paper, we show that despite some empirical success, KL-regularized reinforcement learning from expert demonstrations can suffer from previously unrecognized pathologies that lead to instability and sub-optimality in online learning. To summarize, our core contributions are as follows:
• We illustrate empirically that commonly used classes of parametric behavioral policies experience a collapse in predictive variance about states away from the expert demonstrations.
• We demonstrate theoretically and empirically that KL-regularized reinforcement learning algorithms can suffer from pathological training dynamics in online learning when regularized against behavioral policies that exhibit such a collapse in predictive variance.
• We show that the pathology can be remedied by non-parametric behavioral policies, whose predictive variances are well-calibrated and guaranteed not to collapse about previously unseen states, and that fixing the pathology results in online policies that significantly outperform state-of-the-art approaches on a range of challenging locomotion and dexterous hand manipulation tasks.

The left panel of Figure 1 shows an example of the collapse in predictive variance away from the expert trajectories in parametric behavioral policies. In contrast, the right panel of Figure 1 shows the predictive variance of a non-parametric behavioral policy, which—unlike in the case of the parametric policy—increases off the expert trajectories. By avoiding the pathology, we obtain a stable and reliable approach to sample-efficient reinforcement learning, applicable to a wide range of reinforcement learning algorithms that leverage KL-regularized objectives.²

2 Background

We consider the standard reinforcement learning setting where an agent interacts with a discounted Markov Decision Process (MDP) [46] given by a 5-tuple (S, A, p, r, γ), where S and A are the state and action spaces, p(· | s_t, a_t) are the transition dynamics, r(s_t, a_t) is the reward function, and γ is a discount factor. ρ_π(τ_t) denotes the state–action trajectory distribution from time t induced by a policy π(· | s_t). The discounted return from time step t is given by R(τ_t) = Σ_{k=t}^{∞} γ^k r(s_k, a_k) for t ∈ N0. The standard reinforcement learning objective to be maximized is the expected discounted return J_π(τ_0) = E_{ρ_π(τ_0)}[R(τ_0)] under the policy trajectory distribution.

2.1 Improving and Accelerating Online Training via Behavioral Cloning

We consider settings where we have a set of expert demonstrations without reward, D_0 = {(s_n, a_n)}_{n=1}^{N} = {S̄, Ā}, which we would like to use to speed up and improve online learning [5, 42]. A standard approach for turning expert trajectories into a policy is behavioral cloning [1, 4], which involves learning a mapping from states in the expert demonstrations to their corresponding actions, that is, π_0 : S → A. As such, behavioral cloning does not assume or require access to a reward function and only involves learning a mapping from states to actions in a supervised fashion. Since expert demonstrations are costly to obtain and often only available in small number, behavioral cloning alone is typically insufficient for agents to learn good policies in complex environments and has to be complemented by a method that enables the learner to build on the cloned behavior by interacting with the environment. A particularly successful and popular class of algorithms used for incorporating behavioral policies into online training is KL-regularized reinforcement learning [10, 37, 43, 48].

²Code and visualizations of our results can be found at https://sites.google.com/view/nppac.

2.2 KL-Regularized Objectives in Reinforcement Learning

KL-regularized reinforcement learning modifies the standard reinforcement learning objective by augmenting the return with a negative KL divergence term from the learned policy π to a reference policy π_0, given a temperature parameter α.
1. What is the main contribution of the paper regarding KL-regularized reinforcement learning? 2. What are the strengths of the proposed solution, and how does it address the identified problem? 3. What are the limitations of the paper, particularly regarding the scope of the proposed approach? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the experimental comparisons and scalability of the proposed method?
Summary Of The Paper Review
Summary Of The Paper The authors identify a potential failure in KL-regularized reinforcement learning in which small predictive variance of the target policy may lead to exploding gradients, possibly destabilizing gradient-based learning algorithms. The authors discuss an uncertainty collapse of parametric models under maximum likelihood estimation in OOD regions of the expert, resulting in the aforementioned pathology. Finally, they suggest using non-parametric models of the behavior policy (in which uncertainty collapse is not present), showing improved performance on several continuous control benchmarks. Review The authors identify an interesting problem of KL-regularized optimization and propose a solution to it. I find the problem interesting and the authors' solution to be well motivated and reasonable. In addition, the paper is written clearly, and the experiments were applied to reasonable tasks compared to similar / previous literature. The main limitations of the paper are as follows: The model of the paper focuses on KL-regularized reinforcement learning. There are various modifications to this model which may trivially solve the authors' suggested pathology, yet the paper does not discuss them at all. First, KL-regularization is applied to the policies themselves and not the stationary distributions, d^\pi. Algorithms such as GAIL and AIRL can be combined with reward functions to construct regularized algorithms which utilize the expert data through stationary distributions. Would the issues the authors suggest still exist in this case? I would be interested to see a comparison to such algorithms (with the addition of reward). KL regularization is only one way to regularize the algorithm. Other f-divergences can be used which would not have the exploding gradient problem (e.g., TV-distance). Why do the authors focus on KL-regularized policies? Scalability. While I agree that simple tasks would require small amounts of data, I find the limited scalability to be a major drawback of the paper. I hope for a solution that scales well with the amount of data provided, so that harder tasks could also be achieved by the proposed method. Experimental Comparisons. As mentioned in points 1 and 2, the authors should compare their results to algorithms using stationary distributions (e.g., GAIL, AIRL), as well as other forms of f-divergences. I would also be happy to see comparisons to CQL as well as to DICE algorithms.
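As a worked illustration of the exploding-gradient mechanism summarized above, assume, as in the reviewed paper, a parametric behavioral reference policy with Gaussian predictive density; the notation below is illustrative rather than taken from the paper.

```latex
% Gaussian parametric behavioral reference policy: pi_b(a|s) = N(mu(s), sigma^2(s)).
\[
-\log \pi_b(a \mid s)
  = \tfrac{1}{2}\log\!\bigl(2\pi\,\sigma^2(s)\bigr)
  + \frac{\bigl(a - \mu(s)\bigr)^2}{2\,\sigma^2(s)},
\qquad
\frac{\partial}{\partial a}\Bigl[-\log \pi_b(a \mid s)\Bigr]
  = \frac{a - \mu(s)}{\sigma^2(s)} .
\]
% As sigma^2(s) -> 0 in regions where the online policy's action a differs from mu(s),
% this gradient diverges: the exploding-gradient failure mode described in the summary.
```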
NIPS
Title Multi-step learning and underlying structure in statistical models Abstract In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more “suited" to the final learning task. A related principle arises in transfer-learning where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semisupervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a “compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter’s framework, by defining a learning problem generatively as a joint statistical model on X ⇥ Y . This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows to analyze both discrete and non-discrete settings. As tool for our analysis, we define a notion of -uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups. 1 Introduction The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X ⇥ Y , Y = {0, 1} and fixed concept class C consisting of (deterministic) functions f : X ! Y . The aim of learning is to select a hypothesis h : X ! Y , say from C itself (realizable case), that best recovers f . More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h 2 C having arbitrarily low generalization error for all possible distributions D on X . The distribution D governs both the sampling of points z = (x, y) 2 X ⇥ Y by which the algorithm obtains a training sample and also the cumulation of error over all x 2 X which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows to treat non-deterministic functions f : X ! Y and the case Y = [0, 1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X , they assume the same concept class C regardless of the true underlying distribution D. 
In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed. Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)) can be used to obtain more refined upper bounds. The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X , followed by a second step using labeled training data from a joint distribution on X ⇥ Y . This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function : C ⇥ D ! [0, 1], which records the amount of compatibility we believe each concept from C to have with each D 2 D, the class of “all" distributions on X . This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class which will be used for the subsequent (supervised) learning step. If is a good compatible function this sub-class should have lesser complexity than C (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in C⇥D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1. In this paper we study properties of multi-step learning strategies – those which involve multiple training steps – by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. statistical model, P on X ⇥ Y . We prove that underlying structure of a certain kind in P , together with differential availability of labeled vs. unlabeled data, imply a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x 2 X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are: manifold learning and group-invariant feature learning. In these settings we have respectively 1. t = t pX is determined by the marginal pX and pX is concentrated on a submanifold of X , 2. t = t G is determined by a group action on X and p(y|x) is invariant1 under this action. 
Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even2 under the assumption that the true distribution comes from P . This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning. Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a 1 This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g 2 G. 2(distribution-specific lower bounds are by definition weaker than distribution-free ones) “target" learning task, based on experience from previous “source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X ⇥ Y together with an unknown probability distribution Q on E , and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P , but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p 2 E , when given access to a sample from p and a sample from each of m other learning tasks, p 1 , . . . , p m 2 E , chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis. 2 Setup We assume a learning problem is specified by a joint probability distribution p on Z = X ⇥ Y and a particular (regression, classification or decision) function f p : X ! R determined entirely by p(y|x). Moreover, we postulate a statistical model P on X ⇥ Y and assume p 2 P . Despite the simplified notation, f p (x) depends on the conditionals p(y|x) and not the entire joint distribution p. There are three main types of learning problem our framework addresses (reflected in three types of f p ). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v p (x) 2 {0, 1}, f p = v p : X ! {0, 1} (classification); here f p (x) = E p (y|x). When y is noisy, then either f p : X ! {0, 1} (classification/decision) or f p : X ! [0, 1] (regression) and f p (x) = E p (y|x). In all three cases the parameters which define f p , the learning goal, depend only on p(y|x) = E p (y|x). 
We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the “concept class" C := {f p : p 2 P}. To be more precise, for the first type of f p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choice of loss functions, we seek worst-case bounds on learning rates, over all distributions p 2 P . Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (c.f. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used but we leave this for a subsequent paper. We define the following probabilistic version of fat shattering dimension: Definition 1. Given P , a class of probability distributions on X⇥{0, 1}, let 2 (0, 1), ↵ 2 (0, 1/2) and n 2 N = {0, 1, . . . , ...}. Suppose there exist (disjoint) sets S i ⇢ X , i 2 {1, . . . , n} with S = [ i S i , a reference probability measure q on X , and a sub-class P n ⇢ P of cardinality |P n | = 2n with the following properties: 1. q(S i ) /n for every i 2 {1, . . . , n} 2. q lower bounds the marginals of all p 2 P n on S, i.e. R B dp X R B dq for any p-measurable subset B ⇢ S 3. 8 e 2 {0, 1}n, 9 p 2 P n such that E p (y|x) > 1/2 + ↵ for x 2 S i when e i = 1 and E p (y|x) < 1/2 ↵ for x 2 S i when e i = 0 then we say P ↵-shatters S 1 , . . . , S n -uniformly using P n . The -uniform ↵-shattering dimension of P is the largest n such that P ↵-shatters some collection of n subsets of X -uniformly. This provides a measure of complexity of the class P of distributions in the sense that it indicates the variability of the expected y-values for x constrained to lie in the region S with measure at least under corresponding marginals. The reference measure q serves as a lower bound on the marginals and ensures that they “uniformly" assign probabilty at least to S. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions. Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is automatically satisfied if all marginals p X for p 2 P n are uniform over S. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of q) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If f p is binary and y is noise-free then P shatters S 1 , . . . , S n -uniformly if and only if there is a sub-class P n ⇢ P with the specified uniformity of measure, such that each f p (·) = E p (y|·), p 2 P n is constant on each S i and the induced set-functions shatter {S 1 , . . . , S n } in the usual (Vapnik-Chervonenkis) sense. In that case, ↵ may be chosen arbitrarily in (0, 1/2) and we omit mention of it. If f p takes values in [0, 1] or f p is binary and y noisy then -uniform shattering can be expressed in terms of fat-shattering (both at scale ↵). 
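Written out with the uniformity scale denoted γ and the sub-class cardinality 2^n, both inferred from the conditions of Definition 1 and the proof of Theorem 3, the three requirements of Definition 1 read:

```latex
% Definition 1 restated: gamma-uniform alpha-shattering of S_1,...,S_n by P_n with |P_n| = 2^n.
\[
q(S_i) \ge \frac{\gamma}{n} \ \ \text{for all } i,
\qquad
\int_B dp_X \ge \int_B dq \ \ \text{for all } p \in \mathcal{P}_n \text{ and measurable } B \subset S = \cup_i S_i,
\]
\[
\forall\, e \in \{0,1\}^n \ \exists\, p \in \mathcal{P}_n:\quad
\mathbb{E}_p(y \mid x) > \tfrac{1}{2} + \alpha \ \text{on } S_i \text{ when } e_i = 1,
\qquad
\mathbb{E}_p(y \mid x) < \tfrac{1}{2} - \alpha \ \text{on } S_i \text{ when } e_i = 0 .
\]
% The gamma-uniform alpha-shattering dimension of P is the largest n for which such
% sets S_1,...,S_n, reference measure q, and sub-class P_n exist.
```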
We show that the -uniform ↵-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC-dimension. Essentially the added condition on P in terms of allows to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two step learning strategy which makes use of underlying structure in X implied by the model P and we obtain upper bounds for the corresponding risk. 2.1 Underlying structure We assume a representation t : X ! Rk of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f p (x) = g ✓ (t(x)) for some parameter ✓ 2 ⇥. Such a t is generally known in Statistics as a sufficient dimension reduction for f p but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X . As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p we will assume it does so only via X . For example t could depend on p through the marginal p X on X or possible group action on X; it is a manifestation in the data X , possibly over time, of underlying structure in the true joint distribution p 2 P . The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)). In general, the natural factorization ⇡ : P ! P X , p 7! p X determines for each marginal q 2 P X a collection ⇡ 1(q) of possible conditionals, namely those p(y|x) arising from joint p 2 P that have marginal p X = q. More generally any sufficient statistic t induces a similar factorization (c.f. Fisher-Neyman characterization) ⇡ t : P ! P t , p 7! p t , where P t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q 2 P t , this implies a collection ⇡ 1 t (q) of possible conditionals p(y|t) relevant to learning. Knowing q thus reduces the original problem where p(y|x) or p(y|t) can come from any p 2 P to one where it comes from p in a reduced class ⇡ 1(q) or ⇡ 1 t (q) ( P . Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f p defined by p(y|t) in [ t P Y |t with PY |t :={p(y|t) : p 2 P}, and marginals come from P t . The joint model P that we postulate, meanwhile, corresponds to a subset of C ⇥ P t (pairs (f p , q) where f p uses p 2 ⇡ 1 t (q)). The indicator function for this subset is an abstract (binary) version of compatibility function (recall the compatibility function of Balcan-Blum should be a [0, 1]-valued function on C ⇥D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned. This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter ✓ for f p (x) = g ✓ (t(x)) in a reduced hypothesis space. 
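In symbols, the reduction just described is the following (notation as above; this is a compact restatement only):

```latex
% Sufficient representation t and the factorization it induces (Section 2.1).
\[
f_p(x) = g_\theta\bigl(t(x)\bigr), \quad \theta \in \Theta,\ \ t : X \to \mathbb{R}^k,
\qquad
\pi_t : \mathcal{P} \to \mathcal{P}_t,\ \ p \mapsto p_t .
\]
% Knowing the marginal q in P_t restricts the admissible conditionals p(y|t) to those
% arising from p in pi_t^{-1}(q), so the supervised step estimates theta in a reduced
% hypothesis class rather than searching over all of {f_p : p in P}.
```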
Our goal is to understand when and why one should do so. 2.2 Learning rates We wish to quantify the benefits achieved by using such a factorization in terms of the bounds on the expected loss (i.e. risk) for a sample of size m ∈ N drawn iid from any p ∈ P. We assume the learner is provided with a sample z̄ = (z_1, z_2, ..., z_m), with z_i = (x_i, y_i) ∈ X × Y = Z, drawn iid from the distribution p, and uses an algorithm A : Z^m → C = H to select A(z̄) to approximate f_p. Let ℓ(A(z̄), f_p) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f_p) to be the global expectation or L^2-norm of one of those pointwise losses ℓ: L(A(z̄), f_p) := E_x ℓ(A(z̄)(x), f_p(x)) = ∫_X ℓ(A(z̄)(x), f_p(x)) dp_X(x) (1) or L(A(z̄), f_p) := ||ℓ(A(z̄), f_p)||_{L^2(p_X)} = ( ∫_X ℓ(A(z̄)(x), f_p(x))^2 dp_X )^{1/2}. (2) Then the worst case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t_{p_X} is R(m) := inf_A sup_{p ∈ P} E_z̄ L(A(z̄), f_p) = inf_A sup_{q ∈ P_X} sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p), (3) while for the best learning algorithm with oracle knowledge of t_{p_X} it is Q(m) := sup_{q ∈ P_X} inf_A sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p). (4) Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst case expected loss for a given A is the supremum over P of the expected loss. Since f_p(x) is determined by p(y|t_{p_X}(x)), and t_{p_X} is determined by p_X, this is a supremum over q ∈ P_X of a supremum over p(y|t_q(·)) such that p_X = q. Finding the worst case expected error for the best A therefore means taking the infimum of the supremum just described. In the case of Q(m), since the algorithm knows t_q, the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t_q. Clearly R(m) ≥ Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m). 3 Main Result We show that γ-uniform shattering dimension n or more implies a lower bound on the worst case expected error, R(m), when the sample size m ≤ n. In particular - in the setup specified in the previous section - if {g_θ(·) : θ ∈ Θ} has much smaller VC dimension than n, this results in a distinct gap between rates for a learner with oracle access to t_{p_X} and a learner without. Theorem 3. Consider the framework defined in the previous Section with Y = {0, 1}. Assume {g_θ(·) : θ ∈ Θ} has VC dimension d < m and P has γ-uniform α-shattering dimension n ≥ (1+ε)m. Then, for sample size m, Q(m) ≤ 16 √( (d log(m+1) + log 8 + 1) / (2m) ) while R(m) > ε b c γ^{m+1}/8, where b depends both on the type of loss and the presence of noise, while c depends on noise. Assume the standard definition in (1). If f_p are binary (in the noise-free or noisy setting) b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f_p = E(y|x) ∈ [0, 1], b = α for absolute loss and b = α^2 for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + α)^m in the noisy setting. By requiring P to satisfy a stronger notion of γ-uniform α-shattering one can obtain c = 1 even in the noisy case. Note that for sample size m and γ-uniform α-shattering dimension 2m, we have ε = 1, so the lower bound in its simplest form becomes γ^{m+1}/8. This is the bound we will use in the next Section to derive implications of Theorem 3. Remark 4.
We have stated in the Theorem a simple upper bound, sticking to Y = {0, 1} and using VC dimension, in order to focus the presentation on the lower bound which uses the new complexity measure. The upper bound could be improved. It could also be replaced with a corresponding upper bound assuming instead Y = [0, 1] and fat shattering dimension d. Proof. The upper bound on Q(m) holds for an ERM algorithm (by the classic argument, see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for R(m). Moreover, we stick to the simpler definition of -uniform shattering in Definition 1 and omit proof of the final statement of the Theorem, which is slightly more involved. We let n = 2m (i.e. ✏ = 1) and we comment in a footnote on the result for general ✏. Let S 1 , . . . , S 2m be sets which are -uniformly ↵-shattered using the family P 2m ⇢ P and denote their union by S. By assumption S has measure at least under a reference measure q which is dominated by all marginals p X for p 2 P 2m (see Definition 1). We divide our argument into three parts. 1. If we prove a lower bound for the average over P 2m , 8A, 1 22m X p2P2m E z̄ L(A(z̄), f p ) bc m+1/8 (5) it will also be a lower bound for the supremum over P 2m : 8A, sup p2P2m E z̄ L(A(z̄), f p ) bc m+1/8 . and hence for the supremum over P . It therefore suffices to prove (5). 2. Given x 2 S, define v p (x) to be the more likely label for x under the joint distribution p 2 P 2m . This notation extends to the noisy case the definition of v p already given for the noise-free case. The uniform shattering condition implies p(v p (x)|x) > 1/2 + ↵ in the noisy case and p(v p (x)|x) = 1 in the noise-free case. Given x̄ = (x 1 , . . . , x m ) 2 Sm, write z̄ p (x̄) := (z 1 , . . . , z m ) where z j = (x j , v p (x j )). Then E z̄ L(A(z̄), f p ) = Z Z m L(A(z̄), f p )dpm(z̄) Z S m⇥Y m L(A(z̄), f p )dpm(z̄) c Z S m L(A(z̄ p (x̄)), f p )dpm X (x̄) where c is as specified in the Theorem. Note the sets V l := {x̄ 2 Sm ⇢ Xm : the x j occupy exactly l of the S i } for l = 1, . . . ,m define a partition of Sm. Recall that dp X dq on S for all p 2 P 2m so Z S m L(A(z̄ p (x̄)), f p )dpm X (x̄) 1 22m X p2P2m mX l=1 Z x̄2Vl L(A(z̄ p (x̄)), f p ) dqm(x̄) = mX l=1 Z x̄2Vl 0 BBBB@ 1 22m X p2P2m L(A(z̄ p (x̄)), f p ) | {z } I 1 CCCCA dq m(x̄). We claim the integrand, I , is bounded below by b /8 (this computation is performed in part 3, and depends on knowing x̄ 2 V l ). At the same time, S has measure at least under q so mX l=1 Z x̄2Vl dq m(x̄) = Z x̄2Sm dq m(x̄) m which will complete the proof of (5). 3. We now assume a fixed but arbitrary x̄ 2 V l and prove I b /8. To simplify the discussion, we will refer to sets S i which contain a component x j of x̄ as S i with data. We also need notation for the elements of P 2m : for each L ⇢ [2m] denote by p(L) the unique element of P 2m such that v p (L) | Si = 1 if i 2 L, and vp(L) |Si = 0 if i /2 L. Now, let Lx̄ := {i 2 [2m] : x̄ \ Si 6= ;}. These are the indices of sets S i with data. By assumption |L x̄ | = l, and so |Lc x̄ | = 2m l. Every subset L ⇢ [2m] and hence every p 2 P 2m is determined by L \ L x̄ and L \ Lc x̄ . We will collect together all p(L) having the same L \ L x̄ , namely for each D ⇢ L x̄ define P D := {p(L) 2 P 2m : L \ L x̄ = D}. These 2l families partition P 2m and in each P D there are 22m l probability distributions. Most importantly, z̄ p (x̄) is the same for all p 2 P D (because D determines v p on the S i with data). This implies A(z̄ p (x̄)) : X ! 
R is the same function3 of X for all p in a given P D . To simplify notation, since we will be working within a single P D , we write f := A(z̄(x̄)). While f is the hypothesized regression function given data x̄, f p is the true regression function when p is the underlying distribution. For each set S i let v i be 1 if f is above 1/2 on a majority of S i using reference measure q (a q-majority) and 0 otherwise. We now focus on the “unseen" S i where no data lie (i.e., i 2 Lc x̄ ) and use the v i to specify a 1-1 correspondence between elements p 2 P D and subsets K ⇢ Lc x̄ : p 2 P D ! K p := {i 2 Lc x̄ : v p 6= v i }. Take a specific p 2 P D with its associated K p . We have |f(x) f p (x)| > ↵ on the q-majority of the set S i for all i 2 K p . The condition |f(x) f p (x)| > ↵ with f(x) and f p (x) on opposite sides of 1/2 implies a lower bound on `(f(x), f p (x)) for each of the pointwise loss functions ` that we consider (0/1, absolute, square, hinge, logistic). The value of b, however, differs from case to case (see Appendix). For now we have,Z Si `(f(x), f p (x)) dp X (x) Z Si `(f(x), f p (x)) dq(x) b 1 2 Z Si dq(x) b 4m . Summing over all i 2 K p , and letting k = |K p |, we obtain (still for the same p) L(f(x), f p (x)) k b 4m (assuming L is defined by equation (1))4. There are 2m ` k possible K with cardinality k, for any k = 0, . . . , 2m `. Therefore, X p2PD L(f(x), f p (x)) 2m `X k=0 ✓ 2m ` k ◆ k b 4m = 22m `(2m `) 2 b 4m 22m ` b 8 (using 2m ` 2m m = m)5. Since D was an arbitrary subset of L x̄ , this same lower bound holds for each of the 2` families P D and so I = 1 22m X p2P2m L(f(x), f p (x)) b 8 . In the constructions of the next Section it is often the case that one can prove a different level of shattering for different n, namely (n)-uniform shattering of n subsets for various n. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary f p without noise. Corollary 5. Let C 2 (0, 1) and M 2 N. If P (n)-uniformly ↵-shatters n subsets of X and (n)n+1/8 > C for all n < M then no learning algorithm can achieve worst case expected error below ↵C, using a training sample of size less than M/2. If such uniform shattering holds for all n 2 N then the same lower bound applies regardless of sample size. Even when (n)-uniform shattering holds for all n 2 N and lim n!1 (n) = 1, if (n) approaches 1 sufficiently slowly then it is possible (n)n+1 ! 0 and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where lim n!1 (n)n+1 e > 0. In that case, learning is impossible. 4 Applications and conclusion Manifold learning We now describe a simpler, finite dimensional version of the example in Niyogi (2013). Let X = RD, D 2 and Y = {0, 1}. Fix N 2 N and consider a very simple type of 1-dimensional manifold in X , namely the union of N linear segments, connected in circular fashion (see Figure 1). Let P X be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of P X and curves just described. 3Warning: f need not be an element of {fp : p 2 P2n}; we only know f 2 H = {fp : p 2 P}. 4In the L2 version, using p x x, the reader can verify the same lower bound holds. 5In the case where we use (1 + ✏)m instead of 2m, we would have (1 + ✏)m ` ✏m here. On each curve M, choose two distinct points x0, x00. Removing these disconnects M. 
Let one component be labeled 0 and the other 1, then label x0 and x00 oppositely. Let P be the class of joint distributions on X ⇥ Y with conditionals as described and marginals in P X . This is a noise-free setting and f p is binary. Given M (or circular coordinates on M), consider the reduced class P 0 := {p 2 P : support(p X ) = M}. Then H0 := {f p : p 2 P 0} has VC dimension 3. On the other hand, for n < N/4 1 it can be shown that P (n)-uniformly shatters n sets with f p , where (n) = 1 1 n+1 (see Appendix and Figure 2). Since (1 1 n+1 )n+1 ! e > 0 as n!1, it follows from Corollary 5 that the worst case expected error is bounded below by e/8 for any sample of size n N/8 1/2. If many linear pieces are allowed (i.e. N is high) this could be an impractical number of labeled examples. By contrast with this example, (n) in Niyogi’s example cannot be made arbitrarily close to 1. Group-invariant features We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let Y = {0, 1} and let X = J ⇥ I where J = {0, 1, . . . , n 1 1} ⇥ {0, 1, . . . , n 2 1} is an n 1 by n 2 grid (n i 2 N) and I = [0, 1] is a real line segment. One should picture X as a rectangular array of vertical sticks. Above each grid point (j 1 , j 2 ) consider two special points on the stick I , one with i = i + := 1 ✏ and the other with i = i := 0 + ✏. Let P X contain only the uniform distribution on X and assume the noise-free setting. For each ē 2 {+, }n1n2 , on each segment (j 1 , j 2 ) ⇥ I assign, via p ē , the label 1 above the special point (determined by ē) and 0 below the point. This determines a family of n 1 n 2 conditional distributions and thus a family P := {p ē : ē 2 {+, }n1n2} of n 1 n 2 joint distributions. The reader can verify that P has 2✏-uniform shattering dimension n 1 n 2 . Note that when the true distribution is p ē for some ē 2 {+, }n1n2 the labels will be invariant under the action a ē of Z n1 ⇥ Zn2 defined as follows. Given (z 1 , z 2 ) 2 Z n1 ⇥Zn2 and (j1, j2) 2 J , let the group element (z1, z2) move the vertical stick at (j 1 , j 2 ) to the one at (z 1 + j 1 mod n 1 , z 2 + j 2 mod n 2 ) without flipping the stick over, just stretching it as needed so the special point i± determined by ē on the first stick goes to the one on the second stick. The orbit space of the action can be identified with I . Let t : X ⇥ Y ! I be the projection of X ⇥ Y to this orbit space, then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to t, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection s for the action of the subgroup Z n1 ⇥ {0}, the class eP := {p(·|s) : p 2 P} has 2✏-uniform shattering dimension n2. Thus we have a general setting where the over-all complexity requirements for two-step learning are n 1 + n 2 while for single-step learning they are n 1 n 2 . Conclusion We used a notion of uniform shattering to demonstrate both manifold learning and invariant feature learning situations where learning becomes impossible unless the learner has access to very large amounts of labeled data or else uses a two-step semi-supervised approach in which suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming intermediate group-invariant features according to sub-groups of a larger transformation group. 
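For the manifold example, the shattering level and the limit invoked with Corollary 5 can be checked directly; this is a short verification of the quoted constants, not an additional result.

```latex
% Shattering level for the N-segment curve example and the limit used with Corollary 5.
\[
\gamma(n) = 1 - \frac{1}{n+1}
\quad\Longrightarrow\quad
\gamma(n)^{\,n+1} = \Bigl(1 - \frac{1}{n+1}\Bigr)^{n+1} \;\longrightarrow\; e^{-1} > 0
\quad (n \to \infty),
\qquad
\frac{\gamma(n)^{\,n+1}}{8} \;\longrightarrow\; \frac{1}{8e} \approx 0.046 .
\]
% Hence gamma(n)^{n+1}/8 stays bounded away from zero, and Corollary 5 yields a constant
% lower bound (of order 1/(8e)) on the worst-case expected error of a single-step learner
% at the stated sample sizes.
```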
Acknowledgements The author is deeply grateful to Partha Niyogi for the chance to have been his student. This paper is directly inspired by discussions with him which were cut short much too soon. The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.
1. What is the focus of the paper regarding unlabeled data's impact on supervised learning? 2. What are the strengths of the theoretical framework introduced in the paper? 3. Do you have any concerns or questions about the presentation's clarity? 4. How does the reviewer assess the novel notion of complexity (uniform shattering dimension)? 5. Are there any suggestions for improving the exposition of certain concepts, definitions, and theories?
Review
Review This paper provides a theoretical framework for understanding how unlabeled data can improve the performance of supervised learning. It is assumed that the function to be learned (the regression function in the case of regression, or the posterior class probability function in the case of classification) is completely determined by the marginal distribution of X. Furthermore, it is assumed that the function that maps marginal distributions on X to the desired decision function is *known*. A theorem is established that quantifies the improvement in performance compared to the case where this mapping is not known. The theory introduces a novel notion of complexity, the so-called uniform shattering dimension. The theory is illustrated very briefly in a couple of more concrete scenarios. Overall this is a good paper, and I would have no problem with it being accepted into the program. The theoretical results are substantial technically speaking. The work provides additional theoretical understanding to the problem of semi-supervised learning. The primary weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123: this is not the definition of "dominated". 4) For the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann, which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) An example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) In section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) In the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
NIPS
Title Multi-step learning and underlying structure in statistical models Abstract In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more “suited" to the final learning task. A related principle arises in transfer-learning where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semisupervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a “compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter’s framework, by defining a learning problem generatively as a joint statistical model on X ⇥ Y . This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows to analyze both discrete and non-discrete settings. As tool for our analysis, we define a notion of -uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups. 1 Introduction The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X ⇥ Y , Y = {0, 1} and fixed concept class C consisting of (deterministic) functions f : X ! Y . The aim of learning is to select a hypothesis h : X ! Y , say from C itself (realizable case), that best recovers f . More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h 2 C having arbitrarily low generalization error for all possible distributions D on X . The distribution D governs both the sampling of points z = (x, y) 2 X ⇥ Y by which the algorithm obtains a training sample and also the cumulation of error over all x 2 X which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows to treat non-deterministic functions f : X ! Y and the case Y = [0, 1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X , they assume the same concept class C regardless of the true underlying distribution D. 
In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed. Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)) can be used to obtain more refined upper bounds. The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X , followed by a second step using labeled training data from a joint distribution on X ⇥ Y . This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function : C ⇥ D ! [0, 1], which records the amount of compatibility we believe each concept from C to have with each D 2 D, the class of “all" distributions on X . This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class which will be used for the subsequent (supervised) learning step. If is a good compatible function this sub-class should have lesser complexity than C (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in C⇥D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1. In this paper we study properties of multi-step learning strategies – those which involve multiple training steps – by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. statistical model, P on X ⇥ Y . We prove that underlying structure of a certain kind in P , together with differential availability of labeled vs. unlabeled data, imply a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x 2 X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are: manifold learning and group-invariant feature learning. In these settings we have respectively 1. t = t pX is determined by the marginal pX and pX is concentrated on a submanifold of X , 2. t = t G is determined by a group action on X and p(y|x) is invariant1 under this action. 
Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even2 under the assumption that the true distribution comes from P . This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning. Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a 1 This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g 2 G. 2(distribution-specific lower bounds are by definition weaker than distribution-free ones) “target" learning task, based on experience from previous “source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X ⇥ Y together with an unknown probability distribution Q on E , and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P , but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p 2 E , when given access to a sample from p and a sample from each of m other learning tasks, p 1 , . . . , p m 2 E , chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis. 2 Setup We assume a learning problem is specified by a joint probability distribution p on Z = X ⇥ Y and a particular (regression, classification or decision) function f p : X ! R determined entirely by p(y|x). Moreover, we postulate a statistical model P on X ⇥ Y and assume p 2 P . Despite the simplified notation, f p (x) depends on the conditionals p(y|x) and not the entire joint distribution p. There are three main types of learning problem our framework addresses (reflected in three types of f p ). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v p (x) 2 {0, 1}, f p = v p : X ! {0, 1} (classification); here f p (x) = E p (y|x). When y is noisy, then either f p : X ! {0, 1} (classification/decision) or f p : X ! [0, 1] (regression) and f p (x) = E p (y|x). In all three cases the parameters which define f p , the learning goal, depend only on p(y|x) = E p (y|x). 
We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the “concept class" C := {f p : p 2 P}. To be more precise, for the first type of f p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choice of loss functions, we seek worst-case bounds on learning rates, over all distributions p 2 P . Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (c.f. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used but we leave this for a subsequent paper. We define the following probabilistic version of fat shattering dimension: Definition 1. Given P , a class of probability distributions on X⇥{0, 1}, let 2 (0, 1), ↵ 2 (0, 1/2) and n 2 N = {0, 1, . . . , ...}. Suppose there exist (disjoint) sets S i ⇢ X , i 2 {1, . . . , n} with S = [ i S i , a reference probability measure q on X , and a sub-class P n ⇢ P of cardinality |P n | = 2n with the following properties: 1. q(S i ) /n for every i 2 {1, . . . , n} 2. q lower bounds the marginals of all p 2 P n on S, i.e. R B dp X R B dq for any p-measurable subset B ⇢ S 3. 8 e 2 {0, 1}n, 9 p 2 P n such that E p (y|x) > 1/2 + ↵ for x 2 S i when e i = 1 and E p (y|x) < 1/2 ↵ for x 2 S i when e i = 0 then we say P ↵-shatters S 1 , . . . , S n -uniformly using P n . The -uniform ↵-shattering dimension of P is the largest n such that P ↵-shatters some collection of n subsets of X -uniformly. This provides a measure of complexity of the class P of distributions in the sense that it indicates the variability of the expected y-values for x constrained to lie in the region S with measure at least under corresponding marginals. The reference measure q serves as a lower bound on the marginals and ensures that they “uniformly" assign probabilty at least to S. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions. Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is automatically satisfied if all marginals p X for p 2 P n are uniform over S. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of q) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If f p is binary and y is noise-free then P shatters S 1 , . . . , S n -uniformly if and only if there is a sub-class P n ⇢ P with the specified uniformity of measure, such that each f p (·) = E p (y|·), p 2 P n is constant on each S i and the induced set-functions shatter {S 1 , . . . , S n } in the usual (Vapnik-Chervonenkis) sense. In that case, ↵ may be chosen arbitrarily in (0, 1/2) and we omit mention of it. If f p takes values in [0, 1] or f p is binary and y noisy then -uniform shattering can be expressed in terms of fat-shattering (both at scale ↵). 
We show that the -uniform ↵-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC-dimension. Essentially the added condition on P in terms of allows to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two step learning strategy which makes use of underlying structure in X implied by the model P and we obtain upper bounds for the corresponding risk. 2.1 Underlying structure We assume a representation t : X ! Rk of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f p (x) = g ✓ (t(x)) for some parameter ✓ 2 ⇥. Such a t is generally known in Statistics as a sufficient dimension reduction for f p but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X . As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p we will assume it does so only via X . For example t could depend on p through the marginal p X on X or possible group action on X; it is a manifestation in the data X , possibly over time, of underlying structure in the true joint distribution p 2 P . The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)). In general, the natural factorization ⇡ : P ! P X , p 7! p X determines for each marginal q 2 P X a collection ⇡ 1(q) of possible conditionals, namely those p(y|x) arising from joint p 2 P that have marginal p X = q. More generally any sufficient statistic t induces a similar factorization (c.f. Fisher-Neyman characterization) ⇡ t : P ! P t , p 7! p t , where P t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q 2 P t , this implies a collection ⇡ 1 t (q) of possible conditionals p(y|t) relevant to learning. Knowing q thus reduces the original problem where p(y|x) or p(y|t) can come from any p 2 P to one where it comes from p in a reduced class ⇡ 1(q) or ⇡ 1 t (q) ( P . Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f p defined by p(y|t) in [ t P Y |t with PY |t :={p(y|t) : p 2 P}, and marginals come from P t . The joint model P that we postulate, meanwhile, corresponds to a subset of C ⇥ P t (pairs (f p , q) where f p uses p 2 ⇡ 1 t (q)). The indicator function for this subset is an abstract (binary) version of compatibility function (recall the compatibility function of Balcan-Blum should be a [0, 1]-valued function on C ⇥D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned. This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter ✓ for f p (x) = g ✓ (t(x)) in a reduced hypothesis space. 
Our goal is to understand when and why one should do so. 2.2 Learning rates We wish to quantify the benefits achieved by using such a factorization in terms of the bounds on the expected loss (i.e. risk) for a sample of size m 2 N drawn iid from any p 2 P . We assume the learner is provided with a sample z̄ = (z 1 , z 2 · · · z m ), with z i = (x i , y i ) 2 X ⇥ Y = Z, drawn iid from the distribution p and uses an algorithm A : Zm ! C = H to select A(z̄) to approximate f p . Let `(A(z̄), f p ) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f p ) to be the global expectation or L2-norm of one of those pointwise losses `: L(A(z̄), f p ) := E x `(A(z̄)(x), f p (x)) = Z X `(A(z̄)(x), f p (x))dp X (x) (1) or L(A(z̄), f p ) := ||`(A(z̄), f p )|| L 2 (pX) = sZ X `(A(z̄)(x), f p (x))2dp X . (2) Then the worst case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t pX is R(m) := inf A sup p2P E z̄ L(A(z̄), f p ) = inf A sup q2PX sup p(y|tq)s.t. p2P,pX=q E z̄ L(A(z̄, f p ) . (3) while for the best learning algorithm with oracle knowledge of t pX it is Q(m) := sup q2PX inf A sup p(y|tq)s.t. p2P,pX=q E z̄ L(A(z̄, f p ) . (4) Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst case expected loss for a given A is the supremum over P of the expected loss. Since f p (x) is determined by p(y|t pX (x)), and tpX is determined by pX this is a supremum over q 2 PX of a supremum over p(y|t q (·)) such that p X = q. Finding the worst case expected error for the best A therefore means taking the infimum of the supremum just described. In the case of Q(m) since the algorithm knows t q , the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t q . Clearly R(m) Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m). 3 Main Result We show that -uniform shattering dimension n or more implies a lower bound on the worst case expected error, R(m), when the sample size m n. In particular - in the setup specified in the previous section - if {g ✓ (·) : ✓ 2 ⇥} has much smaller VC dimension than n this results in a distinct gap between rates for a learner with oracle access to t pX and a learner without. Theorem 3. Consider the framework defined in the previous Section with Y = {0, 1}. Assume {g ✓ (·) : ✓ 2 ⇥} has VC dimension d < m and P has -uniform ↵-shattering dimension n (1+✏)m. Then, for sample size m, Q(m) 16 q d log(m+1)+log 8+1 2m while R(m) > ✏bc m+1/8 where b depends both on the type of loss and the presence of noise, while c depends on noise. Assume the standard definition in (1). If f p are binary (in the noise-free or noisy setting) b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f p = E(y|x) 2 [0, 1], b = ↵ for absolute loss and b = ↵2 for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + ↵)m in the noisy setting. By requiring P to satisfy a stronger notion of -uniform ↵-shattering one can obtain c = 1 even in the noisy case. Note that for sample size m and -uniform ↵-shattering dimension 2m, we have ✏ = 1, so the lower bound in its simplest form becomes m+1/8. This is the bound we will use in the next Section to derive implications of Theorem 3. Remark 4. 
Remark 4. We have stated in the Theorem a simple upper bound, sticking to $Y = \{0,1\}$ and using VC dimension, in order to focus the presentation on the lower bound, which uses the new complexity measure. The upper bound could be improved. It could also be replaced with a corresponding upper bound assuming instead $Y = [0,1]$ and fat-shattering dimension $d$.

Proof. The upper bound on $Q(m)$ holds for an ERM algorithm (by the classic argument, see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for $R(m)$. Moreover, we stick to the simpler definition of $\gamma$-uniform shattering in Definition 1 and omit proof of the final statement of the Theorem, which is slightly more involved. We let $n = 2m$ (i.e. $\epsilon = 1$) and we comment in a footnote on the result for general $\epsilon$. Let $S_1, \ldots, S_{2m}$ be sets which are $\gamma$-uniformly $\alpha$-shattered using the family $P_{2m} \subset P$ and denote their union by $S$. By assumption $S$ has measure at least $\gamma$ under a reference measure $q$ which is dominated by all marginals $p_X$ for $p \in P_{2m}$ (see Definition 1). We divide our argument into three parts.

1. If we prove a lower bound for the average over $P_{2m}$,

$$\forall A, \qquad \frac{1}{2^{2m}} \sum_{p \in P_{2m}} E_{\bar z}\, L(A(\bar z), f_p) \ \ge\ b\, c\, \gamma^{m+1}/8, \qquad (5)$$

it will also be a lower bound for the supremum over $P_{2m}$:

$$\forall A, \qquad \sup_{p \in P_{2m}} E_{\bar z}\, L(A(\bar z), f_p) \ \ge\ b\, c\, \gamma^{m+1}/8,$$

and hence for the supremum over $P$. It therefore suffices to prove (5).

2. Given $x \in S$, define $v_p(x)$ to be the more likely label for $x$ under the joint distribution $p \in P_{2m}$. This notation extends to the noisy case the definition of $v_p$ already given for the noise-free case. The uniform shattering condition implies $p(v_p(x)|x) > 1/2 + \alpha$ in the noisy case and $p(v_p(x)|x) = 1$ in the noise-free case. Given $\bar x = (x_1, \ldots, x_m) \in S^m$, write $\bar z_p(\bar x) := (z_1, \ldots, z_m)$ where $z_j = (x_j, v_p(x_j))$. Then

$$E_{\bar z}\, L(A(\bar z), f_p) = \int_{Z^m} L(A(\bar z), f_p)\, dp^m(\bar z) \ \ge\ \int_{S^m \times Y^m} L(A(\bar z), f_p)\, dp^m(\bar z) \ \ge\ c \int_{S^m} L(A(\bar z_p(\bar x)), f_p)\, dp_X^m(\bar x),$$

where $c$ is as specified in the Theorem. Note the sets $V_l := \{\bar x \in S^m \subset X^m : \text{the } x_j \text{ occupy exactly } l \text{ of the } S_i\}$ for $l = 1, \ldots, m$ define a partition of $S^m$. Recall that $dp_X \ge dq$ on $S$ for all $p \in P_{2m}$, so

$$\frac{1}{2^{2m}} \sum_{p \in P_{2m}} \int_{S^m} L(A(\bar z_p(\bar x)), f_p)\, dp_X^m(\bar x) \ \ge\ \frac{1}{2^{2m}} \sum_{p \in P_{2m}} \sum_{l=1}^{m} \int_{\bar x \in V_l} L(A(\bar z_p(\bar x)), f_p)\, dq^m(\bar x) = \sum_{l=1}^{m} \int_{\bar x \in V_l} \underbrace{\Big( \frac{1}{2^{2m}} \sum_{p \in P_{2m}} L(A(\bar z_p(\bar x)), f_p) \Big)}_{I}\, dq^m(\bar x).$$

We claim the integrand, $I$, is bounded below by $b\gamma/8$ (this computation is performed in part 3, and depends on knowing $\bar x \in V_l$). At the same time, $S$ has measure at least $\gamma$ under $q$, so

$$\sum_{l=1}^{m} \int_{\bar x \in V_l} dq^m(\bar x) = \int_{\bar x \in S^m} dq^m(\bar x) \ \ge\ \gamma^m,$$

which will complete the proof of (5).

3. We now assume a fixed but arbitrary $\bar x \in V_l$ and prove $I \ge b\gamma/8$. To simplify the discussion, we will refer to sets $S_i$ which contain a component $x_j$ of $\bar x$ as $S_i$ with data. We also need notation for the elements of $P_{2m}$: for each $L \subset [2m]$ denote by $p(L)$ the unique element of $P_{2m}$ such that $v_{p(L)}|_{S_i} = 1$ if $i \in L$, and $v_{p(L)}|_{S_i} = 0$ if $i \notin L$. Now, let $L_{\bar x} := \{i \in [2m] : \bar x \cap S_i \ne \emptyset\}$. These are the indices of sets $S_i$ with data. By assumption $|L_{\bar x}| = l$, and so $|L_{\bar x}^c| = 2m - l$. Every subset $L \subset [2m]$, and hence every $p \in P_{2m}$, is determined by $L \cap L_{\bar x}$ and $L \cap L_{\bar x}^c$. We will collect together all $p(L)$ having the same $L \cap L_{\bar x}$, namely for each $D \subset L_{\bar x}$ define $P_D := \{p(L) \in P_{2m} : L \cap L_{\bar x} = D\}$. These $2^l$ families partition $P_{2m}$ and in each $P_D$ there are $2^{2m-l}$ probability distributions. Most importantly, $\bar z_p(\bar x)$ is the same for all $p \in P_D$ (because $D$ determines $v_p$ on the $S_i$ with data). This implies that
$A(\bar z_p(\bar x)) : X \to \mathbb{R}$ is the same function³ of $X$ for all $p$ in a given $P_D$. To simplify notation, since we will be working within a single $P_D$, we write $f := A(\bar z_p(\bar x))$. While $f$ is the hypothesized regression function given data $\bar x$, $f_p$ is the true regression function when $p$ is the underlying distribution. For each set $S_i$ let $v_i$ be 1 if $f$ is above $1/2$ on a majority of $S_i$ using reference measure $q$ (a $q$-majority) and 0 otherwise. We now focus on the "unseen" $S_i$ where no data lie (i.e., $i \in L_{\bar x}^c$) and use the $v_i$ to specify a 1-1 correspondence between elements $p \in P_D$ and subsets $K \subset L_{\bar x}^c$:

$$p \in P_D \ \longleftrightarrow\ K_p := \{i \in L_{\bar x}^c : v_p \ne v_i\}.$$

Take a specific $p \in P_D$ with its associated $K_p$. We have $|f(x) - f_p(x)| > \alpha$ on the $q$-majority of the set $S_i$ for all $i \in K_p$. The condition $|f(x) - f_p(x)| > \alpha$ with $f(x)$ and $f_p(x)$ on opposite sides of $1/2$ implies a lower bound on $\ell(f(x), f_p(x))$ for each of the pointwise loss functions $\ell$ that we consider (0/1, absolute, square, hinge, logistic). The value of $b$, however, differs from case to case (see Appendix). For now we have

$$\int_{S_i} \ell(f(x), f_p(x))\, dp_X(x) \ \ge\ \int_{S_i} \ell(f(x), f_p(x))\, dq(x) \ \ge\ b\, \frac{1}{2} \int_{S_i} dq(x) \ \ge\ \frac{b\gamma}{4m}.$$

Summing over all $i \in K_p$, and letting $k = |K_p|$, we obtain (still for the same $p$) $L(f, f_p) \ge k\, \frac{b\gamma}{4m}$ (assuming $L$ is defined by equation (1))⁴. There are $\binom{2m-l}{k}$ possible $K$ with cardinality $k$, for any $k = 0, \ldots, 2m-l$. Therefore, using the identity $\sum_{k=0}^{n} \binom{n}{k} k = n\, 2^{n-1}$ with $n = 2m - l$,

$$\sum_{p \in P_D} L(f, f_p) \ \ge\ \sum_{k=0}^{2m-l} \binom{2m-l}{k}\, k\, \frac{b\gamma}{4m} = \frac{2^{2m-l}(2m-l)}{2}\, \frac{b\gamma}{4m} \ \ge\ 2^{2m-l}\, \frac{b\gamma}{8}$$

(using $2m - l \ge 2m - m = m$)⁵. Since $D$ was an arbitrary subset of $L_{\bar x}$, this same lower bound holds for each of the $2^l$ families $P_D$ and so

$$I = \frac{1}{2^{2m}} \sum_{p \in P_{2m}} L(f, f_p) \ \ge\ \frac{b\gamma}{8}.$$

(Footnotes: 3. Warning: $f$ need not be an element of $\{f_p : p \in P_{2m}\}$; we only know $f \in H = \{f_p : p \in P\}$. 4. In the $L^2$ version, using $\sqrt{x} \ge x$ on $[0,1]$, the reader can verify the same lower bound holds. 5. In the case where we use $(1+\epsilon)m$ instead of $2m$, we would have $(1+\epsilon)m - l \ge \epsilon m$ here.)

In the constructions of the next Section it is often the case that one can prove a different level of shattering for different $n$, namely $\gamma(n)$-uniform shattering of $n$ subsets for various $n$. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary $f_p$ without noise.

Corollary 5. Let $C \in (0,1)$ and $M \in \mathbb{N}$. If $P$ $\gamma(n)$-uniformly $\alpha$-shatters $n$ subsets of $X$ and $\gamma(n)^{n+1}/8 > C$ for all $n < M$, then no learning algorithm can achieve worst case expected error below $\alpha C$ using a training sample of size less than $M/2$. If such uniform shattering holds for all $n \in \mathbb{N}$ then the same lower bound applies regardless of sample size.

Even when $\gamma(n)$-uniform shattering holds for all $n \in \mathbb{N}$ and $\lim_{n\to\infty} \gamma(n) = 1$, if $\gamma(n)$ approaches 1 sufficiently slowly then it is possible that $\gamma(n)^{n+1} \to 0$ and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where $\lim_{n\to\infty} \gamma(n)^{n+1} = e^{-1} > 0$. In that case, learning is impossible.

4 Applications and conclusion

Manifold learning. We now describe a simpler, finite dimensional version of the example in Niyogi (2013). Let $X = \mathbb{R}^D$, $D \ge 2$, and $Y = \{0,1\}$. Fix $N \in \mathbb{N}$ and consider a very simple type of 1-dimensional manifold in $X$, namely the union of $N$ linear segments, connected in circular fashion (see Figure 1). Let $P_X$ be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of $P_X$ and the curves just described. On each curve $\mathcal{M}$, choose two distinct points $x'$, $x''$. Removing these disconnects $\mathcal{M}$.
Let one component be labeled 0 and the other 1, then label $x'$ and $x''$ oppositely. Let $P$ be the class of joint distributions on $X \times Y$ with conditionals as described and marginals in $P_X$. This is a noise-free setting and $f_p$ is binary. Given $\mathcal{M}$ (or circular coordinates on $\mathcal{M}$), consider the reduced class $P' := \{p \in P : \mathrm{support}(p_X) = \mathcal{M}\}$. Then $H' := \{f_p : p \in P'\}$ has VC dimension 3. On the other hand, for $n < N/4 - 1$ it can be shown that $P$ $\gamma(n)$-uniformly shatters $n$ sets with $f_p$, where $\gamma(n) = 1 - \frac{1}{n+1}$ (see Appendix and Figure 2). Since $(1 - \frac{1}{n+1})^{n+1} \to e^{-1} > 0$ as $n \to \infty$, it follows from Corollary 5 that the worst case expected error is bounded below by $e^{-1}/8$ for any sample of size $n \le N/8 - 1/2$. If many linear pieces are allowed (i.e. $N$ is high) this could be an impractical number of labeled examples. By contrast with this example, $\gamma(n)$ in Niyogi's example cannot be made arbitrarily close to 1.

Group-invariant features. We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let $Y = \{0,1\}$ and let $X = J \times I$ where $J = \{0, 1, \ldots, n_1 - 1\} \times \{0, 1, \ldots, n_2 - 1\}$ is an $n_1$ by $n_2$ grid ($n_i \in \mathbb{N}$) and $I = [0,1]$ is a real line segment. One should picture $X$ as a rectangular array of vertical sticks. Above each grid point $(j_1, j_2)$ consider two special points on the stick $I$, one with $i = i_+ := 1 - \epsilon$ and the other with $i = i_- := 0 + \epsilon$. Let $P_X$ contain only the uniform distribution on $X$ and assume the noise-free setting. For each $\bar e \in \{+,-\}^{n_1 n_2}$, on each segment $(j_1, j_2) \times I$ assign, via $p_{\bar e}$, the label 1 above the special point (determined by $\bar e$) and 0 below the point. This determines a family of $2^{n_1 n_2}$ conditional distributions and thus a family $P := \{p_{\bar e} : \bar e \in \{+,-\}^{n_1 n_2}\}$ of $2^{n_1 n_2}$ joint distributions. The reader can verify that $P$ has $2\epsilon$-uniform shattering dimension $n_1 n_2$ (a small numerical sketch of this construction is given after the conclusion below). Note that when the true distribution is $p_{\bar e}$ for some $\bar e \in \{+,-\}^{n_1 n_2}$ the labels will be invariant under the action $a_{\bar e}$ of $\mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2}$ defined as follows. Given $(z_1, z_2) \in \mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2}$ and $(j_1, j_2) \in J$, let the group element $(z_1, z_2)$ move the vertical stick at $(j_1, j_2)$ to the one at $(z_1 + j_1 \bmod n_1,\ z_2 + j_2 \bmod n_2)$ without flipping the stick over, just stretching it as needed so the special point $i_\pm$ determined by $\bar e$ on the first stick goes to the one on the second stick. The orbit space of the action can be identified with $I$. Let $t : X \times Y \to I$ be the projection of $X \times Y$ to this orbit space; then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to $t$, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection $s$ for the action of the subgroup $\mathbb{Z}_{n_1} \times \{0\}$, the class $\tilde P := \{p(\cdot|s) : p \in P\}$ has $2\epsilon$-uniform shattering dimension $n_2$. Thus we have a general setting where the overall complexity requirements for two-step learning are $n_1 + n_2$ while for single-step learning they are $n_1 n_2$.

Conclusion. We used a notion of uniform shattering to demonstrate both manifold learning and invariant feature learning situations where learning becomes impossible unless the learner has access to very large amounts of labeled data or else uses a two-step semi-supervised approach in which suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming intermediate group-invariant features according to sub-groups of a larger transformation group.
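The following small sketch (referenced in the grid example above) is our own illustration, not from the paper: it enumerates the family $P$ for a tiny grid, checks that reading each $p_{\bar e}$ at mid-height on every stick realises all $2^{n_1 n_2}$ labellings (the shattering used above), and checks label invariance under the stretching action $a_{\bar e}$ of $\mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2}$. The grid size, the value of $\epsilon$, and the piecewise-linear form of the stretch are assumptions made for the sketch.

import itertools
import numpy as np

# Toy instance of the grid-and-sticks family: n1 x n2 sticks, special points at
# heights eps and 1 - eps, labels 1 above the chosen special point and 0 below.
n1, n2, eps = 2, 2, 0.1
sticks = list(itertools.product(range(n1), range(n2)))

def threshold(sign):                       # special point i_+ or i_- on a stick
    return 1 - eps if sign == '+' else eps

def label(e_bar, j, i):                    # label of the point at height i on stick j
    return 1 if i > threshold(e_bar[j]) else 0

# Shattering: reading each p_e at height 1/2 on every stick recovers e_bar, so
# the family realises all 2^(n1*n2) labellings of the sticks' middle regions.
patterns = set()
for signs in itertools.product('+-', repeat=len(sticks)):
    e = dict(zip(sticks, signs))
    patterns.add(tuple(label(e, j, 0.5) for j in sticks))
assert len(patterns) == 2 ** (n1 * n2)

# Invariance: (z1, z2) shifts sticks cyclically and stretches each stick so that
# its special point is carried to the target stick's special point.
def act(e_bar, z, j, i):
    j2 = ((j[0] + z[0]) % n1, (j[1] + z[1]) % n2)
    t_src, t_dst = threshold(e_bar[j]), threshold(e_bar[j2])
    i2 = i * t_dst / t_src if i <= t_src else t_dst + (i - t_src) * (1 - t_dst) / (1 - t_src)
    return j2, i2

rng = np.random.default_rng(1)
e_bar = dict(zip(sticks, rng.choice(list('+-'), size=len(sticks))))
for _ in range(1000):
    j = sticks[rng.integers(len(sticks))]
    i, z = float(rng.random()), (int(rng.integers(n1)), int(rng.integers(n2)))
    j2, i2 = act(e_bar, z, j, i)
    assert label(e_bar, j, i) == label(e_bar, j2, i2)
print("shattering and invariance checks passed")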
Acknowledgements The author is deeply grateful to Partha Niyogi for the chance to have been his student. This paper is directly inspired by discussions with him which were cut short much too soon. The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.
1. What is the focus of the paper regarding prediction problems? 2. What are the strengths and weaknesses of the proposed approach, particularly in its assumption and theoretical analysis? 3. Do you have any concerns or questions about the presented results, their novelty, and their relation to prior works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The paper studies prediction problems (a binary realizable, separable classification) within a standard PAC framework and tries to quantifies benefits which a learning algorithm can achieve by using a certain structure of the unknown data-generating distribution $P(X,Y)$. More precisely, the authors consider problems where (a) the conditional probability $P(Y=1|X=x)$ function depends on the input $x$ only through a sufficient statistic (or the *representation*) $t(x)$ and (b) function $t(.)$ depends on the unknown distribution $P(X,Y)$ only through its marginal $P(X)$. The authors approach this problem by providing (1) the upper bound on the generalization error of the "oracle learner" having access to the structure and (2) the lower bound for the performance of a standard learner with no knowledge regarding the structure in $P(X,Y)$ (Theorem 3). The lower bound is based on a novel extension of a fat-shattering dimension (Definition 1), which measures the complexity of a set of *distributions*, while the upper bound trivially follows from the standard worst-case bounds on the ERM algorithm. Finally, the authors consider two different particular examples where the oracle learner can effectively solve the prediction problem, while the standard learner fails to do so (Section 4). I checked all the proofs appearing in the main part of the paper and they are correct. While the main topic of the paper seems to be important and interesting, I was not impressed with the presented results. First of all, the assumption on the structure of $P$ is too strong for my taste: not only did we assume that such a factorization holds, but we also assumed that the oracle learner knows *exactly* the sufficient statistic $t$. I think this is a rather unrealistic situation. Second, the main theoretical result (Theorem 3) trivially follows the standard proofs of the lower bounds in the supervised learning (Chapter 14 in Devroye et al.). The only difference is captured in the extension of the fat-shattering dimension of Definition 1, but apart from that these are all well-known steps: let a marginal be concentrated on shattered points, then use probabilistic method, and play with the labels on the points not appearing in the training sample. In other words, I would say Theorem 3 does not contain any significant novel ideas. Finally, Section 4 looks confusing to me: the authors did not provide the original example of Niyogi in the "Manifold Learning" paragraph, which makes it hard to assess the novelty of the current paper in this context; "Group-Invariant features" paragraph is overly dense and is not supported with clear and intuitive discussions. Concluding, I am not sure if this work provides a sufficient contribution to the literature (neither in terms of the techniques used, nor in terms of the results proved and conclusions made). DETAILED COMMENTS (0) Line 207 and 221-222. The upper bound of Devroy et al. holds for the *excess loss*, which is the difference between the loss of the learning algorithm and the smallest loss of the functions from the class. Meanwhile, the authors are using it to upper bound the loss of the learner itself. This is true only when we are in the separable realizable situation, meaning the hypotheses class contains the BEST classifier, which achieves zero error overall. This was not explicitly stated in the setting (at least, I did not notice it). (1) Words "fiber sub-bundle" in the abstract look mysterious... (2) Lines 42-43: Devroy et al. 
does not contain an overview of Rademacher complexities, only VC. (3) Line 64: not clear what do the authors call "a full joint model" (4) Section 2. I had a hard time guessing the meaning of $f_p$. In the end (from the proof of Theorem 3) I conclude that in all cases $f_p(x) = P(Y=1|X=x)$. This should be stated explicitly! (5) Line 110. p(y|x) = E_p(y|x) should be replaced with p(y = 1|x) = E_p(y|x). (6) It is better to present or sketch the original definition of a fat-shattering dimension somewhere around Definition 1. (7) Line 173. How can $P$ be an element of $P_t x C$ ? (8) Equations (3) and (4): parenthesis missing, in (3) dot should be switched to comma. (9) line 212: "stronger notion of a stronger notion of..." (10) I did not get a purpose of a footnote (3) (11) lines 229-230. Better to state explicitly that this is due to probabilistic method, which is a standard tool used in these situations. Refer to the same book of Devroye, Chapter 14 if necessary. (12) Line 233. p(v_p(x)|x) should be replaced with p(y = v_p(x)|x) (13) Definition of $V_l$. Note that the r.h.s. depends on $i$ and $j$ while the l.h.s. does not. Should be fixed. (14) L.h.s. of the first inequality after line 236: the authors forgot to put $1/2^{2^m} \sum_{p\in P_{2m}}$. (15) Line 246. $\bar{z}(\bar{x})$ should be replaced with $\bar{z}_p(\bar{x})$ =======UPDATE======= I have carefully read the rebuttal. unfortunately, I am going to keep my initial decisions. The main reason for this is that I still think the assumptions considered in the paper are too strong. The authors are comparing the performance of a standard supervised algorithm to the learner which knows the structure of the problem, including the $t$ function. It is not surprising to me at all that in this case the "smart" learner will easily outperform the standard SL algorithm. In my point of view, one could hope to *learn* the $t$ function during the training, but for sure it is unrealistic to assume that the algorithms *precisely knows* $t$ in advance. Thus I think the contribution of the paper is marginal.
NIPS
Title Multi-step learning and underlying structure in statistical models

Abstract In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more "suited" to the final learning task. A related principle arises in transfer-learning where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semi-supervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a "compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter's framework, by defining a learning problem generatively as a joint statistical model on $X \times Y$. This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows us to analyze both discrete and non-discrete settings. As a tool for our analysis, we define a notion of $\gamma$-uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups.

1 Introduction

The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution $p$ on $X \times Y$, $Y = \{0,1\}$, and fixed concept class $C$ consisting of (deterministic) functions $f : X \to Y$. The aim of learning is to select a hypothesis $h : X \to Y$, say from $C$ itself (realizable case), that best recovers $f$. More formally, the class $C$ is said to be PAC learnable if there is a learning algorithm that with high probability selects $h \in C$ having arbitrarily low generalization error for all possible distributions $D$ on $X$. The distribution $D$ governs both the sampling of points $z = (x, y) \in X \times Y$ by which the algorithm obtains a training sample and also the accumulation of error over all $x \in X$ which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows one to treat non-deterministic functions $f : X \to Y$ and the case $Y = [0,1]$ analogously. Polynomial dependence of the algorithms on sample size and reciprocals of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions $D$ on $X$, they assume the same concept class $C$ regardless of the true underlying distribution $D$.
In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed. Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow one to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these, such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)), can be used to obtain more refined upper bounds.

The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on $X$, followed by a second step using labeled training data from a joint distribution on $X \times Y$. This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals $D$ on $X$ and the concept class $C$ which records possible conditionals $p(y|x)$, there is no benefit to unlabeled data: if $D$ can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, $C$ and $\mathcal{D}$ are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function $\chi : C \times \mathcal{D} \to [0,1]$, which records the amount of compatibility we believe each concept from $C$ to have with each $D \in \mathcal{D}$, the class of "all" distributions on $X$. This function is required to be learnable from $D$ and is then used to reduce the concept class from $C$ to a sub-class which will be used for the subsequent (supervised) learning step. If $\chi$ is a good compatibility function this sub-class should have lower complexity than $C$ (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in $C \times \mathcal{D}$, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1.

In this paper we study properties of multi-step learning strategies (those which involve multiple training steps) by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. a statistical model, $P$ on $X \times Y$. We prove that underlying structure of a certain kind in $P$, together with differential availability of labeled vs. unlabeled data, implies a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation $t(x)$ of $x \in X$ which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are manifold learning and group-invariant feature learning. In these settings we have respectively

1. $t = t_{p_X}$ is determined by the marginal $p_X$, and $p_X$ is concentrated on a submanifold of $X$;

2. $t = t_G$ is determined by a group action on $X$, and $p(y|x)$ is invariant¹ under this action.
Learning $t$ in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even² under the assumption that the true distribution comes from $P$. This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning.

Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a "target" learning task, based on experience from previous "source" learning tasks. For this purpose he defined a learning environment $E$ to be a class of probability distributions on $X \times Y$ together with an unknown probability distribution $Q$ on $E$, and assumed $E$ to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from $P$, but we do not use a prior $Q$. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by $p \in E$, when given access to a sample from $p$ and a sample from each of $m$ other learning tasks, $p_1, \ldots, p_m \in E$, chosen randomly according to $Q$, compared with an algorithm having access to only a sample from $p$. The analysis produced upper bounds on generalization error in terms of covering numbers, and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis. (Footnotes: 1. This means there is a group $G$ of transformations of $X$ such that $p(y|x) = p(y|g \cdot x)$ for all $g \in G$. 2. Distribution-specific lower bounds are by definition weaker than distribution-free ones.)

2 Setup

We assume a learning problem is specified by a joint probability distribution $p$ on $Z = X \times Y$ and a particular (regression, classification or decision) function $f_p : X \to \mathbb{R}$ determined entirely by $p(y|x)$. Moreover, we postulate a statistical model $P$ on $X \times Y$ and assume $p \in P$. Despite the simplified notation, $f_p(x)$ depends on the conditionals $p(y|x)$ and not the entire joint distribution $p$. There are three main types of learning problem our framework addresses (reflected in three types of $f_p$). When $y$ is noise-free, i.e. $p(y|x)$ is concentrated at a single $y$-value $v_p(x) \in \{0,1\}$, $f_p = v_p : X \to \{0,1\}$ (classification); here $f_p(x) = E_p(y|x)$. When $y$ is noisy, then either $f_p : X \to \{0,1\}$ (classification/decision) or $f_p : X \to [0,1]$ (regression) and $f_p(x) = E_p(y|x)$. In all three cases the parameters which define $f_p$, the learning goal, depend only on $p(y|x) = E_p(y|x)$.
We assume the learner knows the model $P$ and the type of learning problem, i.e., the hypothesis class is the "concept class" $C := \{f_p : p \in P\}$. To be more precise, for the first type of $f_p$ listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules; and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choices of loss function, we seek worst-case bounds on learning rates, over all distributions $p \in P$. Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete $X$ can be adapted to our framework where $X$ may be smooth. Extending these arguments to handle non-binary $Y$ proceeds by the same modifications as for discrete $X$ (c.f. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used, but we leave this for a subsequent paper.

We define the following probabilistic version of fat-shattering dimension:

Definition 1. Given $P$, a class of probability distributions on $X \times \{0,1\}$, let $\gamma \in (0,1)$, $\alpha \in (0,1/2)$ and $n \in \mathbb{N} = \{0, 1, 2, \ldots\}$. Suppose there exist (disjoint) sets $S_i \subset X$, $i \in \{1, \ldots, n\}$, with $S = \cup_i S_i$, a reference probability measure $q$ on $X$, and a sub-class $P_n \subset P$ of cardinality $|P_n| = 2^n$ with the following properties:

1. $q(S_i) \ge \gamma/n$ for every $i \in \{1, \ldots, n\}$;

2. $q$ lower bounds the marginals of all $p \in P_n$ on $S$, i.e. $\int_B dp_X \ge \int_B dq$ for any $p$-measurable subset $B \subset S$;

3. $\forall e \in \{0,1\}^n$, $\exists p \in P_n$ such that $E_p(y|x) > 1/2 + \alpha$ for $x \in S_i$ when $e_i = 1$ and $E_p(y|x) < 1/2 - \alpha$ for $x \in S_i$ when $e_i = 0$;

then we say $P$ $\alpha$-shatters $S_1, \ldots, S_n$ $\gamma$-uniformly using $P_n$. The $\gamma$-uniform $\alpha$-shattering dimension of $P$ is the largest $n$ such that $P$ $\alpha$-shatters some collection of $n$ subsets of $X$ $\gamma$-uniformly.

This provides a measure of complexity of the class $P$ of distributions in the sense that it indicates the variability of the expected $y$-values for $x$ constrained to lie in the region $S$ with measure at least $\gamma$ under corresponding marginals. The reference measure $q$ serves as a lower bound on the marginals and ensures that they "uniformly" assign probability at least $\gamma$ to $S$. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions.

Remark 2 (Uniformity of measure). The technical requirement of a reference distribution $q$ is automatically satisfied if all marginals $p_X$ for $p \in P_n$ are uniform over $S$. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of $q$) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If $f_p$ is binary and $y$ is noise-free then $P$ shatters $S_1, \ldots, S_n$ $\gamma$-uniformly if and only if there is a sub-class $P_n \subset P$ with the specified uniformity of measure, such that each $f_p(\cdot) = E_p(y|\cdot)$, $p \in P_n$, is constant on each $S_i$ and the induced set-functions shatter $\{S_1, \ldots, S_n\}$ in the usual (Vapnik-Chervonenkis) sense. In that case, $\alpha$ may be chosen arbitrarily in $(0, 1/2)$ and we omit mention of it. If $f_p$ takes values in $[0,1]$, or $f_p$ is binary and $y$ noisy, then $\gamma$-uniform shattering can be expressed in terms of fat-shattering (both at scale $\alpha$).
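A minimal concrete instance of Definition 1 (our illustration, not taken from the paper) may help fix ideas. Take $X = [0,1]$ with $q$ the Lebesgue measure, let $S_i = [(i-1)/n, i/n)$ for $i = 1, \ldots, n$, and for each $e \in \{0,1\}^n$ let $p_e$ have marginal $(p_e)_X = q$ and conditional

$$E_{p_e}(y \mid x) = \begin{cases} 3/4 & \text{if } x \in S_i,\ e_i = 1,\\ 1/4 & \text{if } x \in S_i,\ e_i = 0.\end{cases}$$

Then $q(S_i) = 1/n$, every marginal equals $q$ so condition 2 holds trivially, and condition 3 holds for any $\alpha < 1/4$; hence $P_n := \{p_e : e \in \{0,1\}^n\}$ witnesses $\gamma$-uniform $\alpha$-shattering of $S_1, \ldots, S_n$ with $\gamma = 1$, and any model $P$ containing such a sub-family has $\gamma$-uniform $\alpha$-shattering dimension at least $n$.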
1. What is the main contribution of the paper in terms of multi-step learning approaches? 2. How does the paper explain the use of a variant of VC dimension for bounding the generalization error? 3. Can the paper's arguments be applied to agnostic cases, and if so, how? 4. What is the significance of the paper's assumption that the concept class is the same as the hypothesis class? 5. Are there any limitations or areas for improvement in the paper's theoretical analysis?
Review
Review The paper addresses the multi-step learning approaches that learn the final task from breaking the problem into a sequence of several intermediate steps, as in semi-supervised learning, and considering the underlying structure (t(x)) induced by the joint distribution P. This paper gives a strong theoretical explanation using a variant of VC dimension for bounding the generalization error (in term of gap between the single-step learning and multi-step learning) with application in manifold learning and group-invariant feature learning.Although I am not an expert in PAC learning, I can follow the core idea behind the paper. The paper is well-written and contains different way of understanding the notion of compatibility function used in Balcan 2005. The paper assumes that the concept class is same as the hypothesis class, can we give a similar argument (in term of the gap and \gamma-uniformly \alpha-shattering) for agnostic cases?
NIPS
Title Multi-step learning and underlying structure in statistical models Abstract In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more “suited" to the final learning task. A related principle arises in transfer-learning where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semisupervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a “compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter’s framework, by defining a learning problem generatively as a joint statistical model on X ⇥ Y . This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows to analyze both discrete and non-discrete settings. As tool for our analysis, we define a notion of -uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups. 1 Introduction The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X ⇥ Y , Y = {0, 1} and fixed concept class C consisting of (deterministic) functions f : X ! Y . The aim of learning is to select a hypothesis h : X ! Y , say from C itself (realizable case), that best recovers f . More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h 2 C having arbitrarily low generalization error for all possible distributions D on X . The distribution D governs both the sampling of points z = (x, y) 2 X ⇥ Y by which the algorithm obtains a training sample and also the cumulation of error over all x 2 X which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows to treat non-deterministic functions f : X ! Y and the case Y = [0, 1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X , they assume the same concept class C regardless of the true underlying distribution D. 
In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed. Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)) can be used to obtain more refined upper bounds. The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X , followed by a second step using labeled training data from a joint distribution on X ⇥ Y . This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function : C ⇥ D ! [0, 1], which records the amount of compatibility we believe each concept from C to have with each D 2 D, the class of “all" distributions on X . This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class which will be used for the subsequent (supervised) learning step. If is a good compatible function this sub-class should have lesser complexity than C (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in C⇥D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1. In this paper we study properties of multi-step learning strategies – those which involve multiple training steps – by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. statistical model, P on X ⇥ Y . We prove that underlying structure of a certain kind in P , together with differential availability of labeled vs. unlabeled data, imply a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x 2 X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are: manifold learning and group-invariant feature learning. In these settings we have respectively 1. t = t pX is determined by the marginal pX and pX is concentrated on a submanifold of X , 2. t = t G is determined by a group action on X and p(y|x) is invariant1 under this action. 
Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even² under the assumption that the true distribution comes from P. This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning. Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a “target" learning task, based on experience from previous “source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X × Y together with an unknown probability distribution Q on E, and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P, but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p ∈ E, when given access to a sample from p and a sample from each of m other learning tasks, p_1, . . . , p_m ∈ E, chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis. [¹ This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g ∈ G. ² Distribution-specific lower bounds are by definition weaker than distribution-free ones.] 2 Setup We assume a learning problem is specified by a joint probability distribution p on Z = X × Y and a particular (regression, classification or decision) function f_p : X → ℝ determined entirely by p(y|x). Moreover, we postulate a statistical model P on X × Y and assume p ∈ P. Despite the simplified notation, f_p(x) depends on the conditionals p(y|x) and not the entire joint distribution p. There are three main types of learning problem our framework addresses (reflected in three types of f_p). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v_p(x) ∈ {0, 1}, f_p = v_p : X → {0, 1} (classification); here f_p(x) = E_p(y|x). When y is noisy, then either f_p : X → {0, 1} (classification/decision) or f_p : X → [0, 1] (regression) and f_p(x) = E_p(y|x). In all three cases the parameters which define f_p, the learning goal, depend only on p(y|x) = E_p(y|x).
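For instance (an illustration of the three cases, not an example from the paper): take X = ℝ and a one-parameter model with threshold θ. In the noise-free case one might have p(y = 1|x) ∈ {0, 1} with v_p(x) = 1[x > θ], so f_p = v_p. In a noisy case one might have p(y = 1|x) = σ(x − θ) for a logistic σ; the classification/decision version then takes f_p(x) = 1[σ(x − θ) > 1/2], while the regression version takes f_p(x) = E_p(y|x) = σ(x − θ). In each case f_p depends on p only through the conditional p(y|x).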
We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the “concept class" C := {f_p : p ∈ P}. To be more precise, for the first type of f_p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choice of loss functions, we seek worst-case bounds on learning rates, over all distributions p ∈ P. Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (c.f. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used but we leave this for a subsequent paper. We define the following probabilistic version of fat shattering dimension: Definition 1. Given P, a class of probability distributions on X × {0, 1}, let γ ∈ (0, 1), α ∈ (0, 1/2) and n ∈ ℕ = {0, 1, 2, . . .}. Suppose there exist (disjoint) sets S_i ⊂ X, i ∈ {1, . . . , n} with S = ∪_i S_i, a reference probability measure q on X, and a sub-class P_n ⊂ P of cardinality |P_n| = 2^n with the following properties: 1. q(S_i) ≥ γ/n for every i ∈ {1, . . . , n}; 2. q lower bounds the marginals of all p ∈ P_n on S, i.e. ∫_B dp_X ≥ ∫_B dq for any p-measurable subset B ⊂ S; 3. ∀ e ∈ {0, 1}^n, ∃ p ∈ P_n such that E_p(y|x) > 1/2 + α for x ∈ S_i when e_i = 1 and E_p(y|x) < 1/2 − α for x ∈ S_i when e_i = 0; then we say P α-shatters S_1, . . . , S_n γ-uniformly using P_n. The γ-uniform α-shattering dimension of P is the largest n such that P α-shatters some collection of n subsets of X γ-uniformly. This provides a measure of complexity of the class P of distributions in the sense that it indicates the variability of the expected y-values for x constrained to lie in the region S with measure at least γ under corresponding marginals. The reference measure q serves as a lower bound on the marginals and ensures that they “uniformly" assign probability at least γ to S. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions. Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is automatically satisfied if all marginals p_X for p ∈ P_n are uniform over S. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of q) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If f_p is binary and y is noise-free then P shatters S_1, . . . , S_n γ-uniformly if and only if there is a sub-class P_n ⊂ P with the specified uniformity of measure, such that each f_p(·) = E_p(y|·), p ∈ P_n is constant on each S_i and the induced set-functions shatter {S_1, . . . , S_n} in the usual (Vapnik-Chervonenkis) sense. In that case, α may be chosen arbitrarily in (0, 1/2) and we omit mention of it. If f_p takes values in [0, 1] or f_p is binary and y noisy then γ-uniform shattering can be expressed in terms of fat-shattering (both at scale α).
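As a toy illustration of Definition 1 (ours, not the paper's; the specific model below is an assumption made only for this sketch), take X = [0, 1), S_i = [i/n, (i+1)/n), q uniform on [0, 1), γ = 1, and one conditional per sign pattern e ∈ {0, 1}^n:

```python
# Toy check of Definition 1 (illustrative only).  X = [0,1), S_i = [i/n, (i+1)/n),
# q uniform on [0,1), gamma = 1.  P_n has one member p_e per sign pattern e, with
# uniform marginal and E_{p_e}(y|x) = 0.5 + 0.3 on S_i if e_i = 1, else 0.5 - 0.3.
import itertools

n, alpha = 4, 0.25              # alpha < 0.3, so condition 3 holds with strict margin

def conditional_mean(e, x):
    i = min(int(x * n), n - 1)  # index of the set S_i containing x
    return 0.5 + (0.3 if e[i] == 1 else -0.3)

# Condition 1: q(S_i) = 1/n >= gamma/n.   Condition 2: every marginal equals q here.
# Condition 3: every sign pattern e is realised by some p_e with margin alpha.
for e in itertools.product([0, 1], repeat=n):
    for i in range(n):
        x = (i + 0.5) / n       # a representative point of S_i (conditional is constant on S_i)
        m = conditional_mean(e, x)
        assert (m > 0.5 + alpha) if e[i] == 1 else (m < 0.5 - alpha)
print(f"this P_n alpha-shatters S_1..S_{n} gamma-uniformly with gamma = 1, alpha = {alpha}")
```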
We show that the γ-uniform α-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC-dimension. Essentially the added condition on P in terms of γ allows to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two step learning strategy which makes use of underlying structure in X implied by the model P and we obtain upper bounds for the corresponding risk. 2.1 Underlying structure We assume a representation t : X → ℝ^k of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f_p(x) = g_θ(t(x)) for some parameter θ ∈ Θ. Such a t is generally known in Statistics as a sufficient dimension reduction for f_p but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X. As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p we will assume it does so only via X. For example t could depend on p through the marginal p_X on X or a possible group action on X; it is a manifestation in the data X, possibly over time, of underlying structure in the true joint distribution p ∈ P. The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)). In general, the natural factorization π : P → P_X, p ↦ p_X determines for each marginal q ∈ P_X a collection π⁻¹(q) of possible conditionals, namely those p(y|x) arising from joint p ∈ P that have marginal p_X = q. More generally any sufficient statistic t induces a similar factorization (c.f. Fisher-Neyman characterization) π_t : P → P_t, p ↦ p_t, where P_t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q ∈ P_t, this implies a collection π_t⁻¹(q) of possible conditionals p(y|t) relevant to learning. Knowing q thus reduces the original problem where p(y|x) or p(y|t) can come from any p ∈ P to one where it comes from p in a reduced class π⁻¹(q) or π_t⁻¹(q) ⊊ P. Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f_p defined by p(y|t) in ∪_t P_{Y|t} with P_{Y|t} := {p(y|t) : p ∈ P}, and marginals come from P_t. The joint model P that we postulate, meanwhile, corresponds to a subset of C × P_t (pairs (f_p, q) where f_p uses p ∈ π_t⁻¹(q)). The indicator function for this subset is an abstract (binary) version of compatibility function (recall the compatibility function of Balcan-Blum should be a [0, 1]-valued function on C × D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned. This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter θ for f_p(x) = g_θ(t(x)) in a reduced hypothesis space.
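A minimal sketch of such a factored learner (ours, not the paper's; PCA on unlabeled data is only a stand-in for the structure-learning step, since the paper is agnostic about how t is learned):

```python
# Two-step sketch (illustrative): step 1 learns t from unlabeled data alone, step 2
# fits g_theta on t(x) from a small labeled sample.  PCA is only a stand-in for t.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)

# Unlabeled data concentrated near a 1-d subspace of R^10 (a crude stand-in for a
# marginal p_X concentrated on a submanifold of X).
X_unlabeled = np.outer(rng.uniform(-1, 1, 5000), direction)
X_unlabeled += 0.01 * rng.normal(size=X_unlabeled.shape)

t = PCA(n_components=1).fit(X_unlabeled)            # step 1: learn t (unsupervised)

coords = rng.uniform(-1, 1, 20)                     # only 20 labeled examples
X_labeled = np.outer(coords, direction) + 0.01 * rng.normal(size=(20, 10))
y_labeled = (coords > 0).astype(int)                # label depends only on p(y|t(x))

g_theta = LogisticRegression().fit(t.transform(X_labeled), y_labeled)   # step 2
```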
Our goal is to understand when and why one should do so. 2.2 Learning rates We wish to quantify the benefits achieved by using such a factorization in terms of the bounds on the expected loss (i.e. risk) for a sample of size m ∈ ℕ drawn iid from any p ∈ P. We assume the learner is provided with a sample z̄ = (z_1, z_2, · · · , z_m), with z_i = (x_i, y_i) ∈ X × Y = Z, drawn iid from the distribution p and uses an algorithm A : Z^m → C = H to select A(z̄) to approximate f_p. Let ℓ(A(z̄), f_p) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f_p) to be the global expectation or L²-norm of one of those pointwise losses ℓ:
L(A(z̄), f_p) := E_x ℓ(A(z̄)(x), f_p(x)) = ∫_X ℓ(A(z̄)(x), f_p(x)) dp_X(x)   (1)
or
L(A(z̄), f_p) := ‖ℓ(A(z̄), f_p)‖_{L²(p_X)} = √( ∫_X ℓ(A(z̄)(x), f_p(x))² dp_X ).   (2)
Then the worst case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t_{p_X} is
R(m) := inf_A sup_{p ∈ P} E_z̄ L(A(z̄), f_p) = inf_A sup_{q ∈ P_X} sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p),   (3)
while for the best learning algorithm with oracle knowledge of t_{p_X} it is
Q(m) := sup_{q ∈ P_X} inf_A sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p).   (4)
Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst case expected loss for a given A is the supremum over P of the expected loss. Since f_p(x) is determined by p(y|t_{p_X}(x)), and t_{p_X} is determined by p_X, this is a supremum over q ∈ P_X of a supremum over p(y|t_q(·)) such that p_X = q. Finding the worst case expected error for the best A therefore means taking the infimum of the supremum just described. In the case of Q(m), since the algorithm knows t_q, the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t_q. Clearly R(m) ≥ Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m). 3 Main Result We show that γ-uniform shattering dimension n or more implies a lower bound on the worst case expected error, R(m), when the sample size m ≤ n. In particular - in the setup specified in the previous section - if {g_θ(·) : θ ∈ Θ} has much smaller VC dimension than n this results in a distinct gap between rates for a learner with oracle access to t_{p_X} and a learner without. Theorem 3. Consider the framework defined in the previous Section with Y = {0, 1}. Assume {g_θ(·) : θ ∈ Θ} has VC dimension d < m and P has γ-uniform α-shattering dimension n ≥ (1 + ε)m. Then, for sample size m,
Q(m) ≤ 16 √( (d log(m + 1) + log 8 + 1) / (2m) )   while   R(m) > ε b c γ^{m+1}/8,
where b depends both on the type of loss and the presence of noise, while c depends on noise. Assume the standard definition in (1). If f_p are binary (in the noise-free or noisy setting) b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f_p = E(y|x) ∈ [0, 1], b = α for absolute loss and b = α² for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + α)^m in the noisy setting. By requiring P to satisfy a stronger notion of γ-uniform α-shattering one can obtain c = 1 even in the noisy case. Note that for sample size m and γ-uniform α-shattering dimension 2m, we have ε = 1, so the lower bound in its simplest form becomes γ^{m+1}/8. This is the bound we will use in the next Section to derive implications of Theorem 3.
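For a rough sense of the scale of the two bounds, the following throwaway calculation (ours, not the paper's; it plugs in illustrative constants d = 3, γ = 1 − 1/(2m+1), ε = b = c = 1, in the spirit of the manifold example of Section 4, and uses the bound expressions exactly as reconstructed above) prints the oracle upper bound, which decays in m, next to the no-oracle lower bound, which stays bounded away from zero:

```python
# Illustrative only: evaluating the two bounds of Theorem 3 with stand-in constants
# (d = 3, gamma = 1 - 1/(2m+1), epsilon = b = c = 1).  A shape check, not a result
# from the paper; it also ignores that the shattering assumption caps how large m
# may be in any concrete construction.
import math

def oracle_upper_bound(m, d=3):
    # Q(m) <= 16 * sqrt((d*log(m+1) + log(8) + 1) / (2*m))
    return 16 * math.sqrt((d * math.log(m + 1) + math.log(8) + 1) / (2 * m))

def no_oracle_lower_bound(m):
    # R(m) > gamma^(m+1) / 8 with gamma = 1 - 1/(2m+1)
    gamma = 1.0 - 1.0 / (2 * m + 1)
    return gamma ** (m + 1) / 8

for m in (10**3, 10**5, 10**7):
    print(f"m={m:>8}  Q bound={oracle_upper_bound(m):.4f}  R bound={no_oracle_lower_bound(m):.4f}")
```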
Remark 4. We have stated in the Theorem a simple upper bound, sticking to Y = {0, 1} and using VC dimension, in order to focus the presentation on the lower bound which uses the new complexity measure. The upper bound could be improved. It could also be replaced with a corresponding upper bound assuming instead Y = [0, 1] and fat shattering dimension d. Proof. The upper bound on Q(m) holds for an ERM algorithm (by the classic argument, see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for R(m). Moreover, we stick to the simpler definition of γ-uniform shattering in Definition 1 and omit proof of the final statement of the Theorem, which is slightly more involved. We let n = 2m (i.e. ε = 1) and we comment in a footnote on the result for general ε. Let S_1, . . . , S_2m be sets which are γ-uniformly α-shattered using the family P_2m ⊂ P and denote their union by S. By assumption S has measure at least γ under a reference measure q which is dominated by all marginals p_X for p ∈ P_2m (see Definition 1). We divide our argument into three parts. 1. If we prove a lower bound for the average over P_2m,
∀A,  (1/2^{2m}) Σ_{p ∈ P_2m} E_z̄ L(A(z̄), f_p) ≥ b c γ^{m+1}/8,   (5)
it will also be a lower bound for the supremum over P_2m:
∀A,  sup_{p ∈ P_2m} E_z̄ L(A(z̄), f_p) ≥ b c γ^{m+1}/8,
and hence for the supremum over P. It therefore suffices to prove (5). 2. Given x ∈ S, define v_p(x) to be the more likely label for x under the joint distribution p ∈ P_2m. This notation extends to the noisy case the definition of v_p already given for the noise-free case. The uniform shattering condition implies p(v_p(x)|x) > 1/2 + α in the noisy case and p(v_p(x)|x) = 1 in the noise-free case. Given x̄ = (x_1, . . . , x_m) ∈ S^m, write z̄_p(x̄) := (z_1, . . . , z_m) where z_j = (x_j, v_p(x_j)). Then
E_z̄ L(A(z̄), f_p) = ∫_{Z^m} L(A(z̄), f_p) dp^m(z̄) ≥ ∫_{S^m × Y^m} L(A(z̄), f_p) dp^m(z̄) ≥ c ∫_{S^m} L(A(z̄_p(x̄)), f_p) dp_X^m(x̄),
where c is as specified in the Theorem. Note the sets V_l := {x̄ ∈ S^m ⊂ X^m : the x_j occupy exactly l of the S_i} for l = 1, . . . , m define a partition of S^m. Recall that dp_X ≥ dq on S for all p ∈ P_2m, so
(1/2^{2m}) Σ_{p ∈ P_2m} ∫_{S^m} L(A(z̄_p(x̄)), f_p) dp_X^m(x̄) ≥ (1/2^{2m}) Σ_{p ∈ P_2m} Σ_{l=1}^{m} ∫_{x̄ ∈ V_l} L(A(z̄_p(x̄)), f_p) dq^m(x̄) = Σ_{l=1}^{m} ∫_{x̄ ∈ V_l} [ (1/2^{2m}) Σ_{p ∈ P_2m} L(A(z̄_p(x̄)), f_p) ] dq^m(x̄),
where the bracketed integrand is denoted I. We claim the integrand, I, is bounded below by bγ/8 (this computation is performed in part 3, and depends on knowing x̄ ∈ V_l). At the same time, S has measure at least γ under q so
Σ_{l=1}^{m} ∫_{x̄ ∈ V_l} dq^m(x̄) = ∫_{x̄ ∈ S^m} dq^m(x̄) ≥ γ^m,
which will complete the proof of (5). 3. We now assume a fixed but arbitrary x̄ ∈ V_l and prove I ≥ bγ/8. To simplify the discussion, we will refer to sets S_i which contain a component x_j of x̄ as S_i with data. We also need notation for the elements of P_2m: for each L ⊂ [2m] denote by p(L) the unique element of P_2m such that v_{p(L)}|_{S_i} = 1 if i ∈ L, and v_{p(L)}|_{S_i} = 0 if i ∉ L. Now, let L_x̄ := {i ∈ [2m] : x̄ ∩ S_i ≠ ∅}. These are the indices of sets S_i with data. By assumption |L_x̄| = l, and so |L^c_x̄| = 2m − l. Every subset L ⊂ [2m] and hence every p ∈ P_2m is determined by L ∩ L_x̄ and L ∩ L^c_x̄. We will collect together all p(L) having the same L ∩ L_x̄, namely for each D ⊂ L_x̄ define P_D := {p(L) ∈ P_2m : L ∩ L_x̄ = D}. These 2^l families partition P_2m and in each P_D there are 2^{2m−l} probability distributions. Most importantly, z̄_p(x̄) is the same for all p ∈ P_D (because D determines v_p on the S_i with data). This implies A(z̄_p(x̄)) : X →
ℝ is the same function³ of X for all p in a given P_D. To simplify notation, since we will be working within a single P_D, we write f := A(z̄(x̄)). While f is the hypothesized regression function given data x̄, f_p is the true regression function when p is the underlying distribution. For each set S_i let v_i be 1 if f is above 1/2 on a majority of S_i using reference measure q (a q-majority) and 0 otherwise. We now focus on the “unseen" S_i where no data lie (i.e., i ∈ L^c_x̄) and use the v_i to specify a 1-1 correspondence between elements p ∈ P_D and subsets K ⊂ L^c_x̄: p ∈ P_D ↦ K_p := {i ∈ L^c_x̄ : v_p ≠ v_i}. Take a specific p ∈ P_D with its associated K_p. We have |f(x) − f_p(x)| > α on the q-majority of the set S_i for all i ∈ K_p. The condition |f(x) − f_p(x)| > α with f(x) and f_p(x) on opposite sides of 1/2 implies a lower bound on ℓ(f(x), f_p(x)) for each of the pointwise loss functions ℓ that we consider (0/1, absolute, square, hinge, logistic). The value of b, however, differs from case to case (see Appendix). For now we have,
∫_{S_i} ℓ(f(x), f_p(x)) dp_X(x) ≥ ∫_{S_i} ℓ(f(x), f_p(x)) dq(x) ≥ b · (1/2) ∫_{S_i} dq(x) ≥ b γ/(4m).
Summing over all i ∈ K_p, and letting k = |K_p|, we obtain (still for the same p) L(f(x), f_p(x)) ≥ k b γ/(4m) (assuming L is defined by equation (1))⁴. There are C(2m−l, k) possible K with cardinality k, for any k = 0, . . . , 2m − l. Therefore,
Σ_{p ∈ P_D} L(f(x), f_p(x)) ≥ Σ_{k=0}^{2m−l} C(2m−l, k) · k b γ/(4m) = 2^{2m−l} · ((2m−l)/2) · b γ/(4m) ≥ 2^{2m−l} · b γ/8
(using 2m − l ≥ 2m − m = m)⁵. Since D was an arbitrary subset of L_x̄, this same lower bound holds for each of the 2^l families P_D and so
I = (1/2^{2m}) Σ_{p ∈ P_2m} L(f(x), f_p(x)) ≥ b γ/8.
In the constructions of the next Section it is often the case that one can prove a different level of shattering for different n, namely γ(n)-uniform shattering of n subsets for various n. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary f_p without noise. Corollary 5. Let C ∈ (0, 1) and M ∈ ℕ. If P γ(n)-uniformly α-shatters n subsets of X and γ(n)^{n+1}/8 > C for all n < M then no learning algorithm can achieve worst case expected error below αC, using a training sample of size less than M/2. If such uniform shattering holds for all n ∈ ℕ then the same lower bound applies regardless of sample size. Even when γ(n)-uniform shattering holds for all n ∈ ℕ and lim_{n→∞} γ(n) = 1, if γ(n) approaches 1 sufficiently slowly then it is possible γ(n)^{n+1} → 0 and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where lim_{n→∞} γ(n)^{n+1} ≥ 1/e > 0. In that case, learning is impossible. 4 Applications and conclusion Manifold learning We now describe a simpler, finite dimensional version of the example in Niyogi (2013). Let X = ℝ^D, D ≥ 2 and Y = {0, 1}. Fix N ∈ ℕ and consider a very simple type of 1-dimensional manifold in X, namely the union of N linear segments, connected in circular fashion (see Figure 1). Let P_X be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of P_X and curves just described. [³ Warning: f need not be an element of {f_p : p ∈ P_2m}; we only know f ∈ H = {f_p : p ∈ P}. ⁴ In the L² version, using √x ≥ x, the reader can verify the same lower bound holds. ⁵ In the case where we use (1 + ε)m instead of 2m, we would have (1 + ε)m − l ≥ εm here.] On each curve M, choose two distinct points x′, x′′. Removing these disconnects M.
Let one component be labeled 0 and the other 1, then label x′ and x′′ oppositely. Let P be the class of joint distributions on X × Y with conditionals as described and marginals in P_X. This is a noise-free setting and f_p is binary. Given M (or circular coordinates on M), consider the reduced class P′ := {p ∈ P : support(p_X) = M}. Then H′ := {f_p : p ∈ P′} has VC dimension 3. On the other hand, for n < N/4 − 1 it can be shown that P γ(n)-uniformly shatters n sets with f_p, where γ(n) = 1 − 1/(n+1) (see Appendix and Figure 2). Since (1 − 1/(n+1))^{n+1} → 1/e > 0 as n → ∞, it follows from Corollary 5 that the worst case expected error is bounded below by 1/(8e) for any sample of size n ≤ N/8 − 1/2. If many linear pieces are allowed (i.e. N is high) this could be an impractical number of labeled examples. By contrast with this example, γ(n) in Niyogi’s example cannot be made arbitrarily close to 1. Group-invariant features We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let Y = {0, 1} and let X = J × I where J = {0, 1, . . . , n_1 − 1} × {0, 1, . . . , n_2 − 1} is an n_1 by n_2 grid (n_i ∈ ℕ) and I = [0, 1] is a real line segment. One should picture X as a rectangular array of vertical sticks. Above each grid point (j_1, j_2) consider two special points on the stick I, one with i = i_+ := 1 − ε and the other with i = i_− := 0 + ε. Let P_X contain only the uniform distribution on X and assume the noise-free setting. For each ē ∈ {+, −}^{n_1 n_2}, on each segment (j_1, j_2) × I assign, via p_ē, the label 1 above the special point (determined by ē) and 0 below the point. This determines a family of 2^{n_1 n_2} conditional distributions and thus a family P := {p_ē : ē ∈ {+, −}^{n_1 n_2}} of 2^{n_1 n_2} joint distributions. The reader can verify that P has 2ε-uniform shattering dimension n_1 n_2. Note that when the true distribution is p_ē for some ē ∈ {+, −}^{n_1 n_2} the labels will be invariant under the action a_ē of Z_{n_1} × Z_{n_2} defined as follows. Given (z_1, z_2) ∈ Z_{n_1} × Z_{n_2} and (j_1, j_2) ∈ J, let the group element (z_1, z_2) move the vertical stick at (j_1, j_2) to the one at (z_1 + j_1 mod n_1, z_2 + j_2 mod n_2) without flipping the stick over, just stretching it as needed so the special point i_± determined by ē on the first stick goes to the one on the second stick. The orbit space of the action can be identified with I. Let t : X × Y → I be the projection of X × Y to this orbit space, then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to t, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection s for the action of the subgroup Z_{n_1} × {0}, the class P̃ := {p(·|s) : p ∈ P} has 2ε-uniform shattering dimension n_2. Thus we have a general setting where the over-all complexity requirements for two-step learning are n_1 + n_2 while for single-step learning they are n_1 n_2. Conclusion We used a notion of uniform shattering to demonstrate both manifold learning and invariant feature learning situations where learning becomes impossible unless the learner has access to very large amounts of labeled data or else uses a two-step semi-supervised approach in which suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming intermediate group-invariant features according to sub-groups of a larger transformation group.
Acknowledgements The author is deeply grateful to Partha Niyogi for the chance to have been his student. This paper is directly inspired by discussions with him which were cut short much too soon. The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.
1. What is the focus of the paper, and how does it attempt to contribute to the field of machine learning? 2. What are the issues with the presentation and organization of the paper, according to the reviewer? 3. How does the reviewer assess the clarity and understanding of the proposed framework and its relation to existing algorithms? 4. What are the concerns regarding the experimental part of the paper, and how does it impact the overall evaluation of the work? 5. Are there any suggestions or recommendations for improving the paper, specifically in terms of providing a clearer understanding of the proposed framework and its contributions?
Review
Review This paper attempts to present a multi-step learning framework and provide some theoretical analyses for it. The paper is not well written. References should not appear in the abstract. It is hard to follow the paper and difficult to understand clearly what multi-step learning is (a new form of manifold learning?), how it works (no pseudo-code is given), and which well-known algorithms can be summarized into this framework. The proposed framework is not well defined, and it is hard to identify the main contribution of the paper. The experimental part is too weak to provide good support for the paper. It would be more interesting if previous algorithms, e.g., manifold learning, showed better performance in the new framework.
NIPS
1. What is the focus of the paper, and what are the key contributions regarding multi-step algorithms?
2. How does the proposed framework improve upon previous theoretical analyses, specifically in the context of SSL?
3. What are the weaknesses of the paper regarding its organization and experimental explanations?
4. Are there any concerns regarding the proof of the main result, and would it benefit from a more concise explanation in the main text?
5. How does the reviewer assess the overall value and significance of the paper's content?
Review
Review This paper provides a theoretical framework for analyzing the generalization error of multi-step algorithms such as SSL. The paper proposes a new notion of \gamma-uniform shattering for statistical models, aimed especially at multi-step learning. The framework considers the advantages of breaking a single learning problem into a sequence of two learning algorithms, and it is valuable in that it offers a tentative analysis of SSL that previous theoretical analyses fail to provide. However, the paper is still limited in the following aspects. First, the organization needs further revision: I found it difficult to follow the authors' exposition of both the background and the main results. In addition, the proof of the main result is too long to sit in the main text; the paper would be more succinct if the proof were summarized briefly in the main text and the details deferred to the supplementary material. Second, the paper lacks detailed explanation of the experimental parts, which should be an important piece of this paper; the authors present many results without clear explanation.
NIPS
Title Multi-step learning and underlying structure in statistical models Abstract In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more “suited" to the final learning task. A related principle arises in transfer-learning where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semisupervised learning with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al, 2008; Urner et al, 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a “compatibility function" to link concept class and unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter’s framework, by defining a learning problem generatively as a joint statistical model on X ⇥ Y . This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows to analyze both discrete and non-discrete settings. As tool for our analysis, we define a notion of -uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches. In particular, we recover a more general version of a result of Poggio et al (2012): under mild hypotheses a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups. 1 Introduction The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X ⇥ Y , Y = {0, 1} and fixed concept class C consisting of (deterministic) functions f : X ! Y . The aim of learning is to select a hypothesis h : X ! Y , say from C itself (realizable case), that best recovers f . More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h 2 C having arbitrarily low generalization error for all possible distributions D on X . The distribution D governs both the sampling of points z = (x, y) 2 X ⇥ Y by which the algorithm obtains a training sample and also the cumulation of error over all x 2 X which gives the generalization error. A modification of this model, together with the notion of learnable with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows to treat non-deterministic functions f : X ! Y and the case Y = [0, 1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X , they assume the same concept class C regardless of the true underlying distribution D. 
In addition, choice of the hypothesis class is taken as part of the inductive bias of the algorithm and not addressed. Various, by now classic, measures of complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity, see Mohri et al. (2012) for an overview) allow to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)) can be used to obtain more refined upper bounds. The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X , followed by a second step using labeled training data from a joint distribution on X ⇥ Y . This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework by the addition of a compatibility function : C ⇥ D ! [0, 1], which records the amount of compatibility we believe each concept from C to have with each D 2 D, the class of “all" distributions on X . This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class which will be used for the subsequent (supervised) learning step. If is a good compatible function this sub-class should have lesser complexity than C (Balcan and Blum, 2005). While PAC-style frameworks in essence allow the true joint distribution to be anything in C⇥D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1. In this paper we study properties of multi-step learning strategies – those which involve multiple training steps – by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. statistical model, P on X ⇥ Y . We prove that underlying structure of a certain kind in P , together with differential availability of labeled vs. unlabeled data, imply a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x 2 X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are: manifold learning and group-invariant feature learning. In these settings we have respectively 1. t = t pX is determined by the marginal pX and pX is concentrated on a submanifold of X , 2. t = t G is determined by a group action on X and p(y|x) is invariant1 under this action. 
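As a concrete illustration of setting 2, the short sketch below (our own toy construction, not taken from the paper) builds a noise-free label distribution that is invariant under a cyclic shift group acting on X and checks that the orbit projection t_G determines p(y|x), i.e. is a sufficient statistic in the sense used here. The set sizes, the group, and all variable names are illustrative assumptions.

```python
import itertools

# Toy instance of setting 2: X = Z_n x {0,...,m-1}, labels invariant under
# the cyclic group Z_n acting by shifting the first coordinate.
n, m = 6, 10
X = list(itertools.product(range(n), range(m)))

def label(x):
    """Noise-free conditional p(y|x): depends only on the 'height' coordinate."""
    _, i = x
    return 1 if i >= m // 2 else 0

def act(g, x):
    """Group action of g in Z_n on x = (j, i): shift the first coordinate."""
    j, i = x
    return ((j + g) % n, i)

def t_G(x):
    """Orbit projection: the orbit of (j, i) under Z_n is identified by i."""
    return x[1]

# 1) Invariance: p(y|x) = p(y|g.x) for every group element g.
assert all(label(act(g, x)) == label(x) for g in range(n) for x in X)

# 2) Sufficiency: the label factors through the orbit projection t_G.
assert all(label(x) == (1 if t_G(x) >= m // 2 else 0) for x in X)

# Seen through t_G, the concept class reduces to thresholds on {0,...,m-1}.
print("invariance and sufficiency hold on", len(X), "points")
```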
Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even2 under the assumption that the true distribution comes from P . This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning. Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a 1 This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g 2 G. 2(distribution-specific lower bounds are by definition weaker than distribution-free ones) “target" learning task, based on experience from previous “source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X ⇥ Y together with an unknown probability distribution Q on E , and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P , but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p 2 E , when given access to a sample from p and a sample from each of m other learning tasks, p 1 , . . . , p m 2 E , chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks. In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis. 2 Setup We assume a learning problem is specified by a joint probability distribution p on Z = X ⇥ Y and a particular (regression, classification or decision) function f p : X ! R determined entirely by p(y|x). Moreover, we postulate a statistical model P on X ⇥ Y and assume p 2 P . Despite the simplified notation, f p (x) depends on the conditionals p(y|x) and not the entire joint distribution p. There are three main types of learning problem our framework addresses (reflected in three types of f p ). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v p (x) 2 {0, 1}, f p = v p : X ! {0, 1} (classification); here f p (x) = E p (y|x). When y is noisy, then either f p : X ! {0, 1} (classification/decision) or f p : X ! [0, 1] (regression) and f p (x) = E p (y|x). In all three cases the parameters which define f p , the learning goal, depend only on p(y|x) = E p (y|x). 
We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the “concept class" C := {f p : p 2 P}. To be more precise, for the first type of f p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choice of loss functions, we seek worst-case bounds on learning rates, over all distributions p 2 P . Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (c.f. Kearns and Schapire (1994)). We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used but we leave this for a subsequent paper. We define the following probabilistic version of fat shattering dimension: Definition 1. Given P , a class of probability distributions on X⇥{0, 1}, let 2 (0, 1), ↵ 2 (0, 1/2) and n 2 N = {0, 1, . . . , ...}. Suppose there exist (disjoint) sets S i ⇢ X , i 2 {1, . . . , n} with S = [ i S i , a reference probability measure q on X , and a sub-class P n ⇢ P of cardinality |P n | = 2n with the following properties: 1. q(S i ) /n for every i 2 {1, . . . , n} 2. q lower bounds the marginals of all p 2 P n on S, i.e. R B dp X R B dq for any p-measurable subset B ⇢ S 3. 8 e 2 {0, 1}n, 9 p 2 P n such that E p (y|x) > 1/2 + ↵ for x 2 S i when e i = 1 and E p (y|x) < 1/2 ↵ for x 2 S i when e i = 0 then we say P ↵-shatters S 1 , . . . , S n -uniformly using P n . The -uniform ↵-shattering dimension of P is the largest n such that P ↵-shatters some collection of n subsets of X -uniformly. This provides a measure of complexity of the class P of distributions in the sense that it indicates the variability of the expected y-values for x constrained to lie in the region S with measure at least under corresponding marginals. The reference measure q serves as a lower bound on the marginals and ensures that they “uniformly" assign probabilty at least to S. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions. Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is automatically satisfied if all marginals p X for p 2 P n are uniform over S. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of q) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If f p is binary and y is noise-free then P shatters S 1 , . . . , S n -uniformly if and only if there is a sub-class P n ⇢ P with the specified uniformity of measure, such that each f p (·) = E p (y|·), p 2 P n is constant on each S i and the induced set-functions shatter {S 1 , . . . , S n } in the usual (Vapnik-Chervonenkis) sense. In that case, ↵ may be chosen arbitrarily in (0, 1/2) and we omit mention of it. If f p takes values in [0, 1] or f p is binary and y noisy then -uniform shattering can be expressed in terms of fat-shattering (both at scale ↵). 
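To make Definition 1 concrete, the following minimal sketch (our own toy construction, not from the paper) verifies conditions 1-3 in a finite setting: X = {0, ..., n-1} with singleton sets S_i, a uniform reference measure q, and a family P_n of 2^n noise-free distributions realizing every sign pattern, so P γ-uniformly α-shatters S_1, ..., S_n with γ = 1 and any α < 1/2.

```python
import itertools

# Minimal finite check of Definition 1 (gamma-uniform alpha-shattering).
# Illustrative construction: X = {0,...,n-1}, S_i = {i}, q uniform, and for
# each sign pattern e a noise-free distribution p_e with uniform marginal.
n = 4
gamma, alpha = 1.0, 0.4              # q(S_i) = 1/n = gamma/n; any alpha < 1/2 works
S = [[i] for i in range(n)]          # disjoint subsets of X
q = {x: 1.0 / n for x in range(n)}   # reference measure (equals every marginal here)

def conditional_mean(e, x):
    """E_{p_e}(y|x): noise-free labels given by the sign pattern e."""
    return float(e[x])

P_n = list(itertools.product([0, 1], repeat=n))   # the 2^n distributions

# Condition 1: q(S_i) >= gamma / n for every i.
assert all(sum(q[x] for x in Si) >= gamma / n for Si in S)

# Condition 2: the marginals dominate q on S (trivial here, they equal q).

# Condition 3: every sign pattern e is realized with margin alpha around 1/2.
for e in P_n:
    for i, Si in enumerate(S):
        for x in Si:
            mean = conditional_mean(e, x)
            if e[i] == 1:
                assert mean > 0.5 + alpha
            else:
                assert mean < 0.5 - alpha

print(f"P {gamma}-uniformly {alpha}-shatters {n} sets using {len(P_n)} distributions")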
We show that the -uniform ↵-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC-dimension. Essentially the added condition on P in terms of allows to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two step learning strategy which makes use of underlying structure in X implied by the model P and we obtain upper bounds for the corresponding risk. 2.1 Underlying structure We assume a representation t : X ! Rk of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f p (x) = g ✓ (t(x)) for some parameter ✓ 2 ⇥. Such a t is generally known in Statistics as a sufficient dimension reduction for f p but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X . As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p we will assume it does so only via X . For example t could depend on p through the marginal p X on X or possible group action on X; it is a manifestation in the data X , possibly over time, of underlying structure in the true joint distribution p 2 P . The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)). In general, the natural factorization ⇡ : P ! P X , p 7! p X determines for each marginal q 2 P X a collection ⇡ 1(q) of possible conditionals, namely those p(y|x) arising from joint p 2 P that have marginal p X = q. More generally any sufficient statistic t induces a similar factorization (c.f. Fisher-Neyman characterization) ⇡ t : P ! P t , p 7! p t , where P t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q 2 P t , this implies a collection ⇡ 1 t (q) of possible conditionals p(y|t) relevant to learning. Knowing q thus reduces the original problem where p(y|x) or p(y|t) can come from any p 2 P to one where it comes from p in a reduced class ⇡ 1(q) or ⇡ 1 t (q) ( P . Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f p defined by p(y|t) in [ t P Y |t with PY |t :={p(y|t) : p 2 P}, and marginals come from P t . The joint model P that we postulate, meanwhile, corresponds to a subset of C ⇥ P t (pairs (f p , q) where f p uses p 2 ⇡ 1 t (q)). The indicator function for this subset is an abstract (binary) version of compatibility function (recall the compatibility function of Balcan-Blum should be a [0, 1]-valued function on C ⇥D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned. This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter ✓ for f p (x) = g ✓ (t(x)) in a reduced hypothesis space. 
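As a hypothetical instance of this factorization, the sketch below first learns the structure t from plentiful unlabeled data (here a principal direction, standing in for a manifold coordinate) and then fits the low-complexity parameter θ of g_θ(t(x)) from a small labeled sample. The estimators, sizes, and names are our illustrative choices, not prescriptions of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth (unknown to the learner): data lie near a 1-D direction u in R^5
# and the label depends only on the sufficient coordinate t_p(x) = <u, x>.
u = rng.normal(size=5)
u /= np.linalg.norm(u)

def sample(n, labeled=True):
    s = rng.normal(size=n)                                  # latent coordinate
    x = np.outer(s, u) + 0.05 * rng.normal(size=(n, 5))     # points near the line
    return (x, (s > 0).astype(int)) if labeled else x

# Step 1 (unsupervised): learn the structure t from unlabeled data
# via the top principal direction.
X_unlab = sample(5000, labeled=False)
_, _, Vt = np.linalg.svd(X_unlab - X_unlab.mean(0), full_matrices=False)
t = lambda x: x @ Vt[0]

# Step 2 (supervised): fit the low-complexity parameter theta = (threshold, sign)
# of g_theta(t(x)) from a small labeled sample.
X_lab, y = sample(20)
z = t(X_lab)
candidates = [(th, sgn) for th in np.sort(z) for sgn in (+1, -1)]
err = lambda th, sgn: np.mean(((sgn * (z - th)) > 0).astype(int) != y)
theta = min(candidates, key=lambda c: err(*c))

# Evaluate the two-step hypothesis on fresh data.
X_test, y_test = sample(2000)
pred = ((theta[1] * (t(X_test) - theta[0])) > 0).astype(int)
print("two-step test error:", float(np.mean(pred != y_test)))
```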
Our goal is to understand when and why one should do so.

2.2 Learning rates

We wish to quantify the benefits achieved by using such a factorization in terms of the bounds on the expected loss (i.e. risk) for a sample of size m ∈ N drawn iid from any p ∈ P. We assume the learner is provided with a sample z̄ = (z_1, z_2, ..., z_m), with z_i = (x_i, y_i) ∈ X × Y = Z, drawn iid from the distribution p, and uses an algorithm A : Z^m → C = H to select A(z̄) to approximate f_p. Let ℓ(A(z̄), f_p) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f_p) to be the global expectation or L2-norm of one of those pointwise losses ℓ:

L(A(\bar z), f_p) := \mathbb{E}_x\, \ell(A(\bar z)(x), f_p(x)) = \int_X \ell(A(\bar z)(x), f_p(x))\, dp_X(x)   (1)

or

L(A(\bar z), f_p) := \| \ell(A(\bar z), f_p) \|_{L^2(p_X)} = \sqrt{ \int_X \ell(A(\bar z)(x), f_p(x))^2\, dp_X }.   (2)

Then the worst case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t_{p_X} is

R(m) := \inf_A \sup_{p \in P} \mathbb{E}_{\bar z}\, L(A(\bar z), f_p) = \inf_A \sup_{q \in P_X} \sup_{p(y|t_q)\ \text{s.t.}\ p \in P,\ p_X = q} \mathbb{E}_{\bar z}\, L(A(\bar z), f_p),   (3)

while for the best learning algorithm with oracle knowledge of t_{p_X} it is

Q(m) := \sup_{q \in P_X} \inf_A \sup_{p(y|t_q)\ \text{s.t.}\ p \in P,\ p_X = q} \mathbb{E}_{\bar z}\, L(A(\bar z), f_p).   (4)

Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst case expected loss for a given A is the supremum over P of the expected loss. Since f_p(x) is determined by p(y|t_{p_X}(x)), and t_{p_X} is determined by p_X, this is a supremum over q ∈ P_X of a supremum over p(y|t_q(·)) such that p_X = q. Finding the worst case expected error for the best A therefore means taking the infimum of the supremum just described. In the case of Q(m), since the algorithm knows t_q, the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t_q. Clearly R(m) ≥ Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m).

3 Main Result

We show that γ-uniform shattering dimension n or more implies a lower bound on the worst case expected error, R(m), when the sample size m ≤ n. In particular - in the setup specified in the previous section - if {g_θ(·) : θ ∈ Θ} has much smaller VC dimension than n, this results in a distinct gap between rates for a learner with oracle access to t_{p_X} and a learner without.

Theorem 3. Consider the framework defined in the previous Section with Y = {0, 1}. Assume {g_θ(·) : θ ∈ Θ} has VC dimension d < m and P has γ-uniform α-shattering dimension n ≥ (1 + ε)m. Then, for sample size m,

Q(m) \le 16 \sqrt{ \frac{d \log(m+1) + \log 8 + 1}{2m} } \quad \text{while} \quad R(m) > \varepsilon\, b\, c\, \gamma^{m+1} / 8,

where b depends both on the type of loss and the presence of noise, while c depends on noise. Assume the standard definition in (1). If the f_p are binary (in the noise-free or noisy setting), b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f_p = E(y|x) ∈ [0, 1], b = α for absolute loss and b = α^2 for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + α)^m in the noisy setting. By requiring P to satisfy a stronger notion of γ-uniform α-shattering one can obtain c = 1 even in the noisy case. Note that for sample size m and γ-uniform α-shattering dimension 2m, we have ε = 1, so the lower bound in its simplest form becomes γ^{m+1}/8. This is the bound we will use in the next Section to derive implications of Theorem 3. Remark 4.
We have stated in the Theorem a simple upper bound, sticking to Y = {0, 1} and using VC dimension, in order to focus the presentation on the lower bound which uses the new complexity measure. The upper bound could be improved. It could also be replaced with a corresponding upper bound assuming instead Y = [0, 1] and fat shattering dimension d. Proof. The upper bound on Q(m) holds for an ERM algorithm (by the classic argument, see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for R(m). Moreover, we stick to the simpler definition of -uniform shattering in Definition 1 and omit proof of the final statement of the Theorem, which is slightly more involved. We let n = 2m (i.e. ✏ = 1) and we comment in a footnote on the result for general ✏. Let S 1 , . . . , S 2m be sets which are -uniformly ↵-shattered using the family P 2m ⇢ P and denote their union by S. By assumption S has measure at least under a reference measure q which is dominated by all marginals p X for p 2 P 2m (see Definition 1). We divide our argument into three parts. 1. If we prove a lower bound for the average over P 2m , 8A, 1 22m X p2P2m E z̄ L(A(z̄), f p ) bc m+1/8 (5) it will also be a lower bound for the supremum over P 2m : 8A, sup p2P2m E z̄ L(A(z̄), f p ) bc m+1/8 . and hence for the supremum over P . It therefore suffices to prove (5). 2. Given x 2 S, define v p (x) to be the more likely label for x under the joint distribution p 2 P 2m . This notation extends to the noisy case the definition of v p already given for the noise-free case. The uniform shattering condition implies p(v p (x)|x) > 1/2 + ↵ in the noisy case and p(v p (x)|x) = 1 in the noise-free case. Given x̄ = (x 1 , . . . , x m ) 2 Sm, write z̄ p (x̄) := (z 1 , . . . , z m ) where z j = (x j , v p (x j )). Then E z̄ L(A(z̄), f p ) = Z Z m L(A(z̄), f p )dpm(z̄) Z S m⇥Y m L(A(z̄), f p )dpm(z̄) c Z S m L(A(z̄ p (x̄)), f p )dpm X (x̄) where c is as specified in the Theorem. Note the sets V l := {x̄ 2 Sm ⇢ Xm : the x j occupy exactly l of the S i } for l = 1, . . . ,m define a partition of Sm. Recall that dp X dq on S for all p 2 P 2m so Z S m L(A(z̄ p (x̄)), f p )dpm X (x̄) 1 22m X p2P2m mX l=1 Z x̄2Vl L(A(z̄ p (x̄)), f p ) dqm(x̄) = mX l=1 Z x̄2Vl 0 BBBB@ 1 22m X p2P2m L(A(z̄ p (x̄)), f p ) | {z } I 1 CCCCA dq m(x̄). We claim the integrand, I , is bounded below by b /8 (this computation is performed in part 3, and depends on knowing x̄ 2 V l ). At the same time, S has measure at least under q so mX l=1 Z x̄2Vl dq m(x̄) = Z x̄2Sm dq m(x̄) m which will complete the proof of (5). 3. We now assume a fixed but arbitrary x̄ 2 V l and prove I b /8. To simplify the discussion, we will refer to sets S i which contain a component x j of x̄ as S i with data. We also need notation for the elements of P 2m : for each L ⇢ [2m] denote by p(L) the unique element of P 2m such that v p (L) | Si = 1 if i 2 L, and vp(L) |Si = 0 if i /2 L. Now, let Lx̄ := {i 2 [2m] : x̄ \ Si 6= ;}. These are the indices of sets S i with data. By assumption |L x̄ | = l, and so |Lc x̄ | = 2m l. Every subset L ⇢ [2m] and hence every p 2 P 2m is determined by L \ L x̄ and L \ Lc x̄ . We will collect together all p(L) having the same L \ L x̄ , namely for each D ⇢ L x̄ define P D := {p(L) 2 P 2m : L \ L x̄ = D}. These 2l families partition P 2m and in each P D there are 22m l probability distributions. Most importantly, z̄ p (x̄) is the same for all p 2 P D (because D determines v p on the S i with data). This implies A(z̄ p (x̄)) : X ! 
R is the same function3 of X for all p in a given P D . To simplify notation, since we will be working within a single P D , we write f := A(z̄(x̄)). While f is the hypothesized regression function given data x̄, f p is the true regression function when p is the underlying distribution. For each set S i let v i be 1 if f is above 1/2 on a majority of S i using reference measure q (a q-majority) and 0 otherwise. We now focus on the “unseen" S i where no data lie (i.e., i 2 Lc x̄ ) and use the v i to specify a 1-1 correspondence between elements p 2 P D and subsets K ⇢ Lc x̄ : p 2 P D ! K p := {i 2 Lc x̄ : v p 6= v i }. Take a specific p 2 P D with its associated K p . We have |f(x) f p (x)| > ↵ on the q-majority of the set S i for all i 2 K p . The condition |f(x) f p (x)| > ↵ with f(x) and f p (x) on opposite sides of 1/2 implies a lower bound on `(f(x), f p (x)) for each of the pointwise loss functions ` that we consider (0/1, absolute, square, hinge, logistic). The value of b, however, differs from case to case (see Appendix). For now we have,Z Si `(f(x), f p (x)) dp X (x) Z Si `(f(x), f p (x)) dq(x) b 1 2 Z Si dq(x) b 4m . Summing over all i 2 K p , and letting k = |K p |, we obtain (still for the same p) L(f(x), f p (x)) k b 4m (assuming L is defined by equation (1))4. There are 2m ` k possible K with cardinality k, for any k = 0, . . . , 2m `. Therefore, X p2PD L(f(x), f p (x)) 2m `X k=0 ✓ 2m ` k ◆ k b 4m = 22m `(2m `) 2 b 4m 22m ` b 8 (using 2m ` 2m m = m)5. Since D was an arbitrary subset of L x̄ , this same lower bound holds for each of the 2` families P D and so I = 1 22m X p2P2m L(f(x), f p (x)) b 8 . In the constructions of the next Section it is often the case that one can prove a different level of shattering for different n, namely (n)-uniform shattering of n subsets for various n. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary f p without noise. Corollary 5. Let C 2 (0, 1) and M 2 N. If P (n)-uniformly ↵-shatters n subsets of X and (n)n+1/8 > C for all n < M then no learning algorithm can achieve worst case expected error below ↵C, using a training sample of size less than M/2. If such uniform shattering holds for all n 2 N then the same lower bound applies regardless of sample size. Even when (n)-uniform shattering holds for all n 2 N and lim n!1 (n) = 1, if (n) approaches 1 sufficiently slowly then it is possible (n)n+1 ! 0 and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where lim n!1 (n)n+1 e > 0. In that case, learning is impossible. 4 Applications and conclusion Manifold learning We now describe a simpler, finite dimensional version of the example in Niyogi (2013). Let X = RD, D 2 and Y = {0, 1}. Fix N 2 N and consider a very simple type of 1-dimensional manifold in X , namely the union of N linear segments, connected in circular fashion (see Figure 1). Let P X be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of P X and curves just described. 3Warning: f need not be an element of {fp : p 2 P2n}; we only know f 2 H = {fp : p 2 P}. 4In the L2 version, using p x x, the reader can verify the same lower bound holds. 5In the case where we use (1 + ✏)m instead of 2m, we would have (1 + ✏)m ` ✏m here. On each curve M, choose two distinct points x0, x00. Removing these disconnects M. 
Let one component be labeled 0 and the other 1, then label x0 and x00 oppositely. Let P be the class of joint distributions on X ⇥ Y with conditionals as described and marginals in P X . This is a noise-free setting and f p is binary. Given M (or circular coordinates on M), consider the reduced class P 0 := {p 2 P : support(p X ) = M}. Then H0 := {f p : p 2 P 0} has VC dimension 3. On the other hand, for n < N/4 1 it can be shown that P (n)-uniformly shatters n sets with f p , where (n) = 1 1 n+1 (see Appendix and Figure 2). Since (1 1 n+1 )n+1 ! e > 0 as n!1, it follows from Corollary 5 that the worst case expected error is bounded below by e/8 for any sample of size n N/8 1/2. If many linear pieces are allowed (i.e. N is high) this could be an impractical number of labeled examples. By contrast with this example, (n) in Niyogi’s example cannot be made arbitrarily close to 1. Group-invariant features We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let Y = {0, 1} and let X = J ⇥ I where J = {0, 1, . . . , n 1 1} ⇥ {0, 1, . . . , n 2 1} is an n 1 by n 2 grid (n i 2 N) and I = [0, 1] is a real line segment. One should picture X as a rectangular array of vertical sticks. Above each grid point (j 1 , j 2 ) consider two special points on the stick I , one with i = i + := 1 ✏ and the other with i = i := 0 + ✏. Let P X contain only the uniform distribution on X and assume the noise-free setting. For each ē 2 {+, }n1n2 , on each segment (j 1 , j 2 ) ⇥ I assign, via p ē , the label 1 above the special point (determined by ē) and 0 below the point. This determines a family of n 1 n 2 conditional distributions and thus a family P := {p ē : ē 2 {+, }n1n2} of n 1 n 2 joint distributions. The reader can verify that P has 2✏-uniform shattering dimension n 1 n 2 . Note that when the true distribution is p ē for some ē 2 {+, }n1n2 the labels will be invariant under the action a ē of Z n1 ⇥ Zn2 defined as follows. Given (z 1 , z 2 ) 2 Z n1 ⇥Zn2 and (j1, j2) 2 J , let the group element (z1, z2) move the vertical stick at (j 1 , j 2 ) to the one at (z 1 + j 1 mod n 1 , z 2 + j 2 mod n 2 ) without flipping the stick over, just stretching it as needed so the special point i± determined by ē on the first stick goes to the one on the second stick. The orbit space of the action can be identified with I . Let t : X ⇥ Y ! I be the projection of X ⇥ Y to this orbit space, then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to t, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection s for the action of the subgroup Z n1 ⇥ {0}, the class eP := {p(·|s) : p 2 P} has 2✏-uniform shattering dimension n2. Thus we have a general setting where the over-all complexity requirements for two-step learning are n 1 + n 2 while for single-step learning they are n 1 n 2 . Conclusion We used a notion of uniform shattering to demonstrate both manifold learning and invariant feature learning situations where learning becomes impossible unless the learner has access to very large amounts of labeled data or else uses a two-step semi-supervised approach in which suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming intermediate group-invariant features according to sub-groups of a larger transformation group. 
Acknowledgements The author is deeply grateful to Partha Niyogi for the chance to have been his student. This paper is directly inspired by discussions with him which were cut short much too soon. The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.
1. What is the focus of the paper regarding multi-step learning?
2. What is the main contribution of the paper, particularly in Theorem 3?
3. What are some questions raised by the reviewer regarding the application of uniform shattering in manifold learning and invariant feature learning?
4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review This paper focuses on the theoretical analysis of multi-step learning by defining the learning problem generatively as a full statistical model, which results in a natural compatibility function that agrees with the work of Balcan-Blum. In particular, the authors start by defining the \gamma-uniform \alpha-shattering dimension of a class of probability distributions on X × {0,1}, and focus on the cases where the marginal p_X is uniform over S. The main contribution is Theorem 3, which provides both an upper bound on the worst case expected loss of the best learning algorithm with oracle knowledge of the sufficient statistic, and a lower bound on the worst case expected loss of the best learning algorithm without such knowledge. Finally, the authors apply the notion of uniform shattering to both manifold learning and invariant feature learning situations, and draw the conclusion that learning becomes impossible unless the learner has access to very large amounts of labeled data or uses a two-step semi-supervised approach. The paper is well written in general, with a clear contribution regarding the theoretical benefit of leveraging unlabeled data via the sufficient statistic t_{pX}. A few questions:
1. Theorem 3 essentially applies to cases where the VC dimension is finite. What would the result look like if this is not the case?
2. On page 2, lines 52-54, the authors state that ‘One immediate observation is that without some tie between the possible marginals D on X and the concept class C which records possible conditionals p(y|x), there is no benefit to unlabeled data’. How is this statement supported by Theorem 3?
3. Q(m) and R(m) are derived for the cases with and without oracle knowledge of t_{pX}. If such knowledge is derived from unlabeled data, what would the worst case expected loss be?
NIPS
Title The Image Local Autoregressive Transformer Abstract Recently, AutoRegressive (AR) models for the whole image generation empowered by transformers have achieved comparable or even better performance compared to Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model – image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model. 1 Introduction Generating realistic images has been attracting ubiquitous research attention of the community for a long time. In particular, those image synthesis tasks involving persons or portrait [6, 28, 29] can be applied in a wide variety of scenarios, such as advertising, games, and motion capture, etc. Most real-world image synthesis tasks only involve the local generation, which means generating pixels in certain regions, while maintaining the semantic consistency, e.g., face editing [19, 1, 40], pose guiding [36, 55, 47], and image inpainting [51, 30, 49, 53]. Unfortunately, most works can only handle the well aligned images of ‘icon-view’ foregrounds, rather than the image synthesis of ‘non-iconic view’ foregrounds [47, 24], i.e., person instances with arbitrary poses in cluttered scenes, which is concerned in this paper. Even worse, the global semantics tend to be distorted during the generation of previous methods, even if subtle modifications are applied to a local image region. Critically, given the local editing/guidance such as sketches of faces, or skeleton of bodies in the first column of Fig. 1(A), it is imperative to design our new algorithm for locally guided image synthesis. Generally, several inherent problems exist in previous works for such a task. For example, despite impressive quality of images are generated, GANs/Autoencoder(AE)-based methods [51, 47, 19, 30, 18] are inclined to synthesize blurry local regions, as in Fig. 1(A)-row(c). Furthermore, some inspiring autoregressive (AR) methods, such as PixelCNN [32, 41, 23] and recent transformers [8, 14], should efficiently model the joint image distribution (even in very complex background [32]) for whole image generation as Fig. 1(B)-row(b). These AR models, however, are still not ready for locally guided image synthesis, as several reasons. (1) Missing global information. As in Fig. 1(B)-row(b), vanilla AR models take the top-to-down and left-to-right sequential generation with limited receptive fields for the initial generating (top left corner), which are incapable of directly modeling global information. Additionally, the sequential AR models suffer from exposure bias [2], which may ∗ Corresponding author. Dr. Fu is also with Fudan ISTBI—ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 
predict future pixels conditioned on the past ones with mistakes, due to the discrepancy between training and testing in AR. This makes small local guidance unpredictable changes to the whole image, resulting in inconsistent semantics as in Fig. 1(A)-row(a). (2) Slow inference speed. The AR models have to sequentially predict the pixels in the testing with notoriously slow inference speed, especially for high-resolution image generation. Although the parallel training techniques are used in pixelCNN [32] and transformer [14], the conditional probability sampling fails to work in parallel during the inference phase. (3) Information leakage of local guidance. As shown in Fig. 1(B)-row(c), the local guidance should be implemented with specific masks to ensure the validity of the local AR learning. During the sequential training process, pixels from masked regions may be exposed to AR models by convolutions with large kernel sizes or inappropriate attention masks in the transformer. We call it information leakage [44, 16] of local guidance, which makes models overfit the masked regions, and miss detailed local guidance as in Fig. 1(A)-row(b). To this end, we propose a novel image Local Autoregressive Transformer (iLAT) for the task of locally guided image synthesis. Our key idea lies in learning the local discrete representations effectively. Particularly, we tailor the receptive fields of AR models to local guidance, achieving semantically consistent and visually realistic generation results. Furthermore, a local autoregressive (LA) transformer with the novel LA attention mask and convolution mechanism is proposed to enable successful local generation of images with efficient inference time, without information leakage. Formally, we propose the iLAT model with several novel components. (1) We complementarily incorporate receptive fields of both AR and AE to fit LA generation with a novel attention mask as shown in Fig. 1(B)-row(c). In detail, local discrete representation is proposed to represent those masked regions, while the unmasked areas are encoded with continuous image features. Thus, we achieve favorable results with both consistent global semantics and realistic local generations. (2) Our iLAT dramatically reduces the inference time for local generation, since only masked regions will be generated autoregressively. (3) A simple but effective two-stream convolution and a local causal attention mask mechanism are proposed for discrete image encoder and transformer respectively, with which information leakage is prevented without detriment to the performance. We make several contributions in this work. (1) A novel local discrete representation learning is proposed to efficiently help to learn our iLAT for the local generation. (2) We propose an image local autoregressive transformer for local image synthesis, which enjoys both semantically consistent and realistic generative results. (3) Our iLAT only generates necessary regions autoregressively, which is much faster than vanilla AR methods during the inference. (4) We propose a two-stream convolution and a LA attention mask to prevent both convolutions and transformer from information leakage, thus improving the quality of generated images. Empirically, we introduce several locally guidance tasks, including pose-guided image generation and face editing tasks; and extensive experiments are conducted on the corresponding dataset to validate the efficacy of our model. 2 Related Work Conditional Image Synthesis. 
Some conditional generation models are designed to globally generate images with pre-defined styles based on user-provided references, such as poses and face sketches. These previous synthesis efforts are made on Variational auto-encoder (VAE) [10, 13], AR Model [48, 39], and AE with adversarial training [51, 49, 53]. Critically, it is very non-trivial for all these methods to generate images of locally guided image synthesis of the non-iconic foreground. Some tentative attempts have been conducted in pose-guided synthesis [42, 47] with person existing in non-ironic views. On the other hand, face editing methods are mostly based on adversarial AE based inpainting [30, 51, 19] and GAN inversion based methods [53, 40, 1]. Rather than synthesizing the whole image, our iLAT generates the local regions of images autoregressively, which not only improves the stability with the well-optimized joint conditional distribution for large masked regions but also maintains the global semantic information. Autoregressive Generation. The deep AR models have achieved great success recently in the community [35, 15, 32, 9, 38]. Some well known works include PixelRNN [43], Conditional PixelCNN [32], Gated PixelCNN [32], and WaveNet [31]. Recently, transformer based AR models [38, 5, 12] have achieved excellent results in many machine learning tasks. Unfortunately, the common troubles of these AR models are the expensive inference time and potential exposure bias, as AR models sequentially predict future values from the given past values. The inconsistent receptive fields for training and testing will lead to accumulated errors and unreasonable generated results [2]. Our iLAT is thus designed to address these limitations. Visual Transformer. The transformer takes advantage of the self-attention module [44], and shows impressive expressive power in many Natural Language Processing (NLP) [38, 11, 5] and vision tasks [12, 7, 25]. With costly time and space complexity of O(n2) in transformers, Parmar et al. [34] utilize local self-attention to achieve AR generated results. Chen et al. [8] and Kumar et al. [22] autoregressively generate pixels with simplified discrete color palettes to save computations. But limited by the computing power, they still generate low-resolution images. To address this, some works have exploited the discrete representation learning, e.g. dVAE [39] and VQGAN [14]. It not only reduces the sequence length of image tokens but also shares perceptually rich latent features in image synthesis as the word embedding in NLP. However, recovering images from the discrete codebook still causes blur and artifacts in complex scenes. Besides, vanilla convolutions of the discrete encoder may leak information among different discrete codebook tokens. Moreover, local conditional generation based on VQGAN [14] suffers from poor semantical consistency compared with other unchanged image regions. To end these, the iLAT proposes the novel discrete representation to improve the model capability of local image synthesis. 3 Approach Given the conditional image Ic and target image It, our image Local Autoregressive Transformer (iLAT) aims at producing the output image Io of semantically consistent and visually realistic. The key foreground objects (e.g., the skeleton of body), or the regions of interest (e.g., sketches of facial regions) extracted from Ic, are applied to guide the synthesis of output image Io. 
Essentially, the background and the other non-key foreground image regions of Io should be visually similar to It. As shown in Fig. 2, our iLAT includes the branches of Two-Stream convolutions based Vector Quantized GAN (TS-VQGAN) for the discrete representation learning and a transformer for the AR generation with Local Autoregressive (LA) attention mask. Particularly, our iLAT firstly encodes Ic and It, into codebook vectors zq,c and zq,t by TS-VQGAN (Sec. 3.1) without local information leakage. Then the index of the masked vectors ẑq,t will be predicted by the transformer autoregressively with LA attention mask (Sec. 3.2). During the test phase, the decoder of TS-VQGAN takes the combination of ẑq,t in masked regions and ẑ in unmasked regions to achieve the final result. 3.1 Local Discrete Representation Learning We propose a novel local discrete representation learning in this section. Since it is inefficient to learn the generative transformer model through pixels directly, inspired by VQGAN [14], we incorporate the VQVAE mechanism [33] into the proposed iLAT for the discrete representation learning. The VQVAE is consisted of an encoder E, a decoder D, and a learnable discrete codebook Z = {zk}Kk=1, where K means the total number of codebook vectors. Given an image I ∈ RH×W×3, E encodes the image into latent features ẑ = E(I) ∈ Rh×w×ce , where ce indicates the channel of the encoder outputs. Then, the spatially unfolded ẑh′w′ ∈ Rce , (h′ ∈ h,w′ ∈ w) are replaced with the closest codebook vectors as z (q) h′w′ = arg minzk∈Z ||ẑh′w′ − zk|| ∈ Rcq , zq = fold(z (q) h′w′ , h ′ ∈ h,w′ ∈ w) ∈ Rh×w×cq , (1) where cq indicates the codebook channels. However, VQVAE will suffer from obscure information leakage for the local image synthesis, if the receptive field (kernel size) of vanilla convolution is larger than 1× 1 as shown in Fig. 3(a). Intuitively, each 3× 3 convolution layer spreads the masked features to the outside border of the mask. Furthermore, multi-convolutional based E accumulates the information leakage, which makes the model learn the local generating with unreasonable confidence, leading to model overfitting (see Sec. 4.3). To this end, we present two-stream convolutions in TS-VQGAN as shown in Fig. 3(a). Since the masked information is only leaked to a circle around the mask with each 3 × 3 convolution layer, we can just replace the corrupt features for each layer with masked ones. Thus, the influence of information leakage will be eliminated without hurting the integrity of both masked and unmasked features. Specifically, for the given image mask M ∈ RH×W that 1 means masked regions, and 0 means unmasked regions, it should be resized into M′ with max-pooling to fit the feature size. The two-stream convolution converts the input feature F into the masked feature Fm = F (1−M′), where is element-wise multiplication. Then, both F and Fm are convoluted with shared weights and combined according to the leaked regions Ml, which can be obtained from the difference of convoluted mask as Ml = clip(conv1(M ′), 0, 1)−M′, Ml[Ml > 0] = 1, (2) where conv1 implemented with an all-one 3 × 3 kernel. Therefore, the output of two-stream convolution can be written as Fc = conv(F) (1−Ml) + conv(Fm) Ml. (3) So the leaked regions are replaced with features that only depend on unmasked regions. Besides, masked features can be further leveraged for AR learning without any limitations. Compared with VQVAE, we replace all vanilla convolutions with two-stream convolutions in the encoder of TSVQGAN. 
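A minimal PyTorch sketch of the two-stream convolution described by Eqs. (2)-(3) is given below; it reflects our reading of the mechanism, with module names, shapes, and the all-ones leak-detection kernel chosen for illustration rather than taken from any released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamConv2d(nn.Module):
    """3x3 convolution applied to both the full and the masked feature map,
    recombined so that pixels whose receptive field touches the mask only see
    the masked stream (Eqs. (2)-(3)); weights are shared between the two streams."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # all-ones kernel used only to detect where mask information would leak
        self.register_buffer("ones", torch.ones(1, 1, 3, 3))

    def forward(self, feat, mask):
        # feat: (B, C, H, W); mask: (B, 1, H, W) with 1 marking masked regions
        feat_masked = feat * (1.0 - mask)                        # F_m = F * (1 - M')
        leaked = torch.clamp(F.conv2d(mask, self.ones, padding=1), 0, 1) - mask
        leaked = (leaked > 0).float()                            # M_l of Eq. (2)
        out_full = self.conv(feat)
        out_masked = self.conv(feat_masked)                      # shared weights
        return out_full * (1.0 - leaked) + out_masked * leaked   # Eq. (3)

# Quick shape check with illustrative sizes.
x = torch.randn(2, 64, 32, 32)
m = (torch.rand(2, 1, 32, 32) > 0.7).float()
y = TwoStreamConv2d(64, 64)(x, m)
print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Stacking such layers in place of vanilla convolutions keeps every feature outside the leaked ring independent of the masked content, which is the property needed for leakage-free local AR learning.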
Note that the decoder D is unnecessary to prevent information leakage at all. Since the decoding process is implemented after the AR generating of the transformer as shown in Fig. 2. For the VQVAE, the decoder D decodes the codebook vectors zq got from Eq.(1), and reconstructs the output image as Io = D(zq). Although VQGAN [14] can generate more reliable textures with adversarial training, handling complex real-world backgrounds and precise face details is still tough to the existed discrete learning methods. In TS-VQGAN, we further finetune the model with local quantized learning, which can be written as Io = D(zq Mq + ẑ (1−Mq)), (4) where Mq ∈ Rh×w is the resized mask for quantized vectors, and ẑ is the output of the encoder. In Eq.(4), unmasked features are directly encoded from the encoder, while masked features are replaced with the codebook vectors, which works between AE and VQVAE. This simple trick effectively maintains the fidelity of the unmasked regions and reduces the number of quantized vectors that have to be generated autoregressively, which also leads to a more efficient local AR inference. Note that the back-propagation of Eq.( 4) is implemented with the straight-through gradient estimator [4]. 3.2 Local Autoregressive Transformer Learning From the discrete representation learning in Sec. 3.1, we can get the discrete codebook vectors zq,c, zq,t ∈ Rh×w×cq for conditional images and target images respectively. Then the conditional and target image tokens {ci, tj}hwi,j=1 ∈ {0, 1, ...,K − 1} can be converted from the index-based representation of zq,c, zq,t in the codebook with length hw, where K indicates the all number of codebook vectors. For the resized target mask Mq ∈ Rh×w, the second stage needs to learn the AR likelihood for the masked target tokens {tm} where Mq,m = 1 with conditional tokens {ci}hwi=1 and other unmasked target tokens {tu} where Mq,u = 0 as p(tm|c, tu) = ∏ j p(t(m,j)|c, tu, t(m,<j)). (5) Benefits from Eq. (4), iLAT only needs to generate masked target tokens {tm} rather than all. Then, the negative log likelihood (NLL) loss can be optimized as LNLL = −Etm∼p(tm|c,tu) log p(tm|c, tu). (6) We use a decoder-only transformer to handle the AR likelihood. As shown in Fig. 2, two special tokens c[s], t[s] are concatenated to {ci} and {tj} as start tokens respectively. Then, the trainable position embedding [11] is added to the token embedding to introduce the position information to the self-attention modules. According to the self-attention mechanism in the transformer, the attention mask is the key factor to achieve parallel training without information leakage. As shown in Fig. 3(b), we propose a novel LA attention mask M̂LA with four sub-masks, which indicate receptive fields of condition to condition (C2C), condition to target (C2T), target to condition (T2C), and target to target (T2T) respectively. All conditional tokens {ci}hwi=1 can be attended by themselves and targets. So C2C and T2C should be all-one matrices. We think that the conditional tokens are unnecessary to attend targets in advance, so C2T is set as the all-zero matrix. Therefore, the LA attention mask M̂LA can be written as M̂LA = [ 1, 0 1, MLA ] , (7) where MLA indicates the T2T LA mask. To leverage the AR generation and maintain the global information simultaneously, the target tokens are divided into two groups called the global group and the causal group. 
Furthermore, the causal group includes masked targets {tm}(Mq,m = 1) and {tm−1} that need to predict them, because the labels need to be shifted to the right with one position for the AR learning. Besides, other tokens are classified into the global group. Then, the global attention sub-mask Mgs can be attended to all tokens to share global information and handle the semantic consistency. On the other hand, the causal attention sub-mask Mcs constitutes the local AR generation. Note that Mgs can not attend any masked tokens to avoid information leakage. The T2T LA mask can be got with MLA = Mgs +Mcs2. A more intuitive example is shown in Fig. 3(b). Therefore, for the given feature h, the self-attention in iLAT can be written as SelfAttention(h) = softmax( QKT√ d − (1− M̂LA) · ∞)V, (8) where Q,K, V are h encoded with different weights of d channels. We make all masked elements to −∞ before the softmax. During the inference, all generated target tokens {t̂m} are converted back to codebook vectors ẑq,t. Then, they are further combined with encoded unmasked features ẑ, and decoded with Eq.(4) as shown in Fig. 2. To highlight the difference of our proposed mask MLA, other common attention masks are shown in Fig. 1. The Vanilla AR mask MAR is widely used in the AR transformer [14, 8], but they fail to maintain the semantic consistency and cause unexpected identities for the face synthesis. AE mask MAE is utilized in some attention based image inpainting tasks [50, 49]. Although MAE enjoys good receptive fields, the masked regions are completely corrupted in the AE, which is much more unstable to restructure a large hole. Our method is an in-between strategy with both their superiorities mentioned above. 3.3 Implement Details for Different Tasks Non-Iconic Posed-Guiding. The proposed TS-VQGAN is also learned with adversarial training. For the complex non-iconic pose guiding, we finetune the pretrained open-source ImageNet based VQGAN weights with the two-stream convolution strategy. To avoid adding too many sequences with excessive computations, we use the coordinates of 13 target pose landmarks as the supplemental condition to the iLAT. They are encoded with 3 fully connected layers with ReLU. As shown in Fig. 2, both the condition and the target are images, which have different poses in the training phase, and the same pose in the inference phase. Besides, we use the union of conditional and target masks got by dilating the poses with different kernel sizes according to the scenes3 to the target image. Face Editing. In face editing, we find that the adaptive GAN learning weight λ makes the results unstable, so it is replaced with a fixed λ = 0.1. Besides, the TS-VQGAN is simplified compared to the one used in the pose guiding. Specifically, all attention layers among the encoder and decoder are removed. Then, all Group Normalizations are replaced with Instance Normalization to save memory without a large performance drop. The conditions are composed of the sketch images extracted with the XDoG [46], while the targets are face images. The training masks for the face editing are COCO masks [24] and simulated irregular masks [51], while the test ones are drawn manually. 4 Experiments In this section, we present experimental results on pose-guided generation of Penn Action (PA) [52] and Synthetic DeepFashion (SDF) [26], face editing of CelebA [27] and FFHQ [20] compared with other competitors and variants of iLAT. 2More about the expansion of LA attention mask are discussed in the supplementary. 
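Returning to the LA attention mask of Section 3.2, the sketch below assembles M̂_LA from the four sub-masks (C2C, C2T, T2C, T2T) for a toy token layout and applies it inside self-attention as in Eq. (8). The split of target tokens into global and causal groups follows our simplified reading of the text, so the exact index bookkeeping may differ from the authors' implementation.

```python
import torch

def build_la_mask(num_cond, target_masked):
    """Build the LA attention mask M_hat of Eq. (7) for one toy sequence.
    target_masked: bool tensor of length Lt, True where the target token must
    be generated autoregressively. Bookkeeping here is a simplified reading."""
    Lt = target_masked.numel()
    C2C = torch.ones(num_cond, num_cond)
    C2T = torch.zeros(num_cond, Lt)           # conditions never attend targets
    T2C = torch.ones(Lt, num_cond)            # targets always attend conditions
    rows = torch.arange(Lt).unsqueeze(1)
    cols = torch.arange(Lt).unsqueeze(0)
    # Causal-group rows: masked tokens and the tokens immediately before them
    # (which must predict them after the one-position label shift).
    causal_rows = target_masked.clone()
    causal_rows[:-1] |= target_masked[1:]
    # T2T: unmasked (global) tokens are visible to everyone; masked tokens are
    # visible only causally, and only to causal-group rows, so nothing leaks.
    T2T = (~target_masked).unsqueeze(0) | (
        target_masked.unsqueeze(0) & causal_rows.unsqueeze(1) & (cols <= rows)
    )
    top = torch.cat([C2C, C2T], dim=1)
    bottom = torch.cat([T2C, T2T.float()], dim=1)
    return torch.cat([top, bottom], dim=0)    # (Lc+Lt, Lc+Lt), 1 = may attend

def masked_self_attention(h, mask):
    """Single-head self-attention with the mask applied as in Eq. (8);
    identity Q/K/V projections are used only to keep the sketch short."""
    d = h.size(-1)
    scores = h @ h.transpose(-2, -1) / d ** 0.5
    scores = scores - (1.0 - mask) * 1e9      # effectively -inf where masked out
    return torch.softmax(scores, dim=-1) @ h

Lc = 4
target_masked = torch.tensor([False, False, True, True, False, True])
M_hat = build_la_mask(Lc, target_masked)
out = masked_self_attention(torch.randn(Lc + target_masked.numel(), 32), M_hat)
print(M_hat.int())
print(out.shape)  # torch.Size([10, 32])
```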
3Details about the mask generation are illustrated in the supplementary. Datasets. For the pose guiding, PA dataset [52], which contains 2,326 video sequences of 15 action classes in non-iconic views is used in this section. Each frame from the video is annotated with 13 body landmarks consisted of 2D locations and visibility. The resolution of PA is resized into 256× 256 during the preprocessing. We randomly gather pairs of the same video sequence in the training phase dynamically and select 1,000 testing pairs in the remaining videos. Besides, the SDF is synthesized with DeepFashion [26] images as foregrounds and Places2 [54] images as backgrounds. Since only a few images of DeepFashion have related exact segmentation masks, we select 4,500/285 pairs from it for training and testing respectively. Each pair of them contains two images of the same person with different poses and randomly chosen backgrounds. The face editing dataset consists of Flickr-Faces-HQ dataset (FFHQ) [20] and CelebA-HQ [27]. FFHQ is a high-quality image dataset with 70,000 human faces. We resize them from 1024× 1024 into 256× 256 and use 68,000 of them for the training. The CelebA is only used for testing in this section for the diversity. Since face editing has no paired ground truth, we randomly select 68 images from the rest of FFHQ and all CelebA, and draw related sketches for them. Implementation Details. Our method is implemented in PyTorch in 256 × 256 image size. For the TS-VQGAN training, we use the Adam optimizer [21] with β1= 0.5 and β2 = 0.9. For the pose guiding, the TS-VQGAN is finetuned from the ImageNet pretrained VQGAN [14], while it is completely retrained for FFHQ. TS-VQGAN is trained with 150k steps without masks at first, and then it is trained with another 150k steps with masks in batch size 16. The initial learning rates of pose guiding and face editing are 8e-5 and 2e-4 respectively, which are decayed by 0.5 for every 50k steps. For the transformer training, we use Adam with β1 = 0.9 and β2 = 0.95 with initial learning rate 5e-5 and 0.01 weight decay. Besides, we warmup the learning rate with the first 10k steps, then it is linearly decayed to 0 for 300k iterations with batch size 16. During the inference, we simply use top-1 sampling for our iLAT. Competitors. The model proposed in [14] is abbreviated as Taming transformer (Taming) in this section. For fair comparisons, VQGAN used in Taming is finetuned for pose guiding, and retrained for face editing with the same steps as TS-VQGAN. For the pose guiding, we compare the proposed iLAT with other state-of-the-art methods retrained in the PA dataset, which include PATN [56], PN-GAN [37], PoseWarp [3], MR-Net [47] and Taming [14]. As the image size of PoseWarp and MR-Net is 128× 128, we resized the outputs for the comparison. For the face editing, we compare the iLAT with inpainting based SC-FEGAN [19] and Taming [14]. We also test the Taming results in our LA attention mask as Taming* (without retraining). 4.1 Quantitative Results Pose-Guided Comparison. Quantitative results in PA and SDF datasets of our proposed iLAT and other competitors are presented in Tab. 1. Peak signal-to-noise ratio (PSNR), Structural Similarity (SSIM) [45], Mean Average Error (MAE) and Fréchet Inception Distance (FID) [17] are employed to measure the quality of results. We also add the results of iLAT*, which is implemented without the two-stream convolutions. The results in Tab. 
1 clearly show that our proposed method outperforms other methods in all metrics, especially for the FID, which accords with the human perception. The good scores of iLAT indicate that the proposed iLAT can generate more convincing and photo-realistic images on locally guided image synthesis of the non-iconic foreground. For the more challenging SDF dataset, iLAT still achieves better results compared with Taming. Inference Time. We also record the average inference times in PA, SDF, and FFHQ as showed in Tab. 2. Except for the better quality of generated images over Taming as discussed above, our iLAT costs less time for the local synthesis task according to the masking rate of the inputs. Low masking rates can achieve dramatic speedup, e.g., face editing. 4.2 Qualitative Results Non-Iconic Pose Guiding. Fig. 4(A) shows qualitative results in the non-iconic pose-guided image synthesis task. Compared to other competitors, it is apparent that our method can generate more reasonable target images both in human bodies and backgrounds, while images generated by other methods suffer from either wrong poses or deformed backgrounds. Particularly, PATN collapses in most cases. PN-GAN and PoseWarp only copy the reference images as the target ones, which fails to be guided by the given poses due to the challenging PA dataset. Moreover, MR-Net and Taming* can indeed generate poses that are similar to the target ones, but the background details of reference images are not transferred properly. Especially for the results in column (g), Taming fails to synthesize complicated backgrounds, such as noisy audiences in rows 2 and 3 and the gym with various fitness equipment in row 4. Compared to others, our proposed iLAT can capture the structure of human bodies given the target poses as well as retaining the vivid backgrounds, which demonstrate the efficacy of our model in synthesizing high-quality images in the non-iconic pose guiding. Besides, for the pose guiding with synthetic backgrounds of SDF, iLAT can still get more reasonable and stable backgrounds and foregrounds compared with Taming as in Fig. 5(C). Face Editing. Since there are no ground truth face editing targets, we only compared the qualitative results as shown in Fig. 4(B) of FFHQ and CelebA. Note that the Taming results in column (c) fail to preserve the identity information in both FFHQ and CelebA compared with the reference. For example, in rows 1, 2 and 3, the skin tones of Taming results are different from the original ones. And in row 4, Taming generates absolutely another person with contrasting ages, which indicates that vanilla AR is unsuited to the local face editing. When Taming is tested with our LA attention mask, column (d) shows that Taming* can retain the identities of persons. However, rows 1 and 2 demonstrate that Taming* fails to properly generate the target faces according to guided sketches, while in rows 3 and 4 some generations have obvious artifacts without consistency. Besides, inpainting-based SC-FEGAN achieves unstable results in rows 3 and 4. SC-FEGAN also strongly depends on the quality of input sketches, while unprofessional sketches lead to unnatural results as shown in row 1. Besides, detailed textures of AE-based SC-FEGAN are unsatisfactory too. Compared with these methods, our iLAT can always generate correct and vivid human faces with identities retained. Furthermore, benefits from the discrete representation, iLAT enjoys robustness to the guided information. 4.3 Further Discussions Ablation Study. 
The effectiveness of our proposed two-stream convolution is discussed in the ablation study. As we can find in Fig. 5, the woman face in row 1, (c) generated by iLAT* has residual face parts that conflict with the guided sketch. Moreover, in row 2, iLAT* without two-stream convolutions leaks information from sunglasses that lacks correct semantic features and leads to the inconsistent color of the generated face. For the pose-guided instance shown in the second row, it is apparent that the man generated in column (c) has blurry leg positions. However, in column (d) the complete iLAT can always generate authentic and accurate images, validating the efficacy of our designed two-stream convolutions. Sequential Generation. Our iLAT can also be extended to guide the video generation properly. We give a qualitative example in this section. As shown in Fig. 6, given a sequence of poses and one reference image, iLAT can forecast a plausible motion of the person. And the results are robust for most kinds of activities in non-ironic views. 5 Conclusion This paper proposes a transformer based AR model called iLAT to solve local image generation tasks. This method leverages a novel LA attention mask to enlarge the receptive fields of AR, which achieves not only semantically consistent but also realistic generative results and accelerates the inference speed of AR. Besides, a TS-VQGAN is proposed to learn a discrete representation learning without information leakages. Such a model can get superior performance in detail editing. Extensive experiments validate the efficacy of our iLAT for local image generation. Social Impacts This paper exploited the image editing with transformers. Since face editing may causes some privacy issues, we sincerely remind users to pay attention for it. Our method only focuses on technical aspects. The images used in this paper are all open sourced. Acknowledgements This work was supported in part by NSFC Project (62076067, 62176061), Science and Technology Commission of Shanghai Municipality Projects (19511120700, 2021SHZDZX0103).
1. What is the focus of the paper regarding image editing?
2. What are the strengths of the proposed approach, particularly in addressing the issues of missing global information and information leakage of local guidance?
3. How does the reviewer assess the contribution of the paper compared to prior works, such as the Taming paper?
4. What are the minor questions raised by the reviewer regarding the experimental results?
5. Overall, how does the reviewer evaluate the quality and impact of the paper?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes to use the self-attention mechanism (in particular, the Transformer) to enhance image editing. Experiments show promising performance.
Review
This paper proposes to use the self-attention mechanism (in particular, the Transformer) to enhance image editing. Experiments show promising performance. This paper falls outside my research field, so I will try my best to offer some comments. First, the idea of using the Transformer, a recently popular module, to address the existing problems (e.g., missing global information) is fine. However, this idea seems to have been proposed by the Taming paper (which is cited frequently in this work). Based on this, I am a bit confused about the contribution of this work. In particular, regarding the issues mentioned in the abstract (missing global information, and information leakage of local guidance), how are they solved by the Transformer design, and can both of them be solved by the Transformer design? In particular, is the information leakage issue alleviated by random masking? In summary, this paper should clearly state the difference between this paper and the Taming paper, and the additional contribution beyond the Taming paper. Regarding the network design, since I am not an expert, I can only say that everything looks fine, and I will wait for other reviewers' comments for further judgment. Regarding the experiments, the shown results seem good, but I have two minor questions about Fig. 4. (1) In the left part, why is the resolution of some methods significantly different from others? (2) In the right part, why does the proposed method change some non-edited content in the original image (e.g., in the 3rd row, the letter S at the right border of the image)? Overall, this is a well-prepared paper. I give a borderline score and will wait for additional information to make the final decision.
NIPS
Title The Image Local Autoregressive Transformer Abstract Recently, AutoRegressive (AR) models for the whole image generation empowered by transformers have achieved comparable or even better performance compared to Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model – image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model. 1 Introduction Generating realistic images has been attracting ubiquitous research attention of the community for a long time. In particular, those image synthesis tasks involving persons or portrait [6, 28, 29] can be applied in a wide variety of scenarios, such as advertising, games, and motion capture, etc. Most real-world image synthesis tasks only involve the local generation, which means generating pixels in certain regions, while maintaining the semantic consistency, e.g., face editing [19, 1, 40], pose guiding [36, 55, 47], and image inpainting [51, 30, 49, 53]. Unfortunately, most works can only handle the well aligned images of ‘icon-view’ foregrounds, rather than the image synthesis of ‘non-iconic view’ foregrounds [47, 24], i.e., person instances with arbitrary poses in cluttered scenes, which is concerned in this paper. Even worse, the global semantics tend to be distorted during the generation of previous methods, even if subtle modifications are applied to a local image region. Critically, given the local editing/guidance such as sketches of faces, or skeleton of bodies in the first column of Fig. 1(A), it is imperative to design our new algorithm for locally guided image synthesis. Generally, several inherent problems exist in previous works for such a task. For example, despite impressive quality of images are generated, GANs/Autoencoder(AE)-based methods [51, 47, 19, 30, 18] are inclined to synthesize blurry local regions, as in Fig. 1(A)-row(c). Furthermore, some inspiring autoregressive (AR) methods, such as PixelCNN [32, 41, 23] and recent transformers [8, 14], should efficiently model the joint image distribution (even in very complex background [32]) for whole image generation as Fig. 1(B)-row(b). These AR models, however, are still not ready for locally guided image synthesis, as several reasons. (1) Missing global information. As in Fig. 1(B)-row(b), vanilla AR models take the top-to-down and left-to-right sequential generation with limited receptive fields for the initial generating (top left corner), which are incapable of directly modeling global information. Additionally, the sequential AR models suffer from exposure bias [2], which may ∗ Corresponding author. Dr. Fu is also with Fudan ISTBI—ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 
predict future pixels conditioned on the past ones with mistakes, due to the discrepancy between training and testing in AR. This makes small local guidance unpredictable changes to the whole image, resulting in inconsistent semantics as in Fig. 1(A)-row(a). (2) Slow inference speed. The AR models have to sequentially predict the pixels in the testing with notoriously slow inference speed, especially for high-resolution image generation. Although the parallel training techniques are used in pixelCNN [32] and transformer [14], the conditional probability sampling fails to work in parallel during the inference phase. (3) Information leakage of local guidance. As shown in Fig. 1(B)-row(c), the local guidance should be implemented with specific masks to ensure the validity of the local AR learning. During the sequential training process, pixels from masked regions may be exposed to AR models by convolutions with large kernel sizes or inappropriate attention masks in the transformer. We call it information leakage [44, 16] of local guidance, which makes models overfit the masked regions, and miss detailed local guidance as in Fig. 1(A)-row(b). To this end, we propose a novel image Local Autoregressive Transformer (iLAT) for the task of locally guided image synthesis. Our key idea lies in learning the local discrete representations effectively. Particularly, we tailor the receptive fields of AR models to local guidance, achieving semantically consistent and visually realistic generation results. Furthermore, a local autoregressive (LA) transformer with the novel LA attention mask and convolution mechanism is proposed to enable successful local generation of images with efficient inference time, without information leakage. Formally, we propose the iLAT model with several novel components. (1) We complementarily incorporate receptive fields of both AR and AE to fit LA generation with a novel attention mask as shown in Fig. 1(B)-row(c). In detail, local discrete representation is proposed to represent those masked regions, while the unmasked areas are encoded with continuous image features. Thus, we achieve favorable results with both consistent global semantics and realistic local generations. (2) Our iLAT dramatically reduces the inference time for local generation, since only masked regions will be generated autoregressively. (3) A simple but effective two-stream convolution and a local causal attention mask mechanism are proposed for discrete image encoder and transformer respectively, with which information leakage is prevented without detriment to the performance. We make several contributions in this work. (1) A novel local discrete representation learning is proposed to efficiently help to learn our iLAT for the local generation. (2) We propose an image local autoregressive transformer for local image synthesis, which enjoys both semantically consistent and realistic generative results. (3) Our iLAT only generates necessary regions autoregressively, which is much faster than vanilla AR methods during the inference. (4) We propose a two-stream convolution and a LA attention mask to prevent both convolutions and transformer from information leakage, thus improving the quality of generated images. Empirically, we introduce several locally guidance tasks, including pose-guided image generation and face editing tasks; and extensive experiments are conducted on the corresponding dataset to validate the efficacy of our model. 2 Related Work Conditional Image Synthesis. 
Some conditional generation models are designed to globally generate images with pre-defined styles based on user-provided references, such as poses and face sketches. These previous synthesis efforts are made on Variational auto-encoder (VAE) [10, 13], AR Model [48, 39], and AE with adversarial training [51, 49, 53]. Critically, it is very non-trivial for all these methods to generate images of locally guided image synthesis of the non-iconic foreground. Some tentative attempts have been conducted in pose-guided synthesis [42, 47] with person existing in non-ironic views. On the other hand, face editing methods are mostly based on adversarial AE based inpainting [30, 51, 19] and GAN inversion based methods [53, 40, 1]. Rather than synthesizing the whole image, our iLAT generates the local regions of images autoregressively, which not only improves the stability with the well-optimized joint conditional distribution for large masked regions but also maintains the global semantic information. Autoregressive Generation. The deep AR models have achieved great success recently in the community [35, 15, 32, 9, 38]. Some well known works include PixelRNN [43], Conditional PixelCNN [32], Gated PixelCNN [32], and WaveNet [31]. Recently, transformer based AR models [38, 5, 12] have achieved excellent results in many machine learning tasks. Unfortunately, the common troubles of these AR models are the expensive inference time and potential exposure bias, as AR models sequentially predict future values from the given past values. The inconsistent receptive fields for training and testing will lead to accumulated errors and unreasonable generated results [2]. Our iLAT is thus designed to address these limitations. Visual Transformer. The transformer takes advantage of the self-attention module [44], and shows impressive expressive power in many Natural Language Processing (NLP) [38, 11, 5] and vision tasks [12, 7, 25]. With costly time and space complexity of O(n2) in transformers, Parmar et al. [34] utilize local self-attention to achieve AR generated results. Chen et al. [8] and Kumar et al. [22] autoregressively generate pixels with simplified discrete color palettes to save computations. But limited by the computing power, they still generate low-resolution images. To address this, some works have exploited the discrete representation learning, e.g. dVAE [39] and VQGAN [14]. It not only reduces the sequence length of image tokens but also shares perceptually rich latent features in image synthesis as the word embedding in NLP. However, recovering images from the discrete codebook still causes blur and artifacts in complex scenes. Besides, vanilla convolutions of the discrete encoder may leak information among different discrete codebook tokens. Moreover, local conditional generation based on VQGAN [14] suffers from poor semantical consistency compared with other unchanged image regions. To end these, the iLAT proposes the novel discrete representation to improve the model capability of local image synthesis. 3 Approach Given the conditional image Ic and target image It, our image Local Autoregressive Transformer (iLAT) aims at producing the output image Io of semantically consistent and visually realistic. The key foreground objects (e.g., the skeleton of body), or the regions of interest (e.g., sketches of facial regions) extracted from Ic, are applied to guide the synthesis of output image Io. 
Essentially, the background and the other non-key foreground image regions of Io should be visually similar to It. As shown in Fig. 2, our iLAT includes the branches of Two-Stream convolutions based Vector Quantized GAN (TS-VQGAN) for the discrete representation learning and a transformer for the AR generation with Local Autoregressive (LA) attention mask. Particularly, our iLAT firstly encodes Ic and It, into codebook vectors zq,c and zq,t by TS-VQGAN (Sec. 3.1) without local information leakage. Then the index of the masked vectors ẑq,t will be predicted by the transformer autoregressively with LA attention mask (Sec. 3.2). During the test phase, the decoder of TS-VQGAN takes the combination of ẑq,t in masked regions and ẑ in unmasked regions to achieve the final result. 3.1 Local Discrete Representation Learning We propose a novel local discrete representation learning in this section. Since it is inefficient to learn the generative transformer model through pixels directly, inspired by VQGAN [14], we incorporate the VQVAE mechanism [33] into the proposed iLAT for the discrete representation learning. The VQVAE is consisted of an encoder E, a decoder D, and a learnable discrete codebook Z = {zk}Kk=1, where K means the total number of codebook vectors. Given an image I ∈ RH×W×3, E encodes the image into latent features ẑ = E(I) ∈ Rh×w×ce , where ce indicates the channel of the encoder outputs. Then, the spatially unfolded ẑh′w′ ∈ Rce , (h′ ∈ h,w′ ∈ w) are replaced with the closest codebook vectors as z (q) h′w′ = arg minzk∈Z ||ẑh′w′ − zk|| ∈ Rcq , zq = fold(z (q) h′w′ , h ′ ∈ h,w′ ∈ w) ∈ Rh×w×cq , (1) where cq indicates the codebook channels. However, VQVAE will suffer from obscure information leakage for the local image synthesis, if the receptive field (kernel size) of vanilla convolution is larger than 1× 1 as shown in Fig. 3(a). Intuitively, each 3× 3 convolution layer spreads the masked features to the outside border of the mask. Furthermore, multi-convolutional based E accumulates the information leakage, which makes the model learn the local generating with unreasonable confidence, leading to model overfitting (see Sec. 4.3). To this end, we present two-stream convolutions in TS-VQGAN as shown in Fig. 3(a). Since the masked information is only leaked to a circle around the mask with each 3 × 3 convolution layer, we can just replace the corrupt features for each layer with masked ones. Thus, the influence of information leakage will be eliminated without hurting the integrity of both masked and unmasked features. Specifically, for the given image mask M ∈ RH×W that 1 means masked regions, and 0 means unmasked regions, it should be resized into M′ with max-pooling to fit the feature size. The two-stream convolution converts the input feature F into the masked feature Fm = F (1−M′), where is element-wise multiplication. Then, both F and Fm are convoluted with shared weights and combined according to the leaked regions Ml, which can be obtained from the difference of convoluted mask as Ml = clip(conv1(M ′), 0, 1)−M′, Ml[Ml > 0] = 1, (2) where conv1 implemented with an all-one 3 × 3 kernel. Therefore, the output of two-stream convolution can be written as Fc = conv(F) (1−Ml) + conv(Fm) Ml. (3) So the leaked regions are replaced with features that only depend on unmasked regions. Besides, masked features can be further leveraged for AR learning without any limitations. Compared with VQVAE, we replace all vanilla convolutions with two-stream convolutions in the encoder of TSVQGAN. 
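A minimal PyTorch sketch of one two-stream 3x3 convolution layer, following Eqs. (2)-(3), is given below. The module name, channel arguments, and channels-first mask layout are our assumptions; the intent is only to show how the leaked ring M_l is detected and how the two shared-weight streams are recombined.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamConv2d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # one shared-weight 3x3 convolution applied to both streams
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, feat, mask):
        # feat: (B, C, h, w) features; mask: (B, 1, h, w), 1 = masked region (resized M')
        mask = mask.float()
        feat_masked = feat * (1 - mask)                                   # F_m in the text
        # Eq. (2): leaked ring M_l = clip(conv_1(M'), 0, 1) - M', binarised
        ones_kernel = torch.ones(1, 1, 3, 3, device=mask.device)
        spread = torch.clamp(F.conv2d(mask, ones_kernel, padding=1), 0, 1)
        leak = (spread - mask > 0).float()
        # Eq. (3): keep the full-stream output outside the ring, replace it inside
        return self.conv(feat) * (1 - leak) + self.conv(feat_masked) * leak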
Note that the decoder D is unnecessary to prevent information leakage at all. Since the decoding process is implemented after the AR generating of the transformer as shown in Fig. 2. For the VQVAE, the decoder D decodes the codebook vectors zq got from Eq.(1), and reconstructs the output image as Io = D(zq). Although VQGAN [14] can generate more reliable textures with adversarial training, handling complex real-world backgrounds and precise face details is still tough to the existed discrete learning methods. In TS-VQGAN, we further finetune the model with local quantized learning, which can be written as Io = D(zq Mq + ẑ (1−Mq)), (4) where Mq ∈ Rh×w is the resized mask for quantized vectors, and ẑ is the output of the encoder. In Eq.(4), unmasked features are directly encoded from the encoder, while masked features are replaced with the codebook vectors, which works between AE and VQVAE. This simple trick effectively maintains the fidelity of the unmasked regions and reduces the number of quantized vectors that have to be generated autoregressively, which also leads to a more efficient local AR inference. Note that the back-propagation of Eq.( 4) is implemented with the straight-through gradient estimator [4]. 3.2 Local Autoregressive Transformer Learning From the discrete representation learning in Sec. 3.1, we can get the discrete codebook vectors zq,c, zq,t ∈ Rh×w×cq for conditional images and target images respectively. Then the conditional and target image tokens {ci, tj}hwi,j=1 ∈ {0, 1, ...,K − 1} can be converted from the index-based representation of zq,c, zq,t in the codebook with length hw, where K indicates the all number of codebook vectors. For the resized target mask Mq ∈ Rh×w, the second stage needs to learn the AR likelihood for the masked target tokens {tm} where Mq,m = 1 with conditional tokens {ci}hwi=1 and other unmasked target tokens {tu} where Mq,u = 0 as p(tm|c, tu) = ∏ j p(t(m,j)|c, tu, t(m,<j)). (5) Benefits from Eq. (4), iLAT only needs to generate masked target tokens {tm} rather than all. Then, the negative log likelihood (NLL) loss can be optimized as LNLL = −Etm∼p(tm|c,tu) log p(tm|c, tu). (6) We use a decoder-only transformer to handle the AR likelihood. As shown in Fig. 2, two special tokens c[s], t[s] are concatenated to {ci} and {tj} as start tokens respectively. Then, the trainable position embedding [11] is added to the token embedding to introduce the position information to the self-attention modules. According to the self-attention mechanism in the transformer, the attention mask is the key factor to achieve parallel training without information leakage. As shown in Fig. 3(b), we propose a novel LA attention mask M̂LA with four sub-masks, which indicate receptive fields of condition to condition (C2C), condition to target (C2T), target to condition (T2C), and target to target (T2T) respectively. All conditional tokens {ci}hwi=1 can be attended by themselves and targets. So C2C and T2C should be all-one matrices. We think that the conditional tokens are unnecessary to attend targets in advance, so C2T is set as the all-zero matrix. Therefore, the LA attention mask M̂LA can be written as M̂LA = [ 1, 0 1, MLA ] , (7) where MLA indicates the T2T LA mask. To leverage the AR generation and maintain the global information simultaneously, the target tokens are divided into two groups called the global group and the causal group. 
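Before the target-to-target mask is detailed below, the sketch here makes the two formulas just introduced concrete: (i) the local quantised decoding of Eq. (4) with a standard straight-through estimator, and (ii) the block assembly of the LA attention mask in Eq. (7). Function names, channels-first layouts, and the use of the usual VQ-VAE straight-through trick are our assumptions rather than details confirmed by the paper.

import torch

def local_quantized_decode(decoder, z_hat, z_q, m_q):
    # z_hat: (B, c, h, w) continuous encoder output; z_q: nearest codebook vectors
    # m_q:   (B, 1, h, w) resized target mask M_q, 1 = masked
    z_q_st = z_hat + (z_q - z_hat).detach()          # straight-through gradient path
    mixed = z_q_st * m_q + z_hat * (1 - m_q)         # Eq. (4): quantise only masked positions
    return decoder(mixed)

def assemble_la_mask(n_cond, m_t2t):
    # Eq. (7): C2C and T2C blocks are all-ones, C2T is all-zeros, T2T is the LA mask M_LA
    n_tgt = m_t2t.size(0)
    dev = m_t2t.device
    top = torch.cat([torch.ones(n_cond, n_cond, device=dev),
                     torch.zeros(n_cond, n_tgt, device=dev)], dim=1)
    bottom = torch.cat([torch.ones(n_tgt, n_cond, device=dev), m_t2t], dim=1)
    return torch.cat([top, bottom], dim=0)           # (n_cond + n_tgt, n_cond + n_tgt)

The assembled matrix is the M_hat_LA consumed by an LA-masked attention layer such as the one sketched after Eq. (8); only the T2T block changes with the image mask, so it can be rebuilt cheaply per sample.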
Furthermore, the causal group includes masked targets {tm}(Mq,m = 1) and {tm−1} that need to predict them, because the labels need to be shifted to the right with one position for the AR learning. Besides, other tokens are classified into the global group. Then, the global attention sub-mask Mgs can be attended to all tokens to share global information and handle the semantic consistency. On the other hand, the causal attention sub-mask Mcs constitutes the local AR generation. Note that Mgs can not attend any masked tokens to avoid information leakage. The T2T LA mask can be got with MLA = Mgs +Mcs2. A more intuitive example is shown in Fig. 3(b). Therefore, for the given feature h, the self-attention in iLAT can be written as SelfAttention(h) = softmax( QKT√ d − (1− M̂LA) · ∞)V, (8) where Q,K, V are h encoded with different weights of d channels. We make all masked elements to −∞ before the softmax. During the inference, all generated target tokens {t̂m} are converted back to codebook vectors ẑq,t. Then, they are further combined with encoded unmasked features ẑ, and decoded with Eq.(4) as shown in Fig. 2. To highlight the difference of our proposed mask MLA, other common attention masks are shown in Fig. 1. The Vanilla AR mask MAR is widely used in the AR transformer [14, 8], but they fail to maintain the semantic consistency and cause unexpected identities for the face synthesis. AE mask MAE is utilized in some attention based image inpainting tasks [50, 49]. Although MAE enjoys good receptive fields, the masked regions are completely corrupted in the AE, which is much more unstable to restructure a large hole. Our method is an in-between strategy with both their superiorities mentioned above. 3.3 Implement Details for Different Tasks Non-Iconic Posed-Guiding. The proposed TS-VQGAN is also learned with adversarial training. For the complex non-iconic pose guiding, we finetune the pretrained open-source ImageNet based VQGAN weights with the two-stream convolution strategy. To avoid adding too many sequences with excessive computations, we use the coordinates of 13 target pose landmarks as the supplemental condition to the iLAT. They are encoded with 3 fully connected layers with ReLU. As shown in Fig. 2, both the condition and the target are images, which have different poses in the training phase, and the same pose in the inference phase. Besides, we use the union of conditional and target masks got by dilating the poses with different kernel sizes according to the scenes3 to the target image. Face Editing. In face editing, we find that the adaptive GAN learning weight λ makes the results unstable, so it is replaced with a fixed λ = 0.1. Besides, the TS-VQGAN is simplified compared to the one used in the pose guiding. Specifically, all attention layers among the encoder and decoder are removed. Then, all Group Normalizations are replaced with Instance Normalization to save memory without a large performance drop. The conditions are composed of the sketch images extracted with the XDoG [46], while the targets are face images. The training masks for the face editing are COCO masks [24] and simulated irregular masks [51], while the test ones are drawn manually. 4 Experiments In this section, we present experimental results on pose-guided generation of Penn Action (PA) [52] and Synthetic DeepFashion (SDF) [26], face editing of CelebA [27] and FFHQ [20] compared with other competitors and variants of iLAT. 2More about the expansion of LA attention mask are discussed in the supplementary. 
3Details about the mask generation are illustrated in the supplementary. Datasets. For the pose guiding, PA dataset [52], which contains 2,326 video sequences of 15 action classes in non-iconic views is used in this section. Each frame from the video is annotated with 13 body landmarks consisted of 2D locations and visibility. The resolution of PA is resized into 256× 256 during the preprocessing. We randomly gather pairs of the same video sequence in the training phase dynamically and select 1,000 testing pairs in the remaining videos. Besides, the SDF is synthesized with DeepFashion [26] images as foregrounds and Places2 [54] images as backgrounds. Since only a few images of DeepFashion have related exact segmentation masks, we select 4,500/285 pairs from it for training and testing respectively. Each pair of them contains two images of the same person with different poses and randomly chosen backgrounds. The face editing dataset consists of Flickr-Faces-HQ dataset (FFHQ) [20] and CelebA-HQ [27]. FFHQ is a high-quality image dataset with 70,000 human faces. We resize them from 1024× 1024 into 256× 256 and use 68,000 of them for the training. The CelebA is only used for testing in this section for the diversity. Since face editing has no paired ground truth, we randomly select 68 images from the rest of FFHQ and all CelebA, and draw related sketches for them. Implementation Details. Our method is implemented in PyTorch in 256 × 256 image size. For the TS-VQGAN training, we use the Adam optimizer [21] with β1= 0.5 and β2 = 0.9. For the pose guiding, the TS-VQGAN is finetuned from the ImageNet pretrained VQGAN [14], while it is completely retrained for FFHQ. TS-VQGAN is trained with 150k steps without masks at first, and then it is trained with another 150k steps with masks in batch size 16. The initial learning rates of pose guiding and face editing are 8e-5 and 2e-4 respectively, which are decayed by 0.5 for every 50k steps. For the transformer training, we use Adam with β1 = 0.9 and β2 = 0.95 with initial learning rate 5e-5 and 0.01 weight decay. Besides, we warmup the learning rate with the first 10k steps, then it is linearly decayed to 0 for 300k iterations with batch size 16. During the inference, we simply use top-1 sampling for our iLAT. Competitors. The model proposed in [14] is abbreviated as Taming transformer (Taming) in this section. For fair comparisons, VQGAN used in Taming is finetuned for pose guiding, and retrained for face editing with the same steps as TS-VQGAN. For the pose guiding, we compare the proposed iLAT with other state-of-the-art methods retrained in the PA dataset, which include PATN [56], PN-GAN [37], PoseWarp [3], MR-Net [47] and Taming [14]. As the image size of PoseWarp and MR-Net is 128× 128, we resized the outputs for the comparison. For the face editing, we compare the iLAT with inpainting based SC-FEGAN [19] and Taming [14]. We also test the Taming results in our LA attention mask as Taming* (without retraining). 4.1 Quantitative Results Pose-Guided Comparison. Quantitative results in PA and SDF datasets of our proposed iLAT and other competitors are presented in Tab. 1. Peak signal-to-noise ratio (PSNR), Structural Similarity (SSIM) [45], Mean Average Error (MAE) and Fréchet Inception Distance (FID) [17] are employed to measure the quality of results. We also add the results of iLAT*, which is implemented without the two-stream convolutions. The results in Tab. 
1 clearly show that our proposed method outperforms other methods in all metrics, especially for the FID, which accords with the human perception. The good scores of iLAT indicate that the proposed iLAT can generate more convincing and photo-realistic images on locally guided image synthesis of the non-iconic foreground. For the more challenging SDF dataset, iLAT still achieves better results compared with Taming. Inference Time. We also record the average inference times in PA, SDF, and FFHQ as showed in Tab. 2. Except for the better quality of generated images over Taming as discussed above, our iLAT costs less time for the local synthesis task according to the masking rate of the inputs. Low masking rates can achieve dramatic speedup, e.g., face editing. 4.2 Qualitative Results Non-Iconic Pose Guiding. Fig. 4(A) shows qualitative results in the non-iconic pose-guided image synthesis task. Compared to other competitors, it is apparent that our method can generate more reasonable target images both in human bodies and backgrounds, while images generated by other methods suffer from either wrong poses or deformed backgrounds. Particularly, PATN collapses in most cases. PN-GAN and PoseWarp only copy the reference images as the target ones, which fails to be guided by the given poses due to the challenging PA dataset. Moreover, MR-Net and Taming* can indeed generate poses that are similar to the target ones, but the background details of reference images are not transferred properly. Especially for the results in column (g), Taming fails to synthesize complicated backgrounds, such as noisy audiences in rows 2 and 3 and the gym with various fitness equipment in row 4. Compared to others, our proposed iLAT can capture the structure of human bodies given the target poses as well as retaining the vivid backgrounds, which demonstrate the efficacy of our model in synthesizing high-quality images in the non-iconic pose guiding. Besides, for the pose guiding with synthetic backgrounds of SDF, iLAT can still get more reasonable and stable backgrounds and foregrounds compared with Taming as in Fig. 5(C). Face Editing. Since there are no ground truth face editing targets, we only compared the qualitative results as shown in Fig. 4(B) of FFHQ and CelebA. Note that the Taming results in column (c) fail to preserve the identity information in both FFHQ and CelebA compared with the reference. For example, in rows 1, 2 and 3, the skin tones of Taming results are different from the original ones. And in row 4, Taming generates absolutely another person with contrasting ages, which indicates that vanilla AR is unsuited to the local face editing. When Taming is tested with our LA attention mask, column (d) shows that Taming* can retain the identities of persons. However, rows 1 and 2 demonstrate that Taming* fails to properly generate the target faces according to guided sketches, while in rows 3 and 4 some generations have obvious artifacts without consistency. Besides, inpainting-based SC-FEGAN achieves unstable results in rows 3 and 4. SC-FEGAN also strongly depends on the quality of input sketches, while unprofessional sketches lead to unnatural results as shown in row 1. Besides, detailed textures of AE-based SC-FEGAN are unsatisfactory too. Compared with these methods, our iLAT can always generate correct and vivid human faces with identities retained. Furthermore, benefits from the discrete representation, iLAT enjoys robustness to the guided information. 4.3 Further Discussions Ablation Study. 
The effectiveness of our proposed two-stream convolution is discussed in the ablation study. As we can find in Fig. 5, the woman face in row 1, (c) generated by iLAT* has residual face parts that conflict with the guided sketch. Moreover, in row 2, iLAT* without two-stream convolutions leaks information from sunglasses that lacks correct semantic features and leads to the inconsistent color of the generated face. For the pose-guided instance shown in the second row, it is apparent that the man generated in column (c) has blurry leg positions. However, in column (d) the complete iLAT can always generate authentic and accurate images, validating the efficacy of our designed two-stream convolutions. Sequential Generation. Our iLAT can also be extended to guide the video generation properly. We give a qualitative example in this section. As shown in Fig. 6, given a sequence of poses and one reference image, iLAT can forecast a plausible motion of the person. And the results are robust for most kinds of activities in non-ironic views. 5 Conclusion This paper proposes a transformer based AR model called iLAT to solve local image generation tasks. This method leverages a novel LA attention mask to enlarge the receptive fields of AR, which achieves not only semantically consistent but also realistic generative results and accelerates the inference speed of AR. Besides, a TS-VQGAN is proposed to learn a discrete representation learning without information leakages. Such a model can get superior performance in detail editing. Extensive experiments validate the efficacy of our iLAT for local image generation. Social Impacts This paper exploited the image editing with transformers. Since face editing may causes some privacy issues, we sincerely remind users to pay attention for it. Our method only focuses on technical aspects. The images used in this paper are all open sourced. Acknowledgements This work was supported in part by NSFC Project (62076067, 62176061), Science and Technology Commission of Shanghai Municipality Projects (19511120700, 2021SHZDZX0103).
1. What is the main contribution of the paper regarding local image editing?
2. What are the concerns regarding the two-stream convolution and its effectiveness?
3. How does the reviewer assess the efficiency improvements claimed in the paper?
4. What are the questions regarding the local autoregressive transformer and its strictness?
5. Why does the reviewer suggest focusing on image inpainting tasks instead of editing?
6. What is the reviewer's overall assessment of the paper's quality and suitability for NeurIPS?
Summary Of The Paper Review
Summary Of The Paper
This paper aims to utilize an autoregressive generative model to achieve local image editing, building on the previous state-of-the-art Taming transformer. Specifically, to prevent information leakage while encoding the input image into discrete sequences, a very straightforward two-stream convolution is proposed. Besides, the paper also presents a new local autoregressive transformer to benefit from the capability of the generative AR model and global information simultaneously. The experiments show some improvements over the compared methods.
Review
What this paper wants to achieve, i.e., editing any region of the input image while maintaining the original information, is really interesting, but there are some aspects that should be explained and clarified. The real effectiveness of the two-stream convolution is not well supported. In my understanding, the ultimate target of VQ-VAE is to faithfully reconstruct the input image, even though there are masks in the input. However, no experiments analyze why the designed convolution is better or how it helps this point. By contrast, the main effective part of Sec. 3.1 may be Equation 4. However, in Line157-Line159, it is claimed that such a simple trick makes AR inference more efficient, which also seems inaccurate. In my opinion, this finetuning trick actually aims to preserve the boundary textures as much as possible, and the efficiency improvements come from the subsequent design of the local autoregressive transformer. Even so, I believe there are still cases in which the original local contents cannot be preserved well when given arbitrary downsampled masks. This aspect should also be analyzed in detail. Sec. 3.2 introduces the local autoregressive transformer learning, but is it a strict autoregressive model? To predict the next tokens, you need to rely on the conditions of both past tokens and future global tokens, which actually breaks the fixed-order decomposition of the joint distribution of pixels. The comparison with the Taming transformer is not fair. You should re-train the baseline, i.e., condition on the edited input to regress the final results. For the unmasked region, pick the top-1 token while sampling to preserve the original content. Why does this paper mainly focus on editing? The best way to show the improvements of the proposed method is to conduct experiments on the image inpainting task; the rest are just controllable applications of image inpainting. Based on these considerations, I think this submission currently does not reach the acceptance bar of the NeurIPS venue. My rating is rejection for now.
NIPS
Title The Image Local Autoregressive Transformer Abstract Recently, AutoRegressive (AR) models for the whole image generation empowered by transformers have achieved comparable or even better performance compared to Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model – image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model. 1 Introduction Generating realistic images has been attracting ubiquitous research attention of the community for a long time. In particular, those image synthesis tasks involving persons or portrait [6, 28, 29] can be applied in a wide variety of scenarios, such as advertising, games, and motion capture, etc. Most real-world image synthesis tasks only involve the local generation, which means generating pixels in certain regions, while maintaining the semantic consistency, e.g., face editing [19, 1, 40], pose guiding [36, 55, 47], and image inpainting [51, 30, 49, 53]. Unfortunately, most works can only handle the well aligned images of ‘icon-view’ foregrounds, rather than the image synthesis of ‘non-iconic view’ foregrounds [47, 24], i.e., person instances with arbitrary poses in cluttered scenes, which is concerned in this paper. Even worse, the global semantics tend to be distorted during the generation of previous methods, even if subtle modifications are applied to a local image region. Critically, given the local editing/guidance such as sketches of faces, or skeleton of bodies in the first column of Fig. 1(A), it is imperative to design our new algorithm for locally guided image synthesis. Generally, several inherent problems exist in previous works for such a task. For example, despite impressive quality of images are generated, GANs/Autoencoder(AE)-based methods [51, 47, 19, 30, 18] are inclined to synthesize blurry local regions, as in Fig. 1(A)-row(c). Furthermore, some inspiring autoregressive (AR) methods, such as PixelCNN [32, 41, 23] and recent transformers [8, 14], should efficiently model the joint image distribution (even in very complex background [32]) for whole image generation as Fig. 1(B)-row(b). These AR models, however, are still not ready for locally guided image synthesis, as several reasons. (1) Missing global information. As in Fig. 1(B)-row(b), vanilla AR models take the top-to-down and left-to-right sequential generation with limited receptive fields for the initial generating (top left corner), which are incapable of directly modeling global information. Additionally, the sequential AR models suffer from exposure bias [2], which may ∗ Corresponding author. Dr. Fu is also with Fudan ISTBI—ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 
predict future pixels conditioned on the past ones with mistakes, due to the discrepancy between training and testing in AR. This makes small local guidance unpredictable changes to the whole image, resulting in inconsistent semantics as in Fig. 1(A)-row(a). (2) Slow inference speed. The AR models have to sequentially predict the pixels in the testing with notoriously slow inference speed, especially for high-resolution image generation. Although the parallel training techniques are used in pixelCNN [32] and transformer [14], the conditional probability sampling fails to work in parallel during the inference phase. (3) Information leakage of local guidance. As shown in Fig. 1(B)-row(c), the local guidance should be implemented with specific masks to ensure the validity of the local AR learning. During the sequential training process, pixels from masked regions may be exposed to AR models by convolutions with large kernel sizes or inappropriate attention masks in the transformer. We call it information leakage [44, 16] of local guidance, which makes models overfit the masked regions, and miss detailed local guidance as in Fig. 1(A)-row(b). To this end, we propose a novel image Local Autoregressive Transformer (iLAT) for the task of locally guided image synthesis. Our key idea lies in learning the local discrete representations effectively. Particularly, we tailor the receptive fields of AR models to local guidance, achieving semantically consistent and visually realistic generation results. Furthermore, a local autoregressive (LA) transformer with the novel LA attention mask and convolution mechanism is proposed to enable successful local generation of images with efficient inference time, without information leakage. Formally, we propose the iLAT model with several novel components. (1) We complementarily incorporate receptive fields of both AR and AE to fit LA generation with a novel attention mask as shown in Fig. 1(B)-row(c). In detail, local discrete representation is proposed to represent those masked regions, while the unmasked areas are encoded with continuous image features. Thus, we achieve favorable results with both consistent global semantics and realistic local generations. (2) Our iLAT dramatically reduces the inference time for local generation, since only masked regions will be generated autoregressively. (3) A simple but effective two-stream convolution and a local causal attention mask mechanism are proposed for discrete image encoder and transformer respectively, with which information leakage is prevented without detriment to the performance. We make several contributions in this work. (1) A novel local discrete representation learning is proposed to efficiently help to learn our iLAT for the local generation. (2) We propose an image local autoregressive transformer for local image synthesis, which enjoys both semantically consistent and realistic generative results. (3) Our iLAT only generates necessary regions autoregressively, which is much faster than vanilla AR methods during the inference. (4) We propose a two-stream convolution and a LA attention mask to prevent both convolutions and transformer from information leakage, thus improving the quality of generated images. Empirically, we introduce several locally guidance tasks, including pose-guided image generation and face editing tasks; and extensive experiments are conducted on the corresponding dataset to validate the efficacy of our model. 2 Related Work Conditional Image Synthesis. 
Some conditional generation models are designed to globally generate images with pre-defined styles based on user-provided references, such as poses and face sketches. These previous synthesis efforts are made on Variational auto-encoder (VAE) [10, 13], AR Model [48, 39], and AE with adversarial training [51, 49, 53]. Critically, it is very non-trivial for all these methods to generate images of locally guided image synthesis of the non-iconic foreground. Some tentative attempts have been conducted in pose-guided synthesis [42, 47] with person existing in non-ironic views. On the other hand, face editing methods are mostly based on adversarial AE based inpainting [30, 51, 19] and GAN inversion based methods [53, 40, 1]. Rather than synthesizing the whole image, our iLAT generates the local regions of images autoregressively, which not only improves the stability with the well-optimized joint conditional distribution for large masked regions but also maintains the global semantic information. Autoregressive Generation. The deep AR models have achieved great success recently in the community [35, 15, 32, 9, 38]. Some well known works include PixelRNN [43], Conditional PixelCNN [32], Gated PixelCNN [32], and WaveNet [31]. Recently, transformer based AR models [38, 5, 12] have achieved excellent results in many machine learning tasks. Unfortunately, the common troubles of these AR models are the expensive inference time and potential exposure bias, as AR models sequentially predict future values from the given past values. The inconsistent receptive fields for training and testing will lead to accumulated errors and unreasonable generated results [2]. Our iLAT is thus designed to address these limitations. Visual Transformer. The transformer takes advantage of the self-attention module [44], and shows impressive expressive power in many Natural Language Processing (NLP) [38, 11, 5] and vision tasks [12, 7, 25]. With costly time and space complexity of O(n2) in transformers, Parmar et al. [34] utilize local self-attention to achieve AR generated results. Chen et al. [8] and Kumar et al. [22] autoregressively generate pixels with simplified discrete color palettes to save computations. But limited by the computing power, they still generate low-resolution images. To address this, some works have exploited the discrete representation learning, e.g. dVAE [39] and VQGAN [14]. It not only reduces the sequence length of image tokens but also shares perceptually rich latent features in image synthesis as the word embedding in NLP. However, recovering images from the discrete codebook still causes blur and artifacts in complex scenes. Besides, vanilla convolutions of the discrete encoder may leak information among different discrete codebook tokens. Moreover, local conditional generation based on VQGAN [14] suffers from poor semantical consistency compared with other unchanged image regions. To end these, the iLAT proposes the novel discrete representation to improve the model capability of local image synthesis. 3 Approach Given the conditional image Ic and target image It, our image Local Autoregressive Transformer (iLAT) aims at producing the output image Io of semantically consistent and visually realistic. The key foreground objects (e.g., the skeleton of body), or the regions of interest (e.g., sketches of facial regions) extracted from Ic, are applied to guide the synthesis of output image Io. 
Essentially, the background and the other non-key foreground image regions of Io should be visually similar to It. As shown in Fig. 2, our iLAT includes the branches of Two-Stream convolutions based Vector Quantized GAN (TS-VQGAN) for the discrete representation learning and a transformer for the AR generation with Local Autoregressive (LA) attention mask. Particularly, our iLAT firstly encodes Ic and It, into codebook vectors zq,c and zq,t by TS-VQGAN (Sec. 3.1) without local information leakage. Then the index of the masked vectors ẑq,t will be predicted by the transformer autoregressively with LA attention mask (Sec. 3.2). During the test phase, the decoder of TS-VQGAN takes the combination of ẑq,t in masked regions and ẑ in unmasked regions to achieve the final result. 3.1 Local Discrete Representation Learning We propose a novel local discrete representation learning in this section. Since it is inefficient to learn the generative transformer model through pixels directly, inspired by VQGAN [14], we incorporate the VQVAE mechanism [33] into the proposed iLAT for the discrete representation learning. The VQVAE is consisted of an encoder E, a decoder D, and a learnable discrete codebook Z = {zk}Kk=1, where K means the total number of codebook vectors. Given an image I ∈ RH×W×3, E encodes the image into latent features ẑ = E(I) ∈ Rh×w×ce , where ce indicates the channel of the encoder outputs. Then, the spatially unfolded ẑh′w′ ∈ Rce , (h′ ∈ h,w′ ∈ w) are replaced with the closest codebook vectors as z (q) h′w′ = arg minzk∈Z ||ẑh′w′ − zk|| ∈ Rcq , zq = fold(z (q) h′w′ , h ′ ∈ h,w′ ∈ w) ∈ Rh×w×cq , (1) where cq indicates the codebook channels. However, VQVAE will suffer from obscure information leakage for the local image synthesis, if the receptive field (kernel size) of vanilla convolution is larger than 1× 1 as shown in Fig. 3(a). Intuitively, each 3× 3 convolution layer spreads the masked features to the outside border of the mask. Furthermore, multi-convolutional based E accumulates the information leakage, which makes the model learn the local generating with unreasonable confidence, leading to model overfitting (see Sec. 4.3). To this end, we present two-stream convolutions in TS-VQGAN as shown in Fig. 3(a). Since the masked information is only leaked to a circle around the mask with each 3 × 3 convolution layer, we can just replace the corrupt features for each layer with masked ones. Thus, the influence of information leakage will be eliminated without hurting the integrity of both masked and unmasked features. Specifically, for the given image mask M ∈ RH×W that 1 means masked regions, and 0 means unmasked regions, it should be resized into M′ with max-pooling to fit the feature size. The two-stream convolution converts the input feature F into the masked feature Fm = F (1−M′), where is element-wise multiplication. Then, both F and Fm are convoluted with shared weights and combined according to the leaked regions Ml, which can be obtained from the difference of convoluted mask as Ml = clip(conv1(M ′), 0, 1)−M′, Ml[Ml > 0] = 1, (2) where conv1 implemented with an all-one 3 × 3 kernel. Therefore, the output of two-stream convolution can be written as Fc = conv(F) (1−Ml) + conv(Fm) Ml. (3) So the leaked regions are replaced with features that only depend on unmasked regions. Besides, masked features can be further leveraged for AR learning without any limitations. Compared with VQVAE, we replace all vanilla convolutions with two-stream convolutions in the encoder of TSVQGAN. 
Note that the decoder D is unnecessary to prevent information leakage at all. Since the decoding process is implemented after the AR generating of the transformer as shown in Fig. 2. For the VQVAE, the decoder D decodes the codebook vectors zq got from Eq.(1), and reconstructs the output image as Io = D(zq). Although VQGAN [14] can generate more reliable textures with adversarial training, handling complex real-world backgrounds and precise face details is still tough to the existed discrete learning methods. In TS-VQGAN, we further finetune the model with local quantized learning, which can be written as Io = D(zq Mq + ẑ (1−Mq)), (4) where Mq ∈ Rh×w is the resized mask for quantized vectors, and ẑ is the output of the encoder. In Eq.(4), unmasked features are directly encoded from the encoder, while masked features are replaced with the codebook vectors, which works between AE and VQVAE. This simple trick effectively maintains the fidelity of the unmasked regions and reduces the number of quantized vectors that have to be generated autoregressively, which also leads to a more efficient local AR inference. Note that the back-propagation of Eq.( 4) is implemented with the straight-through gradient estimator [4]. 3.2 Local Autoregressive Transformer Learning From the discrete representation learning in Sec. 3.1, we can get the discrete codebook vectors zq,c, zq,t ∈ Rh×w×cq for conditional images and target images respectively. Then the conditional and target image tokens {ci, tj}hwi,j=1 ∈ {0, 1, ...,K − 1} can be converted from the index-based representation of zq,c, zq,t in the codebook with length hw, where K indicates the all number of codebook vectors. For the resized target mask Mq ∈ Rh×w, the second stage needs to learn the AR likelihood for the masked target tokens {tm} where Mq,m = 1 with conditional tokens {ci}hwi=1 and other unmasked target tokens {tu} where Mq,u = 0 as p(tm|c, tu) = ∏ j p(t(m,j)|c, tu, t(m,<j)). (5) Benefits from Eq. (4), iLAT only needs to generate masked target tokens {tm} rather than all. Then, the negative log likelihood (NLL) loss can be optimized as LNLL = −Etm∼p(tm|c,tu) log p(tm|c, tu). (6) We use a decoder-only transformer to handle the AR likelihood. As shown in Fig. 2, two special tokens c[s], t[s] are concatenated to {ci} and {tj} as start tokens respectively. Then, the trainable position embedding [11] is added to the token embedding to introduce the position information to the self-attention modules. According to the self-attention mechanism in the transformer, the attention mask is the key factor to achieve parallel training without information leakage. As shown in Fig. 3(b), we propose a novel LA attention mask M̂LA with four sub-masks, which indicate receptive fields of condition to condition (C2C), condition to target (C2T), target to condition (T2C), and target to target (T2T) respectively. All conditional tokens {ci}hwi=1 can be attended by themselves and targets. So C2C and T2C should be all-one matrices. We think that the conditional tokens are unnecessary to attend targets in advance, so C2T is set as the all-zero matrix. Therefore, the LA attention mask M̂LA can be written as M̂LA = [ 1, 0 1, MLA ] , (7) where MLA indicates the T2T LA mask. To leverage the AR generation and maintain the global information simultaneously, the target tokens are divided into two groups called the global group and the causal group. 
Furthermore, the causal group includes the masked targets {t_m} (M_{q,m} = 1) together with the tokens {t_{m−1}} used to predict them, because the labels need to be shifted one position to the right for AR learning. All other tokens are classified into the global group. Tokens covered by the global attention sub-mask M_gs can be attended by all tokens, which shares global information and maintains semantic consistency, while the causal attention sub-mask M_cs constitutes the local AR generation. Note that M_gs cannot attend to any masked tokens, so as to avoid information leakage. The T2T LA mask is obtained as M_LA = M_gs + M_cs (more about the expansion of the LA attention mask is discussed in the supplementary). A more intuitive example is shown in Fig. 3(b). Therefore, for a given feature h, the self-attention in iLAT can be written as

SelfAttention(h) = softmax(QK^T / √d − (1 − M̂_LA) · ∞) V,   (8)

where Q, K, V are obtained from h with different projection weights of d channels. All masked elements are set to −∞ before the softmax. During inference, all generated target tokens {t̂_m} are converted back to codebook vectors ẑ_{q,t}. They are then combined with the encoded unmasked features ẑ and decoded with Eq. (4), as shown in Fig. 2. To highlight the difference of our proposed mask M_LA, other common attention masks are shown in Fig. 1. The vanilla AR mask M_AR is widely used in AR transformers [14, 8], but it fails to maintain semantic consistency and causes unexpected identities in face synthesis. The AE mask M_AE is utilized in some attention-based image inpainting tasks [50, 49]. Although M_AE enjoys good receptive fields, the masked regions are completely corrupted in the AE, which makes it much more unstable when reconstructing a large hole. Our method is an in-between strategy that combines the advantages of both.

3.3 Implementation Details for Different Tasks

Non-Iconic Pose-Guiding. The proposed TS-VQGAN is also trained with adversarial losses. For the complex non-iconic pose guiding, we finetune the pretrained open-source ImageNet-based VQGAN weights with the two-stream convolution strategy. To avoid overly long sequences and excessive computation, we use the coordinates of 13 target pose landmarks as a supplemental condition to the iLAT; they are encoded with 3 fully connected layers with ReLU. As shown in Fig. 2, both the condition and the target are images, which have different poses in the training phase and the same pose in the inference phase. Besides, we apply the union of the conditional and target masks, obtained by dilating the poses with different kernel sizes according to the scene (details of the mask generation are illustrated in the supplementary), to the target image.

Face Editing. In face editing, we find that the adaptive GAN learning weight λ makes the results unstable, so it is replaced with a fixed λ = 0.1. Besides, the TS-VQGAN is simplified compared to the one used in pose guiding: all attention layers in the encoder and decoder are removed, and all Group Normalizations are replaced with Instance Normalization to save memory without a large performance drop. The conditions are sketch images extracted with XDoG [46], while the targets are face images. The training masks for face editing are COCO masks [24] and simulated irregular masks [51], while the test masks are drawn manually.

4 Experiments

In this section, we present experimental results on pose-guided generation on Penn Action (PA) [52] and Synthetic DeepFashion (SDF) [26], and on face editing on CelebA [27] and FFHQ [20], compared with other competitors and variants of iLAT.
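To make the LA attention mask of Eqs. (7)–(8) concrete before moving to the experimental setup, the following is a minimal sketch of how M̂_LA could be assembled and applied. The helper names and the way the causal group is supplied are our own assumptions, and the T2T block follows a simplified reading of the global/causal split of Fig. 3(b); the exact sub-mask construction is given in the paper's supplementary.

```python
import torch
import torch.nn.functional as F


def build_la_mask(n_cond, n_tgt, causal_idx):
    """Assemble the LA attention mask of Eq. (7).

    n_cond, n_tgt: numbers of conditional / target tokens (start tokens included).
    causal_idx:    bool tensor (n_tgt,), True for tokens in the causal group
                   (masked targets and their right-shifted predecessors).
    Returns an (n_cond + n_tgt) x (n_cond + n_tgt) mask, 1 = may attend.
    """
    n = n_cond + n_tgt
    mask = torch.zeros(n, n)
    mask[:, :n_cond] = 1.0                     # C2C and T2C: all-one blocks.
                                               # C2T (condition rows, target cols) stays zero.

    # T2T block: every target may attend global-group tokens (never masked ones);
    # causal-group tokens additionally attend causally among themselves.
    t2t = torch.zeros(n_tgt, n_tgt)
    t2t[:, ~causal_idx] = 1.0
    causal_pos = causal_idx.nonzero(as_tuple=True)[0]
    for i, p in enumerate(causal_pos):
        t2t[p, causal_pos[: i + 1]] = 1.0      # lower-triangular pattern inside the causal group
    mask[n_cond:, n_cond:] = t2t
    return mask


def masked_self_attention(h, wq, wk, wv, la_mask):
    """Eq. (8): scaled dot-product attention with the LA mask."""
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(la_mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```

During training, the same mask is applied in every self-attention layer so that the masked target tokens can be predicted in parallel without leakage.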
Datasets. For pose guiding, we use the PA dataset [52], which contains 2,326 video sequences of 15 action classes in non-iconic views. Each video frame is annotated with 13 body landmarks consisting of 2D locations and visibility. The resolution of PA is resized to 256×256 during preprocessing. We dynamically gather training pairs from the same video sequence and select 1,000 testing pairs from the remaining videos. Besides, SDF is synthesized with DeepFashion [26] images as foregrounds and Places2 [54] images as backgrounds. Since only a few DeepFashion images have exact segmentation masks, we select 4,500/285 pairs from it for training and testing respectively. Each pair contains two images of the same person with different poses and randomly chosen backgrounds. The face editing data consist of the Flickr-Faces-HQ dataset (FFHQ) [20] and CelebA-HQ [27]. FFHQ is a high-quality image dataset with 70,000 human faces; we resize them from 1024×1024 to 256×256 and use 68,000 of them for training. CelebA is only used for testing in this section, for diversity. Since face editing has no paired ground truth, we randomly select 68 images from the rest of FFHQ and all of CelebA, and draw related sketches for them.

Implementation Details. Our method is implemented in PyTorch at a 256×256 image size. For TS-VQGAN training, we use the Adam optimizer [21] with β1 = 0.5 and β2 = 0.9. For pose guiding, TS-VQGAN is finetuned from the ImageNet-pretrained VQGAN [14], while it is completely retrained for FFHQ. TS-VQGAN is first trained for 150k steps without masks and then for another 150k steps with masks, with batch size 16. The initial learning rates for pose guiding and face editing are 8e-5 and 2e-4 respectively, decayed by 0.5 every 50k steps. For transformer training, we use Adam with β1 = 0.9 and β2 = 0.95, an initial learning rate of 5e-5, and 0.01 weight decay. We warm up the learning rate over the first 10k steps and then decay it linearly to 0 over 300k iterations, with batch size 16. During inference, we simply use top-1 sampling for our iLAT.

Competitors. The model proposed in [14] is abbreviated as Taming transformer (Taming) in this section. For fair comparison, the VQGAN used in Taming is finetuned for pose guiding and retrained for face editing with the same number of steps as TS-VQGAN. For pose guiding, we compare the proposed iLAT with other state-of-the-art methods retrained on the PA dataset, including PATN [56], PN-GAN [37], PoseWarp [3], MR-Net [47] and Taming [14]. As the image size of PoseWarp and MR-Net is 128×128, we resize their outputs for the comparison. For face editing, we compare iLAT with the inpainting-based SC-FEGAN [19] and Taming [14]. We also test Taming with our LA attention mask, denoted Taming* (without retraining).

4.1 Quantitative Results

Pose-Guided Comparison. Quantitative results of our proposed iLAT and other competitors on the PA and SDF datasets are presented in Tab. 1. Peak signal-to-noise ratio (PSNR), Structural Similarity (SSIM) [45], Mean Average Error (MAE) and Fréchet Inception Distance (FID) [17] are employed to measure the quality of the results. We also add the results of iLAT*, which is implemented without the two-stream convolutions.
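For reference, PSNR and MAE can be computed in a few lines of NumPy, as sketched below; SSIM and FID are normally taken from standard implementations (e.g., scikit-image and a pretrained Inception network) and are not reproduced here. The function names and the normalization of MAE are our own assumptions.

```python
import numpy as np


def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)


def mae(pred, target, max_val=255.0):
    """Mean average error, normalized to [0, 1] (normalization is our assumption)."""
    diff = np.abs(pred.astype(np.float64) - target.astype(np.float64))
    return float(np.mean(diff) / max_val)
```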
The results in Tab. 1 clearly show that our proposed method outperforms the other methods on all metrics, especially FID, which accords best with human perception. The strong scores indicate that iLAT can generate more convincing and photo-realistic images for locally guided image synthesis of non-iconic foregrounds. On the more challenging SDF dataset, iLAT still achieves better results than Taming.

Inference Time. We also record the average inference times on PA, SDF, and FFHQ, as shown in Tab. 2. Besides the better quality of generated images over Taming discussed above, our iLAT takes less time for the local synthesis task, depending on the masking rate of the inputs. Low masking rates yield dramatic speedups, e.g., in face editing.

4.2 Qualitative Results

Non-Iconic Pose Guiding. Fig. 4(A) shows qualitative results on the non-iconic pose-guided image synthesis task. Compared to the competitors, it is apparent that our method generates more reasonable target images in both human bodies and backgrounds, while images generated by other methods suffer from either wrong poses or deformed backgrounds. In particular, PATN collapses in most cases. PN-GAN and PoseWarp merely copy the reference images as the targets and fail to follow the given poses on the challenging PA dataset. MR-Net and Taming* can indeed generate poses similar to the targets, but the background details of the reference images are not transferred properly. Especially for the results in column (g), Taming fails to synthesize complicated backgrounds, such as the noisy audiences in rows 2 and 3 and the gym with various fitness equipment in row 4. Compared to the others, our proposed iLAT captures the structure of human bodies given the target poses while retaining vivid backgrounds, which demonstrates the efficacy of our model in synthesizing high-quality images for non-iconic pose guiding. Besides, for pose guiding with the synthetic backgrounds of SDF, iLAT still obtains more reasonable and stable backgrounds and foregrounds than Taming, as in Fig. 5(C).

Face Editing. Since there are no ground-truth face editing targets, we only compare qualitative results on FFHQ and CelebA, as shown in Fig. 4(B). Note that the Taming results in column (c) fail to preserve identity information on both FFHQ and CelebA compared with the reference. For example, in rows 1, 2 and 3, the skin tones of the Taming results differ from the original ones, and in row 4 Taming generates an entirely different person of a contrasting age, which indicates that vanilla AR is unsuited to local face editing. When Taming is tested with our LA attention mask, column (d) shows that Taming* can retain the identities of the persons. However, rows 1 and 2 demonstrate that Taming* fails to properly generate the target faces according to the guided sketches, while in rows 3 and 4 some generations have obvious artifacts and lack consistency. Besides, the inpainting-based SC-FEGAN produces unstable results in rows 3 and 4. SC-FEGAN also depends strongly on the quality of the input sketches: unprofessional sketches lead to unnatural results, as shown in row 1, and the detailed textures of the AE-based SC-FEGAN are unsatisfactory as well. Compared with these methods, our iLAT consistently generates correct and vivid human faces with identities retained. Furthermore, benefiting from the discrete representation, iLAT is robust to the guidance information.

4.3 Further Discussions

Ablation Study.
The effectiveness of our proposed two-stream convolution is examined in the ablation study. As shown in Fig. 5, the woman's face in row 1, column (c), generated by iLAT*, has residual face parts that conflict with the guided sketch. Moreover, in row 2, iLAT* without two-stream convolutions leaks information from the sunglasses, which lacks the correct semantic features and leads to an inconsistent color of the generated face. For the pose-guided instance shown in the second row, it is apparent that the man generated in column (c) has blurry leg positions. In contrast, in column (d) the complete iLAT consistently generates authentic and accurate images, validating the efficacy of our designed two-stream convolutions.

Sequential Generation. Our iLAT can also be extended to guided video generation. We give a qualitative example in this section: as shown in Fig. 6, given a sequence of poses and one reference image, iLAT can forecast a plausible motion of the person, and the results are robust for most kinds of activities in non-iconic views.

5 Conclusion

This paper proposes a transformer-based AR model called iLAT to solve local image generation tasks. The method leverages a novel LA attention mask to enlarge the receptive fields of AR, which achieves not only semantically consistent but also realistic generative results and accelerates the inference speed of AR. Besides, a TS-VQGAN is proposed to learn a discrete representation without information leakage. Such a model achieves superior performance in detail editing. Extensive experiments validate the efficacy of our iLAT for local image generation.

Social Impacts

This paper explores image editing with transformers. Since face editing may cause privacy issues, we sincerely remind users to pay attention to them. Our method focuses only on technical aspects, and the images used in this paper are all open-sourced.

Acknowledgements

This work was supported in part by NSFC Project (62076067, 62176061) and Science and Technology Commission of Shanghai Municipality Projects (19511120700, 2021SHZDZX0103).
1. What is the main contribution of the paper regarding local image editing? 2. How does the proposed method differ from existing transformer-based image synthesis approaches? 3. What are the strengths and weaknesses of the paper regarding its clarity and quality? 4. How does the reviewer assess the significance and impact of the paper's content? 5. Are there any questions or concerns regarding the paper's writing, explanations, and figures?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a transformer-based model for local image editing. The key challenges tackled by the paper is the need to integrate global information, while also preventing leakage of local guidance information (i.e. the local editing should only affect the designated region). To accomplish this, a combination of discrete representation learning (VQVAE) and an autoregresive transformer model is used. The local guidance is enforced through the use of a masked convolution in the feature learning and a local attention mask in the transformer. Review TL;DR: The paper introduces a useful technique for local image editing / synthesis, but the writing could be substantially improved to make the paper clearer. In the main review below I have labelled points as a strength (+), weakness (-) or mixed (+/-). Originality: (+) The method proposed in the paper has some novelty. Particularly, I’m not aware of an existing approach that generates local image edits using a transformer model. A transformer does seem ideally suited for this task as they are able to operate on a set of discrete tokens, which means that only the relevant parts of the image can be generated. The proposed components, such as the local attention mask for the transformer and the masked convolution are also to some extent novel, and useful for the task. (+/-) The authors have described how the work differs from generally related work in a satisfactory manner. However, it would be advisable to add a description of how the proposed method compares to existing transformer-based image synthesis approaches such as [A] and [14] “Taming transformers”. I know [14] is included in the numerical comparison, but it is not clear what parts of the proposed method improve upon [14]. For example, the related work states “recovering images from the discrete codebook still causes blur and artifacts in complex scene” - but the proposed method also uses a discrete codebook. What part of the proposed method addresses this? [A] Parmar, Niki, et al. "Image transformer." International Conference on Machine Learning. PMLR, 2018. Quality: (+) The submission appears to be technically sound and I cannot see any major errors or flaws in the method. (+) The claims made in the paper are largely supported by the experimental results. Specifically, the results show that the proposed approach significantly improves upon the state of the art both qualitatively and quantitatively. The generated images show a significant improvement in detail on both the PA dataset for pose-guided image generation and the face editing task using the FFHQ dataset (+/-) I would regard the work as complete in terms of the method being well-developed and the convincing results. However, the paper itself can do with some editing as many parts are a bit difficult to read and not expressed very well (see below). Clarity: (-) The submission is not that clearly written and some explanations are very difficult to follow. For example, at the start of the intro (line 20) the authors state “most works can only handle the images of ‘icon-view’ foreground, rather than the image synthesis of ‘non-iconic view’ foreground”. However, as a reader I have no idea what non-iconic view foreground means. The reference provided ([23] - Microsoft COCO) explains this concept to some extent, but it is by no means a general term. I would suggest the authors add some explanation to the paper to clarify it. 
Some further parts that need to be clarified: What are “respective fields” (ln 50,55,61,310)? What does “to lighten the negative influence to the normal CNN learning.” (ln 135) mean? What does “condition” and “target” mean in “condition to condition (C2C), condition to target (C2T), target to condition (T2C), and target to target (T2T)”? Also, the aspect ratio of some of the figures is severely distorted (especially Figure 4). For a paper about generating high quality images, I would at least expect the basic concepts of image editing to be done correctly. Significance: (+) In my view the results are quite important as guided image generation is a challenging problem and the proposed approach show a marked improvement over the current methods in the literature. It would, however, be nice to see some higher resolution editing results as the current images seem to be limited to resolutions of 256x256 which is not that useful for real-world use cases. The idea of editing and synthesising local parts of the image using a transfomer is definitely an idea that others would be keen to build upon. I can see how this could be applied to other problems too (for example in 3D scene and object synthesis). Post author response and discussion After considering the other reviews and the authors' responses, I remain positive about this paper. The authors have promised to address some of the repeated typos and overall, I think this is a good contribution in terms of image editing.The impact of the paper could be improved by some more visually-appealing examples (such as showcasing high-resolution editing), but this is not a fundamental limitation. Therefore, my final rating is "6: Marginally above the acceptance threshold".
NIPS
Title The Image Local Autoregressive Transformer Abstract Recently, AutoRegressive (AR) models for the whole image generation empowered by transformers have achieved comparable or even better performance compared to Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model – image Local Autoregressive Transformer (iLAT), to better facilitate the locally guided image synthesis. Our iLAT learns the novel local discrete representations, by the newly proposed local autoregressive (LA) transformer of the attention mask and convolution mechanism. Thus iLAT can efficiently synthesize the local image regions by key guidance information. Our iLAT is evaluated on various locally guided image syntheses, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model. 1 Introduction Generating realistic images has been attracting ubiquitous research attention of the community for a long time. In particular, those image synthesis tasks involving persons or portrait [6, 28, 29] can be applied in a wide variety of scenarios, such as advertising, games, and motion capture, etc. Most real-world image synthesis tasks only involve the local generation, which means generating pixels in certain regions, while maintaining the semantic consistency, e.g., face editing [19, 1, 40], pose guiding [36, 55, 47], and image inpainting [51, 30, 49, 53]. Unfortunately, most works can only handle the well aligned images of ‘icon-view’ foregrounds, rather than the image synthesis of ‘non-iconic view’ foregrounds [47, 24], i.e., person instances with arbitrary poses in cluttered scenes, which is concerned in this paper. Even worse, the global semantics tend to be distorted during the generation of previous methods, even if subtle modifications are applied to a local image region. Critically, given the local editing/guidance such as sketches of faces, or skeleton of bodies in the first column of Fig. 1(A), it is imperative to design our new algorithm for locally guided image synthesis. Generally, several inherent problems exist in previous works for such a task. For example, despite impressive quality of images are generated, GANs/Autoencoder(AE)-based methods [51, 47, 19, 30, 18] are inclined to synthesize blurry local regions, as in Fig. 1(A)-row(c). Furthermore, some inspiring autoregressive (AR) methods, such as PixelCNN [32, 41, 23] and recent transformers [8, 14], should efficiently model the joint image distribution (even in very complex background [32]) for whole image generation as Fig. 1(B)-row(b). These AR models, however, are still not ready for locally guided image synthesis, as several reasons. (1) Missing global information. As in Fig. 1(B)-row(b), vanilla AR models take the top-to-down and left-to-right sequential generation with limited receptive fields for the initial generating (top left corner), which are incapable of directly modeling global information. Additionally, the sequential AR models suffer from exposure bias [2], which may ∗ Corresponding author. Dr. Fu is also with Fudan ISTBI—ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 
predict future pixels conditioned on the past ones with mistakes, due to the discrepancy between training and testing in AR. This makes small local guidance unpredictable changes to the whole image, resulting in inconsistent semantics as in Fig. 1(A)-row(a). (2) Slow inference speed. The AR models have to sequentially predict the pixels in the testing with notoriously slow inference speed, especially for high-resolution image generation. Although the parallel training techniques are used in pixelCNN [32] and transformer [14], the conditional probability sampling fails to work in parallel during the inference phase. (3) Information leakage of local guidance. As shown in Fig. 1(B)-row(c), the local guidance should be implemented with specific masks to ensure the validity of the local AR learning. During the sequential training process, pixels from masked regions may be exposed to AR models by convolutions with large kernel sizes or inappropriate attention masks in the transformer. We call it information leakage [44, 16] of local guidance, which makes models overfit the masked regions, and miss detailed local guidance as in Fig. 1(A)-row(b). To this end, we propose a novel image Local Autoregressive Transformer (iLAT) for the task of locally guided image synthesis. Our key idea lies in learning the local discrete representations effectively. Particularly, we tailor the receptive fields of AR models to local guidance, achieving semantically consistent and visually realistic generation results. Furthermore, a local autoregressive (LA) transformer with the novel LA attention mask and convolution mechanism is proposed to enable successful local generation of images with efficient inference time, without information leakage. Formally, we propose the iLAT model with several novel components. (1) We complementarily incorporate receptive fields of both AR and AE to fit LA generation with a novel attention mask as shown in Fig. 1(B)-row(c). In detail, local discrete representation is proposed to represent those masked regions, while the unmasked areas are encoded with continuous image features. Thus, we achieve favorable results with both consistent global semantics and realistic local generations. (2) Our iLAT dramatically reduces the inference time for local generation, since only masked regions will be generated autoregressively. (3) A simple but effective two-stream convolution and a local causal attention mask mechanism are proposed for discrete image encoder and transformer respectively, with which information leakage is prevented without detriment to the performance. We make several contributions in this work. (1) A novel local discrete representation learning is proposed to efficiently help to learn our iLAT for the local generation. (2) We propose an image local autoregressive transformer for local image synthesis, which enjoys both semantically consistent and realistic generative results. (3) Our iLAT only generates necessary regions autoregressively, which is much faster than vanilla AR methods during the inference. (4) We propose a two-stream convolution and a LA attention mask to prevent both convolutions and transformer from information leakage, thus improving the quality of generated images. Empirically, we introduce several locally guidance tasks, including pose-guided image generation and face editing tasks; and extensive experiments are conducted on the corresponding dataset to validate the efficacy of our model. 2 Related Work Conditional Image Synthesis. 
Some conditional generation models are designed to globally generate images with pre-defined styles based on user-provided references, such as poses and face sketches. These previous synthesis efforts are made on Variational auto-encoder (VAE) [10, 13], AR Model [48, 39], and AE with adversarial training [51, 49, 53]. Critically, it is very non-trivial for all these methods to generate images of locally guided image synthesis of the non-iconic foreground. Some tentative attempts have been conducted in pose-guided synthesis [42, 47] with person existing in non-ironic views. On the other hand, face editing methods are mostly based on adversarial AE based inpainting [30, 51, 19] and GAN inversion based methods [53, 40, 1]. Rather than synthesizing the whole image, our iLAT generates the local regions of images autoregressively, which not only improves the stability with the well-optimized joint conditional distribution for large masked regions but also maintains the global semantic information. Autoregressive Generation. The deep AR models have achieved great success recently in the community [35, 15, 32, 9, 38]. Some well known works include PixelRNN [43], Conditional PixelCNN [32], Gated PixelCNN [32], and WaveNet [31]. Recently, transformer based AR models [38, 5, 12] have achieved excellent results in many machine learning tasks. Unfortunately, the common troubles of these AR models are the expensive inference time and potential exposure bias, as AR models sequentially predict future values from the given past values. The inconsistent receptive fields for training and testing will lead to accumulated errors and unreasonable generated results [2]. Our iLAT is thus designed to address these limitations. Visual Transformer. The transformer takes advantage of the self-attention module [44], and shows impressive expressive power in many Natural Language Processing (NLP) [38, 11, 5] and vision tasks [12, 7, 25]. With costly time and space complexity of O(n2) in transformers, Parmar et al. [34] utilize local self-attention to achieve AR generated results. Chen et al. [8] and Kumar et al. [22] autoregressively generate pixels with simplified discrete color palettes to save computations. But limited by the computing power, they still generate low-resolution images. To address this, some works have exploited the discrete representation learning, e.g. dVAE [39] and VQGAN [14]. It not only reduces the sequence length of image tokens but also shares perceptually rich latent features in image synthesis as the word embedding in NLP. However, recovering images from the discrete codebook still causes blur and artifacts in complex scenes. Besides, vanilla convolutions of the discrete encoder may leak information among different discrete codebook tokens. Moreover, local conditional generation based on VQGAN [14] suffers from poor semantical consistency compared with other unchanged image regions. To end these, the iLAT proposes the novel discrete representation to improve the model capability of local image synthesis. 3 Approach Given the conditional image Ic and target image It, our image Local Autoregressive Transformer (iLAT) aims at producing the output image Io of semantically consistent and visually realistic. The key foreground objects (e.g., the skeleton of body), or the regions of interest (e.g., sketches of facial regions) extracted from Ic, are applied to guide the synthesis of output image Io. 
1. What are the limitations of AutoRegressive (AR) models for image generation? 2. What is the novel approach proposed by the paper to address these limitations? 3. How does the proposed method improve the efficiency of local image region synthesis? 4. What are the key components of the proposed Local Autoregressive Transformer (LA) transformer? 5. How does the paper compare its approach with other works in the field? 6. Can the authors provide more insights into the performance improvement of their approach in certain tasks?
Summary Of The Paper Review
Summary Of The Paper This paper focuses on AutoRegressive (AR) models for the image generation empowered by transformers as they show comparable results compared to Generative Adversarial Networks (GANs). Directly applying such AR models to edit/change local image regions, may suffer from the problems of missing global information, slow inference speed, and information leakage of local guidance. This paper focuses on these problems with a novel model – image Local Autoregressive Transformer (iLAT). The proposed method learns the novel local discrete representations, by the newly proposed local lautoregressive (LA) transformer of the attention mask and convolution mechanism. iLAT can efficiently synthesize the local image regions by key guidance and can perform well in various tasks. Review 1/ The paper focuses on important problems in AR models such as local generation, semantical consistency, generation of only necessary regions (filling missing regions). 2/ A novel attention mask is proposed to incorporate respective fields of both AR and AE to fit LA generation. 3/ The proposed iLAT dramatically reduces the inference time for local generation, since the only masked region will be generated autoregressively. 4/ Two-stream convolution and a local causal attention mask mechanism are proposed for discrete image encoder and transformer respectively. 5/ The choice of compared works is not clear as there are many GAN-based techniques for missing region completion. 6/ In pose-to-image task, compared to Taming better results are obtained, can authors state the main reasons behind this performance.
NIPS
Title Neural Approximation of Graph Topological Features Abstract Topological features based on persistent homology can capture high-order structural information which can then be used to augment graph neural network methods. However, computing extended persistent homology summaries remains slow for large and dense graphs and can be a serious bottleneck for the learning pipeline. Inspired by recent success in neural algorithmic reasoning, we propose a novel graph neural network to estimate extended persistence diagrams (EPDs) on graphs efficiently. Our model is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. We validate our method with convincing empirical results on approximating EPDs and downstream graph representation learning tasks. Our method is also efficient; on large and dense graphs, we accelerate the computation by nearly 100 times. 1 Introduction Graph neural networks (GNNs) have been widely used in various domains with graph-structured data [45, 27, 22, 42, 6]. Much effort has been made to understand and to improve graph representation power [47, 30, 3, 28]. An intuitive solution is to explicitly inject high order information, such as graph topological/structural information, into the GNN models [51, 26]. To this end, persistent homology [15, 14], which captures topological structures (e.g., connected components and loops) and encodes them in a summary called persistence diagram (PD), have attracted the attention of researchers. Indeed, persistence has already been injected to machine learning pipelines for various graph learning tasks [55, 56, 16, 4, 8, 50]. In particular, it has been found helpful to use the so-called extended persistence diagrams (EPDs) [10], which contain richer information than the standard PDs. Despite the usefulness of PDs and EPDs, their computation remains a bottleneck in graph learning. In situations such as node classification [56] or link prediction [50], one has to compute EPDs on vicinity graphs (local subgraph motifs) generated around all the nodes or all possible edges in the input graph. This can be computationally prohibitive for large and dense graphs. Take the Amazon ∗Correspondence to Chao Chen, Yusu Wang, and Liangcai Gao 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Computers dataset [37] as an example. To compute EPDs on vicinity graphs take several seconds on average, and there are 13381 nodes. So to compute all EPDs with a single CPU can take up to a day. This is not surprising as, while theoretically EPD for graphs can be computed in O(n log n) time [2], that algorithm has not been implemented, and practical algorithms for computing PD take quadratic time in worst case [50]. These computational difficulties raise the question: can we approximate the expensive computation of EPDs using an efficient learning-based approach? This is a challenging question due to the complex mathematical machinery behind the original algorithm. First, the algorithm involves a reduction algorithm of the graph incidence matrix. Each step of the algorithm is a modulo-2 addition of columns that can involve edges and nodes far apart. Such algorithm can be hard to be approximated by a direct application of the black-box deep neural networks. The second challenge comes from the supervision. The output EPD is a point set with an unknown cardinality. The distance between EPDs, called the Wasserstein distance [9, 11], involves a complex point matching algorithm. 
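As a point of reference, a p-Wasserstein distance between two diagrams is typically evaluated by padding both diagrams with diagonal projections and solving an assignment problem, as in the generic sketch below (using SciPy's linear_sum_assignment). This is an illustration of the standard construction, not the routine used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def wasserstein_distance(dgm1, dgm2, p=2.0):
    """p-Wasserstein distance between two persistence diagrams.

    dgm1: (n, 2) array of (birth, death) points; dgm2: (m, 2) array.
    Points may be matched to each other or to the diagonal, so the cost
    matrix is padded with "diagonal" rows/columns before solving the
    assignment problem.
    """
    dgm1, dgm2 = np.asarray(dgm1, float), np.asarray(dgm2, float)
    n, m = len(dgm1), len(dgm2)

    # Cost of matching a point to its orthogonal projection on the diagonal.
    diag1 = (np.abs(dgm1[:, 1] - dgm1[:, 0]) / np.sqrt(2)) ** p
    diag2 = (np.abs(dgm2[:, 1] - dgm2[:, 0]) / np.sqrt(2)) ** p

    cost = np.zeros((n + m, m + n))
    # Point-to-point costs.
    cost[:n, :m] = np.linalg.norm(dgm1[:, None, :] - dgm2[None, :, :], axis=-1) ** p
    # Point-to-diagonal costs (any diagonal slot has the same cost).
    cost[:n, m:] = diag1[:, None]
    cost[n:, :m] = diag2[None, :]
    # Diagonal-to-diagonal matches are free; that block stays zero.

    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() ** (1.0 / p)
```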
It is nontrivial to design a deep neural network with variable-size output and to supervise it via such a Wasserstein distance. Previous attempts [38, 29] directly use black-box neural networks to generate fixed-length vectorizations of the PDs/EPDs and use mean squared error or cross-entropy loss for supervision. The compromise in supervision and the lack of control make it hard to achieve a high-quality approximation of PDs/EPDs. In this paper, we propose a novel learning approach to approximate EPDs on graphs. Unlike previous attempts, we address the aforementioned challenges through a carefully designed learning framework guided by several insights into the EPD computation algorithm. In terms of model output and supervision, we observe that the computation of EPDs can be treated as an edge-wise prediction instead of a whole-graph prediction. Each edge in the graph is paired with another graph element (either a vertex or an edge), and the function values of the pair are the coordinates of a persistence point in the EPD. This observation allows us to compute EPDs by predicting the paired element for every edge of the graph. The Wasserstein distance can then be naturally decomposed into a supervision loss for each edge. Such element-wise supervision can significantly improve learning efficiency compared with previous solutions, which treat PDs/EPDs as a whole-graph representation and have to rely on whole-graph representation pooling. Another concern is whether and how a deep neural network can approximate the sophisticated EPD algorithm. To this end, we redesign the algorithm so that it is better aligned with algorithms that are known to be learnable by neural networks. Recall that computing EPDs can be decomposed into finding a pairing for each edge. We show that this decomposition holds not only at the output level but also at the algorithm level: the complex standard EPD computation algorithm can indeed be decomposed into independent pairing problems, each of which can be solved exactly using the classic Union-Find algorithm [12]. We then draw inspiration from recent observations that neural networks can imitate certain categories of sequential algorithms on graphs [43, 46], and propose a carefully designed graph neural network with specific message passing and aggregation mechanisms to imitate the Union-Find algorithm. Decomposing the algorithm into Union-Find subroutines and approximating them with a customized GNN provides better alignment between our neural network and the EPD algorithm, and better alignment can lead to better performance [48]. Empirically, we validate our method by quantifying the quality of its EPD approximations. On two downstream graph learning tasks, node classification and link prediction, we also show that our neural approximations are as effective as the original EPDs. Meanwhile, on large and dense graphs, our method is much faster than direct computation. In other words, the approximated EPDs do not lose accuracy or learning power, but can be computed much more efficiently. Finally, we observe that our model can potentially be transferred to unseen graphs, perhaps due to the close imitation of the Union-Find subroutine. This is encouraging, as we may generalize topological computation to various challenging real-world graphs without much additional effort. In summary, we propose an effective learning approach to approximate EPDs with better supervision and better transparency. The technical contributions are as follows.
• We reformulate the EPD computation as an edge-wise prediction problem, allowing better supervision and more efficient representation learning. We show that the EPD computation can be decomposed into independent pairing problems, each of which can be solved by the Union-Find algorithm.
• Inspired by recent neural algorithm approximation works [43, 46], we design a novel graph neural network architecture to learn the Union-Find algorithm. The closer algorithmic alignment ensures high approximation quality and transferability.
2 Background: Extended Persistent Homology We briefly introduce extended persistent homology and refer the readers to [10, 14] for more details. Ordinary Persistent Homology. Persistent homology captures 0-dimensional (connected components) and 1-dimensional (loops) topological structures, as well as their high-dimensional analogs, and measures their saliency via a scalar function called the filter function. Here we will only describe it for the graph setting. Given an input graph G = (V,E), with node set V and edge set E, we call all the nodes and edges simplices. Denote by X = V ∪ E the set of all simplices. We define a filter function on all simplices, f : X → R. In the typical sublevel-set setting, f is induced by a node-valued function (e.g., node degrees), and further defined on edges as f(uv) = max(f(u), f(v)). Denote by X_a the sublevel set of X, consisting of simplices whose filter function values are ≤ a: X_a = {x ∈ X | f(x) ≤ a}. As the threshold value a increases from −∞ to ∞, we obtain a sequence of growing spaces, called an ascending filtration of X: ∅ = X_{−∞} ⊂ ... ⊂ X_∞ = X. As X_a grows from ∅ to X, new topological structures gradually appear (are born) and disappear (die). For instance, the blue square persistence point at (t_2, t_3) in Figure 1(b) indicates that the connected component of u_2 appears at X_{t_2} and is merged with the whole connected component at X_{t_3}. Applying the homology functor to the filtration, we can more precisely quantify the birth and death of topological features (as captured by homology groups) throughout the filtration, and the output is the so-called persistence diagram (PD), which is a planar multiset of points, each of which, (b, d), corresponds to the birth and death times of some homological feature (i.e., components, loops, and their higher dimensional analogs). The lifetime |d − b| is called the persistence of this feature and intuitively measures its importance w.r.t. the input filtration. Extended Persistent Homology. In ordinary persistent homology, some topology of the domain (e.g., of the graph) is created at some time (has a birth time) but never dies (i.e., its death time equals +∞). We call such topological features essential features. In the context of graphs, the importance of 1D essential features, corresponding to independent loops, is not captured by ordinary persistence. To this end, an extended persistence module is introduced in [10]: ∅ = H(X_{−∞}) → · · · → H(X_a) → · · · → H(X) = H(X, X^{∞}) → · · · → H(X, X^a) → · · · → H(X, X^{−∞}), where X^a = {x ∈ X | f(x) ≥ a} is the superlevel set of X at value a. We say that the second part H(X, X^{∞}) → · · · → H(X, X^a) → · · · → H(X, X^{−∞}) is induced by a descending filtration. If we inspect the persistence diagram induced by this extended sequence, since H(X, X^{−∞}) is trivial, all the loop features created will also be killed in the end, and are thus captured by persistence points whose birth happens in the ascending filtration and whose death happens in the descending filtration.
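To make the sublevel-set filtration concrete before continuing, the following is a small illustrative Python sketch written by us; it is not part of the paper or its released code, and the toy graph, node names, and filter values are invented for illustration. It builds the sorted simplex list that defines the ascending filtration, using the edge convention f(uv) = max(f(u), f(v)), and prints a few sublevel sets X_a.

# Illustrative toy example (ours, not from the paper): building the ascending
# filtration of a small graph from a node-valued filter function.

f = {"u1": 0.1, "u2": 0.2, "u3": 0.3, "u4": 0.4}            # filter on vertices
edges = [("u1", "u3"), ("u1", "u4"), ("u2", "u3"), ("u3", "u4")]

# All simplices sorted by (filter value, dimension): this is the order in which
# they enter the sublevel sets X_a; vertices precede the edges they bound.
simplices = [(fv, 0, v) for v, fv in f.items()]
simplices += [(max(f[u], f[v]), 1, (u, v)) for u, v in edges]
simplices.sort(key=lambda s: (s[0], s[1]))

for a in (0.2, 0.3, 0.4):
    X_a = [elem for val, dim, elem in simplices if val <= a]
    print(f"X_{a}: {X_a}")

Sorting by (value, dimension) ensures each edge enters only after both of its endpoint vertices, as required of a filtration.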
In what follows, we abuse the notation slightly and use 1D EPD to refer only to such persistence points (i.e., born in the ascending portion and dying in the descending portion) in the persistence diagram induced by the extended module (note that, in standard terminology, the extended persistence diagram also contains persistence points born and destroyed in the descending sequence). We use 0D PD to refer to the standard ordinary 0D persistence diagram induced by the ascending sequence. Our goal is to compute/approximate the union of the 0D PD and the 1D EPD. Specifically, in the graph setting, at the end of the ascending filtration, some edges, the so-called negative edges (as they kill homological features), are paired with vertices. These correspond to points in the 0D PD, capturing the birth and death of connected components in the ascending filtration. The unpaired edges, called positive edges, create independent loops (1D homology for graphs) and remain unpaired after the ascending filtration. The number of such unpaired edges equals the 1st Betti number β1 (the rank of the 1st homology group). These edges will then be paired in the descending part of the persistence module, and their birth-death times give rise to the 1D EPD. An example is given in Figure 1(b). Note that since our domain is a graph, β1 equals the number of independent loops, which also equals |E| − |V| + 1 for a connected graph. Hence we also say that the 1D EPD captures the birth and death of independent loop features. The birth and death times of a loop feature correspond to the threshold values a at which these events happen. In general, the death time of such a loop feature is smaller than its birth time. For example, the red triangle persistence point in Figure 1(b) denotes that the red cycle in Figure 1(a) appears at X_{t_5} in the ascending filtration and appears again at X^{t_1} in the descending filtration. Finally, PDs live in an infinite-dimensional space equipped with an appropriate metric structure, such as the so-called p-th Wasserstein distance [11] or the bottleneck distance [9]. They have been combined with various deep learning methods including kernel machines [34, 25, 5], convolutional neural networks [18, 20, 44, 57], transformers [53], connectivity loss [7, 17], and GNNs [56, 8, 50, 55, 16, 4]. To facilitate learning, many works in the literature vectorize persistence diagrams for downstream analysis. Among these, a popular choice is the persistence image [1]. 3 Algorithm Revision: Decomposing EPD into Edge-Wise Pairing Predictions In this section, we provide algorithmic insights into how the expensive and complex computation of EPDs can be decomposed into pairing problems for edges, each of which can be solved exactly using a Union-Find algorithm. The benefit is two-fold. First, the decomposition makes it possible to train the neural network through edge-wise supervision. This allows us to adopt the popular and effective edge-prediction GNN for this goal. Second, we observe the similarity between Union-Find and the sequential algorithms that are known to be imitable by neural networks. This gives us the opportunity to design a special graph neural network that imitates the algorithm accurately, and thus approximates EPDs accurately. Decompose the EPD Computation into Pairing Computations. Recall that our goal is to compute the 0D PD and the 1D EPD, denoted PD_0 and PD_1.
The reason for not estimating the 0D EPD (i.e., not including the global max/min pair that corresponds to the whole connected component) is that (1) the global max/min value is easy to obtain and does not need an extra prediction, and (2) in our setting, the global max/min pair will not be paired with any edge in the ascending filtration. In later sections, the estimation of EPDs denotes the estimation of PD_0 and PD_1. We observe that on these diagrams, each point corresponds to a unique pairing of graph elements (a vertex-edge pair for PD_0, an edge-edge pair for PD_1). Each pair of elements is essentially the “creator” and the “destroyer” of the corresponding topological feature during the filtration, and their filtration values are the birth and death times of the topological feature. For example, the persistence point located at (t_2, t_3) in Figure 1(b) denotes that the edge u_2u_3 is paired with u_2. We consider the following “unique pairing” for all edges in the graph: consider each edge in the ascending filtration; if the edge is a destroyer in the ascending filtration, it will be paired with a vertex. Otherwise, this edge e is a creator in the ascending filtration and will be paired during the descending filtration with another edge e′. We note that this is not in conflict with the fact that PDs/EPDs are often sparse. Many pairings are local and only pair adjacent elements. They correspond to zero-persistence points living on the diagonal of the diagrams. This pairing view gives us the opportunity to transform the computation of EPDs into a pairing prediction problem: for every edge in the graph, we predict its pairing element. This will be the foundation of our design of the GNN in Sec. 4. Meanwhile, we observe that the decomposition is not only at the output level. The original EPD algorithm, a sequential modulo-2 matrix reduction algorithm, can indeed be rewritten into a set of independent algorithmic subroutines, each computing one pairing. Each subroutine is a Union-Find algorithm. The three procedures referenced in this section are listed below.
Algorithm 1 Sequential algorithm
  Input: graph G = (V,E), filter function f
  Initialise-Nodes(V, f)
  Q = Sort-Queue(V)
  while Q is not empty do
    u = Q.pop-min()
    for v ∈ G.neighbors(u) do
      Relax-Edge(u, v, f)
    end for
  end while
Algorithm 2 Computation of EPD
  Input: filter function f, input graph G = (V,E)
  V, E = sorted(V, E, f)
  PD_0 = Union-Find(V, E, f), PD_1 = {}
  for i ∈ V do
    C_i = {C_ij | (i, j) ∈ E, f(j) > f(i)}, E_i = E
    for C_ij ∈ C_i do
      f(C_ij) = f(i), E_i = E_i − {(i, j)} + {(C_ij, j)}
    end for
    PD_1^i = Union-Find-step(V + C_i − {i}, E_i, f, C_i)
    PD_1 += PD_1^i
  end for
  Output: PD_0, PD_1
Algorithm 3 Union-Find-step (Sequential)
  Input: V, E, f, C_i
  PD_1^i = {}
  for v ∈ V do
    v.value = f(v), v.root = v
  end for
  Q = Sort(V), Q = Q − {v | f(v) < f(i)}, G = {Q, E_Q}, where E_Q = E ∩ (Q × Q)
  while Q is not empty do
    u = Q.pop-min()
    for v ∈ G.neighbors(u) do
      p_u, p_v = Find-Root(u), Find-Root(v)
      if p_u ≠ p_v then
        s = argmin(p_u.value, p_v.value)
        l = argmax(p_u.value, p_v.value)
        l.root = s
        if p_u ∈ C_i and p_v ∈ C_i then
          PD_1^i += {(u.value, l.value)}
        end if
      end if
    end for
  end while
  Function: Find-Root(u)
    p_u = u
    while p_u ≠ p_u.root do
      p_u.root = (p_u.root).root, p_u = p_u.root
    end while
    Return: p_u
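To make the decomposition concrete, here is a short illustrative Python version of the Union-Find-step subroutine written by us for exposition; it is not the authors' released implementation. It assumes an undirected simple graph, distinct vertex filter values, vertices that are not Python tuples (tuples are reserved for the virtual split nodes), and each undirected edge listed once; it also replaces the vertex sweep of Algorithm 3 with the equivalent processing of edges in increasing order of their higher endpoint value.

def union_find_step(vertices, edges, f, i):
    """1D extended persistence points contributed by source vertex i (sketch)."""
    # Upper-edge splitting: drop i and vertices below f(i); every upper edge
    # (i, j) is rerouted to a virtual node ("C", j) carrying the value f(i).
    value = {v: f[v] for v in vertices if v != i and f[v] >= f[i]}
    rewired = []
    for (u, v) in edges:
        if u == i and f[v] > f[i]:
            value[("C", v)] = f[i]
            rewired.append((("C", v), v))
        elif v == i and f[u] > f[i]:
            value[("C", u)] = f[i]
            rewired.append((("C", u), u))
        elif u in value and v in value:
            rewired.append((u, v))

    root = {x: x for x in value}

    def find(x):  # Find-Root with path compression
        while root[x] != x:
            root[x] = root[root[x]]
            x = root[x]
        return x

    points = []
    # Sweep edges by the value of their higher endpoint (the merge time).
    for (u, v) in sorted(rewired, key=lambda e: max(value[e[0]], value[e[1]])):
        pu, pv = find(u), find(v)
        if pu == pv:
            continue
        both_upper_branches = isinstance(pu, tuple) and isinstance(pv, tuple)
        small, large = sorted((pu, pv), key=lambda r: value[r])
        root[large] = small          # merge the larger-valued root into the smaller one
        if both_upper_branches:      # two upper branches of i meet: a loop point
            points.append((max(value[u], value[v]), f[i]))
    return points

Because virtual split nodes carry the smallest value f(i), any component containing one keeps a virtual root, so a merge of two virtual-rooted components corresponds to two upper branches of i meeting, yielding a point of the form (merge value, f(i)); this matches the (t_4, t_1) example discussed in the description of Algorithm 2 below.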
This new decomposed EPD algorithm has not been reported before, although the idea follows from existing work [2]. For completeness, we provide a proof of correctness of the algorithm. Description of Algorithm 2. The pseudocode for the 1D EPD computation is shown in Algorithm 2. We leave the algorithm for the 0D PD to the supplementary material (the 0D algorithm needs a single run of Union-Find [14, 13] and is very similar to Algorithm 3, which is a subroutine used by Algorithm 2). For simplicity of presentation, we assume that all vertices have distinct function values under f : V → R (we can add jitter to the original filter function; the output EPDs will only change slightly [9]). Therefore finding the persistence values amounts to finding the pairings. To compute the EPD, we traverse all nodes in the vertex set and find their extended persistence pairings. Combining the persistence pairs from all nodes, we obtain the final EPD. The algorithm complexity analysis is provided in the supplementary material. Finding persistence pairings for nodes. For a node u_i ∈ V, we can call Algorithm 3 to identify the corresponding persistence pair. In particular, the algorithm first sorts the graph elements according to an input scalar function, then performs the edge operations by finding the roots of the corresponding nodes and merging them. See Figure 1(c) for a simple illustration. For node u_1, there are three upper edges: u_1u_3, u_1u_4, and u_1u_6. We put each such edge u_iu_j in a different component C_ij (we call this the upper-edge splitting operation) and start to sweep the graph in increasing filter values starting at f(u_i). Then, the first time any two such components merge will give rise to a new persistence point in the 1D EPD. For instance, C_14 and C_13 first merge at u_4, and this gives rise to the brown loop in Figure 1(a) with (t_4, t_1) as its persistence point. In Figure 1(d), by contrast, the two connected components C_23 and C_25 (originating from u_2) are never united. Therefore, node u_2 does not lead to any persistence point in the EPD. Correctness. The idea behind Algorithm 2 for computing the extended pairing of essential edges appears to be folklore. For completeness, we provide a proof of its correctness (stated in Theorem 3.1). We give a sketch of the proof here, leaving the complete proof to the supplementary material. Theorem 3.1. Algorithm 2 outputs the same 1D EPDs as the standard EPD computation algorithm. Proof sketch. To compute the 1D EPDs, we simply need to find the pairing partner of every edge. Therefore, to prove that the two algorithms output the same 1D EPDs, we need to prove that the output pairing partners are the same (or share the same filter value). We prove this by showing that both the standard EPD computation algorithm and Algorithm 2 find the “thinnest pair”, i.e., the paired saddle points with the minimum distance in terms of filter value, for all edges. Neural Approximation of Union-Find. In the previous paragraphs, we showed that the computation of 1D EPDs can be decomposed into the parallel execution of Union-Find algorithms, which share a similar sequential behavior. This gives us the opportunity to approximate these Union-Find algorithms well, and consequently to approximate EPDs well. Approximating algorithms with neural networks is a very active research direction [52, 21, 24, 33, 35, 49]. Within the context of graphs, GNNs have been proposed to approximate parallel algorithms (e.g., Breadth-First-Search) and sequential algorithms (e.g., Dijkstra) [43, 41, 46].
Particularly relevant to us is the success in approximating the category of sequential algorithms such as Dijkstra. These sequential algorithms, as generally defined in Algorithm 1, sort graph elements (vertices and edges) according to a certain function, and perform algorithmic operations in that order. As described in the previous paragraphs, the Union-Find algorithm also contains these steps and can be expressed in a sequential-like form (Algorithm 3). Therefore we propose a framework to simulate the algorithm. 4 A Graph Neural Network for EPD Approximation The previous section establishes the algorithmic foundation by showing that we can decompose the EPD computation into edge pairing prediction problems, each of which can be solved using a Union-Find algorithm. Based on such algorithmic insights, we next introduce our neural network architecture to approximate the EPDs on graphs. Our main contributions are: (1) we transform the EPD computation into an edge-wise prediction problem and solve it using a GNN framework inspired by GNNs for link prediction; (2) we design a new backbone GNN model, PDGNN, to approximate the Union-Find algorithm, with specially designed pooling and message passing operations. 4.1 EPD computation as an edge-wise prediction problem We have established that computing PD_0 and PD_1 can be reduced to finding the pairing partners of all edges. We therefore cast the problem as an edge-wise prediction problem: we predict the persistence pairing for all edges. This is very similar to a standard link prediction problem [6, 50], in which one predicts, for each node pair of interest, whether it is a real edge of the graph or not. Inspired by standard link-prediction GNN architectures [6, 50], we propose our model (see Figure 2) as follows. (1) For an input graph G = (V,E) and a filter function f, we first obtain the initial filter values of all the nodes, X = f(V) ∈ R^{|V|×1}, and then use a specially designed GNN model, which we later call PDGNN and denote by G, to obtain the node embeddings of all these vertices: H = G(X) ∈ R^{|V|×d_H}. (2) Subsequently, an MLP (multi-layer perceptron) W is applied to the node embeddings to obtain a two-dimensional output for each edge (u, v) ∈ E, corresponding to its persistence pairing. Formally, we use PP_{uv} = W([h_u ⊕ h_v]) ∈ R^2 as the persistence pair. Here, h_u and h_v denote the node embeddings of nodes u and v, and ⊕ represents the concatenation of vectors. In Algorithm 2, the Union-Find-step should be run for all edges to obtain the 1D EPD. Hence, ideally, we would need a large GNN model with node features proportional to the graph size so as to simulate all these Union-Find-steps in parallel simultaneously. However, this would be expensive in practice. On the other hand, there are many overlapping or similar computational steps between the Union-Find-step procedures on different vertices. Hence, in practice, we only use bounded-size node features.
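As a minimal sketch of the edge-wise prediction head just described, written by us in PyTorch (the class and variable names are ours, and the backbone can be any GNN producing node embeddings, such as the PDGNN described next):

import torch
import torch.nn as nn

class EdgePairHead(nn.Module):
    """Maps node embeddings to a 2D persistence pair for every edge."""
    def __init__(self, d_h, d_hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_h, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 2))

    def forward(self, h, edge_index):
        # h: [|V|, d_h] node embeddings from the backbone GNN.
        # edge_index: [2, |E|] endpoint indices of the edges.
        h_u, h_v = h[edge_index[0]], h[edge_index[1]]
        return self.mlp(torch.cat([h_u, h_v], dim=-1))  # [|E|, 2] pairs PP_uv

Given filter values x of shape [|V|, 1] and a backbone gnn, the predicted pairs would be obtained as EdgePairHead(d_h)(gnn(x, edge_index), edge_index).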
4.2 PDGNN In this section, we explain how to design the backbone GNN to approximate the Union-Find algorithm. Note that Union-Find is similar to known sequential algorithms, but with a few exceptions. We design specific pooling and message passing operations to imitate these special steps. These design choices will be shown to be necessary in the experiment section. Recall that a typical GNN learns node embeddings via an iterative aggregation over local graph neighbors. Following [47], we write the k-th iteration (the k-th GNN layer) as: h_u^k = AGG^k({MSG^k(h_v^{k−1}), v ∈ N(u)}, h_u^{k−1}) (1) where h_u^k is the feature of node u after k iterations, and N(u) is the neighborhood of node u. In our setting, h_u^0 = x_u is initialized to be the filter value of node u. Different GNNs have different MSG and AGG functions; e.g., in GIN [47], the message function MSG is an MLP followed by an activation function, and the aggregation function AGG is a sum aggregation. We now describe our specially designed GNN, called PDGNN (Persistence Diagram Graph Neural Network). Compared with the sequential algorithms (Algorithm 1) [46], our Union-Find algorithm (Algorithm 3) differs in (1) the Find-Root subroutine, which needs to return the minimum of the component, and (2) additional edge operations such as upper-edge splitting. To handle these special algorithmic needs, our PDGNN modifies standard GNNs with the following modules. A new aggregation due to the Find-Root function. Finding the minimum intuitively suggests using a combination of several local min-aggregations. Considering that the sum aggregation can bring the best expressiveness to GNNs [47], we implement the root-finding process by a concatenation of sum aggregation and min aggregation as our aggregation function. To be specific: AGG^k(·) = SUM(·) ⊕ MIN(·) (2) Improved edge operations. As shown in [43, 46], classic GNNs are not effective in “executing” Relax-Edge subroutines. Furthermore, in Algorithm 2, we also need the upper-edge splitting operation for each vertex. In other words, the information of the separated components C_ij is formed by the information from both nodes u_i and u_j. To this end, we use edge features and attention to provide bias along edges. Specifically, we propose the following message function in the k-th iteration: MSG^k(h_v^{k−1}) = σ^k[α_{uv}^k (h_u^{k−1} ⊕ h_v^{k−1}) W^k] (3) where σ^k is an activation function, W^k is an MLP module, and α_{uv}^k is the edge weight for edge uv. We adopt PReLU as our activation function, and the edge weight proposed in [42] as our edge weight. Training PDGNN. We use the 2-Wasserstein distance between the predicted diagram and the ground truth EPD as the loss function. Through optimal matching, the gradient is passed to each predicted persistence pair. Since we have established a one-to-one correspondence between pairs and edges, the gradient is then passed to the corresponding edge and contributes to the representation learning.
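The following is an illustrative PyTorch sketch (ours, not the released implementation) of one such layer in the spirit of Eqs. (2) and (3). The attention weight α_uv is simplified to a learned sigmoid score per edge rather than the exact formulation of [42], and the final combination of the two aggregations with the previous node feature is one plausible instantiation. It assumes PyTorch ≥ 1.12 (for scatter_reduce_) and an edge_index that lists each undirected edge in both directions.

import torch
import torch.nn as nn

class PDGNNLayer(nn.Module):
    """One layer: attention-weighted messages on concatenated endpoint
    features (Eq. 3), aggregated by SUM ⊕ MIN (Eq. 2)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.PReLU())
        self.att = nn.Linear(2 * d_in, 1)        # simplified edge weight alpha_uv
        self.update = nn.Linear(2 * d_out + d_in, d_out)

    def forward(self, h, edge_index):
        src, dst = edge_index                    # messages flow src -> dst
        pair = torch.cat([h[dst], h[src]], dim=-1)   # (h_u ⊕ h_v) per edge
        alpha = torch.sigmoid(self.att(pair))
        m = alpha * self.msg(pair)               # sigma[alpha (h_u ⊕ h_v) W]

        n, d = h.size(0), m.size(-1)
        idx = dst.unsqueeze(-1).expand(-1, d)
        agg_sum = torch.zeros(n, d, device=h.device).index_add_(0, dst, m)
        agg_min = torch.full((n, d), float("inf"), device=h.device).scatter_reduce_(
            0, idx, m, reduce="amin", include_self=False)
        agg_min = torch.where(torch.isinf(agg_min), torch.zeros_like(agg_min), agg_min)
        # Concatenate SUM and MIN aggregations with the previous node feature.
        return self.update(torch.cat([agg_sum, agg_min, h], dim=-1))

Stacking several such layers on the initial filter values and feeding the resulting embeddings to an edge-wise head of the kind sketched above yields the overall model.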
5 Experiments In this section, we thoroughly evaluate the proposed model from 3 different perspectives. In Section 5.1, we evaluate the approximation error between the predicted diagrams and the original diagrams and show that the predictions are very close to the ground truth. Even with a small approximation error, we still need to know how much the error influences downstream tasks. Therefore, in Section 5.2, we evaluate the learning power of the predicted diagrams through 2 downstream graph representation learning tasks: node classification and link prediction. We observe that the model using the predicted diagrams performs comparably with the model using the ground truth diagrams. In Section 5.3, we evaluate the efficiency of the proposed algorithm. Experiments demonstrate that the proposed method is much faster than the original algorithm, especially on large and dense graphs. Source code is available at https://github.com/pkuyzy/TLC-GNN. Datasets. To compute EPDs, we need to set the input graphs and the filter functions. Existing state-of-the-art models on node classification [56] and link prediction [50] mainly focus on the local topological information of the target node(s). Following their settings, for a given graph G = (V,E), we extract the k-hop neighborhoods of all the vertices, obtaining |V| vicinity graphs. In our experiments, k is set to 1 or 2 (details are provided in the supplementary material). In terms of filter functions, we use the Ollivier-Ricci curvature [31], the heat kernel signature with two temperature values [39, 19], and the node degree (following the settings in [56, 50], we adopt the Ollivier-Ricci curvature as the graph metric and the distance to the target node(s) as the filter function; following the settings in [4], we set the temperature to t = 10 and 0.1 and adopt these two kernel functions as filter functions; the node degree is used as the initial filter function in [16]). For an input vicinity graph, we compute 4 EPDs based on the 4 filter functions, and then vectorize them to get 4 persistence images [1]. Therefore, we get 4|V| EPDs in total. The input graphs include (1) citation networks including Cora, Citeseer, and PubMed [36]; (2) Amazon shopping datasets including Photo and Computers [37]; (3) coauthor datasets including CS and Physics [37]. Details are available in the supplementary material. 5.1 Approximation Quality In this section, we evaluate the approximation error between the predicted and the original EPDs. Evaluation metrics. Recall that the input of our model is a graph and a filter function, and the output is the predicted EPD. After obtaining the predicted EPD, we vectorize it with the persistence image [1] and evaluate (1) the 2-Wasserstein (W2) distance between the predicted diagram and the ground truth EPD, and (2) the total squared error between the predicted persistence image and the ground truth image (persistence image error, denoted as PIE). Considering that our aim is to estimate EPDs on graphs rather than to roughly approximate persistence images, we use the W2 distance as the training loss, while the PIE is only used as an evaluation metric. Given an input graph (e.g., Cora, Citeseer, etc.) and a filter function, we extract the k-hop neighborhoods of all the vertices and randomly split these vicinity graphs into 80%/20% training/test sets. We report the mean W2 distance between diagrams and the mean PIE over the different vicinity graphs and the 4 filter functions. Baseline settings. PDGNN denotes our proposed method, that is, the GNN framework with the proposed AGG function and MSG function. Its strategy is to first predict the EPD and then convert it to a persistence image. To show its superiority, we compare with the strategy from [38, 29], i.e., directly approximating the persistence image of the input graph, as a baseline strategy. GIN_PI and GAT_PI denote this baseline strategy with GIN [47] and GAT [42] as the backbone GNNs. To show the effectiveness of the modules proposed in Section 4, we add further baselines that use our proposed strategy. GAT denotes GAT as the backbone GNN. GAT (+MIN) denotes GAT with the new AGG function. Compared with PDGNN, it exploits the original node feature rather than the new edge feature in the MSG function. PDGNN (w/o ew) denotes PDGNN without edge weights. Further experimental settings can be found in the supplementary material.
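For reference, the W2 distance used above as both the training loss and an evaluation metric can be computed via an optimal assignment between the two point sets, with unmatched points matched to the diagonal. The following is a simplified illustrative sketch (ours, not the released code); it assumes non-empty diagrams and a Euclidean ground metric, whereas conventions for the ground metric and diagonal handling vary in the literature.

import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(D1, D2):
    """2-Wasserstein distance between two diagrams given as arrays of (birth, death)."""
    D1 = np.atleast_2d(np.asarray(D1, dtype=float))
    D2 = np.atleast_2d(np.asarray(D2, dtype=float))
    n, m = len(D1), len(D2)
    diag1 = np.abs(D1[:, 1] - D1[:, 0]) / np.sqrt(2)   # distance to the diagonal
    diag2 = np.abs(D2[:, 1] - D2[:, 0]) / np.sqrt(2)

    cost = np.zeros((n + m, n + m))
    cost[:n, :m] = ((D1[:, None, :] - D2[None, :, :]) ** 2).sum(-1)
    cost[:n, m:] = (diag1 ** 2)[:, None]               # D1 point -> diagonal
    cost[n:, :m] = (diag2 ** 2)[None, :]               # D2 point -> diagonal
    row, col = linear_sum_assignment(cost)             # diagonal-diagonal costs stay 0
    return float(np.sqrt(cost[row, col].sum()))

In the learning pipeline, the matching returned by such an assignment is what routes the gradient of the loss to each predicted persistence pair, as described in Section 4.2.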
Results. Table 1 reports the approximation errors. We observe that PDGNN outperforms all the baseline methods on all the datasets. The comparison between GAT and GAT_PI shows the benefit of predicting EPDs instead of predicting the persistence image. Comparing GAT and GAT (+MIN), we observe the advantage of the new AGG function, which shows the necessity of using min aggregation to approximate the Find-Root algorithm. Comparing GAT (+MIN) and PDGNN, we observe the effectiveness of using the new MSG function to help the model capture information about the separated connected components. The comparison between PDGNN (w/o ew) and PDGNN shows that edge weights help the model focus on the individual Relax-Edge subroutine operated on every edge. 5.2 Downstream Tasks In this section, we evaluate the performance of the predicted diagrams on 2 graph representation learning tasks: node classification and link prediction. We replace the ground truth EPDs in state-of-the-art persistence-based models [56, 50] with our predicted diagrams and report the results. Baselines. We compare our method with various state-of-the-art methods. We compare with popular GNN models including GCN [22], GAT [42], and HGCN [6]. For link prediction, we compare with several state-of-the-art methods such as SEAL [54] and P-GNN [51]. Note that GCN and GAT are not originally designed for link prediction; we therefore follow the settings in [6, 50], that is, we obtain node embeddings through these models and use the Fermi-Dirac decoder [23, 32] to predict whether there is a link between the two target nodes. For comparison with the original EPDs, we also add PEGN [56] and TLC-GNN [50] as baseline methods. Furthermore, to show the benefit of directly predicting EPDs, we also add the baselines PEGN (GIN_PI) and TLC-GNN (GIN_PI), which replace the original persistent homology features with the output of GIN_PI. Evaluation metrics. For node classification, our setting is the same as in [22, 42, 56]. To be specific, we train the GNNs with 20 nodes from each class and validate (resp. test) the GNNs on 500 (resp. 1000) nodes. We run the GNNs on these datasets 10 times and report the average classification accuracy and standard deviation. For link prediction, our setting is the same as in [6, 50]. To be precise, we randomly split the existing edges into 85/5/10% training, validation, and test sets. An equal number of non-existent edges are sampled as negative samples in the training process. We fix the negative validation and test sets, and randomly select the negative training sets in every epoch. We run the GNNs on these datasets 10 times and report the mean area under the ROC curve (ROC-AUC) and the standard deviation. Results. Table 2 and Table 3 summarize the performance of all methods on node classification and link prediction. We observe that PEGN (PDGNN) and TLC-GNN (PDGNN) consistently perform comparably with PEGN and TLC-GNN, showing that the EPDs approximated by PDGNN have the same learning power as the true EPDs. Furthermore, PEGN using the approximated EPDs achieves better or comparable performance relative to the different SOTA methods. We also find that PEGN (GIN_PI) and TLC-GNN (GIN_PI) perform much worse than the original models using the true EPDs. This demonstrates that the large approximation error of GIN_PI loses much of the crucial information that is preserved by PDGNN. Transferability. One appealing feature of our method is its transferability. Trained on one graph, our model can estimate EPDs well on another graph.
This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs; we can potentially apply a pre-trained model to large and dense graphs on which direct EPD computation is infeasible. The experiments are provided in the supplementary material. 5.3 Algorithm Efficiency In this section, we evaluate the efficiency of our proposed model. For a fair and complete comparison, we compare with the algorithms from Gudhi [40] and from [50]. We select the first 1000 nodes from Cora, Citeseer, PubMed, Photo, Computers, CS, and Physics, and then extract their 2-hop neighborhoods. With the Ollivier-Ricci curvature as the filter function, we compute the EPDs and report the time (in seconds) used to infer these diagrams. Results. We list the average numbers of nodes and edges of these vicinity graphs in the first row of Table 4. As shown in Table 4, although our model is slower on small datasets like Cora or Citeseer, it is much faster on large and dense datasets. Therefore we can simply use the original algorithm to compute the EPDs on small graphs, and use our model to estimate EPDs on large graphs. The model can be applied to various graph representation learning works based on persistent homology. 6 Conclusion Inspired by recent success in neural algorithm execution, we propose a novel GNN with several technical contributions to simulate the computation of EPDs on graphs. The network is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. Experiments show that our method achieves satisfactory approximation quality and learning power while being significantly faster than the original algorithm on large and dense graphs. Another strength of our method is its transferability: trained on one graph, our model can still approximate EPDs well on another graph. This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs. Acknowledgements. We thank all anonymous reviewers very much for their constructive feedback. The work of Zuoyu Yan, Liangcai Gao, and Zhi Tang is supported by the projects of the National Key R&D Program of China (2019YFB1406303) and the National Natural Science Foundation of China (No. 61876003), and is also a research achievement of the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
1. What is the focus and contribution of the paper on efficient learning frameworks for extended persistence diagrams? 2. What are the strengths of the proposed approach, particularly in terms of its efficiency and applicability to large and dense graphs? 3. What are the weaknesses of the paper regarding its experiments and comparisons with other works? 4. Do you have any concerns about the method's performance on large sparse graphs, which are common in real-world complex systems? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors propose an efficient learning framework to estimate extended persistence diagrams (EPDs). It reformulates EPD computation as an edge-wise prediction task and decomposes it into independent pairing problems. The authors use a GNN architecture to learn the union-find algorithm, which can solve the pairing problems. Strengths And Weaknesses Pros: The EPD approximation method is examined on various datasets and achieves the best performance. The proposed method is efficient on large and dense graphs. The paper is well written and easy to follow. Cons: I suggest the authors check this related paper on using PDs to learn graph representations and consider it as a baseline: https://openreview.net/forum?id=yqPnIRhHtZv Large-scale real-world graphs are usually sparse (e.g., the brain connectome). However, in this paper the large graphs are mostly denser than the smaller ones. The authors claim that their EPD estimation approach is faster than other methods on large and dense graphs; it would be worthwhile to check the performance on large sparse graphs, which are more common in real-world complex systems. The proposed method is actually slower when the graph is sparse (citation networks). The major contribution of this paper is the efficient estimation of EPDs using the decomposition and a GNN architecture that learns the reformulated edge prediction task. Downstream tasks in Section 5.2 are more like an indirect way to examine the approximation accuracy. As long as the EPD approximation error is low, PEGN/TLC-GNN (True Diagram) and PEGN/TLC-GNN (PDGNN) will have similar performance. Therefore I recommend that the authors focus more on the EPD estimation experiments. For example, the authors could further discuss and analyze why indirectly estimating the persistence image (PI) by estimating the EPD is better than directly estimating the PI, to explain their experimental results presented in Table 1. Another thing worth looking at is the choice of filter functions. An ablation study or exploration of other graph metrics, such as the clustering coefficient or centrality, would help make the experiments complete. Questions Can such an EPD decomposition be extended to simplicial complexes (with 2-simplices, 3-simplices, and more), beyond graphs viewed as simplicial complexes? If so, it would be interesting to also apply the proposed framework to some real-world higher-order data. How is the value of k (k-hop) chosen when extracting the |V| vicinity graphs? The authors claim that their estimation method is much faster on large and dense datasets. Is there a threshold value of the average N/E or degree that decides which method is the fastest to compute/estimate the EPD? Some additional experiments on real-world or synthetic graphs and a plot of computation time vs. graph density may give an answer to this. The efficiency experiment is based on 2-hop neighborhoods. Does the choice of k affect such results? Limitations See question 1.
NIPS
Title Neural Approximation of Graph Topological Features Abstract Topological features based on persistent homology can capture high-order structural information which can then be used to augment graph neural network methods. However, computing extended persistent homology summaries remains slow for large and dense graphs and can be a serious bottleneck for the learning pipeline. Inspired by recent success in neural algorithmic reasoning, we propose a novel graph neural network to estimate extended persistence diagrams (EPDs) on graphs efficiently. Our model is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. We validate our method with convincing empirical results on approximating EPDs and downstream graph representation learning tasks. Our method is also efficient; on large and dense graphs, we accelerate the computation by nearly 100 times. 1 Introduction Graph neural networks (GNNs) have been widely used in various domains with graph-structured data [45, 27, 22, 42, 6]. Much effort has been made to understand and to improve graph representation power [47, 30, 3, 28]. An intuitive solution is to explicitly inject high order information, such as graph topological/structural information, into the GNN models [51, 26]. To this end, persistent homology [15, 14], which captures topological structures (e.g., connected components and loops) and encodes them in a summary called persistence diagram (PD), have attracted the attention of researchers. Indeed, persistence has already been injected to machine learning pipelines for various graph learning tasks [55, 56, 16, 4, 8, 50]. In particular, it has been found helpful to use the so-called extended persistence diagrams (EPDs) [10], which contain richer information than the standard PDs. Despite the usefulness of PDs and EPDs, their computation remains a bottleneck in graph learning. In situations such as node classification [56] or link prediction [50], one has to compute EPDs on vicinity graphs (local subgraph motifs) generated around all the nodes or all possible edges in the input graph. This can be computationally prohibitive for large and dense graphs. Take the Amazon ∗Correspondence to Chao Chen, Yusu Wang, and Liangcai Gao 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Computers dataset [37] as an example. To compute EPDs on vicinity graphs take several seconds on average, and there are 13381 nodes. So to compute all EPDs with a single CPU can take up to a day. This is not surprising as, while theoretically EPD for graphs can be computed in O(n log n) time [2], that algorithm has not been implemented, and practical algorithms for computing PD take quadratic time in worst case [50]. These computational difficulties raise the question: can we approximate the expensive computation of EPDs using an efficient learning-based approach? This is a challenging question due to the complex mathematical machinery behind the original algorithm. First, the algorithm involves a reduction algorithm of the graph incidence matrix. Each step of the algorithm is a modulo-2 addition of columns that can involve edges and nodes far apart. Such algorithm can be hard to be approximated by a direct application of the black-box deep neural networks. The second challenge comes from the supervision. The output EPD is a point set with an unknown cardinality. The distance between EPDs, called the Wasserstein distance [9, 11], involves a complex point matching algorithm. 
It is nontrivial to design a deep neural network with variable output and to support supervision via such Wassserstein distance. Previous attempts [38, 29] directly use black-box neural networks to generate fixed-length vectorization of the PDs/EPDs and use mean squared error or cross-entropy loss for supervision. The compromise in supervision and the lack of control make it hard to achieve high-quality approximation of PDs/EPDs. In this paper, we propose a novel learning approach to approximate EPDs on graphs. Unlike previous attempts, we address the aforementioned challenges through a carefully designed learning framework guided by several insights into the EPD computation algorithm. In terms of model output and supervision, we observe that the computation of EPDs can be treated as an edge-wise prediction instead of a whole-graph prediction. Each edge in the graph is paired with another graph element (either vertex or edge), and the function values of the pair are the coordinates of a persistence point in the EPD. This observation allows us to compute EPDs by predicting the paired element for every edge of the graph. The Wasserstein distance can be naturally decomposed into supervision loss for each edge. This element-wise supervision can significantly improve learning efficiency compared with previous solutions, which treat PDs/EPDs as a whole-graph representation and have to use whole-graph representation pooling. Another concern is whether and how a deep neural network can approximate the sophisticated EPD algorithm. To this end, we redesign the algorithm so that it is better aligned with algorithms that are known to be learnable by neural networks. Recall we observe that computing EPDs can be decomposed into finding pairing for each edge. We show that the decomposition is not only at the output level, but also at the algorithm level. The complex standard EPD computation algorithm can indeed be decomposed into independent pairing problems, each of which can be solved exactly using a classic Union-Find algorithm [12]. To this end, we draw inspiration from recent observations that neural networks can imitate certain categories of sequential algorithms on graphs [43, 46]. We propose a carefully designed graph neural network with specific message passing and aggregation mechanism to imitate the Union-Find algorithm. Decomposing the algorithm into Union-Find subroutines and approximating them with a customized GNN provide better alignment between our neural network and the EPD algorithm. A better alignment can lead to better performance [48]. Empirically, we validate our method by quantifying its approximation quality of the EPDs. On two downstream graph learning tasks, node classification and link prediction, we also show that our neural approximations are as effective as the original EPDs. Meanwhile, on large and dense graphs, our method is much faster than direct computation. In other words, the approximated EPDs do not lose accuracy and learning power, but can be computed much more efficiently. Finally, we observe that our model can be potentially transferred to unseen graphs, perhaps due to the close imitation of the Union-Find subroutine. This is encouraging as we may generalize topological computation to various challenging real-world graphs without much additional effort. In summary, we propose an effective learning approach to approximate EPDs with better supervision and better transparency. The technical contributions are as follows. 
• We reformulate the EPD computation as an edge-wise prediction problem, allowing better supervision and more efficient representation learning. We show that the EPD computation can be decomposed into independent pairing problems, each of which can be solved by the Union-Find algorithm. • Inspired by recent neural algorithm approximation works [43, 46], we design a novel graph neural network architecture to learn the Union-Find algorithm. The closer algorithmic alignment ensures high approximation quality and transferability. 2 Background: Extended Persistent Homology We briefly introduce extended persistent homology and refer the readers to [10, 14] for more details. Ordinary Persistent Homology. Persistent homology captures 0-dimensional (connected components), 1-dimensional (loops) topological structures, as well as high-dimensional analogs, and measures their saliency via a scalar function called filter function. Here we will only describe it for the graph setting. Given a input graph G = (V,E), with node set V and edge set E, we call all the nodes and edges simplices. Denote by X = V ∪E the set of all simplices. We define a filter function on all simpices, f : X → R. In the typical sublevel-set setting, f is induced by a node-valued function (e.g., node degrees), and further defined on edges as f(uv) = max(f(u), f(v)). Denote by Xa the sublevel set of X , consisting of simplices whose filter function values ≤ a, Xa = {x ∈ X|f(x) ≤ a}. As the threshold value a increases from −∞ to ∞, we obtain a sequence of growing spaces, called an ascending filtration of X: ∅ = X−∞ ⊂ ... ⊂ X∞ = X. As Xa increases from ∅ to X , new topological structures gradually appear (born) and disappear (die). For instance, the blue square persistence point at (t2, t3) in Figure 1 (b) indicates that the connected component u2 appears at Xt2 and is merged with the whole connected component at Xt3 . Applying the homology functor to the filtration, we can more precisely quantify the birth and death of topological features (as captured by homology groups) throughout the filtration, and the output is the so-called persistence diagram (PD), which is a planar multiset of points, each of which (b, d) corresponds to the birth and death time of some homological feature (i.e., components, loops, and their higher dimensional analogs). The lifetime |d− b| is called the persistence of this feature and intuitively measures its importance w.r.t. the input filtration. Extended Persistent Homology. In the ordinary persistent homology, topology of the domain (e.g., the graph) will be created at some time (has a birth time), but never dies (i.e., with death time being equal to +∞). We call such topological features essential features. In the context of graphs, the importance of 1D essential features, corresponding to independent loops, are not captured via the ordinary persistence. To this end, an extended persistence module is introduced in [10]: ∅ = H(X−∞) → · · ·H(Xa) → · · ·H(X) = H(X,X∞) → · · · → H(X,Xa) → · · · → H(X,X−∞), where Xa = {x ∈ X|f(x) ≥ a} is a superlevel set of X at value a. We say that the second part H(X,X∞) → · · · → H(X,Xa) → · · · → H(X,X−∞) is induced by a descending filtration. If we inspect the persistence diagram induced by this extended sequence, as H(X,X−∞) is trivial, all the loop features created will also be killed in the end, and thus captured by persistence points whose birth happens in the ascending filtration and death happens in the descending filtration. 
In what follows, we abuse the notation slightly and use 1D EPD to refer to only such persistence points (i.e., born in ascending portion and death in descending portion) in the persistence diagram induced by the extended module2. We use 0D PD to refer to the standard ordinary 0D persistence diagram induced by the ascending sequence. Our goal is to compute/approximate the union of 0D PD and 1D EPD. Specifically, in the graph setting, at the end of the ascending filtration, some edges, which are the so-called negative edges (as they kill homological features), are paired with the vertices. These correspond to points in the 0D PD, capturing the birth and death of connected components in the ascending filtration. Those unpaired edges, called positive edges, will create independent loops (1D homology for graphs) and remain unpaired after the ascending filtration. The number of such unpaired edges equals to the 1st Betti number β1 (rank of the 1st homology group). These edges will then be paired in the descending part of the persistence module and their birth-depth times give rise to 1D EPD. An example is given in Figure 1(b). Note that since our domain is a graph, β1 equals the number of independent loops, which also equals to β1 = |E| − |V |+ 1 for a connected graph. Hence we also say that 1D EPD captures the birth and death of independent loop features. The birth and death times of the loop feature correspond to the threshold value a’s when these events happen. In general, the death time for such loop feature is smaller than the birth time. For example, the red triangle persistence point in Figure 1 (b) denotes that the red cycle in Figure 1 (a) appears at Xt5 in the ascending filtration and appears again at Xt1 in the descending filtration. Finally, PDs live in an infinite-dimensional space equipped with an appropriate metric structure, such as the so-called p-th Wasserstein distance [11] or the bottleneck distance [9]. They have been combined with various deep learning methods including kernel machines [34, 25, 5], convolutional neural networks [18, 20, 44, 57], transformers [53], connectivity loss [7, 17], and GNNs [56, 8, 50, 55, 16, 4]. During learning, there have been many works in the literature to vectorize persistence diagrams for downstream analysis. Among these works a popular choice is the persistence image [1]. 3 Algorithm Revision: Decomposing EPD into Edge-Wise Paring Predictions In this section, we provide algorithmic insights into how the expensive and complex computation of EPDs can be decomposed into pairing problems for edges. And each pairing problem can be solved exactly using a Union-Find algorithm. The benefit is two-folds. First, the decomposition makes it possible to train the neural network through edge-wise supervision.This allows us to adopt the popular and effective edge-prediction GNN for the goal. Second, we observe the similarity between the Union-Find and sequential algorithms which are known to be imitable by neural networks. This gives us the opportunity to design a special graph neural network to imitate the algorithm accurately, and to approximate EPDs accurately. Decompose the EPD Computation into Pairing Computations. Recall that our goal is to compute the 0D PDs and 1D EPDs PD0 and PD1. 
The reason for not estimating 0D EPDs (or not including the global max/min pair that corresponds to the whole connected component) is that (1) the global max/min value is easy to obtain, and does not need an extra prediction; (2) in our setting, the global max/min pair will not be paired with any edge in the ascending filtration. In the later section, the estimation of EPDs denote the estimation of PD0 and PD1. We observe that on these diagrams, each point corresponds to a unique pairing of graph elements (vertex-edge pair for PD0, edge-edge pair for PD1). Each pair of elements are essentially the “creator” and “destroyer” of the corresponding topological feature during the filtration. And their filtration values are the birth and death times of the topological feature. For example, the persistence point located at (t2, t3) in Figure 1 (b) denotes that the edge u2u3 is paired with u2. We consider the following "unique pairing" for all edges in the graph: Consider each edge in the ascending filtration: if the edge is a destroyer in the ascending filtration, it will be paired with a vertex. Otherwise, this edge e is a creator in the ascending filtration and will be paired during the descending filtration with another edge e′. We note that this is not in conflict with the fact that the PDs/EPDs are often sparse. Many pairings are local and only pair adjacent elements. They correspond to zero-persistence points living in the diagonal of the diagrams. 2We note that in standard terminology, extended persistence diagram will also contain persistent points born and destroyed both in the descending sequence. Algorithm 1 Sequential algorithm 1: Input: graph G = (V,E), filter function f . 2: Initialise-Nodes(V, f ) 3: Q = Sort-Queue(V ) 4: while Q is not empty do 5: u = Q.pop-min() 6: for v ∈ G.neighbors(u) do 7: Relax-Edge(u, v, f ) 8: end for 9: end while Algorithm 2 Computation of EPD 1: Input: filter function f , input graph G = (V,E) 2: V,E = sorted(V,E, f) 3: PD0 = Union-Find(V,E, f), PD1 = {} 4: for i ∈ V do 5: Ci = {Cij |(i, j) ∈ E, f(j) > f(i)}, Ei = E 6: for Cij ∈ Ci do 7: f(Cij ) = f(i), Ei = Ei − {(i, j)} + {(Cij , j)} 8: end for 9: PDi1 = Union-Find-step(V + Ci − {i}, Ei, f, Ci) 10: PD1+ = PDi1 11: end for 12: Output: PD0, PD1 Algorithm 3 Union-Find-step (Sequential) 1: Input: V , E, f , Ci 2: PDi1 = {} 3: for v ∈ V do 4: v.value = f(v), v.root = v 5: end for 6: Q = Sort(V ), Q = Q − {v|f(v) < f(i)}, G = {Q,EQ}, where EQ = E ∪Q2. 7: while Q is not empty do 8: u = Q.pop-min() 9: for v ∈ G.neighbors(u) do 10: 11: pu, pv = Find-Root(u),Find-Root(v) 12: if pu ̸= pv then 13: s = argmin(pu.value, pv.value) 14: l = argmax(pu.value, pv.value) 15: l.root = s 16: if pu ∈ Ci and pv ∈ Ci then 17: PDi1 + {(u.value, l.value)} 18: end if 19: end if 20: end for 21: end while 22: Function: Find-Root(u) 23: pu = u 24: while pu ̸= pu.root do 25: pu.root = (pu.root).root, pu = pu.root 26: end while 27: Return: pu This pairing view gives us the opportunity to transform the computation of EPDs into a pairing prediction problem: for every edge in the graph, we predict its pairing element. This will be the foundation of our design of the GNN in Sec. 4. Meanwhile, we observe that the decomposition is not only at the output level. The original algorithm of EPD, a sequential modulo-2 matrix reduction algorithm, can indeed be rewritten into a set of independent algorithm subroutines, each for the computation of one pairing. Each subroutine is a Union-Find algorithm. 
This new decomposed EPD algorithm has not been reported before, although the idea follows from existing work [2]. For completeness, we will provide a proof of correctness of the algorithm. Description of Algorithm 2. The pseudocode for 1D EPD computation is shown in Algorithm 2. We leave the algorithm for 0D PD to the supplementary material3. For simplicity of presentation, we assume that all vertices have distinct function values f : V → R4. Therefore finding the persistence value equals to finding the pairing. To compute the EPD, we traverse all nodes in the vertex set and find their extended persistence pairing. Combining the persistence pair from all nodes, we can obtain the final EPD. The algorithm complexity analysis is provided in the supplementary material. Finding persistence pairing for nodes. For node ui ∈ V , we can call Algorithm 3 to identify the corresponding persistence pair. In particular, the algorithm first sorts the graph elements according to an input scalar function, then does the edge operation by finding the roots of the corresponding nodes and merging these nodes. See Figure 1(c) for a simple illustration. For node u1, there are three upper edges: u1u3, u1u4, and u1u6. We put each such edge uiuj in a different component Cij , – we call this upper-edge splitting operation – and start to sweep the graph in increasing values starting at f(ui). Then, the first time any two such components merge will give rise to a new persistence 3The 0D algorithm needs a single run of Union-Find [14, 13], and is very similar to Algorithm 3 which is a subroutine used by Algorithm 2. 4We can add jitter to the original filter function. The output EPDs will only have minor changes [9] point in the 1D EPD. For instance, C14 and C13 first merge at u4, and this will give rise to the brown loop in Figure 1(a) with (t4, t1) as its persistence point. While in Figure 1 (d), the two connected components, C23 and C25 (originated from u2) will not be united. Therefore, node u2 will not lead to any persistence point in the EPD. Correctness. The idea behind Algorithm 2 to compute the extended pairing for essential edges appears to be folklore. For completeness, we provide a proof of its correctness (stated in Theorem 3.1). We provide a sketch of the proof here, leaving the complete proof to the supplementary material. Theorem 3.1. Algorithm 2 outputs the same 1D EPDs as the standard EPD computation algorithm. Proof sketch. To compute the 1D EPDs, we simply need to find the pairing partner for all edges. Therefore, to prove that the two algorithms output the same 1D EPDs, we need to prove that the output pairing partners are the same (or share the same filter value). We prove this by showing that both the standard EPD computation algorithm and Algorithm 2 find the “thinnest pair", i.e., the paired saddle points are with the minimum distance in terms of filter value, for all edges. Neural Approximation of Union-Find. In the previous paragraph, we showed that the computation of 1D EPDs can be decomposed into the parallel execution of Union-Find algorithms, which share a similar sequential behavior. This gives us the opportunity to approximate these Union-Find algorithms well, and consequently approximate EPDs well. Approximating algorithms with neural networks is a very active research direction [52, 21, 24, 33, 35, 49]. Within the context of graph, GNNs have been proposed to approximate parallel algorithms (e.g., Breadth-First-Search) and sequential algorithms (e.g., Dijkstra) [43, 41, 46]. 
Particularly relevant to us is the success in approximating the category of sequential algorithms such as Dijkstra. These sequential algorithms, as generally defined in Algorithm 1, sort graph elements (vertices and edges) according to certain function, and perform algorithmic operations according to the order. As described in previous paragraphs, the Union-Find algorithm also contains these steps, and can be expressed in a sequential-like form (Algorithm 3). Therefore we propose a framework to simulate the algorithm. 4 A Graph Neural Network for EPD Approximation Previous section establishes the algorithm foundation by showing that we can decompose EPD computation into edge pairing prediction problems, each of which can be solved using a Union-Find algorithm. Based on such algorithmic insights, we next introduce our neural network architecture to approximate the EPDs on graphs. Our main contributions are: (1) we transform the EPD computation into an edge-wise prediction problem, and solve it using a GNN framework, inspired by the GNN for link prediction; (2) we design a new backbone GNN model PDGNN to approximate the Union-Find algorithm, with specially designed pooling and message passing operations. 4.1 EPD computation as a edge-wise prediction problem We have established that computing PD0 and PD1 can be reduced into finding the pairing partners for all edges. We transfer the problem into an edge-wise prediction problem. We predict the persistence pairing for all edges. This is very similar to a standard link prediction problem [6, 50], in which one predicts for each node pair of interest whether it is a real edge of the graph or not. Inspired by standard link-prediction GNN architectures [6, 50], we propose our model (see Figure 2) as follows. (1) For an input graph G = (V,E) and a filter function f , we first obtain the initial filter value for all the nodes: X = f(V ) ∈ R|V |∗1, and then use a specially designed GNN model which later we call PDGNN G to obtain the node embedding for all these vertices: H = G(X) ∈ R|V |∗dH . (2) Subsequently, a MLP (Multi-layer perceptron) W is applied to the node embeddings to obtain a two dimensional output for each edge (u, v) ∈ E, corresponding to its persistence pairing. Formally, we use PPuv = W ([hu ⊕ hv]) ∈ R2 as the persistence pair. Here, hu and hv denote the node embedding for node u and v, and ⊕ represents the concatenation of vectors. In Algorithm 2, the Union-Find-step should be implemented on all edges to obtain 1D EPDs. Hence ideally we would need a large GNN model with node features proportional to the graph size so as to simulate all these Union-find-steps in parallel simultaneously. However this would be expensive in practice. On the other hand, there are many overlapping or similar computational steps between the Union-Find-step procedures on different vertices. Hence in practice, we only use bounded-size node features. 4.2 PDGNN In this section, we explain how to design the backbone GNN to approximate the Union-Find algorithm. Note the Union-Find is similar to known sequential algorithms but with a few exceptions. We design specific pooling and message passing operations to imitate these special changes. These design choices will be shown to be necessary in the experiment section. Recall a typical GNN learns the node embedding via an iterative aggregation of local graph neighbors. 
Following [47], we write the k-th iteration (the k-th GNN layer) as: h_u^k = AGG^k({MSG^k(h_v^{k-1}) : v ∈ N(u)}, h_u^{k-1}) (1), where h_u^k denotes the node features of node u after k iterations, and N(u) is the neighborhood of node u. In our setting, h_u^0 = x_u is initialized to be the filter value of node u. Different GNNs have different MSG and AGG functions, e.g., in GIN [47], the message function MSG is an MLP followed by an activation function, and the aggregation function AGG is a sum aggregation function. We now describe our specially designed GNN, called PDGNN (Persistence Diagram Graph Neural Network). Compared with the sequential algorithms (Algorithm 1) [46], our Union-Find algorithm (Algorithm 3) differs in: (1) the Find-Root algorithm, which needs to return the minimum of the component; (2) additional edge operations such as upper-edge splitting. To handle these special algorithmic needs, our PDGNN modifies standard GNNs with the following modules. A new aggregation due to the Find-Root function. Finding the minimum intuitively suggests using a combination of several local min-aggregations. Considering that the sum aggregation can bring the best expressiveness to GNNs [47], we implement the root-finding process by a concatenation of sum aggregation and min aggregation as our aggregation function. To be specific: AGG^k(·) = SUM(·) ⊕ MIN(·) (2). Improved edge operations. As shown in [43, 46], classic GNNs are not effective in “executing” Relax-Edge subroutines. Furthermore, in Algorithm 2, we also need the upper-edge splitting operation for each vertex. In other words, the information of the separated components Cij is formed by the information from both nodes ui and uj. To this end, we use edge features and attention to provide bias using edges. Specifically, we propose the following message function in the k-th iteration: MSG^k(h_v^{k-1}) = σ^k[α_uv^k (h_u^{k-1} ⊕ h_v^{k-1}) W^k] (3), where σ^k is an activation function, W^k is an MLP module, and α_uv^k is the edge weight for edge uv. We adopt PReLU as our activation function, and the edge weight proposed in [42] as our edge weight. Training PDGNN. We use the 2-Wasserstein distance between the predicted diagram and the ground truth EPD as the loss function. Through optimal matching, the gradient is passed to each predicted persistence pair. Since we have established the one-to-one correspondence between pairs and edges, the gradient is then passed to the corresponding edge, and contributes to the representation learning. 5 Experiments In this section, we thoroughly evaluate the proposed model from 3 different perspectives. In Section 5.1, we evaluate the approximation error between the predicted diagram and the original diagram and show that the prediction is very close to the ground truth. Even with a small approximation error, we still need to know how much the error influences downstream tasks. Therefore, in Section 5.2, we evaluate the learning power of the predicted diagrams through 2 downstream graph representation learning tasks: node classification and link prediction. We observe that the model using the predicted diagrams performs comparably with the model using the ground truth diagrams. In Section 5.3, we evaluate the efficiency of the proposed algorithm. Experiments demonstrate that the proposed method is much faster than the original algorithm, especially on large and dense graphs. Source code is available at https://github.com/pkuyzy/TLC-GNN. Datasets. To compute EPDs, we need to set the input graphs and the filter functions.
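Before turning to the datasets, here is a compact PyTorch sketch of a single layer in the spirit of Eqs. (1)-(3) above. It uses a dense adjacency matrix, takes the edge weights α as given (the attention of [42] is omitted), drops the self term of Eq. (1), and uses a single linear layer for W^k; the class and variable names are our own, so this is an assumption-laden illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class PDGNNLayerSketch(nn.Module):
    """One PDGNN-style layer: messages are built from the concatenated endpoint
    features scaled by an edge weight (Eq. 3), and the aggregation concatenates
    SUM and MIN over neighbors (Eq. 2)."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.mlp = nn.Linear(2 * d_in, d_out)   # W^k applied to h_u ⊕ h_v
        self.act = nn.PReLU()                   # σ^k

    def forward(self, h, adj, alpha):
        # h: (N, d_in) node features; adj: (N, N) 0/1 adjacency; alpha: (N, N) edge weights.
        N = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(N, N, -1),       # h_u at position (u, v)
                          h.unsqueeze(0).expand(N, N, -1)], -1)  # h_v at position (u, v)
        msg = self.act(alpha.unsqueeze(-1) * self.mlp(pair))      # MSG^k, shape (N, N, d_out)
        mask = adj.bool().unsqueeze(-1)
        sum_agg = (msg * mask).sum(dim=1)                          # SUM over neighbors v
        min_agg = msg.masked_fill(~mask, float("inf")).amin(dim=1)
        min_agg = torch.where(torch.isinf(min_agg), torch.zeros_like(min_agg), min_agg)
        return torch.cat([sum_agg, min_agg], dim=-1)               # AGG^k = SUM ⊕ MIN

# Toy usage: 3 nodes on a path, scalar filter values as initial features h^0.
h = torch.tensor([[0.1], [0.5], [0.9]])
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
layer = PDGNNLayerSketch(d_in=1, d_out=4)
print(layer(h, adj, alpha=adj.clone()).shape)   # torch.Size([3, 8])
```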
Existing state-of-the-art models on node classification [56] and link prediction [50] mainly focus on the local topological information of the target node(s). Following their settings, for a given graph G = (V,E), we extract the k-hop neighborhoods of all the vertices, yielding |V| vicinity graphs. In our experiments, k is set to 1 or 2 (details are provided in the supplementary material). In terms of filter functions, we use Ollivier-Ricci curvature [31], the heat kernel signature with two temperature values [39, 19], and the node degree (see footnote 5). For an input vicinity graph, we compute 4 EPDs based on the 4 filter functions, and then vectorize them to get 4 persistence images [1]. Therefore, we get 4|V| EPDs in total. The input graphs include (1) citation networks including Cora, Citeseer, and PubMed [36]; (2) Amazon shopping datasets including Photo and Computers [37]; (3) coauthor datasets including CS and Physics [37]. Details are available in the supplementary material. (Footnote 5: Following the settings in [56, 50], we adopt the Ollivier-Ricci curvature as the graph metric, and the distance to the target node(s) as the filter function; following the settings in [4], we set the temperature to t = 10 and t = 0.1 and adopt these two kernel functions as filter functions; the node degree is used as the initial filter function in [16].) 5.1 Approximation Quality In this section, we evaluate the approximation error between the prediction and the original EPDs. Evaluation metrics. Recall that the input of our model is a graph and a filter function, and the output is the predicted EPD. After obtaining the predicted EPD, we vectorize it with the persistence image [1] and evaluate (1) the 2-Wasserstein (W2) distance between the predicted diagram and the ground truth EPD; (2) the total square error between the predicted persistence image and the ground truth image (persistence image error, denoted as PIE). Considering that our aim is to estimate EPDs on graphs rather than to roughly approximate persistence images, we use the W2 distance as the training loss, while the PIE is only used as an evaluation metric. Given an input graph (e.g., Cora, Citeseer, etc.) and a filter function, we extract the k-hop neighborhoods of all the vertices and separate these vicinity graphs randomly into 80%/20% training/test sets. We report the mean W2 distance between diagrams and the mean PIE over the different vicinity graphs and the 4 different filter functions. Baseline settings. PDGNN denotes our proposed method, that is, the GNN framework with the proposed AGG function and MSG function. Its strategy is to first predict the EPD, and then convert it to the persistence image. To show its superiority, we compare with the strategy from [38, 29], i.e., directly approximating the persistence image of the input graph, as a baseline strategy. GIN_PI and GAT_PI denote this baseline strategy with GIN [47] and GAT [42] as the backbone GNNs. To show the effectiveness of the modules proposed in Section 4, we add other baselines with our proposed strategy. GAT denotes GAT as the backbone GNN. GAT (+MIN) denotes GAT with the new AGG function. Compared with PDGNN, it exploits the original node feature rather than the new edge feature in the MSG function. PDGNN (w/o ew) denotes PDGNN without edge weights. Further experimental settings can be found in the supplementary material. Results. Table 1 reports the approximation error. We observe that PDGNN outperforms all the baseline methods on all the datasets.
The comparison between GAT and GAT_PI shows the benefit of predicting EPDs instead of predicting the persistence image. Comparing GAT and GAT (+MIN), we observe the advantage of the new AGG function, which shows the necessity of using min aggregation to approximate the Find-Root algorithm. Comparing GAT (+MIN) and PDGNN, we observe the effectiveness of using the new MSG function to help the model capture information of the separated connected components. The comparison between PDGNN (w/o ew) and PDGNN shows that edge weights help the model focus on the individual Relax-Edge sub-algorithm operating on every edge. 5.2 Downstream Tasks In this section, we evaluate the performance of the predicted diagrams on 2 graph representation learning tasks: node classification and link prediction. We replace the ground truth EPDs in state-of-the-art models based on persistence [56, 50] with our predicted diagrams and report the results. Baselines. We compare our method with various state-of-the-art methods. We compare with popular GNN models including GCN [22], GAT [42] and HGCN [6]. For link prediction, we compare with several state-of-the-art methods such as SEAL [54] and P-GNN [51]. Since GCN and GAT are not originally designed for link prediction, we follow the settings in [6, 50]: we obtain the node embeddings through these models and use the Fermi-Dirac decoder [23, 32] to predict whether there is a link between the two target nodes. For comparison with the original EPDs, we also add PEGN [56] and TLC-GNN [50] as baseline methods. Furthermore, to show the benefit of directly predicting EPDs, we also add the baseline methods PEGN (GIN_PI) and TLC-GNN (GIN_PI), which replace the original persistent homology feature with the output from GIN_PI. Evaluation metrics. For node classification, our setting is the same as [22, 42, 56]. To be specific, we train the GNNs with 20 nodes from each class and validate (resp. test) the GNN on 500 (resp. 1000) nodes. We run the GNNs on these datasets 10 times and report the average classification accuracy and standard deviation. For link prediction, our setting is the same as [6, 50]. To be precise, we randomly split the existing edges into 85/5/10% for training, validation, and test sets. An equal number of non-existent edges are sampled as negative samples in the training process. We fix the negative validation and test sets, and randomly select the negative training sets in every epoch. We run the GNNs on these datasets 10 times and report the mean area under the ROC curve (ROC-AUC) scores and the standard deviation. Results. Table 2 and Table 3 summarize the performance of all methods on node classification and link prediction. We observe that PEGN (PDGNN) and TLC-GNN (PDGNN) consistently perform comparably with PEGN and TLC-GNN, showing that the EPDs approximated by PDGNN have the same learning power as the true EPDs. Furthermore, PEGN using the approximated EPDs achieves better or comparable performance relative to different SOTA methods. We also find that PEGN (GIN_PI) and TLC-GNN (GIN_PI) perform considerably worse than the original models using the true EPDs. This demonstrates that the large approximation error of GIN_PI loses much of the crucial information that is preserved by PDGNN. Transferability. One appealing feature of our method is its transferability. Training on one graph, our algorithm can estimate EPDs well on another graph.
This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs; we can potentially apply a pre-trained model to large and dense graphs, on which direct EPD computation is infeasible. The experiments are provided in the supplementary material. 5.3 Algorithm Efficiency In this section, we evaluate the efficiency of our proposed model. For a fair and complete comparison, we compare with algorithms from Gudhi [40] and from [50]. We select the first 1000 nodes from Cora, Citeseer, PubMed, Photo, Computers, CS, Physics, and then extract their 2-hop neighborhoods. With Ollivier-Ricci curvature as the filter function, we compute the EPDs and report the time (seconds) used to infer these diagrams. Results. We list the average nodes and edges of these vicinity graphs in the first line of Table 4. As shown in Table 4, although our model is slower on small datasets like Cora or Citeseer, it is much faster on large and dense datasets. Therefore we can simply use the original algorithm to compute the EPDs on small graphs, and use our model to estimate EPDs on large graphs. The model can be applied to various graph representation learning works based on persistent homology. 6 Conclusion Inspired by recent success on neural algorithm execution, we propose a novel GNN with different technical contributions to simulate the computation of EPDs on graphs. The network is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. Experiments show that our method achieves satisfying approximation quality and learning power while being significantly faster than the original algorithm on large and dense graphs. Another strength of our method is the transferability: training on one graph, our algorithm can still approximate EPDs well on another graph. This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs. Acknowledgements. We thank all anonymous reviewers for their constructive feedback very much. This work of Zuoyu Yan, Liangcai Gao, and Zhi Tang is supported by the projects of National Key R&D Program of China (2019YFB1406303) and National Natural Science Foundation of China (No. 61876003), which is also a research achievement of Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
1. What is the focus of the paper regarding graph structure learning tasks? 2. What are the strengths of the proposed approach in terms of efficiency and accuracy? 3. What are the weaknesses of the paper, particularly concerning its limited scope and assumptions? 4. How does the reviewer assess the contribution of the paper, and how does it compare to other works in the field? 5. Are there any concerns or suggestions regarding the method's applicability and potential for future research?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors propose a mechanism to reduce the computation time, which is a major bottleneck of the extended persistence diagram (EPD) that has recently been proven effective for graph structure learning tasks. The authors analyze the EPD computation algorithm, describe the key Union-Find algorithm in sequential form, and approximate it with a GNN to simplify the computation. In their experiments, the authors evaluate the effect of EPDs on graph analysis, the impact of the approximation error, and the computation time, and show that EPDs can be used to analyze graphs efficiently and accurately, especially for large graphs. Strengths And Weaknesses Strength Algorithms to improve the efficiency of EPD computation were precisely constructed by analyzing the structure of the EPD calculation. The authors evaluate the effect of EPDs and the speed-up provided by this algorithm, as well as the accuracy of the accelerated computation. Weakness Although the subject is to improve the efficiency of EPD calculations, there are insufficient references to other acceleration methods. This method is an EPD-only speed-up method, assuming that EPDs have high performance, but there is no mention of whether EPDs are superior to other graph-oriented TDA methods. If EPDs are inferior to other TDA methods, the method is less valuable from an AI perspective, i.e., for graph analysis. Overall, the benefit of graph EPDs over methods not based on traditional TDA and the speedup of EPD computation are accurately described, and the contribution is significant. On the other hand, the evaluation is based on a limited set of assumptions that favor the proposed method and can be viewed as unfair. Questions Can you clarify the following two points? Provide evidence that EPDs are more effective than other TDA methods. Discuss whether there is any other research on EPD acceleration. Also, there are other speed-up methods for TDA that are not specific to EPDs, such as [Cufar], etc. Can you mention why these methods cannot be used for EPDs? [Cufar] Matija Čufar and Žiga Virk, Fast Computation of Persistent Homology Representatives with Involuted Persistent Homology, https://arxiv.org/abs/2105.03629 Also, instead of approximating part of the algorithm with a GNN as an approximate computation of the diagram or PI, one could directly approximate the entire flow with a GNN or NN; what is the reason for approximating only part of it? I know there is a trade-off between approximation performance and computation time, as well as the amount of data required, but is there anything you can mention? Please mention what you can, although it may be related to issues to be considered in future work. Limitations The authors have clearly defined the application, clearly described the performance, and adequately addressed the limitations of their work.
NIPS
Title Neural Approximation of Graph Topological Features Abstract Topological features based on persistent homology can capture high-order structural information which can then be used to augment graph neural network methods. However, computing extended persistent homology summaries remains slow for large and dense graphs and can be a serious bottleneck for the learning pipeline. Inspired by recent success in neural algorithmic reasoning, we propose a novel graph neural network to estimate extended persistence diagrams (EPDs) on graphs efficiently. Our model is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. We validate our method with convincing empirical results on approximating EPDs and downstream graph representation learning tasks. Our method is also efficient; on large and dense graphs, we accelerate the computation by nearly 100 times. 1 Introduction Graph neural networks (GNNs) have been widely used in various domains with graph-structured data [45, 27, 22, 42, 6]. Much effort has been made to understand and to improve graph representation power [47, 30, 3, 28]. An intuitive solution is to explicitly inject high-order information, such as graph topological/structural information, into the GNN models [51, 26]. To this end, persistent homology [15, 14], which captures topological structures (e.g., connected components and loops) and encodes them in a summary called the persistence diagram (PD), has attracted the attention of researchers. Indeed, persistence has already been injected into machine learning pipelines for various graph learning tasks [55, 56, 16, 4, 8, 50]. In particular, it has been found helpful to use the so-called extended persistence diagrams (EPDs) [10], which contain richer information than the standard PDs. Despite the usefulness of PDs and EPDs, their computation remains a bottleneck in graph learning. In situations such as node classification [56] or link prediction [50], one has to compute EPDs on vicinity graphs (local subgraph motifs) generated around all the nodes or all possible edges in the input graph. This can be computationally prohibitive for large and dense graphs. Take the Amazon Computers dataset [37] as an example. Computing the EPDs on vicinity graphs takes several seconds on average, and there are 13381 nodes, so computing all EPDs with a single CPU can take up to a day. This is not surprising: while theoretically the EPD for graphs can be computed in O(n log n) time [2], that algorithm has not been implemented, and practical algorithms for computing PDs take quadratic time in the worst case [50]. These computational difficulties raise the question: can we approximate the expensive computation of EPDs using an efficient learning-based approach? This is a challenging question due to the complex mathematical machinery behind the original algorithm. First, the algorithm involves a reduction of the graph incidence matrix. Each step of the algorithm is a modulo-2 addition of columns that can involve edges and nodes far apart. Such an algorithm is hard to approximate by a direct application of black-box deep neural networks. The second challenge comes from the supervision. The output EPD is a point set with an unknown cardinality. The distance between EPDs, called the Wasserstein distance [9, 11], involves a complex point matching algorithm.
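To make the point-matching nature of this metric concrete, here is a small, hedged sketch of the 2-Wasserstein distance between two diagrams using the standard diagonal-augmented assignment problem, solved with scipy's Hungarian solver. The function name, the L2 ground metric, and other construction details are our own choices; the paper's implementation may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(dgm_a, dgm_b):
    """2-Wasserstein distance between two diagrams, each an (n, 2) array of
    (birth, death) points; unmatched points may be matched to the diagonal."""
    A, B = np.atleast_2d(dgm_a), np.atleast_2d(dgm_b)
    n, m = len(A), len(B)
    diag_a = ((A[:, 1] - A[:, 0]) ** 2) / 2.0      # squared L2 distance to the diagonal
    diag_b = ((B[:, 1] - B[:, 0]) ** 2) / 2.0
    BIG = 1e18
    C = np.zeros((n + m, m + n))
    C[:n, :m] = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # point-to-point costs
    C[:n, m:] = BIG
    C[np.arange(n), m + np.arange(n)] = diag_a                    # A_i -> its diagonal slot
    C[n:, :m] = BIG
    C[n + np.arange(m), np.arange(m)] = diag_b                    # diagonal slot -> B_j
    C[n:, m:] = 0.0                                               # diagonal-to-diagonal is free
    rows, cols = linear_sum_assignment(C)                         # optimal matching
    return np.sqrt(C[rows, cols].sum())

# Toy usage: two diagrams that differ by one extra near-diagonal point.
print(wasserstein2(np.array([[0.0, 1.0], [0.2, 0.3]]), np.array([[0.0, 1.1]])))
# -> about 0.122: (0, 1) matches (0, 1.1); the short bar (0.2, 0.3) is sent to the diagonal.
```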
It is nontrivial to design a deep neural network with variable output and to support supervision via such Wassserstein distance. Previous attempts [38, 29] directly use black-box neural networks to generate fixed-length vectorization of the PDs/EPDs and use mean squared error or cross-entropy loss for supervision. The compromise in supervision and the lack of control make it hard to achieve high-quality approximation of PDs/EPDs. In this paper, we propose a novel learning approach to approximate EPDs on graphs. Unlike previous attempts, we address the aforementioned challenges through a carefully designed learning framework guided by several insights into the EPD computation algorithm. In terms of model output and supervision, we observe that the computation of EPDs can be treated as an edge-wise prediction instead of a whole-graph prediction. Each edge in the graph is paired with another graph element (either vertex or edge), and the function values of the pair are the coordinates of a persistence point in the EPD. This observation allows us to compute EPDs by predicting the paired element for every edge of the graph. The Wasserstein distance can be naturally decomposed into supervision loss for each edge. This element-wise supervision can significantly improve learning efficiency compared with previous solutions, which treat PDs/EPDs as a whole-graph representation and have to use whole-graph representation pooling. Another concern is whether and how a deep neural network can approximate the sophisticated EPD algorithm. To this end, we redesign the algorithm so that it is better aligned with algorithms that are known to be learnable by neural networks. Recall we observe that computing EPDs can be decomposed into finding pairing for each edge. We show that the decomposition is not only at the output level, but also at the algorithm level. The complex standard EPD computation algorithm can indeed be decomposed into independent pairing problems, each of which can be solved exactly using a classic Union-Find algorithm [12]. To this end, we draw inspiration from recent observations that neural networks can imitate certain categories of sequential algorithms on graphs [43, 46]. We propose a carefully designed graph neural network with specific message passing and aggregation mechanism to imitate the Union-Find algorithm. Decomposing the algorithm into Union-Find subroutines and approximating them with a customized GNN provide better alignment between our neural network and the EPD algorithm. A better alignment can lead to better performance [48]. Empirically, we validate our method by quantifying its approximation quality of the EPDs. On two downstream graph learning tasks, node classification and link prediction, we also show that our neural approximations are as effective as the original EPDs. Meanwhile, on large and dense graphs, our method is much faster than direct computation. In other words, the approximated EPDs do not lose accuracy and learning power, but can be computed much more efficiently. Finally, we observe that our model can be potentially transferred to unseen graphs, perhaps due to the close imitation of the Union-Find subroutine. This is encouraging as we may generalize topological computation to various challenging real-world graphs without much additional effort. In summary, we propose an effective learning approach to approximate EPDs with better supervision and better transparency. The technical contributions are as follows. 
• We reformulate the EPD computation as an edge-wise prediction problem, allowing better supervision and more efficient representation learning. We show that the EPD computation can be decomposed into independent pairing problems, each of which can be solved by the Union-Find algorithm. • Inspired by recent neural algorithm approximation works [43, 46], we design a novel graph neural network architecture to learn the Union-Find algorithm. The closer algorithmic alignment ensures high approximation quality and transferability. 2 Background: Extended Persistent Homology We briefly introduce extended persistent homology and refer the readers to [10, 14] for more details. Ordinary Persistent Homology. Persistent homology captures 0-dimensional (connected components), 1-dimensional (loops) topological structures, as well as high-dimensional analogs, and measures their saliency via a scalar function called filter function. Here we will only describe it for the graph setting. Given a input graph G = (V,E), with node set V and edge set E, we call all the nodes and edges simplices. Denote by X = V ∪E the set of all simplices. We define a filter function on all simpices, f : X → R. In the typical sublevel-set setting, f is induced by a node-valued function (e.g., node degrees), and further defined on edges as f(uv) = max(f(u), f(v)). Denote by Xa the sublevel set of X , consisting of simplices whose filter function values ≤ a, Xa = {x ∈ X|f(x) ≤ a}. As the threshold value a increases from −∞ to ∞, we obtain a sequence of growing spaces, called an ascending filtration of X: ∅ = X−∞ ⊂ ... ⊂ X∞ = X. As Xa increases from ∅ to X , new topological structures gradually appear (born) and disappear (die). For instance, the blue square persistence point at (t2, t3) in Figure 1 (b) indicates that the connected component u2 appears at Xt2 and is merged with the whole connected component at Xt3 . Applying the homology functor to the filtration, we can more precisely quantify the birth and death of topological features (as captured by homology groups) throughout the filtration, and the output is the so-called persistence diagram (PD), which is a planar multiset of points, each of which (b, d) corresponds to the birth and death time of some homological feature (i.e., components, loops, and their higher dimensional analogs). The lifetime |d− b| is called the persistence of this feature and intuitively measures its importance w.r.t. the input filtration. Extended Persistent Homology. In the ordinary persistent homology, topology of the domain (e.g., the graph) will be created at some time (has a birth time), but never dies (i.e., with death time being equal to +∞). We call such topological features essential features. In the context of graphs, the importance of 1D essential features, corresponding to independent loops, are not captured via the ordinary persistence. To this end, an extended persistence module is introduced in [10]: ∅ = H(X−∞) → · · ·H(Xa) → · · ·H(X) = H(X,X∞) → · · · → H(X,Xa) → · · · → H(X,X−∞), where Xa = {x ∈ X|f(x) ≥ a} is a superlevel set of X at value a. We say that the second part H(X,X∞) → · · · → H(X,Xa) → · · · → H(X,X−∞) is induced by a descending filtration. If we inspect the persistence diagram induced by this extended sequence, as H(X,X−∞) is trivial, all the loop features created will also be killed in the end, and thus captured by persistence points whose birth happens in the ascending filtration and death happens in the descending filtration. 
In what follows, we abuse the notation slightly and use 1D EPD to refer to only such persistence points (i.e., born in ascending portion and death in descending portion) in the persistence diagram induced by the extended module2. We use 0D PD to refer to the standard ordinary 0D persistence diagram induced by the ascending sequence. Our goal is to compute/approximate the union of 0D PD and 1D EPD. Specifically, in the graph setting, at the end of the ascending filtration, some edges, which are the so-called negative edges (as they kill homological features), are paired with the vertices. These correspond to points in the 0D PD, capturing the birth and death of connected components in the ascending filtration. Those unpaired edges, called positive edges, will create independent loops (1D homology for graphs) and remain unpaired after the ascending filtration. The number of such unpaired edges equals to the 1st Betti number β1 (rank of the 1st homology group). These edges will then be paired in the descending part of the persistence module and their birth-depth times give rise to 1D EPD. An example is given in Figure 1(b). Note that since our domain is a graph, β1 equals the number of independent loops, which also equals to β1 = |E| − |V |+ 1 for a connected graph. Hence we also say that 1D EPD captures the birth and death of independent loop features. The birth and death times of the loop feature correspond to the threshold value a’s when these events happen. In general, the death time for such loop feature is smaller than the birth time. For example, the red triangle persistence point in Figure 1 (b) denotes that the red cycle in Figure 1 (a) appears at Xt5 in the ascending filtration and appears again at Xt1 in the descending filtration. Finally, PDs live in an infinite-dimensional space equipped with an appropriate metric structure, such as the so-called p-th Wasserstein distance [11] or the bottleneck distance [9]. They have been combined with various deep learning methods including kernel machines [34, 25, 5], convolutional neural networks [18, 20, 44, 57], transformers [53], connectivity loss [7, 17], and GNNs [56, 8, 50, 55, 16, 4]. During learning, there have been many works in the literature to vectorize persistence diagrams for downstream analysis. Among these works a popular choice is the persistence image [1]. 3 Algorithm Revision: Decomposing EPD into Edge-Wise Paring Predictions In this section, we provide algorithmic insights into how the expensive and complex computation of EPDs can be decomposed into pairing problems for edges. And each pairing problem can be solved exactly using a Union-Find algorithm. The benefit is two-folds. First, the decomposition makes it possible to train the neural network through edge-wise supervision.This allows us to adopt the popular and effective edge-prediction GNN for the goal. Second, we observe the similarity between the Union-Find and sequential algorithms which are known to be imitable by neural networks. This gives us the opportunity to design a special graph neural network to imitate the algorithm accurately, and to approximate EPDs accurately. Decompose the EPD Computation into Pairing Computations. Recall that our goal is to compute the 0D PDs and 1D EPDs PD0 and PD1. 
The reason for not estimating 0D EPDs (or not including the global max/min pair that corresponds to the whole connected component) is that (1) the global max/min value is easy to obtain, and does not need an extra prediction; (2) in our setting, the global max/min pair will not be paired with any edge in the ascending filtration. In later sections, the estimation of EPDs denotes the estimation of PD0 and PD1. We observe that on these diagrams, each point corresponds to a unique pairing of graph elements (a vertex-edge pair for PD0, an edge-edge pair for PD1). Each pair of elements is essentially the "creator" and "destroyer" of the corresponding topological feature during the filtration, and their filtration values are the birth and death times of the topological feature. For example, the persistence point located at (t2, t3) in Figure 1(b) denotes that the edge u2u3 is paired with u2. We consider the following "unique pairing" for all edges in the graph: consider each edge in the ascending filtration; if the edge is a destroyer in the ascending filtration, it will be paired with a vertex. Otherwise, this edge e is a creator in the ascending filtration and will be paired during the descending filtration with another edge e′. We note that this is not in conflict with the fact that the PDs/EPDs are often sparse: many pairings are local and only pair adjacent elements, and they correspond to zero-persistence points living on the diagonal of the diagrams. (Footnote 2: we note that in standard terminology, the extended persistence diagram will also contain persistence points born and destroyed in the descending sequence.)

Algorithm 1: Sequential algorithm
1: Input: graph G = (V, E), filter function f
2: Initialise-Nodes(V, f)
3: Q = Sort-Queue(V)
4: while Q is not empty do
5:   u = Q.pop-min()
6:   for v ∈ G.neighbors(u) do
7:     Relax-Edge(u, v, f)
8:   end for
9: end while

Algorithm 2: Computation of EPD
1: Input: filter function f, input graph G = (V, E)
2: V, E = sorted(V, E, f)
3: PD0 = Union-Find(V, E, f), PD1 = {}
4: for i ∈ V do
5:   Ci = {Cij | (i, j) ∈ E, f(j) > f(i)}, Ei = E
6:   for Cij ∈ Ci do
7:     f(Cij) = f(i), Ei = Ei − {(i, j)} + {(Cij, j)}
8:   end for
9:   PD1^i = Union-Find-step(V + Ci − {i}, Ei, f, Ci)
10:  PD1 = PD1 + PD1^i
11: end for
12: Output: PD0, PD1

Algorithm 3: Union-Find-step (Sequential)
1: Input: V, E, f, Ci
2: PD1^i = {}
3: for v ∈ V do
4:   v.value = f(v), v.root = v
5: end for
6: Q = Sort(V), Q = Q − {v | f(v) < f(i)}, G = {Q, EQ}, where EQ = E ∪ Q^2
7: while Q is not empty do
8:   u = Q.pop-min()
9:   for v ∈ G.neighbors(u) do
10:    pu, pv = Find-Root(u), Find-Root(v)
11:    if pu ≠ pv then
12:      s = argmin(pu.value, pv.value), l = argmax(pu.value, pv.value)
13:      l.root = s
14:      if pu ∈ Ci and pv ∈ Ci then
15:        PD1^i = PD1^i + {(u.value, l.value)}
16:      end if
17:    end if
18:  end for
19: end while
20: Function Find-Root(u):
21:   pu = u
22:   while pu ≠ pu.root do
23:     pu.root = (pu.root).root, pu = pu.root
24:   end while
25:   Return pu

This pairing view gives us the opportunity to transform the computation of EPDs into a pairing prediction problem: for every edge in the graph, we predict its pairing element. This will be the foundation of our design of the GNN in Sec. 4. Meanwhile, we observe that the decomposition is not only at the output level. The original algorithm for EPDs, a sequential modulo-2 matrix reduction algorithm, can indeed be rewritten into a set of independent algorithmic subroutines, each for the computation of one pairing. Each subroutine is a Union-Find algorithm.
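As a concrete companion to Algorithms 2-3, below is a short, unoptimized Python sketch of the per-node subroutine (upper-edge splitting followed by one Union-Find sweep). It is our own simplified re-implementation for illustration: component nodes are encoded as tuples, ties are broken arbitrarily, and no attempt is made to match the authors' code.

```python
def union_find_step(i, edges, f):
    """1D extended-persistence pairs contributed by source node i, mirroring the
    upper-edge splitting + Union-Find sweep of Algorithms 2-3. Component nodes are
    encoded as tuples ("C", j), so ordinary vertex ids are assumed not to be tuples."""
    fi = f[i]
    # Upper-edge splitting: each upper edge (i, j) starts its own component with value f(i).
    comps = {("C", v): fi for u, v in edges if u == i and f[v] > fi}
    comps.update({("C", u): fi for u, v in edges if v == i and f[u] > fi})
    value = {v: fv for v, fv in f.items() if v != i and fv > fi}   # keep only vertices above f(i)
    value.update(comps)
    parent = {v: v for v in value}

    def find(x):                                   # Find-Root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Rewire the upper edges of i to their split components; drop edges below f(i).
    rewired = []
    for u, v in edges:
        u2 = ("C", v) if u == i else u
        v2 = ("C", u) if v == i else v
        if u2 in value and v2 in value:
            rewired.append((max(value[u2], value[v2]), u2, v2))

    pairs = []
    for w, u, v in sorted(rewired, key=lambda t: t[0]):   # sweep in increasing filter value
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        keep, absorb = (ru, rv) if value[ru] <= value[rv] else (rv, ru)
        if isinstance(ru, tuple) and isinstance(rv, tuple):   # both components originate from i
            pairs.append((w, fi))                  # extended persistence point (birth, death)
        parent[absorb] = keep                      # merge, keeping the minimum-value root
    return pairs

# Toy usage: the 4-cycle a-b-c-d-a with f = {a:1, b:2, c:3, d:4}.
cycle = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(union_find_step("a", cycle, {"a": 1, "b": 2, "c": 3, "d": 4}))   # -> [(4, 1)]
```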
This new decomposed EPD algorithm has not been reported before, although the idea follows from existing work [2]. For completeness, we will provide a proof of correctness of the algorithm. Description of Algorithm 2. The pseudocode for 1D EPD computation is shown in Algorithm 2. We leave the algorithm for 0D PD to the supplementary material3. For simplicity of presentation, we assume that all vertices have distinct function values f : V → R4. Therefore finding the persistence value equals to finding the pairing. To compute the EPD, we traverse all nodes in the vertex set and find their extended persistence pairing. Combining the persistence pair from all nodes, we can obtain the final EPD. The algorithm complexity analysis is provided in the supplementary material. Finding persistence pairing for nodes. For node ui ∈ V , we can call Algorithm 3 to identify the corresponding persistence pair. In particular, the algorithm first sorts the graph elements according to an input scalar function, then does the edge operation by finding the roots of the corresponding nodes and merging these nodes. See Figure 1(c) for a simple illustration. For node u1, there are three upper edges: u1u3, u1u4, and u1u6. We put each such edge uiuj in a different component Cij , – we call this upper-edge splitting operation – and start to sweep the graph in increasing values starting at f(ui). Then, the first time any two such components merge will give rise to a new persistence 3The 0D algorithm needs a single run of Union-Find [14, 13], and is very similar to Algorithm 3 which is a subroutine used by Algorithm 2. 4We can add jitter to the original filter function. The output EPDs will only have minor changes [9] point in the 1D EPD. For instance, C14 and C13 first merge at u4, and this will give rise to the brown loop in Figure 1(a) with (t4, t1) as its persistence point. While in Figure 1 (d), the two connected components, C23 and C25 (originated from u2) will not be united. Therefore, node u2 will not lead to any persistence point in the EPD. Correctness. The idea behind Algorithm 2 to compute the extended pairing for essential edges appears to be folklore. For completeness, we provide a proof of its correctness (stated in Theorem 3.1). We provide a sketch of the proof here, leaving the complete proof to the supplementary material. Theorem 3.1. Algorithm 2 outputs the same 1D EPDs as the standard EPD computation algorithm. Proof sketch. To compute the 1D EPDs, we simply need to find the pairing partner for all edges. Therefore, to prove that the two algorithms output the same 1D EPDs, we need to prove that the output pairing partners are the same (or share the same filter value). We prove this by showing that both the standard EPD computation algorithm and Algorithm 2 find the “thinnest pair", i.e., the paired saddle points are with the minimum distance in terms of filter value, for all edges. Neural Approximation of Union-Find. In the previous paragraph, we showed that the computation of 1D EPDs can be decomposed into the parallel execution of Union-Find algorithms, which share a similar sequential behavior. This gives us the opportunity to approximate these Union-Find algorithms well, and consequently approximate EPDs well. Approximating algorithms with neural networks is a very active research direction [52, 21, 24, 33, 35, 49]. Within the context of graph, GNNs have been proposed to approximate parallel algorithms (e.g., Breadth-First-Search) and sequential algorithms (e.g., Dijkstra) [43, 41, 46]. 
Particularly relevant to us is the success in approximating the category of sequential algorithms such as Dijkstra. These sequential algorithms, as generally defined in Algorithm 1, sort graph elements (vertices and edges) according to certain function, and perform algorithmic operations according to the order. As described in previous paragraphs, the Union-Find algorithm also contains these steps, and can be expressed in a sequential-like form (Algorithm 3). Therefore we propose a framework to simulate the algorithm. 4 A Graph Neural Network for EPD Approximation Previous section establishes the algorithm foundation by showing that we can decompose EPD computation into edge pairing prediction problems, each of which can be solved using a Union-Find algorithm. Based on such algorithmic insights, we next introduce our neural network architecture to approximate the EPDs on graphs. Our main contributions are: (1) we transform the EPD computation into an edge-wise prediction problem, and solve it using a GNN framework, inspired by the GNN for link prediction; (2) we design a new backbone GNN model PDGNN to approximate the Union-Find algorithm, with specially designed pooling and message passing operations. 4.1 EPD computation as a edge-wise prediction problem We have established that computing PD0 and PD1 can be reduced into finding the pairing partners for all edges. We transfer the problem into an edge-wise prediction problem. We predict the persistence pairing for all edges. This is very similar to a standard link prediction problem [6, 50], in which one predicts for each node pair of interest whether it is a real edge of the graph or not. Inspired by standard link-prediction GNN architectures [6, 50], we propose our model (see Figure 2) as follows. (1) For an input graph G = (V,E) and a filter function f , we first obtain the initial filter value for all the nodes: X = f(V ) ∈ R|V |∗1, and then use a specially designed GNN model which later we call PDGNN G to obtain the node embedding for all these vertices: H = G(X) ∈ R|V |∗dH . (2) Subsequently, a MLP (Multi-layer perceptron) W is applied to the node embeddings to obtain a two dimensional output for each edge (u, v) ∈ E, corresponding to its persistence pairing. Formally, we use PPuv = W ([hu ⊕ hv]) ∈ R2 as the persistence pair. Here, hu and hv denote the node embedding for node u and v, and ⊕ represents the concatenation of vectors. In Algorithm 2, the Union-Find-step should be implemented on all edges to obtain 1D EPDs. Hence ideally we would need a large GNN model with node features proportional to the graph size so as to simulate all these Union-find-steps in parallel simultaneously. However this would be expensive in practice. On the other hand, there are many overlapping or similar computational steps between the Union-Find-step procedures on different vertices. Hence in practice, we only use bounded-size node features. 4.2 PDGNN In this section, we explain how to design the backbone GNN to approximate the Union-Find algorithm. Note the Union-Find is similar to known sequential algorithms but with a few exceptions. We design specific pooling and message passing operations to imitate these special changes. These design choices will be shown to be necessary in the experiment section. Recall a typical GNN learns the node embedding via an iterative aggregation of local graph neighbors. 
Following [47], we write the k-th iteration (the k-th GNN layer) as: hku = AGG k({MSGk(hk−1v ), v ∈ N(u)}, hk−1u ) (1) where hku is the node features for node u after k-th iterations, and N(u) is the neighborhood of node u. In our setting, h0u = xu is initialized to be the filter value of node u. Different GNNs have different MSG and AGG functions, e.g., in GIN [47], the message function MSG is a MLP followed by an activation function, and the aggregation function AGG is a sum aggregation function. We now describe our specially designed GNN, called PDGNN (Persistence Diagram Graph Neural Network). Compared with the Sequential algorithms (Algorithm 1) [46], our Union-Find algorithm (Algorithm 3) differs in: (1) the Find-Root algorithm which needs to return the minimum of the component, (2) additional edge operations such as upper-edge splitting. To handle these special algorithmic needs, our PDGNN modifies standard GNNs with the following modules. A new aggregation due to the Find-Root function. Finding the minimum intuitively suggests using a combination of several local min-aggregations. Considering that the sum aggregation can bring the best expressiveness to GNNs [47], we implement the root-finding process by a concatenation of sum aggregation and min aggregation as our aggregation function. To be specific: AGGk(.) = SUM(.) ⊕ MIN(.) (2) Improved edge operations. As shown in [43, 46], classic GNNs are not effective in “executing” Relax-Edge subroutines. Furthermore, in Algorithm 2, we also need the upper-edge splitting operation for each vertex. In other words, the information of the separated components Cij are formed by the information from both nodes ui and uj . To this end, we use edge features and attention to provide bias using edges. Specifically, we propose the following message function in the k-th iteration: MSGk(hk−1v ) = σ k[αkuv(h k−1 u ⊕ hk−1v )W k] (3) where σk is an activation function, W k is a MLP module, and αkuv is the edge weight for uv. We adopt PRELU as our activation function, and the edge weight proposed in [42] as our edge weight. Training PDGNN. We use the 2-Wasserstein distance between the predicted diagram and the ground truth EPD as the loss function. Through optimal matching, the gradient is passed to each predicted persistence pair. Since we have established the one-to-one correspondence between pairs and edges, the gradient is then passed to the corresponding edge, and contributes to the representation learning. 5 Experiments In this section, we thoroughly evaluate the proposed model from 3 different perspectives. In Section 5.1, we evaluate the approximation error between the predicted diagram and the original diagram and show that the prediction is very close to the ground truth. Even with a small approximation error, we still need to know how much does the error influence downstream tasks. Therefore, in Section 5.2, we evaluate the learning power of the predicted diagrams through 2 downstream graph representation learning tasks: node classification and link prediction. We observe that the model using the predicted diagrams performs comparably with the model using the ground truth diagrams. In Section 5.3, we evaluate the efficiency of the proposed algorithm. Experiments demonstrate that the proposed method is much faster than the original algorithm, especially on large and dense graphs. Source code is available at https://github.com/pkuyzy/TLC-GNN. Datasets. To compute EPDs, we need to set the input graphs and the filter functions. 
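One of the node-valued filter functions described next, the heat kernel signature, can be sketched in a few lines from the eigendecomposition of the graph Laplacian. The normalization and the networkx-based implementation here are our own assumptions; only the temperatures t = 10 and t = 0.1 follow the paper.

```python
import numpy as np
import networkx as nx

def heat_kernel_signature(G, t):
    """HKS_t(v) = sum_k exp(-t * lambda_k) * phi_k(v)^2 from the eigendecomposition
    of the normalized graph Laplacian. Returns a dict node -> filter value."""
    nodes = list(G.nodes())
    L = nx.normalized_laplacian_matrix(G, nodelist=nodes).toarray()
    lam, phi = np.linalg.eigh(L)                          # eigenvalues and eigenvectors
    hks = (np.exp(-t * lam)[None, :] * phi ** 2).sum(axis=1)
    return dict(zip(nodes, hks))

# Toy usage on a small vicinity graph with the two temperatures used in the paper.
G = nx.cycle_graph(5)
f_hot, f_cold = heat_kernel_signature(G, 10.0), heat_kernel_signature(G, 0.1)
```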
Existing state-of-the-art models on node classification [56] and link prediction [50] mainly focus on the local topological information of the target node(s). Following their settings, for a given graph G = (V,E), we extract the k-hop neighborhoods of all the vertices, and extract |V | vicinity graphs. In our experiments, k is set to 1 or 2 (details are provided in the supplementary material). In terms of filter functions, we use Ollivier-Ricci curvature [31], heat kernel signature with two temprature values [39, 19] and the node degree5. For an input vicinity graph, we compute 4 EPDs based on the 4 filter functions, and then vectorize them to get 4 peristence images [1]. Therefore, we can get 4|V | EPDs in total. The input graphs include (1) citation networks including Cora, Citeseer, and PubMed [36]; (2) Amazon shopping datasets including Photo and Computers [37]; (3) coauthor datasets including CS and Physics [37]. Details are available in the supplementary material. 5Following the settings in [56, 50], we adopt the Ollivier-Ricci curvature as the graph metric, and the distance to target node(s) as the filter function; Following the settings in [4], we set the temparature t = 10 and 0.1 and adopt these two kernel functions as the filter functions; Node degree is used as the initial filter function in [16]. 5.1 Approximation Quality In this section, we evaluate the approximation error between the prediction and the original EPDs. Evaluation metrics. Recall that the input of our model is a graph and a filter function, and the output is the predicted EPD. After obtaining the predicted EPD, we vectorize it with persistence image [1] and evaluate (1) the 2-Wasserstein (W2) distance between the predicted diagram and the ground truth EPD; (2) the total square error between the predicted persistence image and the ground truth image (persistence image error, denoted as PIE). Considering that our aim is to estimate EPDs on graphs rather than roughly approximating persistence images, we use the W2 distance as the training loss, while the PIE is only used as an evaluation metric. Given an input graph (e.g., Cora, Citeseer, etc.) and a filter function, we extract the k-hop neighborhoods of all the vertices and separate these vicinity graphs randomly into 80%/20% as training/test sets. We report the mean W2 distance between diagrams and PIE on different vicinity graphs and 4 different filter functions. Baseline settings. PDGNN denotes our proposed method, that is, the GNN framework with the proposed AGG function and MSG function. Its strategy is to first predict the EPD, and then convert it to the persistence image. To show its superiority, we compare with the strategy from [38, 29], i.e., directly approximate the persistence image of the input graph, as a baseline strategy. GIN_PI and GAT_PI denote the baseline strategy with GIN [47] and GAT [42] as the backbone GNNs. To show the effectiveness of the modules proposed in Section 4, we add other baselines with our proposed strategy. GAT denotes GAT as the backbone GNN. GAT (+MIN) denotes GAT with the new AGG function. Compared with PDGNN, it exploits the original node feature rather than the new edge feature in the MSG function. PDGNN (w/o ew) denotes PDGNN without edge weight. Further experimental settings can be found in the supplementary material. Results. Table 1 reports the approximation error, we observe that PDGNN outperforms all the baseline methods among all the datasets. 
The comparison between GAT and GAT_PI shows the benefit of predicting EPDs instead of predicting the persistence image. Comparing GAT and GAT (+MIN), we observe the advantage of the new AGG function, which shows the necessity of using min aggregation to approximate the Find-Root algorithm; Comparing GAT (+MIN) and PDGNN, we observe the effectiveness of using the new MSG function to help the model capture information of the separated connected components. The comparison between PDGNN (w/o ew) and PDGNN shows that edge weights help the model focus on the individual Relax-Edge sub-algorithm operated on every edge. 5.2 Downstream Tasks In this section, we evaluate the performance of the predicted diagrams on 2 graph representation learning tasks: node classification and link prediction. We replace the ground truth EPDs in state-ofthe-art models based on persistence [56, 50] with our predicted diagrams and report the results. Baselines. We compare our method with various state-of-the-art methods. We compare with popular GNN models including GCN [22], GAT [42] and HGCN [6]. For link prediction, we compare with several state-of-the-art methods such as SEAL [54] and P-GNN [51]. Notice that GCN and GAT are not originally designed for link prediction, therefore we follow the settings in [6, 50], that is, to get the node embedding through these models, and use the Fermi-Dirac decoder [23, 32] to predict whether there is a link between the two target nodes. In comparison with the original EPD, we also add PEGN [56] and TLC-GNN [50] as baseline methods. Furthermore, to show the benefit of directly predicting EPDs, we also add the baseline methods PEGN (GIN_PI) and TLC-GNN (GIN_PI), which replace the original persistent homology feature with the output from GIN_PI. Evaluation metrics. For node classification, our setting is the same as [22, 42, 56]. To be specific, we train the GNNs with 20 nodes from each class and validate (resp. test) the GNN on 500 (resp. 1000) nodes. We run the GNNs on these datasets 10 times and report the average classification accuracy and standard deviation. For link prediction, our setting is the same as [6, 50]. To be precise, we randomly split existing edges into 85/5/10% for training, validation, and test sets. An equal number of non-existent edges are sampled as negative samples in the training process. We fix the negative validation and test sets, and randomly select the negative training sets in every epoch. We run the GNNs on these datasets 10 times and report the mean average area under the ROC curve (ROCAUC) scores and the standard deviation. Results. Table 2 and Table 3 summarize the performance of all methods on node classification and link prediction. We observe that PEGN (PDGNN) and TLC-GNN (PDGNN) consistently perform comparably with PEGN and TLC-GNN, showing that the EPDs approximated by PDGNN have the same learning power as the true EPDs. Furthermore, PEGN using the approximated EPDs achieve better or comparable performance with different SOTA methods. We also discover that PEGN (GIN_PI) and TLC-GNN (GIN_PI) perform much inferior to the original models using the true EPDs. It demonstrates that the large approximation error from GIN_PI lose much of the crucial information which is preserved in PDGNN. Transferability. One appealing feature of our method is its transferability. Training on one graph, our algorithm can estimate EPDs well on another graph. 
This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs; we can potentially apply a pre-trained model to large and dense graphs, on which direct EPD computation is infeasible. The experiments are provided in the supplementary material. 5.3 Algorithm Efficiency In this section, we evaluate the efficiency of our proposed model. For a fair and complete comparison, we compare with algorithms from Gudhi [40] and from [50]. We select the first 1000 nodes from Cora, Citeseer, PubMed, Photo, Computers, CS, Physics, and then extract their 2-hop neighborhoods. With Ollivier-Ricci curvature as the filter function, we compute the EPDs and report the time (seconds) used to infer these diagrams. Results. We list the average nodes and edges of these vicinity graphs in the first line of Table 4. As shown in Table 4, although our model is slower on small datasets like Cora or Citeseer, it is much faster on large and dense datasets. Therefore we can simply use the original algorithm to compute the EPDs on small graphs, and use our model to estimate EPDs on large graphs. The model can be applied to various graph representation learning works based on persistent homology. 6 Conclusion Inspired by recent success on neural algorithm execution, we propose a novel GNN with different technical contributions to simulate the computation of EPDs on graphs. The network is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. Experiments show that our method achieves satisfying approximation quality and learning power while being significantly faster than the original algorithm on large and dense graphs. Another strength of our method is the transferability: training on one graph, our algorithm can still approximate EPDs well on another graph. This makes it possible to apply the computationally expensive topological features to a wide spectrum of real-world graphs. Acknowledgements. We thank all anonymous reviewers for their constructive feedback very much. This work of Zuoyu Yan, Liangcai Gao, and Zhi Tang is supported by the projects of National Key R&D Program of China (2019YFB1406303) and National Natural Science Foundation of China (No. 61876003), which is also a research achievement of Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
1. What is the focus of the paper regarding topological feature extraction? 2. What are the strengths of the proposed approach, particularly in its novelty and significance? 3. What are the weaknesses of the paper, especially regarding computational complexity and training loss? 4. How does the reviewer assess the clarity, quality, and significance of the paper's content? 5. Are there any suggestions or recommendations for improving the paper or expanding its scope?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The extraction of topological features (called persistence diagrams - PDs) from structured data (such as graphs, point clouds, time series, etc.) is a promising approach to develop new machine learning methods but suffers from high computational complexity (among other things) to be faithfully used in learning pipelines. This happens mostly because the computation of PDs relies on combinatorial algorithms (roughly, sparse matrix reduction) that do not scale well with the size of the considered data. Following a recent line of development, this article proposes to bypass the exact computation of topological features via combinatorial algorithms and to instead approximate it using neural networks. The difference with previous approaches is that instead of brutaly computing PDs using an ad-hoc architecture, authors decompose the (extended) PD computation in different algorithmic steps, which roughly rely on running a Union-Find. It invite them to design a Graph Neural Network (GNN) capable of learning to reproduce this sub-routine that they call PDGNN, essentially based on an adapted aggregation function (mixing a minimum and a maximum). They showcase the benefits of the proposed approach on different experimental tasks. Strengths And Weaknesses [Originality] While not being entirely novel for either the TDA nor the ML communities, the approach proposed in this paper is quite fresh and promising. At least from the TDA perspective, I think that the idea of "learning to produce topological descriptors by approximating the inner-algorithms in a relevant way" was somewhat in the minds, but this paper is the first one to do so in a convincing manner to the best of my knowledge. [Clarity] The paper is well written an pleasant to read. [Significance] The approach developed in this work seems quite promising and may be a source of inspiration for future works in TDA, in particular showcasing the potential of GNN when it comes to extract topological information from data. [Quality] I think this paper is of great quality overall. Aside from the interesting approach, I also like the experimental evaluation methodology which covers the important questions (i) Is our model capable of producing outputs similar to ground truth PDs (Sec. 5.1), (ii) are our approximated PDs still relevant as topological features in downstream tasks, (iii) what are the computational benefits/drawbacks of using this approach in terms of running time. Questions Questions In line 271, it is said that the training loss was the W 2 metric rather than the L2 norm between some vectorization such as persistence images. Given that the W 2 requires to compute a matching to retrieve a gradient, doesn't it involve some computational burden in practice? (related to 1.) In line 49, it is said that "The Wasserstein distance can be naturally decomposed into supervision loss for each edge (...) improve learning efficiency" ; while I guess that I understand the underlying idea in an informal way, could you elaborate on this? In concrete terms, how is the training loss implemented and how does it benefits from the "edge decomposition" properties of your approach? Given that Persistence Images should be stable wrt the W 2 metric, isn't it somewhat "redundant" to use the PIE as a second evaluation metric? 
Suggestions
The works "Learning persistent homology of 3D point clouds" and "RipsNet: a general architecture for fast and robust estimation of the persistent homology of point clouds" also learn to produce topological features (on point clouds this time) and may be referenced alongside PI-Net and "Can Neural Networks learn persistent homology features?" in the introduction (line 38).
Limitations
I think the limitations of the work have been properly addressed (for instance, the authors acknowledge that their approach is only beneficial on large graphs --- exact computation may be more efficient otherwise). I do not identify potential negative societal impact specific to this work.
NIPS
Title DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer Abstract We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIBR++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking the advantage of their respective strengths—speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIB-R++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting. 1 Introduction Inferring intrinsic 3D properties from 2D images is a long-standing goal of computer vision [3]. In recent years, differentiable rendering has shown great promise in estimating shape, reflectance and illumination from real photographs. Differentiable renderers have become natural candidates for learning-based inverse rendering applications, where image synthesis algorithms and neural networks can be jointly optimized to model physical aspects of objects from posed images, either by leveraging strong data priors or by directly modeling the interactions between light and surfaces. Not all differentiable renderers are made equal. On the one hand, recent physics-based differentiable rendering techniques [31, 45, 2, 44, 59] try to model the full light transport with proper visibility gradients, but they tend to require careful initialization of scene parameters and typically exhibit high computational cost which limits their usage in larger end-to-end learning pipelines. On the other hand, performance-oriented differentiable renderers [10, 27, 36, 25] trade physical accuracy for scalability and speed by approximating scene elements through neural representations or by employing simpler shading models. While the latter line of work has proven to be successful in 3D scene reconstruction, ∗Work done during an internship at NVIDIA. 1Project page: https://nv-tlabs.github.io/DIBRPlus. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the frequent assumptions of Lambertian-only surfaces and low-frequency lighting prevent these works from modeling more complex specular transport commonly observed in the real world. In this work, we consider the problem of single-view 3D object reconstruction without any 3D supervision. To this end, we propose DIB-R++, a hybrid differentiable renderer that combines rasterization and ray-tracing through an efficient deferred rendering framework. 
Our framework builds on top of DIB-R [10] and integrates physics-based lighting and material models to capture challenging non-Lambertian reflectance under unknown poses and illumination. Our method is versatile and supports both single-bounce ray-tracing and a spherical Gaussian representation for a compact approximation of direct illumination, allowing us to adapt and tune the shading model based on the radiometric complexity of the scene. We validate our technique on both synthetic and real images and demonstrate superior performance on reconstructing realistic materials BRDFs and lighting configurations over prior rasterization-based methods. We then follow the setting proposed in Zhang et al. [62] to show that DIB-R++ can reconstruct scene intrinsics also from real images without any 3D supervision. We further apply our framework to single-image appearance manipulation such as material editing and scene relighting. 2 Related Work Differentiable Rendering. Research on differentiable rendering can be divided into two categories: physics-based methods focusing on photorealistic image quality, and approximation methods aiming at higher performance. The former differentiates the forward light transport simulation [31, 45, 2, 44, 59] with careful handling of geometric discontinuities. While capable of supporting global illumination, these techniques tend to be relatively slow to optimize or require a detailed initial description of the input in terms of geometry, materials, lighting and camera, which prevents their deployment in the wild. The latter line of works leverages simpler local shading models. Along this axis, rasterization-based differentiable renderers [10, 27, 36, 25, 16] approximate gradients by generating derivatives from projected pixels to 3D parameters. These methods are restricted to primary visibility and ignore indirect lighting effects by construction, but their simplicity and efficiency offer an attractive trade-off for 3D reconstruction. We follow this line of work and build atop DIB-R [10, 22] by augmenting its shading models with physics-based ones. Learning-based Inverse Graphics. Recent research on inverse graphics targets the ill-posed problem of jointly estimating geometry, reflectance and illumination from image observations using neural networks. For single image inverse rendering, one dominant approach is to employ 2D CNNs to learn data-driven features and use synthetic data as supervision [42, 49, 35, 33, 52], but these methods do not always generalize to complex real-world images [4]. To overcome the data issue, a recent body of work investigates the use of self-supervised learning to recover scene intrinsics [1], including domain adaptation from synthetic reflectance dataset [37], object symmetry [54, 53], or multi-illumination images depicting the same scene [30, 32, 39, 58]. However, these methods either rely on specific priors or require data sources tedious to capture in practice. Some works tackle the subtask of lighting estimation only [18, 17, 20], but still need to carefully utilize training data that are hard to capture. Most similar to us, DIB-R [10] tackles unsupervised inverse rendering in the context of differentiable rendering. Zhang et al. [62] further combines DIB-R with StyleGAN [24] generated images to extract and disentangle 3D knowledge. These works perform inverse rendering from real image collections without supervision, but may fail to capture complex material and lighting effects—in contrast, our method models these directly. 
Several techniques also try to handle more photorealistic effects but typically require complex capturing settings, such as controllable lighting [28, 29], a co-located camera-flashlight setup [41, 13, 34, 5, 6, 8, 48, 38], and densely captured multi-view images [14, 55, 7, 60] with additional known lighting [19] or hand-crafted inductive labels [43]. In our work, we propose a hybrid differentiable renderer and learn to disentangle complex specular effects given a single image. Similar to the recent NeRD [7] and PhySG [60] which recover non-Lambertian reflectance and illumination with a spherical Gaussian (SG) basis [51], we also employ SGs to model the SV-BRDF and incident lighting, but apply this representation to mesh-based differentiable rendering with direct access to the surface.

3 Differentiable Deferred Rendering
In this section, we introduce DIB-R++, our differentiable rendering framework based on deferred shading [12]. DIB-R++ is a hybrid differentiable renderer that can efficiently approximate direct illumination and synthesize high-quality images. Concretely, our renderer leverages the differentiable rasterization framework of DIB-R [10] to recover shape attributes and further employs physics-based material and lighting models to estimate appearance.

3.1 Overview
We provide an overview of our technique in Fig. 1. We first rasterize a 3D mesh to obtain diffuse albedo and material maps, surface normals, and a silhouette mask. This information is deferred to the shading pass, where outgoing radiance is either estimated stochastically or approximated using a spherical Gaussian basis. The rasterizer and shader are differentiable by design, allowing gradients to be propagated to lighting, material and shape parameters for downstream learning tasks.

3.2 Background
Our goal is to provide a differentiable formulation of the rendering process to enable fast inverse rendering from 2D images. Let $\mathcal{M}$ be a 3D object in a virtual scene. We start from the (non-emissive) rendering equation (RE) [23], which states that the outgoing radiance $L_o$ at any surface point $x \in \mathcal{M}$ in the camera direction $\omega_o$ is given by
$$L_o(x, \omega_o) = \int_{\mathcal{H}^2} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, |n \cdot \omega_i|\, d\omega_i, \tag{1}$$
where $L_i$ is the incident radiance, $f_r$ is the (spatially-varying) bidirectional reflectance distribution function (SV-BRDF) and $n$ is the surface normal at $x$. The domain of integration is the unit hemisphere $\mathcal{H}^2$ of incoming light directions $\omega_i$. The BRDF characterizes the surface's response to illumination from different directions and is modulated by the cosine foreshortening term $|n \cdot \omega_i|$. Intuitively, Eq. (1) captures an energy balance and computes how much light is received and scattered at a shading point in a particular direction. Estimating the RE typically requires Monte Carlo (MC) integration [46], which involves tracing rays from the camera into the scene. Albeit physically correct, this process is computationally expensive and does not generally admit a closed-form solution. MC estimators can exhibit high variance and may produce noisy pixel gradients at low sample count, which may significantly impact performance and convergence. To keep the problem tractable, we thus make several approximations of Eq. (1), which we detail in the next section.

3.3 Two-stage Deferred Rendering
We now describe our rendering framework (Fig. 1). We start by defining three families of parameters, where $\pi \in \mathbb{R}^{d_\pi}$ encodes the shape attributes (e.g., vertex positions), $\theta \in \mathbb{R}^{d_\theta}$ describes the material properties, and $\gamma \in \mathbb{R}^{d_\gamma}$ captures the illumination in the scene.
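Before detailing the two passes, a toy numerical sketch (not part of our renderer) illustrates the variance issue noted in Sec. 3.2: the Monte Carlo estimate of Eq. (1) is unbiased but noisy at low sample counts. The Lambertian BRDF, the single bright environment lobe, and uniform hemisphere sampling below are assumptions made purely for illustration.

```python
# Toy illustration of MC variance when estimating Eq. (1). Assumptions:
# Lambertian BRDF, one bright environment lobe, uniform hemisphere sampling.
import numpy as np

rng = np.random.default_rng(0)
albedo = 0.7
lobe_dir = np.array([0.3, 0.3, 0.905])
lobe_dir = lobe_dir / np.linalg.norm(lobe_dir)

def sample_hemisphere(n):
    z = rng.uniform(0.0, 1.0, n)                  # cos(theta) about n = (0, 0, 1)
    phi = rng.uniform(0.0, 2 * np.pi, n)
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1), z

def estimate_Lo(n_samples):
    w, cos_t = sample_hemisphere(n_samples)
    L_i = 1.0 + 5.0 * np.clip(w @ lobe_dir, 0.0, None) ** 8   # toy environment
    f_r = albedo / np.pi                                       # Lambertian BRDF
    pdf = 1.0 / (2.0 * np.pi)                                  # uniform hemisphere pdf
    return np.mean(f_r * L_i * cos_t / pdf)                    # Eq. (1) estimator

for n in (4, 64, 1024):
    runs = np.array([estimate_Lo(n) for _ in range(200)])
    print(f"N={n:5d}  mean={runs.mean():.3f}  std={runs.std():.3f}")
```

The mean is stable across sample counts while the spread shrinks only as N grows, which is exactly what motivates the compact approximations introduced next.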
In what follows, we shall only consider a single pixel, indexed by p, within an RGB image $I \in \mathbb{R}^{3\times h\times w}_+$ for notational simplicity.

Stage 1: Rasterization Pass. We first employ a differentiable rasterizer $R$ [10] to generate primary rays $\omega_o \in \mathcal{S}^2$ from a camera and render our scene $\mathcal{M}$ into geometry buffers (commonly called G-buffers) containing the surface intersection point $x_p \in \mathbb{R}^3$, the surface normal $n_p \in \mathcal{S}^2$, and the spatially-varying material parameters $\theta_p$ (e.g., diffuse albedo). This rendering pass also returns a visibility mask $v_p \in \{0, 1\}$ indicating whether pixel p is occupied by the rendered object, separating the foreground object $I_f$ from its background environment $I_b$ so that $I = I_f + I_b$. We have:
$$R(\mathcal{M}, p, \omega_o) = (x_p, n_p, \theta_p, v_p). \tag{2}$$

Stage 2: Shading Pass. Given surface properties and outgoing direction $\omega_o$, we then approximate the outgoing radiance $L_o(x_p, \omega_o)$ through several key assumptions. First, we restrict ourselves to direct illumination only (i.e., single-bounce scattering) and assume that the incoming radiance is given by a distant environment map $L_i : \mathcal{S}^2 \to \mathbb{R}^3_+$. Therefore, we do not model self-occlusion and $L_i(x_p, \omega_i) \equiv L_i(\omega_i; \gamma)$. Such a simplification largely reduces computation and memory costs and is trivially differentiable. Second, we assume that the material parameters $\theta$ can model both diffuse and specular view-dependent effects. At a high level, we define our shading model $S$ so that:
$$S(x_p, n_p, \omega_o; \theta_p, \gamma) \approx L_o(x_p, \omega_o). \tag{3}$$
Importantly, a differentiable parameterization of $S$ enables the computation of pixel gradients with respect to all scene parameters $\Theta = (\pi, \theta, \gamma)$ by differentiating $I_p(\Theta) = (S \circ R)(\mathcal{M}, p, \omega_o)$. Given a scalar objective function defined on the rendered output $I$, $\partial I/\partial \pi$ is computed using DIB-R [10]. In what follows, we thus mainly focus on formulating $\partial I/\partial \{\theta, \gamma\}$ so that all gradients can be computed using the chain rule, allowing for joint optimization of geometry, material and lighting parameters. We assume henceforth a fixed pixel p for conciseness, and remove the subscript.

3.4 Shading Models
Since our primary goal is to capture a wide range of appearances, we provide two simple techniques to approximate Eq. (1): Monte Carlo (MC) and spherical Gaussians (SG). The former targets more mirror-like objects and can better approximate higher frequencies in the integrand, but is more expensive to compute. The latter is more robust to roughness variations but is limited by the number of basis elements. To model reflectance, we choose to use a simplified version of the isotropic Disney BRDF [9, 15] based on the Cook–Torrance model [11], which includes diffuse albedo $a \in [0, 1]^3$, specular albedo $s \in [0, 1]$, surface roughness $\beta \in [0, 1]$ and metalness $m \in [0, 1]$. Metalness allows us to model both metals and plastics in a unified framework. We let the diffuse albedo vary spatially ($a = a(x)$) and globally define all other attributes to restrict the number of learnable parameters.

Monte Carlo Shading. Given a surface point $x \in \mathcal{M}$ to shade, we importance sample the BRDF to obtain N light directions $\omega_i^k$ and compute the BRDF value. We represent the incident lighting $L_i^{(MC)}$ as a high-dynamic range image $\gamma \in \mathbb{R}^{3\times h_l \times w_l}_+$ using an equirectangular projection, which can be queried for any direction via interpolation between nearby pixels. The final pixel color is then computed as the average over all samples, divided by the probability of sampling $\omega_i^k$:
$$S^{(MC)}(x, n, \omega_o; \theta, \gamma) = \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_i^k, \omega_o; \theta)\, L_i^{(MC)}(\omega_i^k; \gamma)\, |n \cdot \omega_i^k|}{p(\omega_i^k)}. \tag{4}$$
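A minimal sketch of how the terms in Eq. (4) combine is given below. It is schematic rather than our actual shading code: it assumes cosine-weighted hemisphere sampling in place of full Disney BRDF importance sampling, a diffuse-only lobe, a nearest-neighbour environment lookup, and a shading point whose normal is the +z axis; the function names are illustrative.

```python
# Schematic Eq. (4) estimator: samples, BRDF value, foreshortening term and
# sampling pdf combined into an average. Not the renderer's implementation.
import numpy as np

def lookup_envmap(envmap, w):
    """Nearest-neighbour equirectangular lookup for unit directions w: (N, 3)."""
    h, width, _ = envmap.shape
    theta = np.arccos(np.clip(w[:, 2], -1.0, 1.0))            # polar angle
    phi = np.arctan2(w[:, 1], w[:, 0]) % (2.0 * np.pi)        # azimuth
    row = np.clip((theta / np.pi * h).astype(int), 0, h - 1)
    col = np.clip((phi / (2.0 * np.pi) * width).astype(int), 0, width - 1)
    return envmap[row, col]

def shade_mc(albedo, envmap, n_samples=64, seed=0):
    rng = np.random.default_rng(seed)
    u1, u2 = rng.uniform(size=n_samples), rng.uniform(size=n_samples)
    cos_t, sin_t = np.sqrt(u1), np.sqrt(1.0 - u1)             # cosine-weighted dirs
    phi = 2.0 * np.pi * u2
    w_i = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=-1)
    pdf = cos_t / np.pi                                        # sampling density
    f_r = albedo / np.pi                                       # diffuse lobe only
    L_i = lookup_envmap(envmap, w_i)                           # (N, 3) radiance
    # Eq. (4): average of f_r * L_i * |n . w_i| / pdf over the N samples.
    return (f_r * L_i * cos_t[:, None] / pdf[:, None]).mean(axis=0)

toy_env = np.ones((16, 32, 3)) * np.array([1.0, 0.9, 0.8])     # constant light
print(shade_mc(np.array([0.6, 0.2, 0.2]), toy_env))            # ~ albedo * light
```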
When the surface is near-specular (e.g., a mirror), one can efficiently estimate the RE as reflected rays are concentrated in bundles (e.g., to satisfy the law of reflection). However, this estimator can suffer from high variance for rougher surfaces; a higher number of samples may be necessary to produce usable gradients. While this can be partially improved with multiple importance sampling [50], emitter sampling would add a significant overhead due to the environment map being updated at every optimization step. This motivates the use of a more compact representation.

Spherical Gaussian Shading. To further accelerate rendering while preserving expressivity in our shading model, we use a spherical Gaussian (SG) [51] representation. Projecting both the cosine-weighted BRDF and incident radiance into an SG basis allows for fast, analytic integration within our differentiable shader, at the cost of some high frequency features in the integrand. Concretely, an SG kernel has the form $G(\omega; \xi, \lambda, \mu) = \mu\, e^{\lambda(\xi \cdot \omega - 1)}$, where $\omega \in \mathcal{S}^2$ is the input spherical direction to evaluate, $\xi \in \mathcal{S}^2$ is the axis, $\lambda \in \mathbb{R}_+$ is the sharpness, and $\mu \in \mathbb{R}^3_+$ is the amplitude of the lobe. We represent our environment map using a mixture of K lighting SGs $G_l$, so that:
$$L_i^{(SG)}(\omega_i; \gamma) \approx \sum_{k=1}^{K} G_l^k\left(\omega_i; \xi_l^k, \lambda_l^k, \mu_l^k\right), \tag{5}$$
where $\gamma := \{\xi_l^k, \lambda_l^k, \mu_l^k\}_k$. For the BRDF, we follow Wang et al. [51] and fit a single, monochromatic SG to the specular lobe so that $f_r^{(SG)}$ is a sum of diffuse and specular lobes. The full derivation can be found in our supplementary material (Sec. A). Finally, we approximate the cosine foreshortening term using a single SG $|n \cdot \omega_i| \approx G_c(\omega_i; n, 2.133, 1.17)$ [40]. Regrouping all terms, the final pixel color can be computed as:
$$S^{(SG)}(x, n, \omega_o; \theta, \gamma) = \int_{\mathcal{S}^2} f_r^{(SG)}(x, \omega_i, \omega_o; \theta)\, L_i^{(SG)}(\omega_i; \gamma)\, G_c(\omega_i)\, d\omega_i, \tag{6}$$
which has an analytic form that can be automatically differentiated inside our renderer. All parameters of the SGs, as well as the BRDF parameters, are learnable.

Comparison. To visually compare our two shading techniques and understand their limitations, we render a unit sphere under the same lighting (represented differently) in Fig. 2. To do so, we first fit a HDR environment map with K = 128 SGs using an equirectangular projection. As shown on the left, SGs smooth out high frequency details and sharp corners but require much fewer parameters to reconstruct incident lighting (896 vs. 98 304). On the right, we visualize the effect of increasing the surface roughness β under the corresponding light representation. Intuitively, this point of diminishing return indicates that MC is only so useful when the surface reflects most of the incoming light (e.g., a mirror). Indeed, when β is small enough (β → 0) and we deal with a highly non-Lambertian surface, a small number of MC samples are enough to estimate direct illumination, which in turn implies faster render speed and low memory cost. On the other end of the spectrum (β → 1), significantly more samples (e.g., N > 1000) are needed to accurately integrate incident light, resulting in longer inference times. In such a case, SGs should be favored since they offer a significant improvement. In the absence of any prior knowledge on the material type, SG shading is preferred. This is reflected in our experiments in Sec. 5-6.

Optimization. We perform a sanity check on our renderer in Fig. 3 by optimizing for lighting and reflectance properties from a multi-view image L1 loss with fixed geometry.
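Before describing this sanity check, the SG identities used above can be verified numerically with a standalone sketch (not our shading code). It relies on the standard closed forms from [51]: an SG integrates to $2\pi\mu(1 - e^{-2\lambda})/\lambda$ over the sphere, and the product of two SGs is again an SG; the analytic product integral is checked against brute-force quadrature.

```python
# Standalone check of the SG closed forms behind Eq. (5)-(6).
import numpy as np

def sg_eval(w, xi, lam, mu):
    return mu * np.exp(lam * (w @ xi - 1.0))

def sg_integral(lam, mu):
    # Integral of an SG over the full sphere.
    return 2.0 * np.pi * mu * (1.0 - np.exp(-2.0 * lam)) / lam

def sg_product_integral(xi1, lam1, mu1, xi2, lam2, mu2):
    # The product of two SGs is an SG; integrate it analytically.
    u = lam1 * xi1 + lam2 * xi2
    lam_m = np.linalg.norm(u)
    mu_m = mu1 * mu2 * np.exp(lam_m - lam1 - lam2)
    return sg_integral(lam_m, mu_m)

xi1, lam1, mu1 = np.array([0.0, 0.0, 1.0]), 8.0, 1.5
xi2, lam2, mu2 = np.array([0.0, 0.6, 0.8]), 3.0, 0.7

# Brute-force quadrature on a spherical grid for comparison.
thetas = np.linspace(0, np.pi, 400)
phis = np.linspace(0, 2 * np.pi, 800)
T, P = np.meshgrid(thetas, phis, indexing="ij")
W = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
vals = sg_eval(W, xi1, lam1, mu1) * sg_eval(W, xi2, lam2, mu2)
numeric = (vals * np.sin(T)).sum() * (np.pi / 400) * (2 * np.pi / 800)
print(numeric, "vs analytic", sg_product_integral(xi1, lam1, mu1, xi2, lam2, mu2))
```

The two printed values should agree up to quadrature error, which is what makes the fully analytic shading path in Eq. (6) possible.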
We show the ground-truth (GT) parameters and rendered images in the first row, along with initial parameters (Init.) in the second row. Here, we optimize parameters separately using gradient descent while the others are kept fixed to validate each component of our shading model. DIB-R++ can successfully estimate material and lighting parameters, including the environmental lighting, surface roughness and specular albedo of the object. We find that the converged material parameters closely match GT, while the optimized environment map loses some details due to gradients coming entirely from surface reflections (foreground supervision only). In particular, surface highlights are well captured by our technique. For more optimization results, please refer to the supplementary (Sec. C).

4 Application: Single Image 3D Reconstruction
We demonstrate the effectiveness of our hybrid framework through the learning-based problem of single image 3D reconstruction without supervision. While previous works [10, 62] generally focus on diffuse illumination only, our goal is to jointly infer geometry, reflectance, and lighting from a single image $\tilde{I}$ containing strong specular transport. To this end, we employ a convolutional neural network $F$, parameterized by learnable weights $\vartheta$, to predict 3D attributes of a mesh $\mathcal{M}$ with pre-determined topology (sphere in our case). We adopt the U-Net [47] architecture of the original DIB-R [10, 62] and modify its output to also predict the appropriate BRDF attributes $\theta$ and light parameters $\gamma$ (pixel colors or SG coefficients) so that $F(\tilde{I}; \vartheta) = (\pi, \theta, \gamma)$. We then render these parameters back to an image $I$ using our differentiable renderer and apply a loss $\mathcal{L}$ on the RGB output to compare the input image $\tilde{I}$ and the rendered image $I$, where:
$$\mathcal{L}(\vartheta) = \alpha_{im}\mathcal{L}_{im}(\tilde{I}, I) + \alpha_{msk}\mathcal{L}_{msk}(\tilde{V}, V) + \alpha_{per}\mathcal{L}_{per}(\tilde{I}, I) + \alpha_{lap}\mathcal{L}_{lap}(\pi). \tag{7}$$
Similar to DIB-R [10], we combine multiple consistency losses with regularization terms: $\mathcal{L}_{im}$ is an image loss computing the L1 distance between the rendered image and the input image, $\mathcal{L}_{msk}$ is an Intersection-over-Union (IoU) loss of the rendered silhouette $V$ and the input mask $\tilde{V}$ of the object [25], $\mathcal{L}_{per}$ is a perceptual loss [21, 61] computing the L1 distance between the pre-trained AlexNet [26] feature maps of the rendered image and input image, and $\mathcal{L}_{lap}$ is a Laplacian loss [36, 25] to penalize the change in relative positions of neighboring vertices. We set $\alpha_{im} = 20$, $\alpha_{msk} = 5$, $\alpha_{per} = 0.5$, $\alpha_{lap} = 5$, which we empirically found worked best.

5 Evaluation on Synthetic Datasets
We conduct extensive experiments to evaluate the performance of DIB-R++. We first quantitatively evaluate on synthetic data where we have access to ground-truth geometry, material and lighting. Since MC and SG shading have individual pros and cons, we validate them under different settings. In particular, we generate separate datasets with two different surface materials: purely metallic surfaces with no roughness, and glossy surfaces with random positive roughness. We compare the performance of both shading models against the baseline method [10].

Synthetic Datasets. We chose 485 different car models from TurboSquid2 to prepare data for metallic and glossy surfaces. We also collected 438 freely available high-dynamic range (HDR) environment maps from HDRI Haven3 to use as reference lighting, which contain a wide variety of illumination configurations for both indoor and outdoor scenes. To render all 3D models, we use Blender's Cycles4 path tracer with the Principled BRDF model [9].
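Before detailing the two dataset variants, we include a schematic PyTorch sketch of the composite objective in Eq. (7) above. It is a sketch of the loss structure, not our training code: the perceptual term is stubbed out, the Laplacian term uses a toy fixed-degree neighbour list in place of the mesh connectivity, and the weights follow the values quoted in the text.

```python
# Schematic composite loss in the spirit of Eq. (7); stubs and toy inputs only.
import torch

def image_l1(pred_img, gt_img):
    return (pred_img - gt_img).abs().mean()

def soft_iou_loss(pred_mask, gt_mask, eps=1e-6):
    inter = (pred_mask * gt_mask).sum()
    union = (pred_mask + gt_mask - pred_mask * gt_mask).sum()
    return 1.0 - inter / (union + eps)

def laplacian_loss(verts, neighbor_idx):
    # Toy uniform-Laplacian smoothness term standing in for the offset-based
    # Laplacian regularizer of [36, 25]; neighbor_idx: (V, K) neighbor indices.
    neigh_mean = verts[neighbor_idx].mean(dim=1)
    return ((verts - neigh_mean) ** 2).sum(dim=-1).mean()

def total_loss(pred_img, gt_img, pred_mask, gt_mask, verts, neighbor_idx,
               perceptual=lambda a, b: torch.tensor(0.0)):   # L_per stub
    return (20.0 * image_l1(pred_img, gt_img)
            + 5.0 * soft_iou_loss(pred_mask, gt_mask)
            + 0.5 * perceptual(pred_img, gt_img)
            + 5.0 * laplacian_loss(verts, neighbor_idx))

# Toy usage with random tensors, just to show the call signature.
img, gt = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
mask, gt_mask = torch.rand(64, 64), (torch.rand(64, 64) > 0.5).float()
verts = torch.rand(642, 3)
neighbor_idx = torch.randint(0, 642, (642, 6))
print(total_loss(img, gt, mask, gt_mask, verts, neighbor_idx))
```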
We create two datasets, Metallic-Surfaces and Glossy-Surfaces. For metallic surfaces, we set β = 0 and m = 1. Conversely, we set m = 0, s = 1 and randomly pick β ∈ [0, 0.4] to generate images for glossy surfaces.

Baseline. We compare our method with the rasterization-based baseline DIB-R [10], which supports spherical harmonics (SH) lighting. While the original lighting implementation in [10] is monochromatic, we extend it to RGB for a fairer comparison. For quantitative evaluation, we first report the common L1 pixel loss between the re-rendered image using our predictions and the ground-truth (GT) image, $\mathcal{L} = \|\tilde{I} - I\|_1$, and the 2D IoU loss between rendered silhouettes and ground-truth masks, $\mathcal{L} = 1 - \frac{\sum (\tilde{V} \odot V)}{\sum (\tilde{V} + V - \tilde{V} \odot V)}$. We experimentally find that these numbers are very close for the different methods. Thus, we further evaluate the quality of diffuse albedo and lighting predictions using normalized cross correlation (NCC), $\mathcal{L} = 1 - \frac{\sum \tilde{\gamma}\, \gamma_{pred}}{\|\tilde{\gamma}\|_2 \|\gamma_{pred}\|_2}$, where $\gamma_{pred}$ is the predicted albedo and light while $\tilde{\gamma}$ is GT. We provide more details of these metrics in the supplementary material (Sec. E).

5.1 Metallic Surfaces
Experimental Settings. We first apply all methods to the metallic car dataset. Since this surface property is known a priori, we relax the task for MC shading by setting β = 0 and only predict geometry, diffuse albedo and lighting from the input image. This allows us to render MC at a low sample count (N = 4), achieving higher rendering speed and a lower memory cost. In particular, we predict the relative offset for all |M| = 642 vertices in a mesh and a 256 × 256 texture map, following the choices in [10]. We also predict a 256 × 256 RGB environment map. For SG shading, we predict all parameters. While shape and texture are the same as MC shading, we adopt K = 32 for SG and predict two global parameters β and s for the specular BRDF. This keeps the number of parameters relatively low while providing enough flexibility to capture different radiometric configurations.
2 https://turbosquid.com. We obtain consent via agreement with TurboSquid, following their license at https://blog.turbosquid.com/turbosquid-3d-model-license.
3 https://hdrihaven.com. We follow the CC0 license at https://hdrihaven.com/p/license.php.
4 https://blender.org
Experimental Results. Quantitative and qualitative results are shown in Fig. 4 and Table 1 (Left), respectively. Since the main loss function comes from the difference between GT and re-rendered images, we find the re-rendered images (with light) from the predictions are all close to the GT image for the different methods, and quantitatively, the image loss and 2D IoU loss are also similar across the different models. However, we observe significant differences in the predicted albedo and lighting. Specifically, in Fig. 4, MC shading successfully predicts cleaner diffuse albedo maps and more accurate lighting, while Chen et al. [10] "bake in" high specular effects into the texture. Quantitatively, we outperform [10] with a 3× improvement in terms of NCC loss for lighting, demonstrating the effectiveness of our DIB-R++. We further compare MC shading with SG shading. While SG shading achieves reasonable lighting predictions, it fails to reconstruct the high frequency details in the lighting and has circular spot effects caused by the isotropic SGs. Finally, we note that due to the ambiguity of the learning task, the overall intensities of all predicted texture maps can largely vary. Still, we observe that MC can better recover fine details, such as the wheels' rims.
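For reference, the NCC criterion defined above amounts to only a few lines. The sketch below is illustrative; the exact evaluation details (e.g., how albedo and lighting are flattened and compared) are deferred to the supplement.

```python
# Illustrative NCC loss: L = 1 - <pred, gt> / (||pred|| * ||gt||).
import numpy as np

def ncc_loss(pred, gt, eps=1e-8):
    pred, gt = pred.ravel(), gt.ravel()
    return 1.0 - float(pred @ gt / (np.linalg.norm(pred) * np.linalg.norm(gt) + eps))

gt_light = np.random.rand(16, 32, 3)
print(ncc_loss(gt_light * 0.5, gt_light))            # scale-invariant: ~0
print(ncc_loss(np.random.rand(16, 32, 3), gt_light)) # uncorrelated: larger loss
```

Because NCC normalizes out the overall intensity, it is well suited to a setting where the predicted texture and lighting intensities are only determined up to a global scale.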
5.2 Glossy Surfaces Experimental Settings. We further apply our model to synthetic images rendered with positive roughness (glossy). To apply MC shading in such a case, we assume no prior knowledge for material and use a high sample count (N = 1024) to account for possibly low roughness images in the dataset. Due to high rendering time and memory cost, we subsample 4% of the pixels and apply Lim to those pixels only in each training iteration. As such, we do not use the perceptual loss Lper as it relies on the whole image. We predict a 32× 16 environment map for lighting and predict global β and m for the specular BRDF. For SG shading, we use the same settings as for the metallic surfaces. Experimental Results. We apply both MC and SG shading and compare with [10]. Results are shown in Fig. 5 and Table 1 (Right). Qualitatively, SG shading has better lighting predictions with correct high-luminance regions. The specular highlights in the images successfully guide SG shading, while the bright reflection on the car window and front cover are fused with texture map in [10]. MC also has reasonable lighting predictions, but the predicted light map lacks structure due to weak surface reflections. Without a perceptual loss term, the predicted textures also tend to be blurrier. Quantitatively, SG shading significantly outperforms [10] on lighting prediction in terms of NCC (0.078 vs. 0.127) and improves on texture prediction in terms of BRDF/lighting disentanglement. We also compare with MC shading, where MC achieves slightly worse results on lighting predictions compared to SG, but is still much better than [10]. 5.3 Discussion As shown in our previous two experiments, Monte Carlo shading works best under a metallic assumption (β = 0), in which case the rendered images can have rich details at low sample count (N ≤ 4). However, when the surface is more Lambertian (i.e., when β is becoming larger), we have to compensate with a larger N to produce noise-free renderings, which impacts learning both timeand memory-wise. As a consequence, we recommend applying MC shading to metallic surfaces only, and default to SGs otherwise. Our spherical Gaussian shading pipeline provides an analytic formulation for estimating the rendering equation, which avoids the need of tracing ray samples, largely accelerating the rendering process. While SGs can be blurry on metallic surfaces, in most case (e.g., when β ≥ 0.2) it can model similar rendering effects at a fraction of the cost, achieving better results than MC shading and [10]. After inspecting the predicted surface material properties (β, s, m) and diffuse albedo with the ground-truth parameters in Blender, we find the materials contain little correlation and the intensities of diffuse albedo might change. As for SG, we are using only 32 basis elements to simulate a complex, high definition environment map (2K). Since SGs can only represent a finite amount of details, we find the predicted global β tends to be too small. One hypothesis for this is that the optimizer artificially prefers more reflections (and thus lower roughness) to be able to estimate at least some portions of the environment map. On the other hand, in MC, due to the absence of a perceptual loss, the predicted texture is too blurry and cannot represent GT to a high detail. We find that the predicted β and m do not have strong correlation with the GT material. 
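As a brief aside before the remaining observations, the pixel-subsampling trick used in the glossy-surface settings above (applying the L1 image loss to roughly 4% of foreground pixels each iteration) can be sketched as follows. The function name and the uniform sampling scheme are assumptions for illustration, not our implementation.

```python
# Sketch of a subsampled L1 image loss over ~4% of foreground pixels.
import torch

def subsampled_l1(pred_img, gt_img, fg_mask, frac=0.04, generator=None):
    """pred_img, gt_img: (3, H, W); fg_mask: (H, W) boolean foreground mask."""
    fg_idx = fg_mask.flatten().nonzero(as_tuple=True)[0]
    n_keep = max(1, int(frac * fg_idx.numel()))
    perm = torch.randperm(fg_idx.numel(), generator=generator)[:n_keep]
    keep = fg_idx[perm]
    pred = pred_img.flatten(1)[:, keep]     # (3, n_keep) sampled pixels
    gt = gt_img.flatten(1)[:, keep]
    return (pred - gt).abs().mean()

pred, gt = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
mask = torch.zeros(64, 64, dtype=torch.bool)
mask[16:48, 16:48] = True                   # toy foreground region
print(subsampled_l1(pred, gt, mask))
```

Shading only the sampled pixels is what keeps the high-sample-count MC configuration affordable in time and memory.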
Lastly, we note that the predicted texture map has to change its overall intensity to accommodate for other parameters to ensure the re-rendered images are correct, which leads to some differences with GT. More analysis can be found in supplementary (Sec. C and F). In summary, when we have no prior knowledge about the material, our re-rendered images can be very close to the input images but the predicted material parameters are not always aligned with the GT materials. We believe this problem can be relieved by incorporating additional local constraints, e.g., part-based material priors, or by leveraging anisotropic SGs [56]. For instance, a car body is metallic while its wheels are typically diffuse; predicting different parameters for each region has the potential of improving disentanglement and interpretability. We leave this as future work. 6 Evaluation on a Real-world Dataset We further qualitatively evaluate our method via training on StyleGAN generated data and testing on real imagery, following the pipeline in [62] in Sec. 6.1, and use our predictions to perform artistic manipulation in Sec. 6.2. [6 2] Pred. albedoTexture map SG Envmap + = × = SG + = [6 2] × = SG + = [6 2] × = Decomposition Pred. albedo +/× (Specular BRDF + Light) RGB Input Re-rendered (with light) Figure 6: Results on real imagery from the StyleGAN-generated dataset (cars and white female faces). Our method can recover a meaningful decomposition as opposed to [62], as shown by cleaner texture maps and directional highlights (e.g., car windshield). Even when using monochromatic lighting on faces, our method can correctly predict the specular highlights on the forehead and none in the hair, while SH produces dark artifacts. 6.1 Realistic Imagery Experimental Settings. Our DIB-R++ can also be applied to learn 3D properties from realistic imagery. Following [62], we use StyleGAN [24] to generate multi-view images of cars and faces, which is the data we need to train our model. The generated objects contains cars under various lighting conditions, ranging from high specular paint to nearly diffuse. Thus, we only apply SG shading and adopt the same setting, where we predict |M| = 642 vertex movements, a 256× 256 diffuse texture map, K = 32 SG bases and two global β and s. We also compare [62] as the baseline on the same dataset by using the same training procedure. Experimental Results on StyleGAN Dataset. In the absence of ground-truth on the StyleGAN generated data, we qualitatively evaluate our results and compare with [62] in Fig. 6. Our DIB-R++ reconstructs more faithful material and lighting components, producing an interpretable decomposition. Specifically, our model can represent the dominant light direction more accurately, while naive shading tend to merge reflectance with lighting. We also provide an example with monochromatic lighting on a face example to reduce the degrees of freedom for the SH representation, yet it cannot correctly model light. Extension to Real Imagery from LSUN Dataset. Our model is trained on synthetic data [62] generated by StyleGAN [24]. Thanks to this powerful generative model, the distribution of GAN images is similar to the distribution of real images, allowing our model to generalize well. We show reconstruction results on real images from LSUN [57] in Fig. 7. We provide more results in the supplementary (Sec. E), we also provide additional turntable videos on our project webpage. During inference, we do not need any camera pose and predict shapes in canonical view. 
However, camera poses are needed to re-render the shape. Since ground-truth camera poses are not available for real images, we manually adjust the camera poses in Fig. 7. As a result, the re-rendered images are slightly misaligned with GT. However, DIB-R++ still accounts for specularities and predicts correct predominant lighting directions and clean textures. 6.2 Material Editing and Relighting Finally, we demonstrate some applications of DIB-R++ to artistic manipulation in Fig. 8. On the left, we show examples of editing the diffuse albedo, where we can insert text, decals or modify the base tint. Since our textures are not contaminated by lighting, clean texture maps can be easily edited by hand and the re-rendered images look natural. On the right, we show examples of editing lighting and surface materials, where we rotate the light (top) or increase glossiness (bottom). We also showcase results where we change lighting orientation or modify the object’s glossiness with consistent shading, which is not feasible with a naive, Lambertian-only shading model. 7 Conclusion We presented DIB-R++, a hybrid differentiable renderer that can effectively disentangle material and lighting. When embedded in a learning framework for single image 3D reconstruction, our method produces state-of-the-art results, and enables applications such as material editing and relighting. One limitation of our method is that the predicted base color may sometimes “bleed” into the lighting predictions. Combining our technique with segmentation methods like DatasetGAN [63] could alleviate this issue for a more practical, artist-friendly disentanglement. Moreover, the predicted reflections are sometimes blurrier than ground-truth; this is mainly due to a limited number SG components for lighting and could potentially be improved with a larger mixture. Finally, on some occasions, the diffuse albedo in our synthetic dataset have baked-in reflections (e.g., GT red car in Fig. 5) instead of a uniform base color, which obfuscates the learning process. This could be mitigated by using more advanced physics-based materials such as those modeling clear coats. Broader Impact. Our work focuses on disentangling geometry from appearance using a differentiable renderer, a relatively nascent research area. We show that augmenting a rasterization-based renderer with physics-based shading models improves reconstruction and allows for easier integration within larger machine learning pipelines. DIB-R++ relies on simple topology and strong data prior assumptions to produce useful decompositions; therefore it cannot generalize to the complexity and multi-modality of real-world scenes in its current form. Nonetheless, we believe that our work takes an important step in the joint estimation of shape, material and environmental lighting from a single image and we hope that it can advance applications in performance-oriented settings such as AR/VR, simulation technology and robotics. For instance, autonomous vehicles need to correctly assess their surroundings from limited signals; directly modeling light-surface interactions (e.g., specular highlights) may provide important cues to this end. Like any ML model, DIB-R++ is prone to biases imparted through training data which requires an abundance of caution when applied to sensitive applications. For example, it needs to be carefully inspected when it is used to recover the 3D parameters of human faces and bodies as it is not tailored for them. 
It is not recommended in off-the-shelf settings where privacy or erroneous recognition can lead to potential misuse or any harmful application. For purposes of real deployment, one would need to carefully inspect and de-bias the dataset to depict the target distribution of a wide range of possible lighting conditions, skin tones, or at the intersection of race and gender. Disclosure of Funding. This work was funded by NVIDIA. Wenzheng Chen, Jun Gao and Zian Wang acknowledge additional indirect revenue in the form of student scholarships from University of Toronto and the Vector Institute. Joey Litalien acknowledges indirect funding from McGill University and the Natural Sciences and Engineering Research Council of Canada (NSERC).
1. What is the focus of the paper regarding single image inverse rendering?
2. What are the strengths of the proposed method, particularly its extension from DIB-R?
3. What are the weaknesses of the paper, especially regarding its novelty and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions regarding the paper's experiments and comparisons with state-of-the-art methods?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a single-image inverse rendering method, named DIB-R++, that extends DIB-R [7] with MC/SG-based ray-tracing to accelerate training/inference. The method works in two stages: first it uses DIB-R [7] to infer the mesh (and associated geometric attributes) from a single input image; then, in the second stage, the network infers albedo and environment light jointly, and a direct-illumination-only shader renders a photorealistic image from the inferred geometry, material and lighting.
Review
Strengths: The proposed DIB-R++ is unsupervised and does not require ground truth for geometry, material or environment light. In experiments, the proposed DIB-R++ outperforms DIB-R [7] on both metallic and glossy surfaces.
Weaknesses: My major concern about this paper is novelty. Using SGs to accelerate training and inference is not a new idea; it has been studied in NeRD [33] and PhySG [34]. The difference is that the proposed DIB-R++ uses DIB-R [7] to recover a mesh from the input image and then infers material and illumination (represented by MC or SG, which is not novel). So I think the novelty is rather incremental compared with NeRD [33], PhySG [34] and DIB-R [7]. The paper mentions single image 3D reconstruction in Section 4, but there are no such experimental evaluations in the paper. It is hard to tell the 3D reconstruction quality without quantitative evaluations on reconstructed meshes, depth or normals. The paper also lacks experimental comparisons with a state-of-the-art method, e.g., [ref1] Li et al. Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and SVBRDF from a single image. CVPR 2020. Moreover, the proposed DIB-R++ can only infer direct illumination, while [ref1] can also learn global illumination. I'd like to hear more discussion on this in the rebuttal. The paper is not easy to follow, and it would be good to show a system diagram that describes the network architecture, network inputs/outputs, data flow and training/inference steps.
NIPS
Title DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer Abstract We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIBR++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking the advantage of their respective strengths—speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIB-R++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting. 1 Introduction Inferring intrinsic 3D properties from 2D images is a long-standing goal of computer vision [3]. In recent years, differentiable rendering has shown great promise in estimating shape, reflectance and illumination from real photographs. Differentiable renderers have become natural candidates for learning-based inverse rendering applications, where image synthesis algorithms and neural networks can be jointly optimized to model physical aspects of objects from posed images, either by leveraging strong data priors or by directly modeling the interactions between light and surfaces. Not all differentiable renderers are made equal. On the one hand, recent physics-based differentiable rendering techniques [31, 45, 2, 44, 59] try to model the full light transport with proper visibility gradients, but they tend to require careful initialization of scene parameters and typically exhibit high computational cost which limits their usage in larger end-to-end learning pipelines. On the other hand, performance-oriented differentiable renderers [10, 27, 36, 25] trade physical accuracy for scalability and speed by approximating scene elements through neural representations or by employing simpler shading models. While the latter line of work has proven to be successful in 3D scene reconstruction, ∗Work done during an internship at NVIDIA. 1Project page: https://nv-tlabs.github.io/DIBRPlus. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the frequent assumptions of Lambertian-only surfaces and low-frequency lighting prevent these works from modeling more complex specular transport commonly observed in the real world. In this work, we consider the problem of single-view 3D object reconstruction without any 3D supervision. To this end, we propose DIB-R++, a hybrid differentiable renderer that combines rasterization and ray-tracing through an efficient deferred rendering framework. 
Our framework builds on top of DIB-R [10] and integrates physics-based lighting and material models to capture challenging non-Lambertian reflectance under unknown poses and illumination. Our method is versatile and supports both single-bounce ray-tracing and a spherical Gaussian representation for a compact approximation of direct illumination, allowing us to adapt and tune the shading model based on the radiometric complexity of the scene. We validate our technique on both synthetic and real images and demonstrate superior performance on reconstructing realistic materials BRDFs and lighting configurations over prior rasterization-based methods. We then follow the setting proposed in Zhang et al. [62] to show that DIB-R++ can reconstruct scene intrinsics also from real images without any 3D supervision. We further apply our framework to single-image appearance manipulation such as material editing and scene relighting. 2 Related Work Differentiable Rendering. Research on differentiable rendering can be divided into two categories: physics-based methods focusing on photorealistic image quality, and approximation methods aiming at higher performance. The former differentiates the forward light transport simulation [31, 45, 2, 44, 59] with careful handling of geometric discontinuities. While capable of supporting global illumination, these techniques tend to be relatively slow to optimize or require a detailed initial description of the input in terms of geometry, materials, lighting and camera, which prevents their deployment in the wild. The latter line of works leverages simpler local shading models. Along this axis, rasterization-based differentiable renderers [10, 27, 36, 25, 16] approximate gradients by generating derivatives from projected pixels to 3D parameters. These methods are restricted to primary visibility and ignore indirect lighting effects by construction, but their simplicity and efficiency offer an attractive trade-off for 3D reconstruction. We follow this line of work and build atop DIB-R [10, 22] by augmenting its shading models with physics-based ones. Learning-based Inverse Graphics. Recent research on inverse graphics targets the ill-posed problem of jointly estimating geometry, reflectance and illumination from image observations using neural networks. For single image inverse rendering, one dominant approach is to employ 2D CNNs to learn data-driven features and use synthetic data as supervision [42, 49, 35, 33, 52], but these methods do not always generalize to complex real-world images [4]. To overcome the data issue, a recent body of work investigates the use of self-supervised learning to recover scene intrinsics [1], including domain adaptation from synthetic reflectance dataset [37], object symmetry [54, 53], or multi-illumination images depicting the same scene [30, 32, 39, 58]. However, these methods either rely on specific priors or require data sources tedious to capture in practice. Some works tackle the subtask of lighting estimation only [18, 17, 20], but still need to carefully utilize training data that are hard to capture. Most similar to us, DIB-R [10] tackles unsupervised inverse rendering in the context of differentiable rendering. Zhang et al. [62] further combines DIB-R with StyleGAN [24] generated images to extract and disentangle 3D knowledge. These works perform inverse rendering from real image collections without supervision, but may fail to capture complex material and lighting effects—in contrast, our method models these directly. 
Several techniques also try to handle more photorealistic effects but typically require complex capturing settings, such as controllable lighting [28, 29], a co-located camera-flashlight setup [41, 13, 34, 5, 6, 8, 48, 38], and densely captured multi-view images [14, 55, 7, 60] with additional known lighting [19] or hand-crafted inductive labels [43]. In our work, we propose a hybrid differentiable renderer and learn to disentangle complex specular effects given a single image. Similar to the recent NeRD [7] and PhySG [60] which recover non-Lambertian reflectance and illumination with a spherical Gaussian (SG) basis [51], we also employ SGs to model the SV-BRDF and incident lighting, but apply this representation to mesh-based differentiable rendering with direct access to the surface. 3 Differentiable Deferred Rendering In this section, we introduce DIB-R++, our differentiable rendering framework based on deferred shading [12]. DIB-R++ is a hybrid differentiable renderer that can efficiently approximate direct illumination and synthesize high-quality images. Concretely, our renderer leverages the differentiable rasterization framework of DIB-R [10] to recover shape attributes and further employs physics-based material and lighting models to estimate appearance. 3.1 Overview We provide an overview of our technique in Fig. 1. We first rasterize a 3D mesh to obtain diffuse albedo and material maps, surface normals, and a silhouette mask. This information is deferred to the shading pass, where outgoing radiance is either estimated stochastically or approximated using a spherical Gaussian basis. The rasterizer and shader are differentiable by design, allowing gradients to be propagated to lighting, material and shape parameters for downstream learning tasks. 3.2 Background Our goal is to provide a differentiable formulation of the rendering process to enable fast inverse rendering from 2D images. LetM be a 3D object in a virtual scene. We start from the (non-emissive) rendering equation (RE) [23], which states that the outgoing radiance Lo at any surface point x ∈M in the camera direction ωo is given by Lo(x,ωo) = ∫ H2 fr(x,ωi,ωo)Li(x,ωi)|n · ωi|dωi, (1) where Li is the incident radiance, fr is the (spatially-varying) bidirectional reflectance distribution function (SV-BRDF) and n is the surface normal at x. The domain of integration is the unit hemisphereH2 of incoming light directions ωi. The BRDF characterizes the surface’s response to illumination from different directions and is modulated by the cosine foreshortening term |n · ω|. Intuitively, Eq. (1) captures an energy balance and computes how much light is received and scattered at a shading point in a particular direction. Estimating the RE typically requires Monte Carlo (MC) integration [46], which involves tracing rays from the camera into the scene. Albeit physically correct, this process is computationally expensive and does not generally admit a closed-form solution. MC estimators can exhibit high variance and may produce noisy pixel gradients at low sample count, which may significantly impact performance and convergence. To keep the problem tractable, we thus make several approximations of Eq. (1), which we detail in the next section. 3.3 Two-stage Deferred Rendering We now describe our rendering framework (Fig. 1). We start by defining three families of parameters, where π ∈ Rdπ encodes the shape attributes (e.g., vertex positions), θ ∈ Rdθ describes the material properties, and γ ∈ Rdγ captures the illumination in the scene. 
In what follows, we shall only consider a single pixel, indexed by p, within an RGB image I ∈ R3×h×w+ for notational simplicity. Stage 1: Rasterization Pass. We first employ a differentiable rasterizer R [10] to generate primary rays ωo ∈ S2 from a camera and render our scene M into geometry buffers (commonly called G-buffers) containing the surface intersection point xp ∈ R3, the surface normal np ∈ S2, and the spatially-varying material parameters θp (e.g., diffuse albedo). This rendering pass also returns a visibility mask vp ∈ {0, 1} indicating whether pixel p is occupied by the rendered object, separating the foreground object If from its background environment Ib so that I = If + Ib. We have: R(M, p,ωo) = (xp,np,θp, vp). (2) Stage 2: Shading Pass. Given surface properties and outgoing direction ωo, we then approximate the outgoing radiance Lo(xp,ωo) through several key assumptions. First, we restrict ourselves to direct illumination only (i.e. single-bounce scattering) and assume that the incoming radiance is given by a distant environment map Li : S2 → R3+. Therefore, we do not model self-occlusion and Li(xp,ωi) ≡ Li(ωi;γ). Such simplification largely reduces computation and memory costs and is trivially differentiable. Second, we assume that the material parameters θ can model both diffuse and specular view-dependent effects. At a high level, we define our shading model S so that: S(xp,np,ωo;θp,γ) ≈ Lo(xp,ωo). (3) Importantly, a differentiable parameterization of S enables the computation of pixel gradients with respect to all scene parameters Θ = (π,θ,γ) by differentiating Ip(Θ) = (S ◦R)(M, p,ωo). Given a scalar objective function defined on the rendered output I , ∂I/∂π is computed using DIB-R [10]. In what follows, we thus mainly focus on formulating ∂I/∂{θ,γ} so that all gradients can be computed using the chain rule, allowing for joint optimization of geometry, material and lighting parameters. We assume henceforth a fixed pixel p for conciseness, and remove the subscript. 3.4 Shading Models Since our primary goal is to capture a wide range of appearances, we provide two simple techniques to approximate Eq. (1): Monte Carlo (MC) and spherical Gaussians (SG). The former targets more mirror-like objects and can better approximate higher frequencies in the integrand, but is more expensive to compute. The latter is more robust to roughness variations but is limited by the number of basis elements. To model reflectance, we choose to use a simplified version of the isotropic Disney BRDF [9, 15] based on the Cook–Torrance model [11], which includes diffuse albedo a ∈ [0, 1]3, specular albedo s ∈ [0, 1], surface roughness β ∈ [0, 1] and metalness m ∈ [0, 1]. Metalness allows us to model both metals and plastics in a unified framework. We let the diffuse albedo vary spatially (a = a(x)) and globally define all other attributes to restrict the number of learnable parameters. Monte Carlo Shading. Given a surface point x ∈ M to shade, we importance sample the BRDF to obtain N light directions ωki and compute the BRDF value. We represent the incident lighting L (MC) i as a high-dynamic range image γ ∈ R 3×hl×wl + using an equirectangular projection, which can be queried for any direction via interpolation between nearby pixels. The final pixel color is then computed as the average over all samples, divided by the probability of sampling ωki : S(MC)(x,n,ωo;θ,γ) = 1 N N∑ k=1 fr(x,ω k i ,ωo;θ)L (MC) i (ω k i ;γ) |n · ωki | p(ωki ) . 
(4) When the surface is near-specular (e.g., a mirror), one can efficiently estimate the RE as reflected rays are concentrated in bundles (e.g., to satisfy the law of reflection). However, this estimator can suffer from high variance for rougher surfaces; a higher number of samples may be necessary to produce usable gradients. While this can be partially improved with multiple importance sampling [50], emitter sampling would add a significant overhead due to the environment map being updated at every optimization step. This motivates the use of a more compact representation. Spherical Gaussian Shading. To further accelerate rendering while preserving expressivity in our shading model, we use a spherical Gaussian (SG) [51] representation. Projecting both the cosineweighted BRDF and incident radiance into an SG basis allows for fast, analytic integration within our differentiable shader, at the cost of some high frequency features in the integrand. Concretely, an SG kernel has the form G(ω; ξ, λ,µ) = µ eλ(ξ·ω−1), where ω ∈ S2 is the input spherical direction to evaluate, ξ ∈ S2 is the axis, λ ∈ R+ is the sharpness, and µ ∈ R3+ is the amplitude of the lobe. We represent our environment map using a mixture of K lighting SGs Gl, so that: L (SG) i (ωi;γ) ≈ K∑ k=1 Gkl ( ωi; ξ k l , λ k l ,µ k l ) , (5) where γ := {ξkl , λkl ,µkl }k. For the BRDF, we follow Wang et al. [51] and fit a single, monochromatic SG to the specular lobe so that f (SG)r is a sum of diffuse and specular lobes. The full derivation can be found in our supplementary material (Sec. A). Finally, we approximate the cosine foreshortening term using a single SG |n · ωi| ≈ Gc(ωi;n, 2.133, 1.17) [40]. Regrouping all terms, the final pixel color can be computed as: S(SG)(x,n,ωo;θ,γ) = ∫ S2 f (SG)r (x,ω k i ,ωo;θ)L (SG) i (ωi;γ)Gc(ωi) dωi, (6) which has an analytic form that can be automatically differentiated inside our renderer. All parameters of the SGs, as well as the BRDF parameters, are learnable. Comparison. To visually compare our two shading techniques and understand their limitations, we render a unit sphere under the same lighting (represented differently) in Fig. 2. To do so, we first fit a HDR environment map with K = 128 SGs using an equirectangular projection. As shown on the left, SGs smooth out high frequency details and sharp corners but require much fewer parameters to reconstruct incident lighting (896 vs. 98 304). On the right, we visualize the effect of increasing the surface roughness β under the corresponding light representation. Intuitively, this point of diminishing return indicates that MC is only so useful when the surface reflects most of the incoming light (e.g., a mirror). Indeed, when β is small enough (β → 0) and we deal with a highly non-Lambertian surface, a small number of MC samples are enough to estimate direct illumination, which in turn implies faster render speed and low memory cost. On the other end of the spectrum (β → 1), significantly more samples (e.g., N > 1000) are needed to accurately integrate incident light, resulting in longer inference times. In such a case, SGs should be favored since they offer a significant improvement. In the absence of any prior knowledge on the material type, SG shading is preferred. This is reflected in our experiments in Sec. 5-6. Optimization. We perform a sanity check on our renderer in Fig. 3 by optimizing for lighting and reflectance properties from a multi-view image L1-loss with fixed geometry. 
We show the ground-truth (GT) parameters and rendered images in the first row, along with initial parameters (Init.) in the second row. Here, we optimize parameters separately using gradient-descent while the others are kept fixed to validate each component of our shading model. DIB-R++ can successfully estimate material and lighting parameters, including the environmental lighting, surface roughness and specular albedo of the object. We find that the converged material parameters closely match GT, while the optimized environment map loses some details due to gradients coming entirely from surface reflections (foreground supervision only). In particular, surface highlights are well captured by our technique. For more optimization results, please refer to the supplementary (Sec. C). 4 Application: Single Image 3D Reconstruction We demonstrate the effectiveness of our hybrid framework through the learning-based problem of single image 3D reconstruction without supervision. While previous works [10, 62] generally focus on diffuse illumination only, our goal is to jointly infer geometry, reflectance, and lighting from a single image Ĩ containing strong specular transport. To this end, we employ a convolutional neural network F , parameterized by learnable weights ϑ, to predict 3D attributes of a meshM with pre-determined topology (sphere in our case). We adopt the U-Net [47] architecture of the original DIB-R [10, 62] and modify its output to also predict the appropriate BRDF attributes θ and light parameters γ (pixel colors or SG coefficients) so that F (Ĩ;ϑ) = (π,θ,γ). We then render these parameters back to an image I using our differentiable renderer and apply a loss L on the RGB output to compare the input image Ĩ and the rendered image I , where: L(ϑ) = αimLim(Ĩ , I) + αmskLmsk(Ṽ , V ) + αperLper(Ĩ , I) + αlapLlap(π). (7) Similar to DIB-R [10], we combine multiple consistency losses with regularization terms: Lim is an image loss computing the L1-distance between the rendered image and the input image, Lmsk is an Intersection-over-Union (IoU) loss of the rendered silhouette V and the input mask Ṽ of the object [25], Lper is a perceptual loss [21, 61] computing the L1-distance between the pre-trained AlexNet [26] feature maps of rendered image and input image, and Llap is a Laplacian loss [36, 25] to penalize the change in relative positions of neighboring vertices. We set αim = 20, αmsk = 5, αper = 0.5, αlap = 5, which we empirically found worked best. 5 Evaluation on Synthetic Datasets We conduct extensive experiments to evaluate the performance of DIB-R++. We first quantitatively evaluate on synthetic data where we have access to ground-truth geometry, material and lighting. Since MC ad SG shading have individual pros and cons, we validate them under different settings. In particular, we generate separate datasets with two different surface materials: purely metallic surfaces with no roughness, and glossy surfaces with random positive roughness. We compare the performance of both shading models against the baseline method [10]. Synthetic Datasets. We chose 485 different car models from TurboSquid2 to prepare data for metallic and glossy surfaces. We also collected 438 freely available high-dynamic range (HDR) environment maps from HDRI Haven3 to use as reference lighting, which contain a wide variety of illumination configurations for both indoor and outdoor scenes. To render all 3D models, we use Blender’s Cycles4 path tracer with the Principled BRDF model [9]. 
5 Evaluation on Synthetic Datasets

We conduct extensive experiments to evaluate the performance of DIB-R++. We first quantitatively evaluate on synthetic data where we have access to ground-truth geometry, material and lighting. Since MC and SG shading have individual pros and cons, we validate them under different settings. In particular, we generate separate datasets with two different surface materials: purely metallic surfaces with no roughness, and glossy surfaces with random positive roughness. We compare the performance of both shading models against the baseline method [10].

Synthetic Datasets. We chose 485 different car models from TurboSquid2 to prepare data for metallic and glossy surfaces. We also collected 438 freely available high-dynamic range (HDR) environment maps from HDRI Haven3 to use as reference lighting, which contain a wide variety of illumination configurations for both indoor and outdoor scenes. To render all 3D models, we use Blender's Cycles4 path tracer with the Principled BRDF model [9]. We create two datasets, Metallic-Surfaces and Glossy-Surfaces. For metallic surfaces, we set β = 0 and m = 1. Conversely, we set m = 0, s = 1 and randomly pick β ∈ [0, 0.4] to generate images for glossy surfaces.

Baseline. We compare our method with the rasterization-based baseline DIB-R [10], which supports spherical harmonics (SH) lighting. While the original lighting implementation in [10] is monochromatic, we extend it to RGB for a fairer comparison. For quantitative evaluation, we first report the common L1 pixel loss between the re-rendered image using our predictions and the ground-truth (GT) image ($L = \|\tilde{I} - I\|_1$), and the 2D IoU loss between rendered silhouettes and ground-truth masks ($L = 1 - \frac{\tilde{V} \odot V}{\tilde{V} + V - \tilde{V} \odot V}$). We experimentally find that these numbers are very close across different methods. Thus, we further evaluate the quality of diffuse albedo and lighting predictions using normalized cross correlation (NCC, $L = 1 - \frac{\sum \tilde{\gamma}\, \gamma_{\mathrm{pred}}}{\|\tilde{\gamma}\|_2 \|\gamma_{\mathrm{pred}}\|_2}$, where $\gamma_{\mathrm{pred}}$ is the predicted albedo and light while $\tilde{\gamma}$ is GT). We provide more details of these metrics in the supplementary material (Sec. E).

5.1 Metallic Surfaces

Experimental Settings. We first apply all methods to the metallic car dataset. Since this surface property is known a priori, we relax the task for MC shading by setting β = 0 and only predict geometry, diffuse albedo and lighting from the input image. This allows us to render MC at a low sample count (N = 4), achieving higher rendering speed and a lower memory cost. In particular, we predict the relative offset for all |M| = 642 vertices in a mesh and a 256 × 256 texture map, following the choices in [10]. We also predict a 256 × 256 RGB environment map. For SG shading, we predict all parameters. While shape and texture are the same as for MC shading, we adopt K = 32 for SG and predict two global parameters β and s for the specular BRDF. This keeps the number of parameters relatively low while providing enough flexibility to capture different radiometric configurations.

2 https://turbosquid.com. We obtain consent via agreement with TurboSquid, following their license at https://blog.turbosquid.com/turbosquid-3d-model-license.
3 https://hdrihaven.com. We follow the CC0 license at https://hdrihaven.com/p/license.php.
4 https://blender.org

Experimental Results. Qualitative and quantitative results are shown in Fig. 4 and Table 1 (Left), respectively. Since the main loss function comes from the difference between GT and re-rendered images, we find that the re-rendered images (with light) from the predictions are all close to the GT image for the different methods, and quantitatively, the image loss and 2D IoU loss are also similar across different models. However, we observe significant differences in the predicted albedo and lighting. Specifically, in Fig. 4, MC shading successfully predicts cleaner diffuse albedo maps and more accurate lighting, while Chen et al. [10] “bake” high specular effects into the texture. Quantitatively, we outperform [10] with a 3× improvement in terms of NCC loss for lighting, demonstrating the effectiveness of our DIB-R++. We further compare MC shading with SG shading. While SG shading achieves reasonable lighting predictions, it fails to reconstruct the high frequency details in the lighting and shows circular spot artifacts caused by the isotropic SGs. Finally, we note that due to the ambiguity of the learning task, the overall intensities of all predicted texture maps can vary significantly. Still, we observe that MC can better recover fine details, such as the wheels' rims.
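For reference, the small NumPy sketch below shows one way to compute the NCC-based loss used in the evaluation protocol above to compare predicted and ground-truth albedo or lighting maps; the function name and the toy inputs are illustrative assumptions only.

```python
import numpy as np

def ncc_loss(pred, gt, eps=1e-8):
    """Normalized cross-correlation loss on flattened arrays:
    L = 1 - <gt, pred> / (||gt||_2 * ||pred||_2)."""
    p, g = pred.ravel(), gt.ravel()
    return 1.0 - float(np.dot(g, p) / (np.linalg.norm(g) * np.linalg.norm(p) + eps))

# Toy example: identical maps give a loss near 0, mismatched maps a larger one.
envmap_gt = np.random.rand(16, 32, 3)
print(ncc_loss(envmap_gt, envmap_gt))                 # ~0.0
print(ncc_loss(np.random.rand(16, 32, 3), envmap_gt)) # > 0
```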
5.2 Glossy Surfaces

Experimental Settings. We further apply our model to synthetic images rendered with positive roughness (glossy). To apply MC shading in such a case, we assume no prior knowledge of the material and use a high sample count (N = 1024) to account for possibly low roughness images in the dataset. Due to the high rendering time and memory cost, we subsample 4% of the pixels and apply Lim to those pixels only in each training iteration. As such, we do not use the perceptual loss Lper as it relies on the whole image. We predict a 32 × 16 environment map for lighting and predict global β and m for the specular BRDF. For SG shading, we use the same settings as for the metallic surfaces.

Experimental Results. We apply both MC and SG shading and compare with [10]. Results are shown in Fig. 5 and Table 1 (Right). Qualitatively, SG shading has better lighting predictions with correct high-luminance regions. The specular highlights in the images successfully guide SG shading, while the bright reflections on the car window and front cover are fused with the texture map in [10]. MC also has reasonable lighting predictions, but the predicted light map lacks structure due to weak surface reflections. Without a perceptual loss term, the predicted textures also tend to be blurrier. Quantitatively, SG shading significantly outperforms [10] on lighting prediction in terms of NCC (0.078 vs. 0.127) and improves on texture prediction in terms of BRDF/lighting disentanglement. We also compare with MC shading, where MC achieves slightly worse results on lighting predictions compared to SG, but is still much better than [10].

5.3 Discussion

As shown in our previous two experiments, Monte Carlo shading works best under a metallic assumption (β = 0), in which case the rendered images can have rich details at a low sample count (N ≤ 4). However, when the surface is more Lambertian (i.e., as β grows larger), we have to compensate with a larger N to produce noise-free renderings, which impacts learning both time- and memory-wise. As a consequence, we recommend applying MC shading to metallic surfaces only, and default to SGs otherwise. Our spherical Gaussian shading pipeline provides an analytic formulation for estimating the rendering equation, which avoids the need to trace ray samples and greatly accelerates the rendering process. While SGs can be blurry on metallic surfaces, in most cases (e.g., when β ≥ 0.2) they can model similar rendering effects at a fraction of the cost, achieving better results than MC shading and [10]. After comparing the predicted surface material properties (β, s, m) and diffuse albedo with the ground-truth parameters in Blender, we find that the predicted materials correlate only weakly with the GT and that the intensities of the diffuse albedo may change. As for SG, we use only 32 basis elements to simulate a complex, high definition environment map (2K). Since SGs can only represent a finite amount of detail, we find that the predicted global β tends to be too small. One hypothesis for this is that the optimizer artificially prefers more reflections (and thus lower roughness) to be able to estimate at least some portions of the environment map. On the other hand, in MC, due to the absence of a perceptual loss, the predicted texture is too blurry and cannot represent the GT in high detail. We find that the predicted β and m do not correlate strongly with the GT material.
Lastly, we note that the predicted texture map has to change its overall intensity to accommodate the other parameters so that the re-rendered images remain correct, which leads to some differences with GT. More analysis can be found in the supplementary material (Sec. C and F). In summary, when we have no prior knowledge about the material, our re-rendered images can be very close to the input images, but the predicted material parameters are not always aligned with the GT materials. We believe this problem can be alleviated by incorporating additional local constraints, e.g., part-based material priors, or by leveraging anisotropic SGs [56]. For instance, a car body is metallic while its wheels are typically diffuse; predicting different parameters for each region has the potential of improving disentanglement and interpretability. We leave this as future work.

6 Evaluation on a Real-world Dataset

We further qualitatively evaluate our method by training on StyleGAN-generated data and testing on real imagery, following the pipeline of [62] (Sec. 6.1), and use our predictions to perform artistic manipulations (Sec. 6.2).

Figure 6: Results on real imagery from the StyleGAN-generated dataset (cars and white female faces). Our method can recover a meaningful decomposition as opposed to [62], as shown by cleaner texture maps and directional highlights (e.g., car windshield). Even when using monochromatic lighting on faces, our method can correctly predict the specular highlights on the forehead and none in the hair, while SH produces dark artifacts.

6.1 Realistic Imagery

Experimental Settings. Our DIB-R++ can also be applied to learn 3D properties from realistic imagery. Following [62], we use StyleGAN [24] to generate the multi-view images of cars and faces needed to train our model. The generated dataset contains cars under various lighting conditions, ranging from highly specular paint to nearly diffuse. Thus, we only apply SG shading and adopt the same setting as before, where we predict |M| = 642 vertex movements, a 256 × 256 diffuse texture map, K = 32 SG bases and two global parameters β and s. We also compare against [62] as the baseline on the same dataset, using the same training procedure.

Experimental Results on StyleGAN Dataset. In the absence of ground-truth on the StyleGAN-generated data, we qualitatively evaluate our results and compare with [62] in Fig. 6. Our DIB-R++ reconstructs more faithful material and lighting components, producing an interpretable decomposition. Specifically, our model can represent the dominant light direction more accurately, while naive shading tends to merge reflectance with lighting. We also provide a face example with monochromatic lighting to reduce the degrees of freedom of the SH representation, yet it still cannot correctly model the light.

Extension to Real Imagery from LSUN Dataset. Our model is trained on synthetic data [62] generated by StyleGAN [24]. Thanks to this powerful generative model, the distribution of GAN images is similar to the distribution of real images, allowing our model to generalize well. We show reconstruction results on real images from LSUN [57] in Fig. 7. We provide more results in the supplementary material (Sec. E) and additional turntable videos on our project webpage. During inference, we do not need any camera pose and predict shapes in a canonical view. However, camera poses are needed to re-render the shape. Since ground-truth camera poses are not available for real images, we manually adjust the camera poses in Fig. 7. As a result, the re-rendered images are slightly misaligned with GT. Nevertheless, DIB-R++ still accounts for specularities and predicts correct predominant lighting directions and clean textures.
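To give a sense of the quantities predicted in this setting, the PyTorch sketch below shows one hypothetical way a network head could split a latent code into vertex offsets, a diffuse texture, K = 32 SG lighting lobes and the two global BRDF scalars. This is an illustrative assumption only and not the actual DIB-R++ architecture, which uses a U-Net decoder; the texture resolution is reduced here to keep the toy head small.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGPredictionHead(nn.Module):
    """Hypothetical head splitting a latent code into shape, texture, SG lighting
    and global BRDF parameters (output sizes loosely follow Sec. 6.1)."""
    def __init__(self, latent_dim=512, n_verts=642, tex_res=64, K=32):
        super().__init__()
        self.n_verts, self.tex_res, self.K = n_verts, tex_res, K
        self.fc_verts = nn.Linear(latent_dim, n_verts * 3)          # per-vertex offsets
        self.fc_tex = nn.Linear(latent_dim, tex_res * tex_res * 3)  # toy texture head (a conv decoder in practice)
        self.fc_sg = nn.Linear(latent_dim, K * 7)                   # (axis, sharpness, amplitude) per lobe
        self.fc_brdf = nn.Linear(latent_dim, 2)                     # global (beta, s)

    def forward(self, z):
        verts = self.fc_verts(z).view(-1, self.n_verts, 3)
        tex = torch.sigmoid(self.fc_tex(z)).view(-1, 3, self.tex_res, self.tex_res)
        sg = self.fc_sg(z).view(-1, self.K, 7)
        xi = F.normalize(sg[..., :3], dim=-1)                       # unit lobe axes
        lam = F.softplus(sg[..., 3])                                 # positive sharpness
        mu = F.softplus(sg[..., 4:])                                 # positive RGB amplitude
        beta, s = torch.sigmoid(self.fc_brdf(z)).unbind(-1)          # roughness, specular albedo in [0, 1]
        return verts, tex, (xi, lam, mu), beta, s

# Toy usage on a random latent code.
head = SGPredictionHead()
outputs = head(torch.randn(1, 512))
```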
6.2 Material Editing and Relighting

Finally, we demonstrate some applications of DIB-R++ to artistic manipulation in Fig. 8. On the left, we show examples of editing the diffuse albedo, where we can insert text, decals or modify the base tint. Since our textures are not contaminated by lighting, clean texture maps can easily be edited by hand and the re-rendered images look natural. On the right, we show examples of editing lighting and surface materials, where we rotate the light (top) or increase glossiness (bottom). These edits of the lighting orientation and of the object's glossiness preserve consistent shading, which is not feasible with a naive, Lambertian-only shading model.
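To illustrate how such edits can operate directly on the recovered SG lighting and BRDF parameters, the NumPy sketch below rotates all SG lobe axes about the up axis (relighting) and rescales the global roughness (glossiness editing) before re-rendering; the helper names and the chosen rotation are illustrative assumptions, not part of the released code.

```python
import numpy as np

def rotate_sg_axes(xi, angle_deg, axis=(0.0, 1.0, 0.0)):
    """Rotate all SG lobe axes about a given axis (Rodrigues' formula) to relight the scene.
    xi: (K, 3) unit lobe axes."""
    a = np.deg2rad(angle_deg)
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    cos_a, sin_a = np.cos(a), np.sin(a)
    cross = np.cross(k, xi)                       # k x xi for each lobe
    dot = xi @ k                                  # (K,) dot products k . xi
    return xi * cos_a + cross * sin_a + np.outer(dot, k) * (1.0 - cos_a)

def edit_glossiness(beta, scale=0.5):
    """Scale the global roughness while keeping it in [0, 1]; scale < 1 increases glossiness."""
    return float(np.clip(beta * scale, 0.0, 1.0))

# Toy usage: relight by 45 degrees about the up axis and sharpen the specular response.
K = 32
xi = np.random.randn(K, 3)
xi /= np.linalg.norm(xi, axis=-1, keepdims=True)
xi_rotated = rotate_sg_axes(xi, 45.0)
beta_edited = edit_glossiness(0.3)
```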
7 Conclusion

We presented DIB-R++, a hybrid differentiable renderer that can effectively disentangle material and lighting. When embedded in a learning framework for single image 3D reconstruction, our method produces state-of-the-art results, and enables applications such as material editing and relighting. One limitation of our method is that the predicted base color may sometimes “bleed” into the lighting predictions. Combining our technique with segmentation methods like DatasetGAN [63] could alleviate this issue for a more practical, artist-friendly disentanglement. Moreover, the predicted reflections are sometimes blurrier than the ground-truth; this is mainly due to the limited number of SG components used for lighting and could potentially be improved with a larger mixture. Finally, on some occasions, the diffuse albedo in our synthetic dataset has baked-in reflections (e.g., the GT red car in Fig. 5) instead of a uniform base color, which obfuscates the learning process. This could be mitigated by using more advanced physics-based materials such as those modeling clear coats.

Broader Impact. Our work focuses on disentangling geometry from appearance using a differentiable renderer, a relatively nascent research area. We show that augmenting a rasterization-based renderer with physics-based shading models improves reconstruction and allows for easier integration within larger machine learning pipelines. DIB-R++ relies on simple topology and strong data prior assumptions to produce useful decompositions; therefore it cannot generalize to the complexity and multi-modality of real-world scenes in its current form. Nonetheless, we believe that our work takes an important step in the joint estimation of shape, material and environmental lighting from a single image and we hope that it can advance applications in performance-oriented settings such as AR/VR, simulation technology and robotics. For instance, autonomous vehicles need to correctly assess their surroundings from limited signals; directly modeling light-surface interactions (e.g., specular highlights) may provide important cues to this end. Like any ML model, DIB-R++ is prone to biases imparted through its training data, which requires an abundance of caution when it is applied to sensitive applications. For example, it needs to be carefully inspected when used to recover the 3D parameters of human faces and bodies, as it is not tailored for them. It is not recommended in off-the-shelf settings where privacy or erroneous recognition can lead to potential misuse or harmful applications. For real deployment, one would need to carefully inspect and de-bias the dataset so that it reflects the target distribution across a wide range of possible lighting conditions, skin tones and demographic groups, including at the intersection of race and gender.

Disclosure of Funding. This work was funded by NVIDIA. Wenzheng Chen, Jun Gao and Zian Wang acknowledge additional indirect revenue in the form of student scholarships from University of Toronto and the Vector Institute. Joey Litalien acknowledges indirect funding from McGill University and the Natural Sciences and Engineering Research Council of Canada (NSERC).
1. What is the main contribution of the paper, and how does it build upon existing work in the field?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its ability to handle specular materials and occlusions?
3. How does the reviewer assess the novelty and incremental nature of the proposed approach compared to prior works?
4. Are there any concerns or suggestions regarding the choice of differentiable rasterizers and the use of ground truth material and lighting in training?
5. How does the reviewer evaluate the effectiveness and limitations of the proposed method in handling ambiguity in inverse rendering, especially for glossy surfaces?
6. What are some missing comparisons and citations that the reviewer suggests should be included in the paper?
7. How does the reviewer assess the quality and quantity of results provided in the paper, particularly regarding the inclusion of ground truth relighting images and videos showing renderings under different lighting conditions and viewpoints?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a method to reconstruct geometry, material and lighting from a single image. They apply differentiable rasterizers to calculate the rendering loss, which is back-propagated to supervise the training of the desired components. They compare different lighting representations, including HDR envmaps as well as spherical Gaussians.

Review
My major concern over the paper is its technical contribution. The paper builds on an existing rasterization-based inverse-rendering framework, DIB-R. The difference is that DIB-R assumes a Lambertian model, while the proposed method handles specular materials. However, the overall framework is almost the same. Jointly optimizing lighting, materials, and geometry is also common; see, for example, [14]. The paper says [14] does not generalize to real images; however, [14] does show results on real images, and the paper does not compare to it. It is also not clear to me why the proposed method would generalize, since it is also trained on a synthetic dataset. To me, the major contribution of the paper seems to be that it adds lighting and material to the inverse-rendering framework of DIB-R, which I think is incremental. The paper has a lot of text on the shading model and lighting representation. While I appreciate the effort, I have to say that both representations are pretty standard in computer graphics and such discussions do not increase the technical contribution. In terms of differentiable rasterizers, I think [8] has better performance than [7] in terms of handling occlusions and efficiency; why not use [8]? The loss function only consists of losses on images and vertices. Since the model is trained on a synthetic dataset, why not also render ground truth material and lighting and use them as direct supervision? Involving this GT in training would better resolve the ambiguity in inverse rendering. The inverse-rendering problem itself is highly ambiguous. In the case that the object is purely Lambertian, there is no way to infer the lighting accurately, and this problem exists for both SGs and envmaps. Therefore, the proposed method is mostly limited to glossy surfaces. In the results section, the paper provides no comparison against previous single-image lighting and SVBRDF estimation methods, for example [14]. For lighting estimation, the following paper should be cited and compared against: Deep Parametric Indoor Lighting Estimation, ICCV 2019. In all results on synthetic data, the corresponding ground-truth relighting images should be included as references. Also, there should be videos showing renderings under different lighting conditions and viewpoints with a comparison to ground truth. Overall, I think the technical contribution of the proposed method is not enough for the paper to be accepted. Most of the components are from previous works and the rasterizer-based optimization framework itself is also standard. Therefore, I vote for rejection.
NIPS
Title DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer

Abstract We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking advantage of their respective strengths—speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIB-R++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting.

1 Introduction

Inferring intrinsic 3D properties from 2D images is a long-standing goal of computer vision [3]. In recent years, differentiable rendering has shown great promise in estimating shape, reflectance and illumination from real photographs. Differentiable renderers have become natural candidates for learning-based inverse rendering applications, where image synthesis algorithms and neural networks can be jointly optimized to model physical aspects of objects from posed images, either by leveraging strong data priors or by directly modeling the interactions between light and surfaces. Not all differentiable renderers are made equal. On the one hand, recent physics-based differentiable rendering techniques [31, 45, 2, 44, 59] try to model the full light transport with proper visibility gradients, but they tend to require careful initialization of scene parameters and typically exhibit high computational cost which limits their usage in larger end-to-end learning pipelines. On the other hand, performance-oriented differentiable renderers [10, 27, 36, 25] trade physical accuracy for scalability and speed by approximating scene elements through neural representations or by employing simpler shading models. While the latter line of work has proven to be successful in 3D scene reconstruction, the frequent assumptions of Lambertian-only surfaces and low-frequency lighting prevent these works from modeling more complex specular transport commonly observed in the real world. In this work, we consider the problem of single-view 3D object reconstruction without any 3D supervision. To this end, we propose DIB-R++, a hybrid differentiable renderer that combines rasterization and ray-tracing through an efficient deferred rendering framework.

∗Work done during an internship at NVIDIA.
1 Project page: https://nv-tlabs.github.io/DIBRPlus.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Our framework builds on top of DIB-R [10] and integrates physics-based lighting and material models to capture challenging non-Lambertian reflectance under unknown poses and illumination. Our method is versatile and supports both single-bounce ray-tracing and a spherical Gaussian representation for a compact approximation of direct illumination, allowing us to adapt and tune the shading model based on the radiometric complexity of the scene. We validate our technique on both synthetic and real images and demonstrate superior performance on reconstructing realistic material BRDFs and lighting configurations over prior rasterization-based methods. We then follow the setting proposed in Zhang et al. [62] to show that DIB-R++ can also reconstruct scene intrinsics from real images without any 3D supervision. We further apply our framework to single-image appearance manipulation such as material editing and scene relighting.

2 Related Work

Differentiable Rendering. Research on differentiable rendering can be divided into two categories: physics-based methods focusing on photorealistic image quality, and approximation methods aiming at higher performance. The former differentiates the forward light transport simulation [31, 45, 2, 44, 59] with careful handling of geometric discontinuities. While capable of supporting global illumination, these techniques tend to be relatively slow to optimize or require a detailed initial description of the input in terms of geometry, materials, lighting and camera, which prevents their deployment in the wild. The latter line of work leverages simpler local shading models. Along this axis, rasterization-based differentiable renderers [10, 27, 36, 25, 16] approximate gradients by generating derivatives from projected pixels to 3D parameters. These methods are restricted to primary visibility and ignore indirect lighting effects by construction, but their simplicity and efficiency offer an attractive trade-off for 3D reconstruction. We follow this line of work and build atop DIB-R [10, 22] by augmenting its shading models with physics-based ones.

Learning-based Inverse Graphics. Recent research on inverse graphics targets the ill-posed problem of jointly estimating geometry, reflectance and illumination from image observations using neural networks. For single image inverse rendering, one dominant approach is to employ 2D CNNs to learn data-driven features and use synthetic data as supervision [42, 49, 35, 33, 52], but these methods do not always generalize to complex real-world images [4]. To overcome the data issue, a recent body of work investigates the use of self-supervised learning to recover scene intrinsics [1], including domain adaptation from a synthetic reflectance dataset [37], object symmetry [54, 53], or multi-illumination images depicting the same scene [30, 32, 39, 58]. However, these methods either rely on specific priors or require data sources that are tedious to capture in practice. Some works tackle the subtask of lighting estimation only [18, 17, 20], but still need to carefully utilize training data that are hard to capture. Most similar to us, DIB-R [10] tackles unsupervised inverse rendering in the context of differentiable rendering. Zhang et al. [62] further combine DIB-R with StyleGAN-generated [24] images to extract and disentangle 3D knowledge. These works perform inverse rendering from real image collections without supervision, but may fail to capture complex material and lighting effects—in contrast, our method models these directly.
Several techniques also try to handle more photorealistic effects but typically require complex capturing settings, such as controllable lighting [28, 29], a co-located camera-flashlight setup [41, 13, 34, 5, 6, 8, 48, 38], and densely captured multi-view images [14, 55, 7, 60] with additional known lighting [19] or hand-crafted inductive labels [43]. In our work, we propose a hybrid differentiable renderer and learn to disentangle complex specular effects given a single image. Similar to the recent NeRD [7] and PhySG [60], which recover non-Lambertian reflectance and illumination with a spherical Gaussian (SG) basis [51], we also employ SGs to model the SV-BRDF and incident lighting, but apply this representation to mesh-based differentiable rendering with direct access to the surface.

3 Differentiable Deferred Rendering

In this section, we introduce DIB-R++, our differentiable rendering framework based on deferred shading [12]. DIB-R++ is a hybrid differentiable renderer that can efficiently approximate direct illumination and synthesize high-quality images. Concretely, our renderer leverages the differentiable rasterization framework of DIB-R [10] to recover shape attributes and further employs physics-based material and lighting models to estimate appearance.

3.1 Overview

We provide an overview of our technique in Fig. 1. We first rasterize a 3D mesh to obtain diffuse albedo and material maps, surface normals, and a silhouette mask. This information is deferred to the shading pass, where outgoing radiance is either estimated stochastically or approximated using a spherical Gaussian basis. The rasterizer and shader are differentiable by design, allowing gradients to be propagated to lighting, material and shape parameters for downstream learning tasks.

3.2 Background

Our goal is to provide a differentiable formulation of the rendering process to enable fast inverse rendering from 2D images. Let $\mathcal{M}$ be a 3D object in a virtual scene. We start from the (non-emissive) rendering equation (RE) [23], which states that the outgoing radiance $L_o$ at any surface point $x \in \mathcal{M}$ in the camera direction $\omega_o$ is given by

$$L_o(x, \omega_o) = \int_{\mathcal{H}^2} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, |n \cdot \omega_i|\, d\omega_i, \qquad (1)$$

where $L_i$ is the incident radiance, $f_r$ is the (spatially-varying) bidirectional reflectance distribution function (SV-BRDF) and $n$ is the surface normal at $x$. The domain of integration is the unit hemisphere $\mathcal{H}^2$ of incoming light directions $\omega_i$. The BRDF characterizes the surface's response to illumination from different directions and is modulated by the cosine foreshortening term $|n \cdot \omega_i|$. Intuitively, Eq. (1) captures an energy balance and computes how much light is received and scattered at a shading point in a particular direction. Estimating the RE typically requires Monte Carlo (MC) integration [46], which involves tracing rays from the camera into the scene. Albeit physically correct, this process is computationally expensive and does not generally admit a closed-form solution. MC estimators can exhibit high variance and may produce noisy pixel gradients at low sample counts, which may significantly impact performance and convergence. To keep the problem tractable, we thus make several approximations of Eq. (1), which we detail in the next section.

3.3 Two-stage Deferred Rendering

We now describe our rendering framework (Fig. 1). We start by defining three families of parameters, where $\pi \in \mathbb{R}^{d_\pi}$ encodes the shape attributes (e.g., vertex positions), $\theta \in \mathbb{R}^{d_\theta}$ describes the material properties, and $\gamma \in \mathbb{R}^{d_\gamma}$ captures the illumination in the scene.
In what follows, we shall only consider a single pixel, indexed by $p$, within an RGB image $I \in \mathbb{R}^{3 \times h \times w}_{+}$ for notational simplicity.

Stage 1: Rasterization Pass. We first employ a differentiable rasterizer $R$ [10] to generate primary rays $\omega_o \in S^2$ from a camera and render our scene $\mathcal{M}$ into geometry buffers (commonly called G-buffers) containing the surface intersection point $x_p \in \mathbb{R}^3$, the surface normal $n_p \in S^2$, and the spatially-varying material parameters $\theta_p$ (e.g., diffuse albedo). This rendering pass also returns a visibility mask $v_p \in \{0, 1\}$ indicating whether pixel $p$ is occupied by the rendered object, separating the foreground object $I_f$ from its background environment $I_b$ so that $I = I_f + I_b$. We have:

$$R(\mathcal{M}, p, \omega_o) = (x_p, n_p, \theta_p, v_p). \qquad (2)$$

Stage 2: Shading Pass. Given surface properties and the outgoing direction $\omega_o$, we then approximate the outgoing radiance $L_o(x_p, \omega_o)$ through several key assumptions. First, we restrict ourselves to direct illumination only (i.e., single-bounce scattering) and assume that the incoming radiance is given by a distant environment map $L_i: S^2 \to \mathbb{R}^3_+$. Therefore, we do not model self-occlusion and $L_i(x_p, \omega_i) \equiv L_i(\omega_i; \gamma)$. Such a simplification largely reduces computation and memory costs and is trivially differentiable. Second, we assume that the material parameters $\theta$ can model both diffuse and specular view-dependent effects. At a high level, we define our shading model $S$ so that:

$$S(x_p, n_p, \omega_o; \theta_p, \gamma) \approx L_o(x_p, \omega_o). \qquad (3)$$

Importantly, a differentiable parameterization of $S$ enables the computation of pixel gradients with respect to all scene parameters $\Theta = (\pi, \theta, \gamma)$ by differentiating $I_p(\Theta) = (S \circ R)(\mathcal{M}, p, \omega_o)$. Given a scalar objective function defined on the rendered output $I$, $\partial I / \partial \pi$ is computed using DIB-R [10]. In what follows, we thus mainly focus on formulating $\partial I / \partial \{\theta, \gamma\}$ so that all gradients can be computed using the chain rule, allowing for joint optimization of geometry, material and lighting parameters. We assume henceforth a fixed pixel $p$ for conciseness, and remove the subscript.

3.4 Shading Models

Since our primary goal is to capture a wide range of appearances, we provide two simple techniques to approximate Eq. (1): Monte Carlo (MC) and spherical Gaussians (SG). The former targets more mirror-like objects and can better approximate higher frequencies in the integrand, but is more expensive to compute. The latter is more robust to roughness variations but is limited by the number of basis elements. To model reflectance, we choose to use a simplified version of the isotropic Disney BRDF [9, 15] based on the Cook–Torrance model [11], which includes diffuse albedo $a \in [0, 1]^3$, specular albedo $s \in [0, 1]$, surface roughness $\beta \in [0, 1]$ and metalness $m \in [0, 1]$. Metalness allows us to model both metals and plastics in a unified framework. We let the diffuse albedo vary spatially ($a = a(x)$) and globally define all other attributes to restrict the number of learnable parameters.

Monte Carlo Shading. Given a surface point $x \in \mathcal{M}$ to shade, we importance sample the BRDF to obtain $N$ light directions $\omega_i^k$ and compute the BRDF value. We represent the incident lighting $L_i^{(MC)}$ as a high-dynamic range image $\gamma \in \mathbb{R}^{3 \times h_l \times w_l}_{+}$ using an equirectangular projection, which can be queried for any direction via interpolation between nearby pixels. The final pixel color is then computed as the average over all samples, each weighted by the cosine foreshortening term and the reciprocal of its sampling probability $p(\omega_i^k)$:

$$S^{(MC)}(x, n, \omega_o; \theta, \gamma) = \frac{1}{N} \sum_{k=1}^{N} f_r(x, \omega_i^k, \omega_o; \theta)\, L_i^{(MC)}(\omega_i^k; \gamma)\, \frac{|n \cdot \omega_i^k|}{p(\omega_i^k)}. \qquad (4)$$
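As an illustration of the Monte Carlo estimator in Eq. (4), the short NumPy sketch below averages BRDF-times-radiance samples weighted by 1/p. For simplicity it uses cosine-weighted hemisphere sampling and a Lambertian BRDF as stand-ins for the full importance-sampled Disney BRDF; all function names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def sample_cosine_hemisphere(n, rng):
    """Cosine-weighted direction about normal n (a simple stand-in for BRDF importance sampling);
    returns the direction and its pdf p = cos(theta) / pi."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(max(0.0, 1.0 - u1))])
    # Build an orthonormal frame (t, b, n) around the normal.
    t = np.cross(n, [0.0, 1.0, 0.0]) if abs(n[1]) < 0.9 else np.cross(n, [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    omega_i = local[0] * t + local[1] * b + local[2] * n
    pdf = max(local[2], 1e-6) / np.pi
    return omega_i, pdf

def mc_shade(x, n, omega_o, brdf, envmap, N=64, seed=0):
    """Monte Carlo estimate of outgoing radiance (cf. Eq. 4):
    average of f_r * L_i * |n . w_i| / p over N sampled directions."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(3)
    for _ in range(N):
        omega_i, pdf = sample_cosine_hemisphere(n, rng)
        acc += brdf(x, omega_i, omega_o) * envmap(omega_i) * abs(np.dot(n, omega_i)) / pdf
    return acc / N

# Toy usage: Lambertian BRDF (albedo / pi) under a constant white environment.
albedo = np.array([0.8, 0.2, 0.2])
color = mc_shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
                 brdf=lambda x, wi, wo: albedo / np.pi,
                 envmap=lambda wi: np.ones(3))
print(color)  # ~ albedo, since the cosine-weighted estimator integrates a constant envmap exactly
```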
1. What is the focus and contribution of the paper regarding DIB-R++?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of shading model and computational complexity?
3. Do you have any concerns or suggestions regarding the presentation and demonstration of the method's effectiveness?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are some specific questions or points that the reviewer raises, such as the usefulness of improved results, the impact on predicted geometry, and the presentation of qualitative results?
Summary Of The Paper Review
Summary Of The Paper
DIB-R++ extends DIB-R with a physics-based shading model. The method predicts an environment map for lighting and uses a simplified version of the isotropic Disney BRDF model. The shading model is single-bounce and is estimated using either Monte Carlo methods or spherical basis functions. With these modifications, the authors demonstrate that they are able to decompose the lighting and materials of single objects (cars, faces) from a single image better than DIB-R.

Review
The paper is well written and well explained. The addition of more physically based shading to DIB-R seems like a logical and well motivated extension. The paper presents an interesting analysis of the pros and cons of using Monte Carlo versus spherical Gaussians. From a computational complexity standpoint, the spherical Gaussians approach makes a lot of sense, and the qualitative results confirm that the impact on quality is minimal. I have listed some suggestions/questions/concerns below; overall, I think this paper is marginally above the acceptance threshold.
a) It is noted in LN 251 that DIB-R can accurately re-render the image (albeit with incorrect albedo and lighting). From the quantitative and qualitative results it is fairly easy to conclude that DIB-R++ produces more physically plausible results; however, the usefulness of this improvement is never demonstrated to the reader. It would be informative to see the modifications in Fig 7 applied to DIB-R to hopefully demonstrate why this more complex method should be considered.
b) Topics like glossy/metallic and MC/SG are well analyzed in the paper, but other topics like geometry are barely discussed. It would be nice to know how the proposed modifications to DIB-R affect the predicted geometry. Additionally, it would be good to include metrics computed against the ground-truth geometry. Table 1 is also missing a number of details; specifically, what are these values actually measuring (mean squared error?)? Please specify. Also consider reporting other common image metrics such as PSNR, SSIM, and LPIPS.
c) The paper relies heavily on the qualitative results. The way they are currently presented makes it difficult to determine the effectiveness of the method. It would be beneficial to include examples of the ground-truth albedo in the same poses as the predicted albedo examples. Additionally, it would be good to include ground-truth examples of the scene re-rendered (with light).
Other: Why is approximating the cosine foreshortening with an SG useful? LN 291: can you provide these errors? I think it would be interesting for the reader to understand how far off these are.
NIPS
Title DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer Abstract We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers. Many previous learning-based approaches for inverse graphics adopt rasterization-based renderers and assume naive lighting and material models, which often fail to account for non-Lambertian, specular reflections commonly observed in the wild. In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these photorealistic effects by combining rasterization and ray-tracing, taking advantage of their respective strengths—speed and realism. Our renderer incorporates environmental lighting and spatially-varying material models to efficiently approximate light transport, either through direct estimation or via spherical basis functions. Compared to more advanced physics-based differentiable renderers leveraging path tracing, DIB-R++ is highly performant due to its compact and expressive shading model, which enables easy integration with learning frameworks for geometry, reflectance and lighting prediction from a single image without requiring any ground-truth. We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting. 1 Introduction Inferring intrinsic 3D properties from 2D images is a long-standing goal of computer vision [3]. In recent years, differentiable rendering has shown great promise in estimating shape, reflectance and illumination from real photographs. Differentiable renderers have become natural candidates for learning-based inverse rendering applications, where image synthesis algorithms and neural networks can be jointly optimized to model physical aspects of objects from posed images, either by leveraging strong data priors or by directly modeling the interactions between light and surfaces. Not all differentiable renderers are created equal. On the one hand, recent physics-based differentiable rendering techniques [31, 45, 2, 44, 59] try to model the full light transport with proper visibility gradients, but they tend to require careful initialization of scene parameters and typically exhibit high computational cost which limits their usage in larger end-to-end learning pipelines. On the other hand, performance-oriented differentiable renderers [10, 27, 36, 25] trade physical accuracy for scalability and speed by approximating scene elements through neural representations or by employing simpler shading models. While the latter line of work has proven to be successful in 3D scene reconstruction, ∗Work done during an internship at NVIDIA. 1Project page: https://nv-tlabs.github.io/DIBRPlus. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the frequent assumptions of Lambertian-only surfaces and low-frequency lighting prevent these works from modeling more complex specular transport commonly observed in the real world. In this work, we consider the problem of single-view 3D object reconstruction without any 3D supervision. To this end, we propose DIB-R++, a hybrid differentiable renderer that combines rasterization and ray-tracing through an efficient deferred rendering framework. 
Our framework builds on top of DIB-R [10] and integrates physics-based lighting and material models to capture challenging non-Lambertian reflectance under unknown poses and illumination. Our method is versatile and supports both single-bounce ray-tracing and a spherical Gaussian representation for a compact approximation of direct illumination, allowing us to adapt and tune the shading model based on the radiometric complexity of the scene. We validate our technique on both synthetic and real images and demonstrate superior performance on reconstructing realistic materials BRDFs and lighting configurations over prior rasterization-based methods. We then follow the setting proposed in Zhang et al. [62] to show that DIB-R++ can reconstruct scene intrinsics also from real images without any 3D supervision. We further apply our framework to single-image appearance manipulation such as material editing and scene relighting. 2 Related Work Differentiable Rendering. Research on differentiable rendering can be divided into two categories: physics-based methods focusing on photorealistic image quality, and approximation methods aiming at higher performance. The former differentiates the forward light transport simulation [31, 45, 2, 44, 59] with careful handling of geometric discontinuities. While capable of supporting global illumination, these techniques tend to be relatively slow to optimize or require a detailed initial description of the input in terms of geometry, materials, lighting and camera, which prevents their deployment in the wild. The latter line of works leverages simpler local shading models. Along this axis, rasterization-based differentiable renderers [10, 27, 36, 25, 16] approximate gradients by generating derivatives from projected pixels to 3D parameters. These methods are restricted to primary visibility and ignore indirect lighting effects by construction, but their simplicity and efficiency offer an attractive trade-off for 3D reconstruction. We follow this line of work and build atop DIB-R [10, 22] by augmenting its shading models with physics-based ones. Learning-based Inverse Graphics. Recent research on inverse graphics targets the ill-posed problem of jointly estimating geometry, reflectance and illumination from image observations using neural networks. For single image inverse rendering, one dominant approach is to employ 2D CNNs to learn data-driven features and use synthetic data as supervision [42, 49, 35, 33, 52], but these methods do not always generalize to complex real-world images [4]. To overcome the data issue, a recent body of work investigates the use of self-supervised learning to recover scene intrinsics [1], including domain adaptation from synthetic reflectance dataset [37], object symmetry [54, 53], or multi-illumination images depicting the same scene [30, 32, 39, 58]. However, these methods either rely on specific priors or require data sources tedious to capture in practice. Some works tackle the subtask of lighting estimation only [18, 17, 20], but still need to carefully utilize training data that are hard to capture. Most similar to us, DIB-R [10] tackles unsupervised inverse rendering in the context of differentiable rendering. Zhang et al. [62] further combines DIB-R with StyleGAN [24] generated images to extract and disentangle 3D knowledge. These works perform inverse rendering from real image collections without supervision, but may fail to capture complex material and lighting effects—in contrast, our method models these directly. 
Several techniques also try to handle more photorealistic effects but typically require complex capturing settings, such as controllable lighting [28, 29], a co-located camera-flashlight setup [41, 13, 34, 5, 6, 8, 48, 38], and densely captured multi-view images [14, 55, 7, 60] with additional known lighting [19] or hand-crafted inductive labels [43]. In our work, we propose a hybrid differentiable renderer and learn to disentangle complex specular effects given a single image. Similar to the recent NeRD [7] and PhySG [60] which recover non-Lambertian reflectance and illumination with a spherical Gaussian (SG) basis [51], we also employ SGs to model the SV-BRDF and incident lighting, but apply this representation to mesh-based differentiable rendering with direct access to the surface. 3 Differentiable Deferred Rendering In this section, we introduce DIB-R++, our differentiable rendering framework based on deferred shading [12]. DIB-R++ is a hybrid differentiable renderer that can efficiently approximate direct illumination and synthesize high-quality images. Concretely, our renderer leverages the differentiable rasterization framework of DIB-R [10] to recover shape attributes and further employs physics-based material and lighting models to estimate appearance. 3.1 Overview We provide an overview of our technique in Fig. 1. We first rasterize a 3D mesh to obtain diffuse albedo and material maps, surface normals, and a silhouette mask. This information is deferred to the shading pass, where outgoing radiance is either estimated stochastically or approximated using a spherical Gaussian basis. The rasterizer and shader are differentiable by design, allowing gradients to be propagated to lighting, material and shape parameters for downstream learning tasks. 3.2 Background Our goal is to provide a differentiable formulation of the rendering process to enable fast inverse rendering from 2D images. LetM be a 3D object in a virtual scene. We start from the (non-emissive) rendering equation (RE) [23], which states that the outgoing radiance Lo at any surface point x ∈M in the camera direction ωo is given by Lo(x,ωo) = ∫ H2 fr(x,ωi,ωo)Li(x,ωi)|n · ωi|dωi, (1) where Li is the incident radiance, fr is the (spatially-varying) bidirectional reflectance distribution function (SV-BRDF) and n is the surface normal at x. The domain of integration is the unit hemisphereH2 of incoming light directions ωi. The BRDF characterizes the surface’s response to illumination from different directions and is modulated by the cosine foreshortening term |n · ω|. Intuitively, Eq. (1) captures an energy balance and computes how much light is received and scattered at a shading point in a particular direction. Estimating the RE typically requires Monte Carlo (MC) integration [46], which involves tracing rays from the camera into the scene. Albeit physically correct, this process is computationally expensive and does not generally admit a closed-form solution. MC estimators can exhibit high variance and may produce noisy pixel gradients at low sample count, which may significantly impact performance and convergence. To keep the problem tractable, we thus make several approximations of Eq. (1), which we detail in the next section. 3.3 Two-stage Deferred Rendering We now describe our rendering framework (Fig. 1). We start by defining three families of parameters, where π ∈ Rdπ encodes the shape attributes (e.g., vertex positions), θ ∈ Rdθ describes the material properties, and γ ∈ Rdγ captures the illumination in the scene. 
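Before continuing with the two-stage pipeline, a short illustration of the Monte Carlo estimation of Eq. (1) mentioned above. This is a minimal NumPy sketch that estimates outgoing radiance at a single shading point with cosine-weighted hemisphere sampling; the Lambertian BRDF and the constant environment radiance are illustrative assumptions for the sketch only, not the paper's shading models (described in Sec. 3.4).

```python
import numpy as np

def sample_cosine_hemisphere(n, rng):
    """Cosine-weighted direction about normal n (pdf = cos(theta) / pi)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(max(1.0 - u1, 0.0))])
    # Build an orthonormal basis around n and rotate the local sample into it.
    t = np.cross(n, [0.0, 1.0, 0.0]) if abs(n[1]) < 0.9 else np.cross(n, [1.0, 0.0, 0.0])
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n

def estimate_Lo(x, n, wo, albedo, env_radiance, num_samples=64, seed=0):
    """Monte Carlo estimate of Eq. (1) for one point, assuming a Lambertian BRDF
    f_r = albedo / pi and a distant environment light env_radiance(wi) -> RGB.
    (wo is unused under the Lambertian assumption; it is kept to mirror Eq. (1).)"""
    rng = np.random.default_rng(seed)
    acc = np.zeros(3)
    for _ in range(num_samples):
        wi = sample_cosine_hemisphere(n, rng)
        cos_theta = max(np.dot(n, wi), 0.0)
        pdf = cos_theta / np.pi                    # cosine-weighted pdf
        if pdf <= 0.0:
            continue
        f_r = albedo / np.pi                       # Lambertian BRDF (assumption)
        acc += f_r * env_radiance(wi) * cos_theta / pdf
    return acc / num_samples

# Example: a grey point under a uniform white environment; analytically Lo = albedo * Li = 0.5.
Lo = estimate_Lo(x=np.zeros(3), n=np.array([0.0, 0.0, 1.0]), wo=np.array([0.0, 0.0, 1.0]),
                 albedo=np.array([0.5, 0.5, 0.5]), env_radiance=lambda wi: np.ones(3))
```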
In what follows, we shall only consider a single pixel, indexed by p, within an RGB image I ∈ R_+^{3×h×w} for notational simplicity. Stage 1: Rasterization Pass. We first employ a differentiable rasterizer R [10] to generate primary rays ω_o ∈ S^2 from a camera and render our scene M into geometry buffers (commonly called G-buffers) containing the surface intersection point x_p ∈ R^3, the surface normal n_p ∈ S^2, and the spatially-varying material parameters θ_p (e.g., diffuse albedo). This rendering pass also returns a visibility mask v_p ∈ {0, 1} indicating whether pixel p is occupied by the rendered object, separating the foreground object I_f from its background environment I_b so that I = I_f + I_b. We have: R(M, p, ω_o) = (x_p, n_p, θ_p, v_p). (2) Stage 2: Shading Pass. Given surface properties and outgoing direction ω_o, we then approximate the outgoing radiance L_o(x_p, ω_o) through several key assumptions. First, we restrict ourselves to direct illumination only (i.e., single-bounce scattering) and assume that the incoming radiance is given by a distant environment map L_i : S^2 → R_+^3. Therefore, we do not model self-occlusion and L_i(x_p, ω_i) ≡ L_i(ω_i; γ). Such a simplification largely reduces computation and memory costs and is trivially differentiable. Second, we assume that the material parameters θ can model both diffuse and specular view-dependent effects. At a high level, we define our shading model S so that: S(x_p, n_p, ω_o; θ_p, γ) ≈ L_o(x_p, ω_o). (3) Importantly, a differentiable parameterization of S enables the computation of pixel gradients with respect to all scene parameters Θ = (π, θ, γ) by differentiating I_p(Θ) = (S ∘ R)(M, p, ω_o). Given a scalar objective function defined on the rendered output I, ∂I/∂π is computed using DIB-R [10]. In what follows, we thus mainly focus on formulating ∂I/∂{θ, γ} so that all gradients can be computed using the chain rule, allowing for joint optimization of geometry, material and lighting parameters. We assume henceforth a fixed pixel p for conciseness, and remove the subscript. 3.4 Shading Models Since our primary goal is to capture a wide range of appearances, we provide two simple techniques to approximate Eq. (1): Monte Carlo (MC) and spherical Gaussians (SG). The former targets more mirror-like objects and can better approximate higher frequencies in the integrand, but is more expensive to compute. The latter is more robust to roughness variations but is limited by the number of basis elements. To model reflectance, we choose to use a simplified version of the isotropic Disney BRDF [9, 15] based on the Cook–Torrance model [11], which includes diffuse albedo a ∈ [0, 1]^3, specular albedo s ∈ [0, 1], surface roughness β ∈ [0, 1] and metalness m ∈ [0, 1]. Metalness allows us to model both metals and plastics in a unified framework. We let the diffuse albedo vary spatially (a = a(x)) and globally define all other attributes to restrict the number of learnable parameters. Monte Carlo Shading. Given a surface point x ∈ M to shade, we importance sample the BRDF to obtain N light directions ω_i^k and compute the BRDF value. We represent the incident lighting L_i^{(MC)} as a high-dynamic range image γ ∈ R_+^{3×h_l×w_l} using an equirectangular projection, which can be queried for any direction via interpolation between nearby pixels. The final pixel color is then computed as the average over all samples, divided by the probability of sampling ω_i^k: S^{(MC)}(x, n, ω_o; θ, γ) = (1/N) ∑_{k=1}^{N} f_r(x, ω_i^k, ω_o; θ) L_i^{(MC)}(ω_i^k; γ) |n · ω_i^k| / p(ω_i^k). 
(4) When the surface is near-specular (e.g., a mirror), one can efficiently estimate the RE as reflected rays are concentrated in bundles (e.g., to satisfy the law of reflection). However, this estimator can suffer from high variance for rougher surfaces; a higher number of samples may be necessary to produce usable gradients. While this can be partially improved with multiple importance sampling [50], emitter sampling would add a significant overhead due to the environment map being updated at every optimization step. This motivates the use of a more compact representation. Spherical Gaussian Shading. To further accelerate rendering while preserving expressivity in our shading model, we use a spherical Gaussian (SG) [51] representation. Projecting both the cosine-weighted BRDF and incident radiance into an SG basis allows for fast, analytic integration within our differentiable shader, at the cost of some high frequency features in the integrand. Concretely, an SG kernel has the form G(ω; ξ, λ, µ) = µ e^{λ(ξ·ω−1)}, where ω ∈ S^2 is the input spherical direction to evaluate, ξ ∈ S^2 is the axis, λ ∈ R_+ is the sharpness, and µ ∈ R_+^3 is the amplitude of the lobe. We represent our environment map using a mixture of K lighting SGs G_l, so that: L_i^{(SG)}(ω_i; γ) ≈ ∑_{k=1}^{K} G_l^k(ω_i; ξ_l^k, λ_l^k, µ_l^k), (5) where γ := {ξ_l^k, λ_l^k, µ_l^k}_k. For the BRDF, we follow Wang et al. [51] and fit a single, monochromatic SG to the specular lobe so that f_r^{(SG)} is a sum of diffuse and specular lobes. The full derivation can be found in our supplementary material (Sec. A). Finally, we approximate the cosine foreshortening term using a single SG, |n · ω_i| ≈ G_c(ω_i; n, 2.133, 1.17) [40]. Regrouping all terms, the final pixel color can be computed as: S^{(SG)}(x, n, ω_o; θ, γ) = ∫_{S^2} f_r^{(SG)}(x, ω_i, ω_o; θ) L_i^{(SG)}(ω_i; γ) G_c(ω_i) dω_i, (6) which has an analytic form that can be automatically differentiated inside our renderer. All parameters of the SGs, as well as the BRDF parameters, are learnable. Comparison. To visually compare our two shading techniques and understand their limitations, we render a unit sphere under the same lighting (represented differently) in Fig. 2. To do so, we first fit an HDR environment map with K = 128 SGs using an equirectangular projection. As shown on the left, SGs smooth out high frequency details and sharp corners but require far fewer parameters to reconstruct incident lighting (896 vs. 98 304). On the right, we visualize the effect of increasing the surface roughness β under the corresponding light representation. Intuitively, this point of diminishing returns indicates that MC is only so useful when the surface reflects most of the incoming light (e.g., a mirror). Indeed, when β is small enough (β → 0) and we deal with a highly non-Lambertian surface, a small number of MC samples is enough to estimate direct illumination, which in turn implies faster render speed and low memory cost. On the other end of the spectrum (β → 1), significantly more samples (e.g., N > 1000) are needed to accurately integrate incident light, resulting in longer inference times. In such a case, SGs should be favored since they offer a significant improvement. In the absence of any prior knowledge on the material type, SG shading is preferred. This is reflected in our experiments in Sec. 5-6. Optimization. We perform a sanity check on our renderer in Fig. 3 by optimizing for lighting and reflectance properties with a multi-view image L1 loss and fixed geometry. 
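As a complement to the SG formulation above, here is a small numerical sketch of the mixture in Eq. (5) and of the closed-form spherical integrals that make SG shading analytic. The specific lobe parameters are made up for illustration, and the snippet only integrates one light lobe against the SG cosine term rather than the full shading of Eq. (6); the cosine-lobe parameters (2.133, 1.17) are the ones quoted above.

```python
import numpy as np

def sg_eval(w, axis, sharpness, amplitude):
    """Evaluate one SG kernel G(w; xi, lambda, mu) = mu * exp(lambda * (xi . w - 1))."""
    return amplitude * np.exp(sharpness * (np.dot(axis, w) - 1.0))

def sg_integral(sharpness, amplitude):
    """Closed-form integral of one SG over the sphere: 2*pi*mu/lambda * (1 - exp(-2*lambda))."""
    return amplitude * 2.0 * np.pi / sharpness * (1.0 - np.exp(-2.0 * sharpness))

def sg_product(axis1, sharp1, amp1, axis2, sharp2, amp2):
    """The product of two SGs is another SG; this is what enables analytic shading."""
    v = sharp1 * axis1 + sharp2 * axis2
    sharp = np.linalg.norm(v)
    axis = v / sharp
    amp = amp1 * amp2 * np.exp(sharp - sharp1 - sharp2)
    return axis, sharp, amp

def env_radiance_sg(w, lobes):
    """Incident radiance L_i^{(SG)}(w) as a sum of K lighting lobes, Eq. (5)."""
    return sum(sg_eval(w, *lobe) for lobe in lobes)

# Illustrative (made-up) environment: one warm lobe overhead, one dim side lobe.
lobes = [
    (np.array([0.0, 0.0, 1.0]), 8.0, np.array([2.0, 1.8, 1.5])),
    (np.array([1.0, 0.0, 0.0]), 3.0, np.array([0.3, 0.3, 0.4])),
]
Li_up = env_radiance_sg(np.array([0.0, 0.0, 1.0]), lobes)   # radiance arriving from straight above

# Analytic building block of Eq. (6): integrate (first light lobe x SG cosine term) in closed form.
cos_axis, cos_sharp, cos_amp = np.array([0.0, 0.0, 1.0]), 2.133, 1.17   # |n . wi| ~ SG, n = +z here
p_axis, p_sharp, p_amp = sg_product(lobes[0][0], lobes[0][1], lobes[0][2][0],
                                    cos_axis, cos_sharp, cos_amp)
partial_irradiance_r = sg_integral(p_sharp, p_amp)   # red channel, first light lobe only
```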
We show the ground-truth (GT) parameters and rendered images in the first row, along with initial parameters (Init.) in the second row. Here, we optimize parameters separately using gradient-descent while the others are kept fixed to validate each component of our shading model. DIB-R++ can successfully estimate material and lighting parameters, including the environmental lighting, surface roughness and specular albedo of the object. We find that the converged material parameters closely match GT, while the optimized environment map loses some details due to gradients coming entirely from surface reflections (foreground supervision only). In particular, surface highlights are well captured by our technique. For more optimization results, please refer to the supplementary (Sec. C). 4 Application: Single Image 3D Reconstruction We demonstrate the effectiveness of our hybrid framework through the learning-based problem of single image 3D reconstruction without supervision. While previous works [10, 62] generally focus on diffuse illumination only, our goal is to jointly infer geometry, reflectance, and lighting from a single image Ĩ containing strong specular transport. To this end, we employ a convolutional neural network F , parameterized by learnable weights ϑ, to predict 3D attributes of a meshM with pre-determined topology (sphere in our case). We adopt the U-Net [47] architecture of the original DIB-R [10, 62] and modify its output to also predict the appropriate BRDF attributes θ and light parameters γ (pixel colors or SG coefficients) so that F (Ĩ;ϑ) = (π,θ,γ). We then render these parameters back to an image I using our differentiable renderer and apply a loss L on the RGB output to compare the input image Ĩ and the rendered image I , where: L(ϑ) = αimLim(Ĩ , I) + αmskLmsk(Ṽ , V ) + αperLper(Ĩ , I) + αlapLlap(π). (7) Similar to DIB-R [10], we combine multiple consistency losses with regularization terms: Lim is an image loss computing the L1-distance between the rendered image and the input image, Lmsk is an Intersection-over-Union (IoU) loss of the rendered silhouette V and the input mask Ṽ of the object [25], Lper is a perceptual loss [21, 61] computing the L1-distance between the pre-trained AlexNet [26] feature maps of rendered image and input image, and Llap is a Laplacian loss [36, 25] to penalize the change in relative positions of neighboring vertices. We set αim = 20, αmsk = 5, αper = 0.5, αlap = 5, which we empirically found worked best. 5 Evaluation on Synthetic Datasets We conduct extensive experiments to evaluate the performance of DIB-R++. We first quantitatively evaluate on synthetic data where we have access to ground-truth geometry, material and lighting. Since MC ad SG shading have individual pros and cons, we validate them under different settings. In particular, we generate separate datasets with two different surface materials: purely metallic surfaces with no roughness, and glossy surfaces with random positive roughness. We compare the performance of both shading models against the baseline method [10]. Synthetic Datasets. We chose 485 different car models from TurboSquid2 to prepare data for metallic and glossy surfaces. We also collected 438 freely available high-dynamic range (HDR) environment maps from HDRI Haven3 to use as reference lighting, which contain a wide variety of illumination configurations for both indoor and outdoor scenes. To render all 3D models, we use Blender’s Cycles4 path tracer with the Principled BRDF model [9]. 
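Looking back at the training objective in Eq. (7), a minimal PyTorch-style sketch of how the weighted terms could be combined is shown below. The individual loss terms are simplified stand-ins (a soft IoU for the mask term, a uniform-neighborhood Laplacian, and precomputed AlexNet features for the perceptual term), not the exact implementations used in the paper.

```python
import torch
import torch.nn.functional as F

ALPHA_IM, ALPHA_MSK, ALPHA_PER, ALPHA_LAP = 20.0, 5.0, 0.5, 5.0  # weights from Sec. 4

def iou_loss(pred_mask, gt_mask, eps=1e-6):
    """Soft IoU loss between rendered silhouette and input mask (simplified)."""
    inter = (pred_mask * gt_mask).sum()
    union = (pred_mask + gt_mask - pred_mask * gt_mask).sum()
    return 1.0 - inter / (union + eps)

def laplacian_loss(verts, neighbor_index):
    """Penalize deviation of each vertex from the mean of its neighbors (simplified:
    assumes a fixed number of neighbors per vertex, neighbor_index of shape (V, K))."""
    neighbor_mean = verts[neighbor_index].mean(dim=1)        # (V, 3)
    return (verts - neighbor_mean).pow(2).sum(dim=1).mean()

def total_loss(render, image, pred_mask, gt_mask, feat_render, feat_image, verts, neighbor_index):
    l_im = F.l1_loss(render, image)                          # image L1 term
    l_msk = iou_loss(pred_mask, gt_mask)                     # silhouette IoU term
    l_per = F.l1_loss(feat_render, feat_image)               # perceptual term on AlexNet features
    l_lap = laplacian_loss(verts, neighbor_index)            # mesh Laplacian regularizer
    return ALPHA_IM * l_im + ALPHA_MSK * l_msk + ALPHA_PER * l_per + ALPHA_LAP * l_lap
```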
We create two datasets, Metallic-Surfaces and Glossy-Surfaces. For metallic surfaces, we set β = 0 and m = 1. Conversely, we set m = 0, s = 1 and randomly pick β ∈ [0, 0.4] to generate images for glossy surfaces. Baseline. We compare our method with the rasterization-based baseline DIB-R [10], which supports spherical harmonics (SH) lighting. While the original lighting implementation in [10] is monochromatic, we extend it to RGB for a fairer comparison. For quantitative evaluation, we first report the common L1 pixel loss between the re-rendered image using our predictions and the ground-truth (GT) image (L = ‖Ĩ − I‖_1), and the 2D IoU loss between rendered silhouettes and ground-truth masks (L = 1 − ‖Ṽ ⊙ V‖_1 / ‖Ṽ + V − Ṽ ⊙ V‖_1). We experimentally find that these numbers are very close across different methods. Thus, we further evaluate the quality of diffuse albedo and lighting predictions using normalized cross correlation (NCC, L = 1 − (∑ γ̃ ⊙ γ_pred) / (‖γ̃‖_2 ‖γ_pred‖_2), where γ_pred is the predicted albedo and light while γ̃ is GT). We provide more details of these metrics in the supplementary material (Sec. E). 5.1 Metallic Surfaces Experimental Settings. We first apply all methods to the metallic car dataset. Since this surface property is known a priori, we relax the task for MC shading by setting β = 0 and only predict geometry, diffuse albedo and lighting from the input image. This allows us to render MC at a low sample count (N = 4), achieving higher rendering speed and a lower memory cost. In particular, we predict the relative offset for all |M| = 642 vertices in a mesh and a 256 × 256 texture map, following the choices in [10]. We also predict a 256 × 256 RGB environment map. For SG shading, we predict all parameters. While shape and texture are the same as MC shading, we adopt K = 32 for SG and predict two global parameters β and s for the specular BRDF. This keeps the number of parameters relatively low while providing enough flexibility to capture different radiometric configurations. 2https://turbosquid.com. We obtain consent via agreement with TurboSquid, following their license at https://blog.turbosquid.com/turbosquid-3d-model-license. 3https://hdrihaven.com. We follow the CC0 license at https://hdrihaven.com/p/license.php. 4https://blender.org Experimental Results. Qualitative and quantitative results are shown in Fig. 4 and Table 1 (Left), respectively. Since the main loss function comes from the difference between GT and re-rendered images, we find the re-rendered images (with light) from the predictions are all close to the GT image for different methods, and quantitatively, the image loss and 2D IoU loss are also similar across different models. However, we observe significant differences in the predicted albedo and lighting. Specifically, in Fig. 4, MC shading successfully predicts cleaner diffuse albedo maps and more accurate lighting, while Chen et al. [10] "bake" strong specular effects into the texture. Quantitatively, we outperform [10] with a 3× improvement in terms of NCC loss for lighting, demonstrating the effectiveness of our DIB-R++. We further compare MC shading with SG shading. While SG shading achieves reasonable lighting predictions, it fails to reconstruct the high frequency details in the lighting and has circular spot effects caused by the isotropic SGs. Finally, we note that due to the ambiguity of the learning task, the overall intensities of all predicted texture maps can largely vary. Still, we observe that MC can better recover fine details, such as the wheels' rims. 
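The evaluation metrics used in this section (L1 pixel loss, 2D IoU loss, and NCC) can be sketched compactly as follows. This is an illustrative NumPy version with assumed array inputs, not the authors' evaluation code.

```python
import numpy as np

def l1_image_loss(pred, gt):
    """Mean absolute pixel error between re-rendered and ground-truth images."""
    return np.abs(pred - gt).mean()

def iou_2d_loss(pred_mask, gt_mask, eps=1e-8):
    """1 - IoU between rendered silhouette and ground-truth mask (boolean arrays)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return 1.0 - inter / (union + eps)

def ncc_loss(pred, gt, eps=1e-8):
    """1 - normalized cross correlation, used to score albedo and lighting quality."""
    num = (pred * gt).sum()
    den = np.linalg.norm(pred) * np.linalg.norm(gt) + eps
    return 1.0 - num / den
```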
5.2 Glossy Surfaces Experimental Settings. We further apply our model to synthetic images rendered with positive roughness (glossy). To apply MC shading in such a case, we assume no prior knowledge for material and use a high sample count (N = 1024) to account for possibly low roughness images in the dataset. Due to high rendering time and memory cost, we subsample 4% of the pixels and apply Lim to those pixels only in each training iteration. As such, we do not use the perceptual loss Lper as it relies on the whole image. We predict a 32× 16 environment map for lighting and predict global β and m for the specular BRDF. For SG shading, we use the same settings as for the metallic surfaces. Experimental Results. We apply both MC and SG shading and compare with [10]. Results are shown in Fig. 5 and Table 1 (Right). Qualitatively, SG shading has better lighting predictions with correct high-luminance regions. The specular highlights in the images successfully guide SG shading, while the bright reflection on the car window and front cover are fused with texture map in [10]. MC also has reasonable lighting predictions, but the predicted light map lacks structure due to weak surface reflections. Without a perceptual loss term, the predicted textures also tend to be blurrier. Quantitatively, SG shading significantly outperforms [10] on lighting prediction in terms of NCC (0.078 vs. 0.127) and improves on texture prediction in terms of BRDF/lighting disentanglement. We also compare with MC shading, where MC achieves slightly worse results on lighting predictions compared to SG, but is still much better than [10]. 5.3 Discussion As shown in our previous two experiments, Monte Carlo shading works best under a metallic assumption (β = 0), in which case the rendered images can have rich details at low sample count (N ≤ 4). However, when the surface is more Lambertian (i.e., when β is becoming larger), we have to compensate with a larger N to produce noise-free renderings, which impacts learning both timeand memory-wise. As a consequence, we recommend applying MC shading to metallic surfaces only, and default to SGs otherwise. Our spherical Gaussian shading pipeline provides an analytic formulation for estimating the rendering equation, which avoids the need of tracing ray samples, largely accelerating the rendering process. While SGs can be blurry on metallic surfaces, in most case (e.g., when β ≥ 0.2) it can model similar rendering effects at a fraction of the cost, achieving better results than MC shading and [10]. After inspecting the predicted surface material properties (β, s, m) and diffuse albedo with the ground-truth parameters in Blender, we find the materials contain little correlation and the intensities of diffuse albedo might change. As for SG, we are using only 32 basis elements to simulate a complex, high definition environment map (2K). Since SGs can only represent a finite amount of details, we find the predicted global β tends to be too small. One hypothesis for this is that the optimizer artificially prefers more reflections (and thus lower roughness) to be able to estimate at least some portions of the environment map. On the other hand, in MC, due to the absence of a perceptual loss, the predicted texture is too blurry and cannot represent GT to a high detail. We find that the predicted β and m do not have strong correlation with the GT material. 
Lastly, we note that the predicted texture map has to change its overall intensity to accommodate the other parameters and ensure the re-rendered images are correct, which leads to some differences with GT. More analysis can be found in the supplementary material (Sec. C and F). In summary, when we have no prior knowledge about the material, our re-rendered images can be very close to the input images but the predicted material parameters are not always aligned with the GT materials. We believe this problem can be relieved by incorporating additional local constraints, e.g., part-based material priors, or by leveraging anisotropic SGs [56]. For instance, a car body is metallic while its wheels are typically diffuse; predicting different parameters for each region has the potential of improving disentanglement and interpretability. We leave this as future work. 6 Evaluation on a Real-world Dataset We further qualitatively evaluate our method via training on StyleGAN-generated data and testing on real imagery, following the pipeline in [62] in Sec. 6.1, and use our predictions to perform artistic manipulation in Sec. 6.2. Figure 6: Results on real imagery from the StyleGAN-generated dataset (cars and white female faces). Our method can recover a meaningful decomposition as opposed to [62], as shown by cleaner texture maps and directional highlights (e.g., car windshield). Even when using monochromatic lighting on faces, our method can correctly predict the specular highlights on the forehead and none in the hair, while SH produces dark artifacts. 6.1 Realistic Imagery Experimental Settings. Our DIB-R++ can also be applied to learn 3D properties from realistic imagery. Following [62], we use StyleGAN [24] to generate multi-view images of cars and faces, which is the data we need to train our model. The generated objects include cars under various lighting conditions, ranging from highly specular paint to nearly diffuse. Thus, we only apply SG shading and adopt the same setting, where we predict |M| = 642 vertex movements, a 256 × 256 diffuse texture map, K = 32 SG bases and two global parameters β and s. We also compare with [62] as the baseline on the same dataset by using the same training procedure. Experimental Results on StyleGAN Dataset. In the absence of ground-truth on the StyleGAN-generated data, we qualitatively evaluate our results and compare with [62] in Fig. 6. Our DIB-R++ reconstructs more faithful material and lighting components, producing an interpretable decomposition. Specifically, our model can represent the dominant light direction more accurately, while naive shading tends to merge reflectance with lighting. We also provide a face example with monochromatic lighting to reduce the degrees of freedom for the SH representation, yet it still cannot correctly model light. Extension to Real Imagery from LSUN Dataset. Our model is trained on synthetic data [62] generated by StyleGAN [24]. Thanks to this powerful generative model, the distribution of GAN images is similar to the distribution of real images, allowing our model to generalize well. We show reconstruction results on real images from LSUN [57] in Fig. 7. We provide more results in the supplementary (Sec. E), and we also provide additional turntable videos on our project webpage. During inference, we do not need any camera pose and predict shapes in canonical view. 
However, camera poses are needed to re-render the shape. Since ground-truth camera poses are not available for real images, we manually adjust the camera poses in Fig. 7. As a result, the re-rendered images are slightly misaligned with GT. However, DIB-R++ still accounts for specularities and predicts correct predominant lighting directions and clean textures. 6.2 Material Editing and Relighting Finally, we demonstrate some applications of DIB-R++ to artistic manipulation in Fig. 8. On the left, we show examples of editing the diffuse albedo, where we can insert text or decals, or modify the base tint. Since our textures are not contaminated by lighting, clean texture maps can be easily edited by hand and the re-rendered images look natural. On the right, we show examples of editing lighting and surface materials, where we rotate the light (top) or increase glossiness (bottom). We also showcase results where we change lighting orientation or modify the object's glossiness with consistent shading, which is not feasible with a naive, Lambertian-only shading model. 7 Conclusion We presented DIB-R++, a hybrid differentiable renderer that can effectively disentangle material and lighting. When embedded in a learning framework for single image 3D reconstruction, our method produces state-of-the-art results, and enables applications such as material editing and relighting. One limitation of our method is that the predicted base color may sometimes "bleed" into the lighting predictions. Combining our technique with segmentation methods like DatasetGAN [63] could alleviate this issue for a more practical, artist-friendly disentanglement. Moreover, the predicted reflections are sometimes blurrier than the ground truth; this is mainly due to a limited number of SG components for lighting and could potentially be improved with a larger mixture. Finally, on some occasions, the diffuse albedos in our synthetic dataset have baked-in reflections (e.g., GT red car in Fig. 5) instead of a uniform base color, which obfuscates the learning process. This could be mitigated by using more advanced physics-based materials such as those modeling clear coats. Broader Impact. Our work focuses on disentangling geometry from appearance using a differentiable renderer, a relatively nascent research area. We show that augmenting a rasterization-based renderer with physics-based shading models improves reconstruction and allows for easier integration within larger machine learning pipelines. DIB-R++ relies on simple topology and strong data prior assumptions to produce useful decompositions; therefore it cannot generalize to the complexity and multi-modality of real-world scenes in its current form. Nonetheless, we believe that our work takes an important step in the joint estimation of shape, material and environmental lighting from a single image and we hope that it can advance applications in performance-oriented settings such as AR/VR, simulation technology and robotics. For instance, autonomous vehicles need to correctly assess their surroundings from limited signals; directly modeling light-surface interactions (e.g., specular highlights) may provide important cues to this end. Like any ML model, DIB-R++ is prone to biases imparted through training data, which requires an abundance of caution when applied to sensitive applications. For example, it needs to be carefully inspected when it is used to recover the 3D parameters of human faces and bodies, as it is not tailored for them. 
It is not recommended in off-the-shelf settings where privacy or erroneous recognition can lead to potential misuse or any harmful application. For purposes of real deployment, one would need to carefully inspect and de-bias the dataset to depict the target distribution of a wide range of possible lighting conditions, skin tones, or at the intersection of race and gender. Disclosure of Funding. This work was funded by NVIDIA. Wenzheng Chen, Jun Gao and Zian Wang acknowledge additional indirect revenue in the form of student scholarships from University of Toronto and the Vector Institute. Joey Litalien acknowledges indirect funding from McGill University and the Natural Sciences and Engineering Research Council of Canada (NSERC).
1. What are the strengths and weaknesses of the proposed pipeline for self-supervised training of a neural network? 2. How does the DIB-R++ renderer improve upon previous works in terms of speed and accuracy? 3. What are the limitations of the proposed approach regarding disentanglement? 4. How does DIB-R++ compare to other differentiable renderers in terms of training time and convergence rate? 5. Are there any additional experiments that could be performed to further validate the effectiveness of the proposed method?
Summary Of The Paper Review
Summary Of The Paper This paper proposes an improved pipeline for self-supervised training of a neural network that recovers the 3D properties (geometry, brdf, light) of an image containing a single object. This is achieved by proposing DIB-R++, a differentiable renderer which offers improved rendering speed at the cost of less accurate renderings. The DIB-R++ builds on top of the differentiable rasterization framework of DIB-R in order to add physically-based material and lighting models. DIB-R++ comes in 2 variations: A) MC - relies on Monte Carlo sampling of incident light directions, which is better suited for reflective surfaces. B) SG - relies on using Spherical Gaussian for fast analytic integrations, which is better for rough surfaces. It is important to note that DIB-R++ assumes direct illumination only. Experimentations include analyzing the effects of forwards and backward passes through the DIB-R++ renderer in isolation (figures 2 and 3), and training neural networks on metal synthetic (section 5.1), glossy synthetic (section 5.2), and StyleGAN generated (section 6.1) datasets. This paper also contains interesting experiments to demonstrate the effects of using different variations and parameters of this renderer in terms of speed and rendering quality in the supplementary materials. Review Pros: This paper is well presented and experiments were nicely detailed. The experiment comparing using SG vs MC for the DIB-R++ rendering (figure 2) offers interesting insights on the suitability of each renderer based on the material roughness. The optimization task (figure 3) is reassuring that differentiating through DIB-R++ yields meaningful signals with respect to the material and lighting conditions Models trained with DIB-R++ seem to recover better light and material parameters compared to previous works. Cons: Contribution of paper should be stated more clearly - initially the renderer formulation (DIB-R++) seems to be the contribution, but I can not pinpoint the novelty from a graphics/rendering point of view as all of the used formulations were pre-existing and cited. After further reading, the main contribution seems to be in evaluating how the formulation of DIB-R++ fits into the context of training neural networks. There is a claim of disentangling lighting and materials in the title and abstract of this paper, but I'm not sure what in the proposed formulation of DIB-R++ would cause for better disentanglement? Seems to be a better renderer for training neural networks in terms of less computations for better performance, but there isn't really anything in the formulation to encourage disentanglement. This is also hinted at by the authors in supplementary materials section C.1. Therefore I find the disentanglement term misleading. Questions: How does DIB-R++ compare to other differentiable renderers, e.g. the one from Mitsuba 2? How does the training time of the models using DIB-R++ renderer compare to the methods of "Learning to predict 3D objects with an interpolation-based differentiable renderer" by Wenzheng Chen et al. and "Image GANs meet differentiable rendering for inverse graphics and interpretable 3D neural rendering" by Yuxuan Zhang et al.? Is it slower or faster to train? Does the model converge in less or more iterations? Overall, I like the experimentations of this paper and they demonstrate improved performance compared to previous methods. However, the contribution in the renderer does not seem significant enough to me for a NeurIPS paper.
NIPS
Title Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods Abstract Many widely used datasets for graph machine learning tasks have generally been homophilous, where nodes with similar labels connect to each other. Recently, new Graph Neural Networks (GNNs) have been developed that move beyond the homophily regime; however, their evaluation has often been conducted on small graphs with limited application domains. We collect and introduce diverse nonhomophilous datasets from a variety of application areas that have up to 384x more nodes and 1398x more edges than prior datasets. We further show that existing scalable graph learning and graph minibatching techniques lead to performance degradation on these non-homophilous datasets, thus highlighting the need for further work on scalable non-homophilous methods. To address these concerns, we introduce LINKX — a strong simple method that admits straightforward minibatch training and inference. Extensive experimental results with representative simple methods and GNNs across our proposed datasets show that LINKX achieves state-ofthe-art performance for learning on non-homophilous graphs. Our codes and data are available at https://github.com/CUAI/Non-Homophily-Large-Scale. 1 Introduction Graph learning methods generate predictions by leveraging complex inductive biases captured in the topology of the graph [7]. A large volume of work in this area, including graph neural networks (GNNs), exploits homophily as a strong inductive bias, where connected nodes tend to be similar to each other in terms of labels [46, 3]. Such assumptions of homophily, however, do not always hold true. For example, malicious node detection, a key application of graph machine learning, is known to be non-homophilous in many settings [55, 13, 25, 11]. Further, while new GNNs that work better in these non-homophilous settings have been developed [82, 44, 81, 17, 15, 73, 36, 35, 9, 54], their evaluation is limited to a few graph datasets used by Pei et al. [58] (collected by [61, 66, 48]) that have certain undesirable properties such as small size, narrow range of application areas, and high variance between different train/test splits [82]. Consequently, method scalability has not been thoroughly studied in non-homophilous graph learning. In fact, many non-homophilous techniques frequently require more parameters and computational resources [82, 1, 17], which is neither evident nor detrimental when they are evaluated on very small datasets. Even though scalable graph learning techniques do exist, these methods generally cannot ⇤Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). be directly applied to the non-homophilous setting, as they oftentimes assume homophily in their construction [71, 32, 20, 10]. Non-homophily in graphs also degrades proven graph learning techniques that have been instrumental to strong performance in scalable graph learning. For instance, label propagation, personalized PageRank, and low-pass graph filtering have been used for scalable graph representation learning models, but these methods all assume homophily [71, 32, 20, 10]. Moreover, we give empirical evidence that existing minibatching techniques in graph learning [16, 77] significantly degrade performance in non-homophilous settings. 
In response, we develop a novel model, LINKX, that addresses these concerns; LINKX outperforms existing graph learning methods on large-scale nonhomophilous datasets and admits a simple minibatching procedure that maintains strong performance. To summarize, we demonstrate three key areas of deficiency as mentioned above, namely: (1) that there is a lack of large, high-quality datasets covering different non-homophilous applications, (2) that current graph minibatching techniques and scalable methods do not work well in non-homophilous settings, and (3) that prior non-homophilous methods are not scalable. To these ends, this paper makes the following contributions: Dataset Collection and Benchmarking. We collect a diverse series of large, non-homophilous graph datasets and define new node features and tasks for classification. These datasets are substantially larger than previous non-homophilous datasets, span wider application areas, and capture different types of complex label-topology relationships. With these proposed datasets, we conduct extensive experiments with 14 graph learning methods and 3 graph minibatching techniques that are broadly representative of the graph machine learning model space. Analyzing Scalable Methods and Minibatching. We analyze current graph minibatching techniques like GraphSAINT [77] in non-homophilous settings, showing that they substantially degrade performance in experiments. Also, we show empirically that scalable methods for graph learning like SGC and C&S [71, 32] do not perform well in non-homophilous settings — even though they achieve state-of-the-art results on many homophilous graph benchmarks. Finally, we demonstrate that existing non-homophilous methods often suffer from issues with scalability and performance in large non-homophilous graphs, in large part due to a lack of study of large-scale non-homophilous graph learning. LINKX: a strong, simple method. We propose a simple method LINKX that achieves excellent results for non-homophilous graphs while overcoming the above-mentioned minibatching issues. LINKX works by separately embedding the adjacency A and node features X, then combining them with multilayer perceptrons and simple transformations, as illustrated in Figure 1. It generalizes node feature MLP and LINK regression [79], two baselines that often work well on non-homophilous graphs. This method is simple to train and evaluate in a minibatched fashion, and does not face the performance degradation that other methods do in the minibatch setting. We develop the model and give more details in Section 4. 2 Prior Work Graph Representation Learning. Graph neural networks [28, 38, 69] have demonstrated their utility on a variety of graph machine learning tasks. Most GNNs are constructed by stacking layers that propagate transformed node features, which are then aggregated via different mechanisms. The neighborhood aggregation used in many existing GNNs implicitly leverage homophily, so they often fail to generalize on non-homophilous graphs [82, 6]. Indeed, a wide range of GNNs operate as low-pass graph filters [53, 71, 6] that smooth features over the graph topology, which produces similar representations and thus similar predictions for neighboring nodes. Scalable methods. A variety of scalable graph learning methods have been developed for efficient computation in larger datasets [77, 16, 75, 28, 71, 32, 20, 10]. Many of these methods explicitly make use of an assumption of homophily in the data [71, 32, 20, 10]. 
By leveraging this assumption, several simple, inexpensive models are able to achieve state-of-the-art performance on homophilic datasets [71, 32]. However, these methods are unable to achieve comparable performance in non-homophilous settings, as we show empirically in Section 5. Graph sampling. As node representations depend on other nodes in the graph, there are no simple minibatching techniques in graph learning as there are for i.i.d. data. To scale to large graphs, one line of work samples nodes that are used in each layer of a graph neural network [28, 75, 14]. Another family of methods samples subgraphs of an input graph, then passes each subgraph through a GNN to make a prediction for each node of the subgraph [16, 76, 77]. While these methods are useful for scalable graph learning, we show that they substantially degrade performance in our non-homophilous experiments (see Section 5). Non-Homophilous methods. Various GNNs have been proposed to achieve higher performance in low-homophily settings [82, 44, 81, 17, 15, 73, 36, 35]. Geom-GCN [58] introduces a geometric aggregation scheme, MixHop [1] proposes a graph convolutional layer that mixes powers of the adjacency matrix, GPR-GNN [17] features learnable weights that can be positive and negative in feature propagation, GCNII [15] allows deep graph convolutional networks with relieved oversmoothing, which empirically performs better in non-homophilous settings, and H2GCN [82] shows that separation of ego and neighbor embeddings, aggregation in higher-order neighborhoods, and the combination of intermediate representations improves GNN performance in low-homophily settings. There are several recurring design decisions across these methods that appear to strengthen performance in non-homophilous settings: using higher-order neighborhoods, decoupling neighbor information from ego information, and combining graph information at different scales [82]. Many of these design choices require additional overhead (see Section 4.3), thus reducing their scalability. Datasets. The widely used citation networks Cora, Citeseer, and Pubmed [62, 74] are highly homophilous (see Appendix A) [82]. Recently, the Open Graph Benchmark [31] has provided a series of datasets and leaderboards that improve the quality of evaluation in graph representation learning; however, most of the node classification datasets tend to be homophilous, as noted in past work [82] and expanded upon in Appendix A.2. A comparable set of high-quality benchmarks to evaluate non-homophilous methods does not currently exist. 3 Datasets for Non-Homophilous Graph Learning 3.1 Currently Used Datasets The most widely used datasets to evaluate non-homophilous graph representation learning methods were used by Pei et al. [58] (and collected by [61, 66, 48]); see our Table 1 for statistics. However, these datasets have fundamental issues. First, they are very small — the Cornell, Texas, and Wisconsin datasets have between 180 and 250 nodes, and the largest dataset Actor has 7,600 nodes. In analogy to certain pitfalls of graph neural network evaluation on small (homophilic) datasets discussed in [63], evaluation on the datasets of Pei et al. [58] is plagued by high variance across different train/test splits (see results in [82]). The small size of these datasets may tend to create models that are more prone to overfitting [21], which prevents the scaling up of GNNs designed for non-homophilous settings. 
Peel [57] also studies node classification on network datasets with various types of relationships between edges and labels. However, they only study methods that act on graph topology, and thus their datasets do not necessarily have node features. We take inspiration from their work, by testing on Pokec and Facebook networks with node features that we define, and by introducing other year-prediction tasks on citation networks that have node features. 3.2 An Improved Homophily Measure Various metrics have been proposed to measure the homophily of a graph. However, these metrics are sensitive to the number of classes and the number of nodes in each class. Let G = (V, E) be a graph with n nodes, none of which are isolated. Further let each node u ∈ V have a class label k_u ∈ {0, 1, . . . , C − 1} for some number of classes C, and denote by C_k the set of nodes in class k. The edge homophily [82] is the proportion of edges that connect two nodes of the same class: h = |{(u, v) ∈ E : k_u = k_v}| / |E|. (1) Another related measure is what we call the node homophily [58], defined as (1/|V|) ∑_{u∈V} d_u^{(k_u)} / d_u, in which d_u is the number of neighbors of node u, and d_u^{(k_u)} is the number of neighbors of u that have the same class label. We focus on the edge homophily (1) in this work, but find that node homophily tends to have similar qualitative behavior in experiments. The sensitivity of edge homophily to the number of classes and size of each class limits its utility. We consider a null model for graphs in which the graph topology is independent of the labels; suppose that nodes with corresponding labels are fixed, and include edges uniformly at random in the graph that are independent of node labels. Under this null model, a node u ∈ V would be expected to have d_u^{(k_u)} / d_u ≈ |C_{k_u}| / n as the proportion of nodes of the same class that they connect to [3]. For a dataset with C balanced classes, we would thus expect the edge homophily to be around 1/C, so the interpretation of the measure depends on the number of classes. Also, if classes are imbalanced, then the edge homophily may be misleadingly large. For instance, if 99% of nodes were of one class, then most edges would likely be within that same class, so the edge homophily would be high, even when the graph is generated from the null model where labels are independent of graph topology. Thus, the edge homophily does not capture deviation of the label distribution from the null model. We introduce a metric that better captures the presence or absence of homophily. Unlike the edge homophily, our metric measures excess homophily that is not expected from the above null model where edges are randomly wired. Our metric does not distinguish between different non-homophilous settings (such as heterophily or independent edges); we believe that there are too many degrees of freedom in non-homophilous settings for a single scalar quantity to be able to distinguish them all. Our measure is given as: ĥ = (1 / (C − 1)) ∑_{k=0}^{C−1} [h_k − |C_k| / n]_+, (2) where [a]_+ = max(a, 0), and h_k is the class-wise homophily metric h_k = (∑_{u∈C_k} d_u^{(k_u)}) / (∑_{u∈C_k} d_u). (3) Note that ĥ ∈ [0, 1], with a fully homophilous graph (in which every node is only connected to nodes of the same class) having ĥ = 1. Since each class-wise homophily metric h_k only contributes positive deviations from the null expected proportion |C_k| / n, the class-imbalance problem is substantially mitigated. Also, graphs in which edges are independent of node labels are expected to have ĥ ≈ 0, for any number of classes. 
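For concreteness, a small sketch of how the edge homophily (1) and the class-insensitive measure ĥ in (2)-(3) can be computed from an undirected edge list and integer node labels; the function and variable names are illustrative.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Eq. (1): fraction of edges joining same-class endpoints. edges: (E, 2) int array."""
    u, v = edges[:, 0], edges[:, 1]
    return float((labels[u] == labels[v]).mean())

def improved_homophily(edges, labels):
    """Eq. (2): h_hat = 1/(C-1) * sum_k [h_k - |C_k|/n]_+ with class-wise h_k from Eq. (3)."""
    n = labels.shape[0]
    classes = np.unique(labels)
    C = len(classes)
    # Treat the edge list as undirected: count both endpoints of every edge.
    ends = np.concatenate([edges, edges[:, ::-1]], axis=0)            # (2E, 2)
    deg = np.bincount(ends[:, 0], minlength=n)                        # d_u
    same = np.bincount(ends[:, 0],
                       weights=(labels[ends[:, 0]] == labels[ends[:, 1]]).astype(float),
                       minlength=n)                                   # d_u^{(k_u)}
    h_hat = 0.0
    for k in classes:
        mask = labels == k
        h_k = same[mask].sum() / max(deg[mask].sum(), 1)              # Eq. (3)
        h_hat += max(h_k - mask.sum() / n, 0.0)                       # excess over the null model
    return h_hat / (C - 1)

# Tiny example: a 4-node path 0-1-2-3 with labels [0, 0, 1, 1].
edges = np.array([[0, 1], [1, 2], [2, 3]])
labels = np.array([0, 0, 1, 1])
print(edge_homophily(edges, labels), improved_homophily(edges, labels))   # 0.667, 0.333
```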
Our measure ĥ measures presence of homophily, but does not distinguish between the many types of possibly non-homophilous relationships. This is reasonable given the diversity of non-homophilous relationships. For example, non-homophily can imply independence of edges and classes, extreme heterophily, connections only among subsets of classes, or certain chemically / biologically determined relationships. Indeed, these relationships are very different, and are better captured by more than one scalar quantity, such as the compatibility matrices presented in the appendix. Further discussion is given in Appendix A. 3.3 Proposed Datasets Here, we detail the non-homophilous datasets that we propose for graph machine learning evaluation. Our datasets and tasks span diverse application areas. Penn94 [67], Pokec [41], genius [43], and twitch-gamers [60] are online social networks, where the task is to predict reported gender, certain account labels, or use of explicit content on user accounts. For the citation networks arXiv-year [31] and snap-patents [42, 41] the goal is to predict year of paper publication or the year that a patent is granted. The dataset wiki consists of Wikipedia articles, where the goal is to predict total page views of each article. Detailed descriptions about the graph structure, node features, node labels, and licenses of each dataset are given in Appendix D.2. Most of these datasets have been used for evaluation of graph machine learning models in past work; we make adjustments such as modifying node labels and adding node features that allow for evaluation of GNNs in non-homophilous settings. We define node features for Pokec, genius, and snap-patents, and we also define node labels for arXiv-year, snap-patents, and genius. Additionally, we crawl and clean the large-scale wiki dataset — a new Wikipedia dataset where the task is to predict page views, which is non-homophilous with respect to the graph of articles connected by links between articles (see Appendix D.3). This wiki dataset has 1,925,342 nodes and 303,434,860 edges, so training and inference require scalable algorithms. Basic dataset statistics are given in Table 2. Note the substantial difference between the size of our datasets and those of Pei et al. [58] in Table 1; our datasets have up to 384x more nodes and 1398x more edges. The homophily measures along with the lower empirical performance of homophilyassuming models (Section 5) and examination of compatibility matrices (Appendix A) show that our datasets are indeed non-homophilous. As there is little study in large-scale non-homophilous graph learning, our proposed large datasets strongly motivate the need for developing a new, scalable approach that can accurately learn on non-homophilous graphs. 4 LINKX: A New Scalable Model In this section, we introduce our novel model, LINKX, for scalable node classification in nonhomophilous settings. LINKX is built out of multilayer perceptrons (MLPs) and linear transformations, thus making it simple and scalable. It also admits simple row-wise minibatching procedures that allow it to perform well on large non-homophilous graphs. As a result, LINKX is able to circumvent aforementioned issues of graph minibatching and non-homophilous GNNs in large-scale non-homophilous settings. 4.1 Motivation from two simple baselines Here, we detail two simple baselines for node classification that we build on to develop LINKX. MLP on node features. 
A naïve method for node classification is to ignore the graph topology and simply train an MLP on node features. Because the graph topology has a more complicated relationship with the label distribution in non-homophilous graphs, many GNNs are not able to effectively leverage the graph topology in these settings. Thus, MLPs can actually perform comparatively well on non-homophilous graphs — achieving higher or approximately equal performance to various GNNs [82].

LINK regression on graph topology. On the other extreme, there is LINK [79] — a simple baseline that only utilizes graph topology. In particular, we consider LINK regression, which trains a logistic regression model in which each node’s features are taken from a column of the adjacency matrix. Letting $A \in \{0, 1\}^{n \times n}$ be the binary adjacency matrix of the graph, and $W \in \mathbb{R}^{c \times n}$ be a learned weight matrix, LINK computes class probabilities as

$$Y = \mathrm{softmax}(WA). \quad (4)$$

Let $u \in \{1, \ldots, n\}$ be a specific node, and let $k \in \{1, \ldots, c\}$ be a specific class. Then, expanding the matrix multiplication, the log-odds of node $u$ belonging to class $k$ is given by

$$(WA)_{ku} = \sum_{v \in N(u)} W_{kv}, \quad (5)$$

where $N(u)$ contains the 1-hop neighbors of $u$. In other words, the logit is given by the sum of weights $W_{kv}$ across the 1-hop neighbors of $u$. If a specific node $v$ has many neighbors of class $k$, then $W_{kv}$ is probably large, as we would then expect any neighbor of $v$ to be of class $k$ with high probability. In this sense, LINK is like a 2-hop method: for a given node $u$, the probability of being in a given class is related to the class memberships of $u$'s 2-hop neighbors in $N(v)$ for each neighbor $v \in N(u)$. Related interpretations of LINK as a method acting on 2-hop paths between nodes are given by Altenburger and Ugander [3].

Though it is simple and has been overlooked in the recent non-homophilous GNN literature, LINK has been found to perform well in certain node classification tasks like gender prediction in social networks [3, 4]. A major reason why LINK does well in many settings is exactly because it acts as a 2-hop method. For example, while 1-hop neighbors are often not so informative for gender prediction in social networks due to lack of homophily, 2-hop neighbors are very informative due to so-called “monophily,” whereby many nodes have extreme preferences for connecting to a certain class [3]. Beyond just gender prediction, we show in Section 5 that LINK empirically outperforms many models across the various application areas of the non-homophilous datasets we propose.

4.2 LINKX

We combine these two simple baselines through simple linear transformations and component-wise nonlinearities. Let $X \in \mathbb{R}^{D \times n}$ denote the matrix of node features with input dimension $D$, and let $[h_1; h_2]$ denote concatenation of vectors $h_1$ and $h_2$. Then our model outputs predictions $Y$ through the following mapping:

$$h_A = \mathrm{MLP}_A(A) \in \mathbb{R}^{d \times n} \quad (6)$$
$$h_X = \mathrm{MLP}_X(X) \in \mathbb{R}^{d \times n} \quad (7)$$
$$Y = \mathrm{MLP}_f\big(\sigma(W[h_A; h_X] + h_A + h_X)\big), \quad (8)$$

in which $d$ is the hidden dimension, $W \in \mathbb{R}^{d \times 2d}$ is a weight matrix, and $\sigma$ is a component-wise nonlinearity (which we take to be ReLU). We call our model LINKX, as it extends LINK with node feature information from the matrix $X$. A diagram of LINKX is given in Figure 1. First, LINKX computes hidden representations $h_A$ of the adjacency (extending LINK) and $h_X$ of the feature matrix (as in node-feature MLPs).
Then it combines these hidden representations through a linear transform $W$ of their concatenation, with skip connections that add back in $h_A$ and $h_X$ to better preserve pure adjacency or node feature information. Finally, it puts this combined representation through a non-linearity and another MLP to make a prediction.

Separating then mixing adjacency and feature information. LINKX separately embeds the adjacency $A$ into $h_A$ and the features $X$ into $h_X$ before mixing them for a few reasons. First, we note that this design is reminiscent of fusion architectures in multimodal networks, where data from different modalities are processed and combined in a neural network [24, 78]. In our setting, we can view adjacency information and node feature information as separate modalities. Since node feature MLPs and LINK do well independently on different datasets, this allows us to preserve their individual performance if needed. Ignoring the $h_X$ information is similar to just using LINK, and ignoring the $h_A$ information is similar to just using a node feature MLP. Still, to preserve the ability to learn a mapping similar to LINK or to a node feature MLP, we find that having the additive skip connections helps to get performance at least as good as either baseline. Our initial empirical results showed that simply concatenating adjacency and node features as input to a network does worse overall empirically (see Appendix C.1).

There are also computational benefits to our design choices. Embedding $A$ is beneficial for depth, as adding more layers to the MLPs only gives an $O(d^2)$ cost — depending only on the hidden dimension $d$ — and thus does not scale in the number of edges $|E|$ as when adding layers to message-passing GNNs. This is because the graph information in $A$ is already compressed to hidden feature vectors after the first linear mapping of $\mathrm{MLP}_A$, and we do not need to propagate along the graph in later steps. Moreover, this enables a sparse-dense matrix product to compute the first linear mapping of $\mathrm{MLP}_A$ on $A$, which greatly increases efficiency, as $A$ is typically very sparse for real-world graphs. Separate embeddings are key here, as this would not be possible if we, for instance, concatenated $A$ and $X$ when $X$ is large and dense.

Simple minibatching. Message-passing GNNs must take graph topology into account when minibatching with techniques such as neighbor sampling, subgraph sampling, or graph partitioning. However, LINKX does not require this, as it utilizes graph information solely through defining adjacency matrix columns as features. Thus, we can train LINKX with standard stochastic gradient descent variants by taking i.i.d. samples of nodes along with the corresponding columns of the adjacency and feature matrix as features. This is much simpler than the graph minibatching procedures for message-passing GNNs, which require specific hyperparameter choices, have to avoid exponential blowup of the number of neighbors per layer, and are generally more complex to implement [77]. In Section 5.3, we use the simple LINKX minibatching procedure for large-scale experiments that show that LINKX with this minibatching style outperforms GNNs with graph minibatching methods. This is especially important on the scale of the wiki dataset, where none of our tested methods — other than MLP — is capable of running on a Titan RTX GPU with 24 GB GPU RAM (see Section 5).
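To make Eqs. (6)-(8) concrete, here is a minimal PyTorch-style sketch of the LINKX forward pass (our illustration, not the authors' released implementation). It uses the row-per-node convention (the transpose of the column-per-node notation above), takes dense adjacency rows for simplicity, and the helper make_mlp and the hyperparameter names are ours.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden_dim, out_dim, num_layers):
    """Plain MLP with ReLU activations between linear layers."""
    layers, d = [], in_dim
    for _ in range(num_layers - 1):
        layers += [nn.Linear(d, hidden_dim), nn.ReLU()]
        d = hidden_dim
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class LINKX(nn.Module):
    """Sketch of Eqs. (6)-(8): embed adjacency and features separately,
    then mix them with a linear map, skip connections, and a final MLP."""
    def __init__(self, num_nodes, feat_dim, hidden_dim, num_classes, num_layers=2):
        super().__init__()
        self.mlp_A = make_mlp(num_nodes, hidden_dim, hidden_dim, num_layers)
        self.mlp_X = make_mlp(feat_dim, hidden_dim, hidden_dim, num_layers)
        self.W = nn.Linear(2 * hidden_dim, hidden_dim)
        self.mlp_f = make_mlp(hidden_dim, hidden_dim, num_classes, num_layers)

    def forward(self, adj_rows, feats):
        # adj_rows: (batch, num_nodes) rows of A for the sampled nodes
        # feats:    (batch, feat_dim) node features for the same nodes
        h_A = self.mlp_A(adj_rows)                  # Eq. (6)
        h_X = self.mlp_X(feats)                     # Eq. (7)
        h = self.W(torch.cat([h_A, h_X], dim=-1))   # mix the concatenation
        h = torch.relu(h + h_A + h_X)               # skip connections + nonlinearity
        return self.mlp_f(h)                        # Eq. (8), class logits
```

In practice the first layer of mlp_A would be applied to a sparse $A$ via a sparse-dense product, as described above; dense adjacency rows are used here only to keep the sketch short.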
4.3 Complexity Analysis

Using the above notation, a forward pass of LINKX has a time complexity of $O(d|E| + nd^2L)$, in which $d$ is the hidden dimension (which we assume to be on the same order as the input feature dimension $D$), $L$ is the number of layers, $n$ is the number of nodes, and $|E|$ is the number of edges. We require an $O(d|E|)$ cost for the first linear mapping of $A$ and an $O(d^2)$ cost per layer for MLP operations on hidden features, for $L$ total layers and each of $n$ nodes.

As mentioned above, message-passing GNNs have to propagate using the adjacency in each layer, so they have an $L|E|$ term in the complexity. For instance, an $L$-layer GCN [38] with $d$ hidden dimensions has $O(dL|E| + nd^2L)$ complexity, as it costs $O(d|E|)$ to propagate features in each layer, and $O(nd^2)$ to multiply by the weight matrix in each layer. Non-homophilous methods often make modifications to standard architectures that increase computational cost, such as using higher-order neighborhoods or using additional hidden embeddings [82]. For instance, the complexity of MixHop [1] is $O(K(dL|E| + nd^2L))$, which has an extra factor $K$ that is the number of adjacency powers to propagate with. The complexity of GCNII [15] is asymptotically the same as that of GCN, but in practice it requires more computations per layer due to residual connections and linear combinations, and it also often achieves best performance with a large number of layers $L$. H2GCN [82] is significantly more expensive due to its usage of strict two-hop neighborhoods, which requires it to form the squared adjacency $A^2$. This makes the memory requirements intractable even for medium-sized graphs (see Section 5).

5 Experiments

We conduct two sets of experiments for node classification on our proposed non-homophilous datasets. One set of experiments does full-batch gradient descent training for all applicable methods. This of course limits the size of each model, as the large datasets require substantial GPU memory to train on. Our other set of experiments uses minibatching methods. As all graph-based methods run out of memory on the wiki dataset, even on 24 GB GPUs, we only include wiki results in the minibatching section. In all settings, our LINKX model matches or outperforms other methods.

5.1 Experimental Setup

Methods. We include both methods that are graph-agnostic and methods that are node-feature-agnostic as simple baselines. The node-feature-agnostic models of two-hop label propagation [57] and LINK (logistic regression on the adjacency matrix) [79] have been found to perform well in various non-homophilous settings, but they have often been overlooked by recent graph representation learning work. Also, we include SGC [71] and C&S [32] as simple, scalable methods that perform well on homophilic datasets. We include a two-hop propagation variant of C&S in analogy with two-step label propagation. In addition to representative general GNNs, we also include GNNs recently proposed for non-homophilous settings. The full list of methods is: Only node features: MLP [26]. Only graph topology: label propagation (standard and two-hop) [80, 57], LINK [79]. Simple methods: SGC [71], C&S [32] and their two-hop variants. General GNNs: GCN [38], GAT [69], jumping knowledge networks (GCNJK, GATJK) [72], and APPNP [39]. Non-homophilous methods: H2GCN [82], MixHop [1], GPR-GNN [17], GCNII [15], and LINKX (ours).

Minibatching methods. We also evaluate GNNs with various minibatching methods.
We take GCNJK [72] and MixHop [1] as our base models for evaluation, as they are representative of many GNN design choices and MixHop performs very well in full-batch training. As other minibatching methods are trickier to make work with these models, we use the Cluster-GCN [16] and GraphSAINT [77] minibatching methods, which sample subgraphs. We include both the node-based sampling and random-walk-based sampling variants of GraphSAINT. We compare these GNNs with MLP, LINK, and our LINKX, which use simple i.i.d. node minibatching.

Training and evaluation. Following other works in non-homophilous graph learning evaluation, we take a high proportion of training nodes [82, 58, 73]; we run each method on the same five random 50/25/25 train/val/test splits for each dataset. All methods requiring gradient-based optimization are run for 500 epochs, with test performance reported for the learned parameters of highest validation performance. We use ROC-AUC as the metric for the class-imbalanced genius dataset (about 80% of nodes are in the majority class), as it is less sensitive to class imbalance than accuracy. For other datasets, we use classification accuracy as the metric. Further experimental details are in Appendix B.

5.2 Full-Batch Results

Table 3 lists the results of each method across the datasets that we propose. Our datasets reveal several important properties of non-homophilous node classification. Firstly, the stability of performance across runs is better for our datasets than for those of Pei et al. [58] (see [82] results). Secondly, as suggested by prior theory and experiments [82, 1, 17], the non-homophilous GNNs usually do well — though not necessarily on every dataset. The core assumption of homophily in SGC and C&S that enables them to be simple and efficient does not hold on these non-homophilous datasets, and thus the performance of these methods is typically relatively low. Still, as expected, two-hop variants generally improve upon their one-hop counterparts in these low-homophily settings. One consequence of using larger datasets for benchmarks is that the tradeoff between scalability and learning performance of non-homophilous methods has become starker, with some methods facing memory issues. This tradeoff is especially important to consider in light of the fact that many scalable graph learning methods rely on implicit or explicit homophily assumptions [71, 32, 20, 10], and thus face issues when used in non-homophilous settings. Finally, LINKX achieves superior performance on all datasets, taking advantage of LINK’s power while also being able to utilize node features where they provide additional information.

5.3 Minibatching Results

Our experimental results for minibatched methods on our proposed datasets are in Table 4. Since GraphSAINT does not partition the nodes of the graph into subgraphs that cover all nodes, we test on the full input graph for the smaller datasets, and on uniformly random partitions of the graph into 10 induced subgraphs for the larger datasets. First, we note that both Cluster-GCN and GraphSAINT sampling lead to performance degradation for these methods on our proposed non-homophilous datasets. When compared to the full-batch training results of the previous section, classification accuracy is typically substantially lower. Further experiments in Appendix C.2 give evidence that the performance degradation is often more substantial in non-homophilous settings, and provide possible explanations for why this may be the case.
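For contrast with these graph-sampling pipelines, the i.i.d. node minibatching that LINKX uses (Section 4.2) reduces to an ordinary supervised training loop. Below is a minimal sketch of one epoch, assuming the LINKX sketch above and hypothetical dense tensors A (n x n adjacency), X (n x D features), and y (n integer labels); it is an illustration of the idea, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def train_epoch(model, optimizer, A, X, y, train_idx, batch_size=10000):
    """One epoch of i.i.d. node minibatching: sample training nodes uniformly
    and slice out their adjacency rows and feature rows; no graph sampling."""
    model.train()
    perm = train_idx[torch.randperm(train_idx.numel())]
    for start in range(0, perm.numel(), batch_size):
        idx = perm[start:start + batch_size]
        logits = model(A[idx], X[idx])          # adjacency rows + node features
        loss = F.cross_entropy(logits, y[idx])  # y holds integer class labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A practical implementation would keep A sparse and gather the sampled rows with a sparse index-select, but the structure of the loop is unchanged.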
On the other hand, LINKX does not suffer much performance degradation with the simple i.i.d. node minibatching technique. In fact, it matches or outperforms all methods in this setting, often by a wide margin. Though LINK performs on par with LINKX on arXiv-year and Pokec, our LINKX model significantly outperforms it on the other datasets, again due to LINKX’s ability to integrate node feature information. We again stress that the LINKX minibatching is very simple to implement, yet it still substantially outperforms other methods. Consequently, LINKX is generally well-suited for scalable node classification across a broad range of non-homophilous settings, surpassing even specially designed non-homophilous GNNs with current graph minibatching techniques.

6 Discussion and Conclusion

In this paper, we propose new, high-quality non-homophilous graph learning datasets, and we benchmark simple baselines and representative graph representation learning methods across these datasets. Further, we develop LINKX: a strong, simple, and scalable method for non-homophilous classification. Our experiments show that LINKX significantly outperforms other methods on our proposed datasets, thus providing one powerful method in the underexplored area of scalable learning on non-homophilous graphs. We hope that our contributions will provide researchers with new avenues of research in learning on non-homophilous graphs, along with better tools to test models and evaluate the utility of new techniques.

While we do find utility in our proposed datasets and LINKX model, this work is somewhat limited by only focusing on transductive node classification. This setting is the most natural for studying performance in the absence of homophily, since here we define homophily in terms of the node labels, and previous non-homophilous GNN work using the Pei et al. [58] data also studies this setting exclusively [82, 17]. Using other Facebook 100 datasets besides Penn94 [67] would allow for inductive node classification, but LINKX does not directly generalize to this setting. Our proposed datasets and model LINKX could be used for link prediction, but this is left for future work.

Broader Impact. Fundamental research in graph learning on non-homophilous graphs has the potential for positive societal benefit. As a major application, it enables malicious node detection techniques in social networks and transaction networks that are not fooled by fraudsters’ connections to legitimate users and customers. This is a widely studied task, and past works have noted that non-homophilous structures are present in many such networks [11, 25, 55]. We hope that this paper provides insight on the homophily limitations of existing scalable graph learning models and helps researchers design scalable models that continue to work well in the non-homophilous regime, thus improving the quality of node classification on graphs more broadly. As our proposed datasets have diverse structures and our model performs well across all of these datasets, the potential for future application of our work to important non-homophilous tasks is high.

Nevertheless, our work could also have potential for different types of negative social consequences. Nefarious behavior by key actors could be one source of such consequences. Nonetheless, we expect that the actors that can make use of large-scale social networks for gender prediction, as studied in our work, are limited in number.
Actors with both the capability and incentive to perform such operations probably mostly consist of entities with access to large social network data, such as social media companies or government actors with auxiliary networks [50]. Smaller actors can perform certain attacks, but this may be made more difficult by resource requirements such as the need for certain external information [50] or the ability to add nodes and edges before an anonymized version of a social network is released [5]. Furthermore, additional actors could make use of deanonymization attacks [30, 49, 50] to reveal user identities in supposedly anonymized datasets. Also, accidental consequences and implicit biases are a potential issue, even if the applications of the learning algorithms are benign and intended to benefit society [47]. Performance of algorithms may vary substantially between intersectional subgroups of subjects — as in the case of vision-based gender predictors [12] (and some have questioned the propriety of vision-based gender classifiers altogether). Thus, there may be disparate effects on different populations, so care should be taken to understand the impact of those differences across subgroups.

Moreover, large datasets require computing resources, so projects can only be pursued by large entities at the possible expense of the individual and smaller research groups [8]. This is alleviated by the fact that our experiments are each run on one GPU, and hence have significantly lower GPU computing requirements than much current deep learning research. Thus, smaller research groups and independent researchers should find our work beneficial, and should be able to build on it.

Finally, the nature of collection of online user information also comes with notable ethical concerns. Common notice-and-consent policies are often ineffective in actually protecting user privacy [52]. Indeed, users may not actually have much choice in using certain platforms or sharing data due to social or economic reasons. Also, users are generally unable to fully read and understand all of the different privacy policies that they come across, and may not understand the implications of having their data available for long periods of time to entities with powerful inference algorithms. Furthermore, people may rely on obscurity for privacy [29], but this assumption may be ignored in courts of law, and it may be directly broken when data leaks or is released in aggregated form without sufficient privacy protections. Overall, while we believe that our work will benefit machine learning research and enable positive applications, we must still be aware of possible negative consequences.

Acknowledgements

We thank Abhay Singh, Austin Benson, and Horace He for insightful discussions. We also thank the rest of Cornell University Artificial Intelligence for their support and discussion. We thank Facebook AI for funding equipment that made this work possible.
1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, specifically regarding its claims and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the paper's contributions, such as the introduced metric for measuring homophily?
Summary Of The Paper Review
Summary Of The Paper
The paper collects graph datasets that are not as homophilous as the regular benchmarks for GNNs. Furthermore, it shows that a rather simple new baseline achieves quite good results compared to much more involved models / the SOTA. The paper presents important contributions to the community, is well and thoroughly written, related work is covered adequately, the experiments are overall thorough, and I have little to criticise. Altogether, I suggest accepting the paper.

Review
As outlined above, I have little to criticise; I summarise the most important contributions below. The collected (and sometimes adapted) datasets represent a useful set to move research in the field forward. The paper actually introduces a new metric to measure homophily, which is quite interesting since it overcomes issues with existing ones. I would strongly suggest moving parts of all the explanations in the appendix into the paper, especially the descriptions illustrating the latter issues. I personally consider this metric contribution at least as important as the new model. The proposed approach is simple but effective, which is quite nice, and the experiments over wiki show that such a solution was needed. The appendix contains much additional information and insights.

Questions & Smaller Comments
I wonder if the prediction of the year for the arXiv and patent data is a realistic task. Why would someone care about this in particular, and is it even possible to do this (i.e., do we expect enough variability in the data to derive such conclusions)?
NIPS
Title Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods

Abstract Many widely used datasets for graph machine learning tasks have generally been homophilous, where nodes with similar labels connect to each other. Recently, new Graph Neural Networks (GNNs) have been developed that move beyond the homophily regime; however, their evaluation has often been conducted on small graphs with limited application domains. We collect and introduce diverse non-homophilous datasets from a variety of application areas that have up to 384x more nodes and 1398x more edges than prior datasets. We further show that existing scalable graph learning and graph minibatching techniques lead to performance degradation on these non-homophilous datasets, thus highlighting the need for further work on scalable non-homophilous methods. To address these concerns, we introduce LINKX — a strong simple method that admits straightforward minibatch training and inference. Extensive experimental results with representative simple methods and GNNs across our proposed datasets show that LINKX achieves state-of-the-art performance for learning on non-homophilous graphs. Our codes and data are available at https://github.com/CUAI/Non-Homophily-Large-Scale.

1 Introduction

Graph learning methods generate predictions by leveraging complex inductive biases captured in the topology of the graph [7]. A large volume of work in this area, including graph neural networks (GNNs), exploits homophily as a strong inductive bias, where connected nodes tend to be similar to each other in terms of labels [46, 3]. Such assumptions of homophily, however, do not always hold true. For example, malicious node detection, a key application of graph machine learning, is known to be non-homophilous in many settings [55, 13, 25, 11]. Further, while new GNNs that work better in these non-homophilous settings have been developed [82, 44, 81, 17, 15, 73, 36, 35, 9, 54], their evaluation is limited to a few graph datasets used by Pei et al. [58] (collected by [61, 66, 48]) that have certain undesirable properties such as small size, narrow range of application areas, and high variance between different train/test splits [82].

Consequently, method scalability has not been thoroughly studied in non-homophilous graph learning. In fact, many non-homophilous techniques frequently require more parameters and computational resources [82, 1, 17], which is neither evident nor detrimental when they are evaluated on very small datasets. Even though scalable graph learning techniques do exist, these methods generally cannot be directly applied to the non-homophilous setting, as they oftentimes assume homophily in their construction [71, 32, 20, 10]. Non-homophily in graphs also degrades proven graph learning techniques that have been instrumental to strong performance in scalable graph learning. For instance, label propagation, personalized PageRank, and low-pass graph filtering have been used for scalable graph representation learning models, but these methods all assume homophily [71, 32, 20, 10]. Moreover, we give empirical evidence that existing minibatching techniques in graph learning [16, 77] significantly degrade performance in non-homophilous settings.
In response, we develop a novel model, LINKX, that addresses these concerns; LINKX outperforms existing graph learning methods on large-scale non-homophilous datasets and admits a simple minibatching procedure that maintains strong performance. To summarize, we demonstrate three key areas of deficiency as mentioned above, namely: (1) that there is a lack of large, high-quality datasets covering different non-homophilous applications, (2) that current graph minibatching techniques and scalable methods do not work well in non-homophilous settings, and (3) that prior non-homophilous methods are not scalable. To these ends, this paper makes the following contributions:

Dataset Collection and Benchmarking. We collect a diverse series of large, non-homophilous graph datasets and define new node features and tasks for classification. These datasets are substantially larger than previous non-homophilous datasets, span wider application areas, and capture different types of complex label-topology relationships. With these proposed datasets, we conduct extensive experiments with 14 graph learning methods and 3 graph minibatching techniques that are broadly representative of the graph machine learning model space.

Analyzing Scalable Methods and Minibatching. We analyze current graph minibatching techniques like GraphSAINT [77] in non-homophilous settings, showing that they substantially degrade performance in experiments. Also, we show empirically that scalable methods for graph learning like SGC and C&S [71, 32] do not perform well in non-homophilous settings — even though they achieve state-of-the-art results on many homophilous graph benchmarks. Finally, we demonstrate that existing non-homophilous methods often suffer from issues with scalability and performance in large non-homophilous graphs, in large part due to a lack of study of large-scale non-homophilous graph learning.

LINKX: a strong, simple method. We propose a simple method, LINKX, that achieves excellent results for non-homophilous graphs while overcoming the above-mentioned minibatching issues. LINKX works by separately embedding the adjacency A and node features X, then combining them with multilayer perceptrons and simple transformations, as illustrated in Figure 1. It generalizes the node feature MLP and LINK regression [79], two baselines that often work well on non-homophilous graphs. This method is simple to train and evaluate in a minibatched fashion, and does not face the performance degradation that other methods do in the minibatch setting. We develop the model and give more details in Section 4.

2 Prior Work

Graph Representation Learning. Graph neural networks [28, 38, 69] have demonstrated their utility on a variety of graph machine learning tasks. Most GNNs are constructed by stacking layers that propagate transformed node features, which are then aggregated via different mechanisms. The neighborhood aggregation used in many existing GNNs implicitly leverages homophily, so they often fail to generalize on non-homophilous graphs [82, 6]. Indeed, a wide range of GNNs operate as low-pass graph filters [53, 71, 6] that smooth features over the graph topology, which produces similar representations and thus similar predictions for neighboring nodes.

Scalable methods. A variety of scalable graph learning methods have been developed for efficient computation on larger datasets [77, 16, 75, 28, 71, 32, 20, 10]. Many of these methods explicitly make use of an assumption of homophily in the data [71, 32, 20, 10].
By leveraging this assumption, several simple, inexpensive models are able to achieve state-of-the-art performance on homophilic datasets [71, 32]. However, these methods are unable to achieve comparable performance in non-homophilous settings, as we show empirically in Section 5.

Graph sampling. As node representations depend on other nodes in the graph, there are no simple minibatching techniques in graph learning as there are for i.i.d. data. To scale to large graphs, one line of work samples nodes that are used in each layer of a graph neural network [28, 75, 14]. Another family of methods samples subgraphs of an input graph, then passes each subgraph through a GNN to make a prediction for each node of the subgraph [16, 76, 77]. While these methods are useful for scalable graph learning, we show that they substantially degrade performance in our non-homophilous experiments (see Section 5).

Non-Homophilous methods. Various GNNs have been proposed to achieve higher performance in low-homophily settings [82, 44, 81, 17, 15, 73, 36, 35]. Geom-GCN [58] introduces a geometric aggregation scheme, MixHop [1] proposes a graph convolutional layer that mixes powers of the adjacency matrix, GPR-GNN [17] features learnable weights that can be positive and negative in feature propagation, GCNII [15] allows deep graph convolutional networks with relieved oversmoothing, which empirically performs better in non-homophilous settings, and H2GCN [82] shows that separation of ego and neighbor embeddings, aggregation in higher-order neighborhoods, and the combination of intermediate representations improve GNN performance in low-homophily settings. There are several recurring design decisions across these methods that appear to strengthen performance in non-homophilous settings: using higher-order neighborhoods, decoupling neighbor information from ego information, and combining graph information at different scales [82]. Many of these design choices require additional overhead (see Section 4.3), thus reducing their scalability.

Datasets. The widely used citation networks Cora, Citeseer, and Pubmed [62, 74] are highly homophilous (see Appendix A) [82]. Recently, the Open Graph Benchmark [31] has provided a series of datasets and leaderboards that improve the quality of evaluation in graph representation learning; however, most of the node classification datasets tend to be homophilous, as noted in past work [82] and expanded upon in Appendix A.2. A comparable set of high-quality benchmarks to evaluate non-homophilous methods does not currently exist.

3 Datasets for Non-Homophilous Graph Learning

3.1 Currently Used Datasets

The most widely used datasets to evaluate non-homophilous graph representation learning methods were used by Pei et al. [58] (and collected by [61, 66, 48]); see our Table 1 for statistics. However, these datasets have fundamental issues. First, they are very small — the Cornell, Texas, and Wisconsin datasets have between 180-250 nodes, and the largest dataset, Actor, has 7,600 nodes. In analogy to certain pitfalls of graph neural network evaluation on small (homophilic) datasets discussed in [63], evaluation on the datasets of Pei et al. [58] is plagued by high variance across different train/test splits (see results in [82]). The small size of these datasets may tend to create models that are more prone to overfitting [21], which prevents the scaling up of GNNs designed for non-homophilous settings.
1. What is the focus of the paper regarding graph datasets and baselines? 2. What are the strengths of the proposed approach, particularly in terms of scalability and effectiveness? 3. How does the reviewer assess the impact of the paper on the community? 4. Are there any concerns or limitations regarding the provided baselines and experimental results?
Summary Of The Paper Review
Summary Of The Paper This paper proposes 7 new large heterophily graph datasets (up to 2 million nodes) and also provides a simple, scalable yet effective baseline LINKX which combines the information from topology and node features. The paper extensively evaluates the datasets on various baselines, including typical GNNs and GNNs specifically designed for heterophily graphs. They find that many baselines cannot compete with the simple baseline LINKX and some of them run out of memory. It also points out the minibatching problem and provides thorough results for fullbatch and minibatch training. Review I appreciate the authors' effort in proposing large-scale heterophily graph datasets and along with a strong baseline LINKX. Recently, many researchers have looked at the performances of GNNs on heterophily graphs but most of them are testing their models on very small heterophily datasets. So this work will potentially have a great impact on the community. The authors not only provide various baselines (including typical GNNs and GNNs specifically designed for heterophily graphs) but also provide thorough results for fullbatch and minibatch training. This paper is well written and easy to follow.
NIPS
Title Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods Abstract Many widely used datasets for graph machine learning tasks have generally been homophilous, where nodes with similar labels connect to each other. Recently, new Graph Neural Networks (GNNs) have been developed that move beyond the homophily regime; however, their evaluation has often been conducted on small graphs with limited application domains. We collect and introduce diverse nonhomophilous datasets from a variety of application areas that have up to 384x more nodes and 1398x more edges than prior datasets. We further show that existing scalable graph learning and graph minibatching techniques lead to performance degradation on these non-homophilous datasets, thus highlighting the need for further work on scalable non-homophilous methods. To address these concerns, we introduce LINKX — a strong simple method that admits straightforward minibatch training and inference. Extensive experimental results with representative simple methods and GNNs across our proposed datasets show that LINKX achieves state-ofthe-art performance for learning on non-homophilous graphs. Our codes and data are available at https://github.com/CUAI/Non-Homophily-Large-Scale. 1 Introduction Graph learning methods generate predictions by leveraging complex inductive biases captured in the topology of the graph [7]. A large volume of work in this area, including graph neural networks (GNNs), exploits homophily as a strong inductive bias, where connected nodes tend to be similar to each other in terms of labels [46, 3]. Such assumptions of homophily, however, do not always hold true. For example, malicious node detection, a key application of graph machine learning, is known to be non-homophilous in many settings [55, 13, 25, 11]. Further, while new GNNs that work better in these non-homophilous settings have been developed [82, 44, 81, 17, 15, 73, 36, 35, 9, 54], their evaluation is limited to a few graph datasets used by Pei et al. [58] (collected by [61, 66, 48]) that have certain undesirable properties such as small size, narrow range of application areas, and high variance between different train/test splits [82]. Consequently, method scalability has not been thoroughly studied in non-homophilous graph learning. In fact, many non-homophilous techniques frequently require more parameters and computational resources [82, 1, 17], which is neither evident nor detrimental when they are evaluated on very small datasets. Even though scalable graph learning techniques do exist, these methods generally cannot ⇤Equal contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). be directly applied to the non-homophilous setting, as they oftentimes assume homophily in their construction [71, 32, 20, 10]. Non-homophily in graphs also degrades proven graph learning techniques that have been instrumental to strong performance in scalable graph learning. For instance, label propagation, personalized PageRank, and low-pass graph filtering have been used for scalable graph representation learning models, but these methods all assume homophily [71, 32, 20, 10]. Moreover, we give empirical evidence that existing minibatching techniques in graph learning [16, 77] significantly degrade performance in non-homophilous settings. 
In response, we develop a novel model, LINKX, that addresses these concerns; LINKX outperforms existing graph learning methods on large-scale nonhomophilous datasets and admits a simple minibatching procedure that maintains strong performance. To summarize, we demonstrate three key areas of deficiency as mentioned above, namely: (1) that there is a lack of large, high-quality datasets covering different non-homophilous applications, (2) that current graph minibatching techniques and scalable methods do not work well in non-homophilous settings, and (3) that prior non-homophilous methods are not scalable. To these ends, this paper makes the following contributions: Dataset Collection and Benchmarking. We collect a diverse series of large, non-homophilous graph datasets and define new node features and tasks for classification. These datasets are substantially larger than previous non-homophilous datasets, span wider application areas, and capture different types of complex label-topology relationships. With these proposed datasets, we conduct extensive experiments with 14 graph learning methods and 3 graph minibatching techniques that are broadly representative of the graph machine learning model space. Analyzing Scalable Methods and Minibatching. We analyze current graph minibatching techniques like GraphSAINT [77] in non-homophilous settings, showing that they substantially degrade performance in experiments. Also, we show empirically that scalable methods for graph learning like SGC and C&S [71, 32] do not perform well in non-homophilous settings — even though they achieve state-of-the-art results on many homophilous graph benchmarks. Finally, we demonstrate that existing non-homophilous methods often suffer from issues with scalability and performance in large non-homophilous graphs, in large part due to a lack of study of large-scale non-homophilous graph learning. LINKX: a strong, simple method. We propose a simple method LINKX that achieves excellent results for non-homophilous graphs while overcoming the above-mentioned minibatching issues. LINKX works by separately embedding the adjacency A and node features X, then combining them with multilayer perceptrons and simple transformations, as illustrated in Figure 1. It generalizes node feature MLP and LINK regression [79], two baselines that often work well on non-homophilous graphs. This method is simple to train and evaluate in a minibatched fashion, and does not face the performance degradation that other methods do in the minibatch setting. We develop the model and give more details in Section 4. 2 Prior Work Graph Representation Learning. Graph neural networks [28, 38, 69] have demonstrated their utility on a variety of graph machine learning tasks. Most GNNs are constructed by stacking layers that propagate transformed node features, which are then aggregated via different mechanisms. The neighborhood aggregation used in many existing GNNs implicitly leverage homophily, so they often fail to generalize on non-homophilous graphs [82, 6]. Indeed, a wide range of GNNs operate as low-pass graph filters [53, 71, 6] that smooth features over the graph topology, which produces similar representations and thus similar predictions for neighboring nodes. Scalable methods. A variety of scalable graph learning methods have been developed for efficient computation in larger datasets [77, 16, 75, 28, 71, 32, 20, 10]. Many of these methods explicitly make use of an assumption of homophily in the data [71, 32, 20, 10]. 
By leveraging this assumption, several simple, inexpensive models are able to achieve state-of-the-art performance on homophilic datasets [71, 32]. However, these methods are unable to achieve comparable performance in non-homophilous settings, as we show empirically in Section 5. Graph sampling. As node representations depend on other nodes in the graph, there are no simple minibatching techniques in graph learning as there are for i.i.d. data. To scale to large graphs, one line of work samples nodes that are used in each layer of a graph neural network [28, 75, 14]. Another family of methods samples subgraphs of an input graph, then passes each subgraph through a GNN to make a prediction for each node of the subgraph [16, 76, 77]. While these methods are useful for scalable graph learning, we show that they substantially degrade performance in our non-homphilous experiments (see Section 5). Non-Homophilous methods. Various GNNs have been proposed to achieve higher performance in low-homophily settings [82, 44, 81, 17, 15, 73, 36, 35]. Geom-GCN [58] introduces a geometric aggregation scheme, MixHop [1] proposes a graph convolutional layer that mixes powers of the adjacency matrix, GPR-GNN [17] features learnable weights that can be positive and negative in feature propagation, GCNII [15] allows deep graph convolutional networks with relieved oversmoothing, which empirically performs better in non-homophilous settings, and H2GCN [82] shows that separation of ego and neighbor embeddings, aggregation in higher-order neighborhoods, and the combination of intermediate representations improves GNN performance in low-homophily. There are several recurring design decisions across these methods that appear to strengthen performance in non-homophilous settings: using higher-order neighborhoods, decoupling neighbor information from ego information, and combining graph information at different scales [82]. Many of these design choices require additional overhead (see Section 4.3), thus reducing their scalability. Datasets. The widely used citation networks Cora, Citeseer, and Pubmed [62, 74] are highly homophilous (see Appendix A) [82]. Recently, the Open Graph Benchmark [31] has provided a series of datasets and leaderboards that improve the quality of evaluation in graph representation learning; however, most of the node classification datasets tend to be homophilous, as noted in past work [82] and expanded upon in Appendix A.2. A comparable set of high-quality benchmarks to evaluate non-homophilous methods does not currently exist. 3 Datasets for Non-Homophilous Graph Learning 3.1 Currently Used Datasets The most widely used datasets to evaluate non-homophilous graph representation learning methods were used by Pei et al. [58] (and collected by [61, 66, 48]); see our Table 1 for statistics. However, these datasets have fundamental issues. First, they are very small — the Cornell, Texas, and Wisconsin datasets have between 180-250 nodes, and the largest dataset Actor has 7,600 nodes. In analogy to certain pitfalls of graph neural network evaluation on small (homophilic) datasets discussed in [63], evaluation on the datasets of Pei et al. [58] is plagued by high variance across different train/test splits (see results in [82]). The small size of these datasets may tend to create models that are more prone to overfitting [21], which prevents the scaling up of GNNs designed for non-homophilous settings. 
Peel [57] also studies node classification on network datasets with various types of relationships between edges and labels. However, they only study methods that act on graph topology, and thus their datasets do not necessarily have node features. We take inspiration from their work, by testing on Pokec and Facebook networks with node features that we define, and by introducing other year-prediction tasks on citation networks that have node features. 3.2 An Improved Homophily Measure Various metrics have been proposed to measure the homophily of a graph. However, these metrics are sensitive to the number of classes and the number of nodes in each class. Let G = (V, E) be a graph with n nodes, none of which are isolated. Further let each node u ∈ V have a class label k_u ∈ {0, 1, ..., C − 1} for some number of classes C, and denote by C_k the set of nodes in class k. The edge homophily [82] is the proportion of edges that connect two nodes of the same class: $h = \frac{|\{(u,v) \in E : k_u = k_v\}|}{|E|}$. (1) Another related measure is what we call the node homophily [58], defined as $\frac{1}{|V|}\sum_{u \in V} \frac{d_u^{(k_u)}}{d_u}$, in which $d_u$ is the number of neighbors of node u, and $d_u^{(k_u)}$ is the number of neighbors of u that have the same class label. We focus on the edge homophily (1) in this work, but find that node homophily tends to have similar qualitative behavior in experiments. The sensitivity of edge homophily to the number of classes and size of each class limits its utility. We consider a null model for graphs in which the graph topology is independent of the labels; suppose that nodes with corresponding labels are fixed, and include edges uniformly at random in the graph that are independent of node labels. Under this null model, a node u ∈ V would be expected to have $d_u^{(k_u)}/d_u \approx |C_{k_u}|/n$ as the proportion of nodes of the same class that they connect to [3]. For a dataset with C balanced classes, we would thus expect the edge homophily to be around 1/C, so the interpretation of the measure depends on the number of classes. Also, if classes are imbalanced, then the edge homophily may be misleadingly large. For instance, if 99% of nodes were of one class, then most edges would likely be within that same class, so the edge homophily would be high, even when the graph is generated from the null model where labels are independent of graph topology. Thus, the edge homophily does not capture deviation of the label distribution from the null model. We introduce a metric that better captures the presence or absence of homophily. Unlike the edge homophily, our metric measures excess homophily that is not expected from the above null model where edges are randomly wired. Our metric does not distinguish between different non-homophilous settings (such as heterophily or independent edges); we believe that there are too many degrees of freedom in non-homophilous settings for a single scalar quantity to be able to distinguish them all. Our measure is given as: $\hat{h} = \frac{1}{C-1} \sum_{k=0}^{C-1} \left[ h_k - \frac{|C_k|}{n} \right]_+$, (2) where $[a]_+ = \max(a, 0)$, and $h_k$ is the class-wise homophily metric $h_k = \frac{\sum_{u \in C_k} d_u^{(k_u)}}{\sum_{u \in C_k} d_u}$. (3) Note that $\hat{h} \in [0, 1]$, with a fully homophilous graph (in which every node is only connected to nodes of the same class) having $\hat{h} = 1$. Since each class-wise homophily metric $h_k$ only contributes positive deviations from the null expected proportion $|C_k|/n$, the class-imbalance problem is substantially mitigated. Also, graphs in which edges are independent of node labels are expected to have $\hat{h} \approx 0$, for any number of classes.
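For concreteness, the edge homophily of Eq. (1) and the class-insensitive measure of Eqs. (2)-(3) can be computed as in the following NumPy sketch. This is our own illustration rather than code from the paper's release: it assumes an undirected graph supplied as a (2, |E|) edge-index array listing each edge in both directions, with integer class labels per node.

```python
import numpy as np

def edge_homophily(edge_index, labels):
    """Eq. (1): fraction of edges whose endpoints share a class label."""
    src, dst = edge_index
    return float(np.mean(labels[src] == labels[dst]))

def class_insensitive_homophily(edge_index, labels, num_classes):
    """Eqs. (2)-(3): average positive excess of class-wise homophily h_k over |C_k| / n."""
    src, dst = edge_index
    n = labels.shape[0]
    deg = np.bincount(src, minlength=n)  # d_u: degree of each node
    same = np.bincount(src, weights=(labels[src] == labels[dst]).astype(float), minlength=n)
    h_hat = 0.0
    for k in range(num_classes):
        mask = labels == k
        if deg[mask].sum() == 0:
            continue
        h_k = same[mask].sum() / deg[mask].sum()   # Eq. (3)
        h_hat += max(h_k - mask.sum() / n, 0.0)    # only positive deviations count
    return h_hat / (num_classes - 1)
```

On a graph drawn from the null model this returns a value near zero regardless of the number of classes, while a fully homophilous graph returns a value near one.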
Our measure ĥ measures presence of homophily, but does not distinguish between the many types of possibly non-homophilous relationships. This is reasonable given the diversity of non-homophilous relationships. For example, non-homophily can imply independence of edges and classes, extreme heterophily, connections only among subsets of classes, or certain chemically / biologically determined relationships. Indeed, these relationships are very different, and are better captured by more than one scalar quantity, such as the compatibility matrices presented in the appendix. Further discussion is given in Appendix A. 3.3 Proposed Datasets Here, we detail the non-homophilous datasets that we propose for graph machine learning evaluation. Our datasets and tasks span diverse application areas. Penn94 [67], Pokec [41], genius [43], and twitch-gamers [60] are online social networks, where the task is to predict reported gender, certain account labels, or use of explicit content on user accounts. For the citation networks arXiv-year [31] and snap-patents [42, 41] the goal is to predict year of paper publication or the year that a patent is granted. The dataset wiki consists of Wikipedia articles, where the goal is to predict total page views of each article. Detailed descriptions about the graph structure, node features, node labels, and licenses of each dataset are given in Appendix D.2. Most of these datasets have been used for evaluation of graph machine learning models in past work; we make adjustments such as modifying node labels and adding node features that allow for evaluation of GNNs in non-homophilous settings. We define node features for Pokec, genius, and snap-patents, and we also define node labels for arXiv-year, snap-patents, and genius. Additionally, we crawl and clean the large-scale wiki dataset — a new Wikipedia dataset where the task is to predict page views, which is non-homophilous with respect to the graph of articles connected by links between articles (see Appendix D.3). This wiki dataset has 1,925,342 nodes and 303,434,860 edges, so training and inference require scalable algorithms. Basic dataset statistics are given in Table 2. Note the substantial difference between the size of our datasets and those of Pei et al. [58] in Table 1; our datasets have up to 384x more nodes and 1398x more edges. The homophily measures along with the lower empirical performance of homophilyassuming models (Section 5) and examination of compatibility matrices (Appendix A) show that our datasets are indeed non-homophilous. As there is little study in large-scale non-homophilous graph learning, our proposed large datasets strongly motivate the need for developing a new, scalable approach that can accurately learn on non-homophilous graphs. 4 LINKX: A New Scalable Model In this section, we introduce our novel model, LINKX, for scalable node classification in nonhomophilous settings. LINKX is built out of multilayer perceptrons (MLPs) and linear transformations, thus making it simple and scalable. It also admits simple row-wise minibatching procedures that allow it to perform well on large non-homophilous graphs. As a result, LINKX is able to circumvent aforementioned issues of graph minibatching and non-homophilous GNNs in large-scale non-homophilous settings. 4.1 Motivation from two simple baselines Here, we detail two simple baselines for node classification that we build on to develop LINKX. MLP on node features. 
A naïve method for node classification is to ignore the graph topology and simply train an MLP on node features. For the same reason that the graph topology has more complicated relationships with label distributions in non-homophilous graphs, many GNNs are not able to effectively leverage the graph topology in these settings. Thus, MLPs can actually perform comparatively well on non-homophilous graphs — achieving higher or approximately equal performance to various GNNs [82]. LINK regression on graph topology. On the other extreme, there is LINK [79] — a simple baseline that only utilizes graph topology. In particular, we consider LINK regression, which trains a logistic regression model in which each node's features are taken from a column of the adjacency matrix. Letting A ∈ {0, 1}^{n×n} be the binary adjacency matrix of the graph, and W ∈ R^{c×n} be a learned weight matrix, LINK computes class probabilities as $Y = \mathrm{softmax}(WA)$. (4) Let u ∈ {1, ..., n} be a specific node, and let k ∈ {1, ..., c} be a specific class. Then, expanding the matrix multiplication, the log-odds of node u belonging to class k is given by $(WA)_{ku} = \sum_{v \in \mathcal{N}(u)} W_{kv}$, (5) where $\mathcal{N}(u)$ contains the 1-hop neighbors of u. In other words, the logit is given by the sum of weights $W_{kv}$ across the 1-hop neighbors of u. If a specific node v has many neighbors of class k, then $W_{kv}$ is probably large, as we would expect with a high probability that any neighbor of v is of class k. In this sense, LINK is like a 2-hop method: for a given node u, the probability of being in a given class is related to the class memberships of u's 2-hop neighbors in $\mathcal{N}(v)$ for each neighbor v ∈ $\mathcal{N}(u)$. Related interpretations of LINK as a method acting on 2-hop paths between nodes are given by Altenburger and Ugander [3]. Though it is simple and has been overlooked in the recent non-homophilous GNN literature, LINK has been found to perform well in certain node classification tasks like gender prediction in social networks [3, 4]. A major reason why LINK does well in many settings is exactly because it acts as a 2-hop method. For example, while 1-hop neighbors are often not so informative for gender prediction in social networks due to lack of homophily, 2-hop neighbors are very informative due to so-called “monophily,” whereby many nodes have extreme preferences for connecting to a certain class [3]. Beyond just gender prediction, we show in Section 5 that LINK empirically outperforms many models across the various application areas of the non-homophilous datasets we propose. 4.2 LINKX We combine these two simple baselines through simple linear transformations and component-wise nonlinearities. Let X ∈ R^{D×n} denote the matrix of node features with input dimension D, and let $[h_1; h_2]$ denote concatenation of vectors $h_1$ and $h_2$. Then our model outputs predictions Y through the following mapping: $h_A = \mathrm{MLP}_A(A) \in \mathbb{R}^{d \times n}$, (6) $h_X = \mathrm{MLP}_X(X) \in \mathbb{R}^{d \times n}$, (7) $Y = \mathrm{MLP}_f\big(\sigma(W[h_A; h_X] + h_A + h_X)\big)$, (8) in which d is the hidden dimension, W ∈ R^{d×2d} is a weight matrix, and σ is a component-wise nonlinearity (which we take to be ReLU). We call our model LINKX, as it extends LINK with node feature information from the matrix X. A diagram of LINKX is given in Figure 1. First, LINKX computes hidden representations $h_A$ of the adjacency (extending LINK) and $h_X$ of the feature matrix (as in node-feature MLPs).
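A minimal PyTorch sketch of the mapping in Eqs. (6)-(8) is given below. It is our own illustration rather than the authors' released implementation: tensors are node-major (one row per node) so that a batch of adjacency rows, which for an undirected graph coincide with the columns used by LINK, can be fed directly, and the layer count and hidden size are placeholders.

```python
import torch
import torch.nn as nn

class LINKX(nn.Module):
    """Sketch of Eqs. (6)-(8): separate MLPs embed the adjacency and the node features,
    which are mixed by a linear map with additive skip connections and a final MLP."""
    def __init__(self, num_nodes, in_dim, hidden_dim, num_classes, num_layers=2):
        super().__init__()
        def mlp(d_in, d_out):
            layers, d = [], d_in
            for _ in range(num_layers - 1):
                layers += [nn.Linear(d, hidden_dim), nn.ReLU()]
                d = hidden_dim
            layers.append(nn.Linear(d, d_out))
            return nn.Sequential(*layers)
        self.mlp_A = mlp(num_nodes, hidden_dim)   # Eq. (6); acts on adjacency rows
        self.mlp_X = mlp(in_dim, hidden_dim)      # Eq. (7); acts on node features
        self.W = nn.Linear(2 * hidden_dim, hidden_dim)
        self.mlp_f = mlp(hidden_dim, num_classes)

    def forward(self, A_rows, X_rows):
        # A_rows: (batch, num_nodes) rows of A; X_rows: (batch, in_dim) node features
        hA = self.mlp_A(A_rows)
        hX = self.mlp_X(X_rows)
        h = torch.relu(self.W(torch.cat([hA, hX], dim=-1)) + hA + hX)
        return self.mlp_f(h)                      # Eq. (8); logits, softmax applied in the loss
```

In practice the first linear layer of MLP_A can be applied as a sparse-dense matrix product over the full adjacency, which is what makes the O(d|E|) cost discussed in Section 4.3 attainable.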
Then it combines these hidden representations through a linear transform W of their concatenation, with skip connections that add back in hA and hX to better preserve pure adjacency or node feature information. Finally, it puts this combined representation through a non-linearity and another MLP to make a prediction. Separating then mixing adjacency and feature information. LINKX separately embeds the adjacency A to hA and the features X into hX before mixing them for a few reasons. First, we note that this design is reminiscent of fusion architectures in multimodal networks, where data from different modalities are processed and combined in a neural network [24, 78]. In our setting, we can view adjacency information and node feature information as separate modalities. Since node feature MLPs and LINK do well independently on different datasets, this allows us to preserve their individual performance if needed. Ignoring hX information is similar to just using LINK, and ignoring hA information is just using an node feature MLP. Still, to preserve the ability to just learn a similar mapping to LINK or to a node feature MLP, we find that having the additive skip connections helps to get performance at least as good as either baseline. Our initial empirical results showed that simply concatenating adjacency and node features as input to a network does worse overall empirically (see Appendix C.1). There are also computational benefits to our design choices. Embedding A is beneficial for depth as adding more layers to the MLPs only gives an O(d2) cost — depending only on the hidden dimension d — and thus does not scale in the number of edges |E| as when adding layers to message-passing GNNs. This is because the graph information in A is already compressed to hidden feature vectors after the first linear mapping of MLPA, and we do not need to propagate along the graph in later steps. Moreover, this enables a sparse-dense matrix product to compute the first linear mapping of MLPA on A, which greatly increases efficiency as A is typically very sparse for real-world graphs. Separate embeddings are key here, as this would not be possible if we for instance concatenated A and X when X is large and dense. Simple minibatching. Message-passing GNNs must take graph topology into account when minibatching with techniques such as neighbor sampling, subgraph sampling, or graph partitioning. However, LINKX does not require this, as it utilizes graph information solely through defining adjacency matrix columns as features. Thus, we can train LINKX with standard stochastic gradient descent variants by taking i.i.d. samples of nodes along with the corresponding columns of the adjacency and feature matrix as features. This is much simpler than the graph minibatching procedures for message-passing GNNs, which require specific hyperparameter choices, have to avoid exponential blowup of number of neighbors per layer, and are generally more complex to implement [77]. In Section 5.3, we use the simple LINKX minibatching procedure for large-scale experiments that show that LINKX with this minibatching style outperforms GNNs with graph minibatching methods. This is especially important on the scale of the wiki dataset, where none of our tested methods — other than MLP — is capable of running on a Titan RTX GPU with 24 GB GPU RAM (see Section 5). 
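The i.i.d. node minibatching described above fits in a few lines; the following sketch (our own, with hypothetical helper names, assuming a model with the two-argument forward used in the sketch above) samples training nodes uniformly, slices the corresponding rows of a SciPy CSR adjacency and of the feature matrix, and takes standard Adam steps. For very large graphs such as wiki, the sliced adjacency rows should be kept sparse and pushed through a sparse-dense product in the first layer rather than densified.

```python
import torch
import torch.nn.functional as F

def train_linkx(model, A_csr, X, y, train_idx, epochs=500, batch_size=4096, lr=1e-2, device="cpu"):
    """I.i.d. node minibatching for LINKX: no neighbor sampling, subgraph sampling,
    or graph partitioning is needed, since each node's graph information is its adjacency row."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        perm = train_idx[torch.randperm(len(train_idx))]
        for i in range(0, len(perm), batch_size):
            idx = perm[i:i + batch_size]
            A_batch = torch.tensor(A_csr[idx.numpy()].toarray(), dtype=torch.float32, device=device)
            X_batch = X[idx].to(device)
            opt.zero_grad()
            loss = F.cross_entropy(model(A_batch, X_batch), y[idx].to(device))
            loss.backward()
            opt.step()
```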
4.3 Complexity Analysis Using the above notation, a forward pass of LINKX has a time complexity of $O(d|E| + nd^2L)$, in which d is the hidden dimension (which we assume to be on the same order as the input feature dimension D), L is the number of layers, n is the number of nodes, and |E| is the number of edges. We require an O(d|E|) cost for the first linear mapping of A and an $O(d^2)$ cost per layer for MLP operations on hidden features, for L total layers and each of n nodes. As mentioned above, message passing GNNs have to propagate using the adjacency in each layer, so they have an L|E| term in the complexity. For instance, an L-layer GCN [38] with d hidden dimensions has $O(dL|E| + nd^2L)$ complexity, as it costs O(d|E|) to propagate features in each layer, and $O(nd^2)$ to multiply by the weight matrix in each layer. Non-homophilous methods often make modifications to standard architectures that increase computational cost, such as using higher-order neighborhoods or using additional hidden embeddings [82]. For instance, the complexity of MixHop [1] is $O(K(dL|E| + nd^2L))$, which has an extra factor K that is the number of adjacency powers to propagate with. The complexity of GCNII [15] is asymptotically the same as that of GCN, but in practice it requires more computations per layer due to residual connections and linear combinations, and it also often achieves best performance with a large number of layers L. H2GCN [82] is significantly more expensive due to its usage of strict two-hop neighborhoods, which requires it to form the squared adjacency $A^2$. This makes the memory requirements intractable even for medium-sized graphs (see Section 5). 5 Experiments We conduct two sets of experiments for node classification on our proposed non-homophilous datasets. One set of experiments does full-batch gradient descent training for all applicable methods. This of course limits the size of each model, as the large datasets require substantial GPU memory to train on. Our other set of experiments uses minibatching methods. As all graph-based methods run out of memory on the wiki dataset, even on 24 GB GPUs, we only include wiki results in the minibatching section. In all settings, our LINKX model matches or outperforms other methods. 5.1 Experimental Setup Methods. We include both methods that are graph-agnostic and node-feature-agnostic as simple baselines. The node-feature-agnostic models of two-hop label propagation [57] and LINK (logistic regression on the adjacency matrix) [79] have been found to perform well in various non-homophilous settings, but they have often been overlooked by recent graph representation learning work. Also, we include SGC [71] and C&S [32] as simple, scalable methods that perform well on homophilic datasets. We include a two-hop propagation variant of C&S in analogy with two-step label propagation. In addition to representative general GNNs, we also include GNNs recently proposed for non-homophilous settings. The full list of methods is: Only node features: MLP [26]. Only graph topology: label propagation (standard and two-hop) [80, 57], LINK [79]. Simple methods: SGC [71], C&S [32] and their two-hop variants. General GNNs: GCN [38], GAT [69], jumping knowledge networks (GCNJK, GATJK) [72], and APPNP [39]. Non-homophilous methods: H2GCN [82], MixHop [1], GPR-GNN [17], GCNII [15], and LINKX (ours). Minibatching methods. We also evaluate GNNs with various minibatching methods.
We take GCNJK [72] and MixHop [1] as our base models for evaluation, as they are representative of many GNN design choices and MixHop performs very well in full batch training. As other minibatching methods are trickier to make work with these models, we use the Cluster-GCN [16] and GraphSAINT [77] minibatching methods, which sample subgraphs. We include both the node based sampling and random walk based sampling variants of GraphSAINT. We compare these GNNs with MLP, LINK, and our LINKX, which use simple i.i.d. node minibatching. Training and evaluation. Following other works in non-homophilous graph learning evaluation, we take a high proportion of training nodes [82, 58, 73]; we run each method on the same five random 50/25/25 train/val/test splits for each dataset. All methods requiring gradient-based optimization are run for 500 epochs, with test performance reported for the learned parameters of highest validation performance. We use ROC-AUC as the metric for the class-imbalanced genius dataset (about 80% of nodes are in the majority class), as it is less sensitive to class-imbalance than accuracy. For other datasets, we use classification accuracy as the metric. Further experimental details are in Appendix B. 5.2 Full-Batch Results Table 3 lists the results of each method across the datasets that we propose. Our datasets reveal several important properties of non-homophilous node classification. Firstly, the stability of performance across runs is better for our datasets than those of Pei et al. [58] (see [82] results). Secondly, as suggested by prior theory and experiments [82, 1, 17], the non-homophilous GNNs usually do well — though not necessarily on every dataset. The core assumption of homophily in SGC and C&S that enables them to be simple and efficient does not hold on these non-homophilous datasets, and thus the performance of these methods is typically relatively low. Still, as expected, two-hop variants generally improve upon their one-hop counter-parts in these low-homophily settings. One consequence of using larger datasets for benchmarks is that the tradeoff between scalability and learning performance of non-homophilous methods has become starker, with some methods facing memory issues. This tradeoff is especially important to consider in light of the fact that many scalable graph learning methods rely on implicit or explicit homophily assumptions [71, 32, 20, 10], and thus face issues when used in non-homophilous settings. Finally, LINKX achieves superior performance on all datasets, taking advantage of LINK’s power, while also being able to utilize node features where they provide additional information. 5.3 Minibatching Results Our experimental results for minibatched methods on our proposed datasets are in Table 4. Since GraphSAINT does not partition the nodes of the graph into subgraphs that cover all nodes, we test on the full input graph for the smaller datasets and uniformly random partitions of the graph into 10 induced subgraphs for the larger datasets. First, we note that both Cluster-GCN and GraphSAINT sampling lead to performance degradation for these methods on our proposed non-homophilous datasets. When compared to the full-batch training results of the previous section, classification accuracy is typically substantially lower. Further experiments in Appendix C.2 give evidence that the performance degradation is often more substantial in non-homophilous settings, and provides possible explanations for why this may be the case. 
On the other hand, LINKX does not suffer much performance degradation with the simple i.i.d. node minibatching technique. In fact, it matches or outperforms all methods in this setting, often by a wide margin. Though LINK performs on par with LINKX in arXiv-year and pokec, our LINKX model significantly outperforms it on other datasets, again due to LINKX’s ability to integrate node feature information. We again stress that the LINKX minibatching is very simple to implement, yet it still substantially outperforms other methods. Consequently, LINKX is generally well-suited for scalable node classification across a broad range of non-homophilous settings, surpassing even specially designed non-homophilous GNNs with current graph minibatching techniques. 6 Discussion and Conclusion In this paper, we propose new, high-quality non-homophilous graph learning datasets, and we benchmark simple baselines and representative graph representation learning methods across these datasets. Further, we develop LINKX: a strong, simple, and scalable method for non-homophilous classification. Our experiments show that LINKX significantly outperforms other methods on our proposed datasets, thus providing one powerful method in the underexplored area of scalable learning on non-homophilous graphs. We hope that our contributions will provide researchers with new avenues of research in learning on non-homophilous graphs, along with better tools to test models and evaluate utility of new techniques. While we do find utility in our proposed datasets and LINKX model, this work is somewhat limited by only focusing on transductive node classification. This setting is the most natural for studying performance in the absence of homophily, since here we define homophily in terms of the node labels, and previous non-homophilous GNN work using the Pei et al. [58] data also studies this setting exclusively [82, 17]. Using other Facebook 100 datasets besides Penn94 [67] would allow for inductive node classification, but LINKX does not directly generalize to this setting. Our proposed datasets and model LINKX could be used for link prediction, but this is left for future work. Broader Impact. Fundamental research in graph learning on non-homophilous graphs has the potential for positive societal benefit. As a major application, it enables malicious node detection techniques in social networks and transaction networks that are not fooled by fraudsters’ connections to legitimate users and customers. This is a widely studied task, and past works have noted that non-homophilous structures are present in many such networks [11, 25, 55]. We hope that this paper provides insight on the homophily limitations of existing scalable graph learning models and help researchers design scalable models that continue to work well in the non-homophilous regime, thus improving the quality of node classification on graphs more broadly. As our proposed datasets have diverse structures and our model performs well across all of these datasets, the potential for future application of our work to important non-homophilous tasks is high. Nevertheless, our work could also have potential for different types of negative social consequences. Nefarious behavior by key actors could be one source of such consequences. Nonetheless, we expect that the actors that can make use of large-scale social networks for gender prediction as studied in our work are limited in number. 
Actors with both the capability and incentive to perform such operations probably mostly consist of entities with access to large social network data such as social media companies or government actors with auxiliary networks [50]. Smaller actors can perform certain attacks, but this may be made more difficult by resource requirements such as the need for certain external information [50] or the ability to add nodes and edges before an anonymized version of a social network is released [5]. Furthermore, additional actors could make use of deanonymization attacks [30, 49, 50] to reveal user identities in supposedly anonymized datasets. Also, accidental consequences and implicit biases are a potential issue, even if the applications of the learning algorithms are benign and intended to benefit society [47]. Performance of algorithms may vary substantially between intersectional subgroups of subjects — as in the case of vision-based gender predictors [12] (and some have questioned the propriety of vision-based gender classifiers altogether). Thus, there may be disparate effects on different populations, so care should be taken to understand the impact of those differences across subgroups. Moreover, large datasets require computing resources, so projects can only be pursued by large entities at the possible expense of the individual and smaller research groups [8]. This is alleviated by the fact that our experiments are each run on one GPU, and hence have significantly less GPU computing requirements than much current deep learning research. Thus, smaller research groups and independent researchers should find our work beneficial, and should be able to build on it. Finally, the nature of collection of online user information also comes with notable ethical concerns. Common notice-and-consent policies are often ineffective in actually protecting user privacy [52]. Indeed, users may not actually have much choice in using certain platforms or sharing data due to social or economic reasons. Also, users are generally unable to fully read and understand all of the different privacy policies that they come across, and may not understand the implications of having their data available for long periods of time to entities with powerful inference algorithms. Furthermore, people may rely on obscurity for privacy [29], but this assumption may be ignored in courts of law, and it may be directly broken when data leaks or is released in aggregated form without sufficient privacy protections. Overall, while we believe that our work will benefit machine learning research and enable positive applications, we must still be aware of possible negative consequences. Acknowledgements We thank Abhay Singh, Austin Benson, and Horace He for insightful discussions. We also thank the rest of Cornell University Artificial Intelligence for their support and discussion. We thank Facebook AI for funding equipment that made this work possible.
1. What are the contributions of the paper regarding graph methods and datasets? 2. What are the concerns regarding the proposed datasets, particularly in terms of potential misuse and unfair treatment of people? 3. How does the proposed method, LINKX, differ from the previous LINK method, and what are its advantages in scaling to large datasets? 4. What are the limitations of the empirical comparison, specifically regarding the lack of performance evaluation of LINKX on other datasets? 5. How does the reviewer assess the overall quality and impact of the paper, and what suggestions do they have for improving it?
Summary Of The Paper Review
Summary Of The Paper The paper studies the performance of graph methods on heterophilous datasets. In particular, they propose several new graph datasets, a method for such datasets, and an evaluation of many graph methods on these new datasets. The paper is written in a clear and thorough manner, there is an exhaustive list of references and introduction makes a good job of motivating the problems studied in the paper. Review The paper's contributions can be split into three parts: new datasets, new method, and empirical comparison. I have strong concerns about the datasets and their possible misuse in the future. In particular, 4 of the proposed datasets are social networks or alike, which could be used to train machine learning models for assessing properties of individuals or the communities involved in the network. Besides, the tasks of predicting properties such as gender could lead to the situation when the trained models discriminate certain groups of people. As one of the main paper's contributions are the new datasets, I am worried that such datasets could be used in many other future related works that would eventually lead to unfair treatment of people by the machine learning models. Creating a precedent in this work, could also lead other works to include datasets that mistreat people's interests. I think there is a related track on benchmark and datasets, where this work can get more support and feedback on the validity of the proposed datasets. The authors propose a new method called LINKX, which builds on a well-known method LINK. However, the previous LINK method was not used with node features and the authors propose to combine LINK embeddings with MLP embeddings on the node features. This allows the algorithm to scale easily to large datasets such as Wiki, whereas GNNs need more complex sampling solutions such as graphSAINT. The empirical comparison is done nicely across the proposed datasets and many graph models including vanilla GNNs, heterophilous GNNs, MLPs, LP, and others. However, I would like to see the performance of LINKX on other datasets, homophilous and heterphilous (see here for example) for the fair comparison. After rebuttal: Given a significant discussion between authors, reviewers, and ethics committee I tend to increase the score from 4 to 6. I still believe this paper would make a better fit to the dataset track of NeurIPS 2021.
NIPS
Title Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN Abstract Reconstructing seeing images from fMRI recordings is an absorbing research area in neuroscience and provides a potential brain-reading technology. The challenge lies in that visual encoding in brain is highly complex and not fully revealed. Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately. Specifically, we decode shape and semantic representations from the lower and higher visual cortex respectively, and merge the shape and semantic information to images by a generative adversarial network (Shape-Semantic GAN). This ’divide and conquer’ strategy captures visual information more accurately. Experiments demonstrate that Shape-Semantic GAN improves the reconstruction similarity and image quality, and achieves the state-of-the-art image reconstruction performance. 1 Introduction Decoding visual information and reconstructing stimulus images from brain activities is a meaningful and attractive task in neural decoding. The fMRI signals, which record the variations in blood oxygen level dependent (BOLD), can reveal the correlation between brain activities and different visual stimuli by monitoring blood oxygen content. Implementing an fMRI-based image reconstruction method can help us understand the visual mechanisms of brain and provide a way to ’read mind’. Previous studies. According to previous studies, the mapping between activities in visual cortex and visual stimulus is supposed to exist [19], and the perceived images are proved to be decodable from fMRI recordings [29, 16, 6]. Early approaches estimate the mapping using linear models such as linear regression [16, 6, 7, 17, 18]. These approaches usually firstly extract specific features from the images, for instance multi-scale local image bases [16] and features of gabor filters [29], then learn a linear mapping from fMRI signals to image features. Linear methods mostly focus on reconstructing low-level features, which is insufficient for reconstructing complex images, such as natural images. After the homogeneity between the hierarchical representations of the brain and deep neural networks (DNNs) was revealed [9], methods based on this finding have achieved great reconstruction performance [23, 28]. Shen et al.[23] used convolutional neural networks (CNN) models to extract image features and learned the mapping from fMRI signals to the CNN-based image features, which successfully reconstructed natural images. Recently, the development of DNN makes it possible to learn nonlinear mappings from brain signals to stimulus images in an end-to-end manner [24, 1, 30, 14, 3]. The DNN-based approaches have remarkably improved the reconstruction performance, such as Encoder-Decoder models [1, 26] and Generative Adversarial Network (GAN) ∗Corresponding authors: Yu Qi and Gang Pan 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. based models [24, 1, 30, 14, 3]. Shen et al. et al. [24] proposed a DNN-based decoder which learned nonlinear mappings from fMRI signals to the seeing images effectively. Beliy et al. et al. [1] learned a bidirectional mapping between fMRI signals and stimulus images using an unsupervised GAN. These recent approaches achieved higher image quality and can reconstruct more natural-looking images compared with linear methods. 
Learning a mapping from fMRI recordings to the corresponding stimulus images is a challenging problem. The difficulty mostly lies in that brain activity in the visual cortex is complex and not fully revealed. Studies have shown that there exists a hierarchical increase in the complexity of representations in visual cortex[5], and study [17] has demonstrated that exploiting information from different visual areas can help improve the reconstruction performance. Simple decoding models without considering the hierarchical information may be insufficient for accurate reconstruction. The hierarchical structure of information encoding in visual areas have been widely studied [8, 2, 5]. On the one hand, activities in the early visual areas show high response to low-level image features like shapes and orientations [10, 20, 25, 15]. On the other hand, anterior visual areas are mostly involved in high-level information processing, and activities in such visual areas show high correlation with the semantic content of stimulus images [9, 5]. Such high-level image features are more categorical and invariant than low-level features in identification or reconstruction [2]. The hierarchical processing in the visual cortex inspired us to decode the low-level and high-level image features from lower visual cortex (LVC) and higher visual cortex (HVC) separately [9]. In this study, we propose a novel method to realize image reconstruction from fMRI signals by decomposing the decoding task into hierarchical subtasks: shape/semantic decoding in lower/higher visual cortex respectively (Figure 1). In shape decoding, we propose a linear model to predict the outline of the core object from the fMRI signals of lower visual cortex. In semantic decoding, we propose to learn effective features with a DNN model to represent high-level information from higher visual cortex activities. Finally, the shape and semantic features are combined as the input to a GAN to generate natural-looking images with the shape and semantic conditions. Data augmentation is employed to supplement the limited fMRI data and improve the reconstruction quality. Experiments are conducted to evaluate the image reconstruction performance of our method in comparison with the state-of-the-art approaches. Results show that the Shape-Semantic GAN model outperforms the leading methods. The main contributions of this work can be summarized as follows: • Instead of directly using end-to-end models to predict seeing images from fMRI signals, we propose to break the complex visual signals into multi-level components and decode each component separately. This ’divide and conquer’ approach can extract visual information accurately. • We propose a linear model based shape decoder and a DNN based semantic decoder, which are capable of decoding shape and semantic information from the lower and higher visual cortex respectively. • We propose a GAN model to merge the decoded shape and semantic information to images, which can generate natural-looking images given shape and semantic conditions. The performance of GAN-based image generation can be further improved by data augmentation technique. 2 Methods The proposed framework is composed of three key components: a shape decoder, a semantic decoder and a GAN image generator. The framework of our approach is illustrated in Figure 1. Let x denote the fMRI recordings and y be the corresponding images perceived by the subjects during experiments. 
Our purpose is to reconstruct each subject’s perceived images y from the corresponding fMRI data x. In our method, we use the shape decoder C to reconstruct the stimulus images’ shapes rsp and the semantic decoder S to extract semantic features rsm from x. The image generator G implemented by GAN is introduced to reconstruct the stimulus images G(rsp, rsm) in the final stage of decoding. 2.1 Dataset We make use of a publicly available benchmark dataset from [23]. In this dataset brain activity data were collected in the form of functional images covering the whole brain. The corresponding stimulus images were selected from ImageNet including 1200 training images and 50 test images from 150 and 50 categories separately. Images in both of the training and test set have no overlap with each other. And each image has 5 or 24 fMRI recordings for training or test respectively. The fMRI signals contain information from different visual areas. Early visual areas like V1,V2 and V3 were defined by the standard retinatopic mapping procedures [4, 22, 23], and V1,V2 and V3 were concatenated as an area named lower visual cortex [9]. Higher visual cortex is composed of regions including parahippocampal place area (PPA), lateral occipital complex (LOC) and fusiform face area (FFA) defined in [9]. 2.2 Shape Decoder In order to obtain the outline of the visual stimulus, we present a shape decoder C to extract low-level image cues from the lower visual cortex based on linear models. Using a simple model to obtain low-level visual features, which has been demonstrated feasible by previous studies [29], can avoid the overfitting risk of complex models. Shape decoder C consists of three base decoders trained for V1, V2, V3 individually and a combiner to merge the results of base shape decoders. The process of shape decoding is described in Figure 2. The stimulus images are firstly preprocessed by shape detection and feature extraction before shape decoder training. Shape detection. First, image matting is conducted on the stimulus images to extract the core objects [13] and remove the interference of other parts in the images. The objects in the stimulus images are extracted based on saliency detection [12] and manual annotation. The results are binarized to eliminate the influence of minor variance in images and emphasize objects over background. By such preprocessing method, the core objects are extracted and some details (like colors or textures) that help little in shape decoding are eliminated. Features extraction. Second, square image patches are presented for feature extraction. Pixel values located in non-overlapping m×m pixels square patches are averaged as the image patch’s value. By representing shapes in contrast-defined patch images, the amount of calculation can be reduced and improve the invariance to small distortions. In our model m = 8 is selected. Model Training. Because V1, V2 and V3 areas have representations for the visual space respectively, we train the base decoders for each of them and use a linear weighting combiner to combine the decoded shapes. The values of the image patches are normalized to [0, 1] and flattened to onedimensional vectors p. The decoded shape vectors p∗k = ck(xk), where ck(xk) is the base shape decoder whose parameter ηk is optimized by: ηk = argmin ηk ‖ck(xk)− pk‖, k = V 1, V 2, V 3, (1) where ηk denotes the weights of base shape decoder ck and k denotes the visual area that the samples belong to. 
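To make the preprocessing and base decoders concrete, here is a small NumPy/scikit-learn sketch, our own illustration with assumed function names rather than the released code: the binarized object mask is averaged over non-overlapping 8×8 patches, and one linear regression per early visual area maps its voxels to the flattened patch image, as in Eq. (1).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def to_patch_vector(binary_shape, m=8):
    """Average non-overlapping m x m patches of a binarized shape image and flatten.
    Assumes the image height and width are divisible by m; output values lie in [0, 1]."""
    h, w = binary_shape.shape
    patches = binary_shape.reshape(h // m, m, w // m, m).mean(axis=(1, 3))
    return patches.reshape(-1)

def fit_base_decoders(x_by_roi, shapes, m=8):
    """Fit Eq. (1): one linear base decoder c_k per area k in {V1, V2, V3}.
    x_by_roi maps an ROI name to an (n_samples, n_voxels) array of fMRI responses."""
    p = np.stack([to_patch_vector(s, m) for s in shapes])   # (n_samples, n_patches)
    return {k: LinearRegression().fit(x, p) for k, x in x_by_roi.items()}
```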
The base decoder $c_k$ is implemented by linear regression and the models are trained for fMRI recordings in V1, V2 and V3 individually. Then a combiner is trained to combine the predicted results $p^*_{V1}$, $p^*_{V2}$ and $p^*_{V3}$: $r_{sp}(i, j) = \sum_{k} w^k_{ij}\, p^*_k(i, j)$, k = V1, V2, V3, (2) where $r_{sp}$ refers to the predicted shapes computed by the combiner, and $r_{sp}(i, j)$ is the pixel value at position (i, j). The results $r_{sp}$ predicted by the combiner are resized to the same size as the stimulus images (256 × 256 pixels). $w^k_{ij}$ is the weight of the combiner for area k at pixel position (i, j), and the combining weights are computed independently for each pixel. 2.3 Semantic Decoder To render semantically meaningful details on shapes, a semantic decoder is used to provide categorical information. Although images can be rendered from shapes alone with a pre-trained GAN model, in practice we find that the results are not always acceptable because of the lack of conditions. The mapping from shapes to real images is not unique in many cases (e.g. a circular shape can be translated into a football/crystal ball/golf ball, and all of these translations are judged correct by the discriminator). Besides, noise retained in shapes will interfere with the reconstruction quality in the absence of other conditions. Therefore reconstructing on the shape condition alone is not sufficient. A semantic context, which guides the GAN model with the image's category, can be helpful when incorporated with the shape features in the training phase. The input to the semantic decoder is the fMRI signal in HVC. HVC covers the regions of LOC, FFA and PPA, whose voxels show significantly high responses to high-level features such as objects, faces or scenes respectively [9]. As shown in Figure 1, a lightweight DNN model is introduced to generate semantic features. The DNN model consists of one input layer (the same size as the input's number of voxels), two hidden layers and one output layer. A tanh activation function is used between the hidden layers and a sigmoid activation function is used for classification. When training the DNN model, the fMRI recordings in HVC are identified by the model to infer their corresponding stimulus images' categories. After the training phase, the DNN model works as a semantic decoder. Since the penultimate layer of the DNN acts as a semantic space supporting the classification task at the output layer [28], the features in this layer are adopted as the semantic representation of the fMRI signals in our method.
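A minimal PyTorch sketch of such a semantic decoder is shown below. The sizes, names, and training details are placeholders rather than the paper's exact configuration, and for simplicity the sketch trains on raw logits with a cross-entropy loss instead of a sigmoid output; the key point is that the penultimate-layer activations serve as the semantic feature r_sm after training.

```python
import torch
import torch.nn as nn

class SemanticDecoder(nn.Module):
    """Lightweight DNN on HVC voxels; the penultimate layer is used as the semantic space."""
    def __init__(self, n_voxels, hidden=512, feat_dim=128, n_categories=150):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_voxels, hidden), nn.Tanh(),
            nn.Linear(hidden, feat_dim), nn.Tanh(),
        )
        self.head = nn.Linear(feat_dim, n_categories)   # trained to identify the stimulus category

    def forward(self, x_hvc):
        return self.head(self.body(x_hvc))              # category logits for training

    def semantic_features(self, x_hvc):
        with torch.no_grad():
            return self.body(x_hvc)                     # r_sm used to condition the generator
```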
By using the U-Net structure, more low-level features can be passed from the input space to the reconstruction space with the help of skip connections without the limitation of the bottleneck. The generator is composed of a pair of symmetrical encoder and decoder. The encoder and decoder have eight convolutional or deconvolutional layers with symmetrical parameters and no downsampling or up-sampling is used. The input layer takes the 256× 256 pixel images (shape images) as input. The bottleneck between encoder and decoder represents the high-level features extracted by convolutional layers in encoder, which is modified to take both of the semantic features rsm and the high-level features as input to the decoder. In this way the generator will be optimized under the constraint of semantic and shape conditions. The discriminator takes the shape rsp and the output of generator together as input and predicts the similarity of high-frequency structures between these two domains, using this similarity to guide the generator training. Let Gθ denotes the U-Net generator and Dφ denotes the discriminator. The generator Gθ and discriminator Dφ have parameters named θ and φ, which are optimized by minimizing the loss function L(θ, φ). The objective of the conditional GAN is composed of two components, which can be described as L(θ, φ) = Ladv(θ, φ) + λimgLimg(θ), (3) where Ladv(θ, φ) and Limg(θ) denote the adversarial loss and image space loss, and λimg define the weight of the image space loss Limg in L(θ, φ). As is inferred in [11], L1 loss is able to accurately capture the low frequencies. The GAN discriminator is designed to model the high-frequency structures. By combining these two terms in the loss function, blurred reconstructions will not be tolerated by the discriminator and low-frequency visual features can also be retained at the same time. The adversarial loss and the image space loss used in optimizing the generator can be expressed as Ladv(θ, φ) = −Ersp,rsm [log(Dφ(rsp, Gθ(rsp, rsm)))], (4) Limg(θ) = Ersp,rsm,y[|y −Gθ(rsp, rsm)|], (5) where rsp, rsm and y refer to shapes, semantic features and stimulus images. During the training phase, gradient descent is computed on Gθ and Dφ alternately. Instead of directly training Gθ to minimize log(1−Dφ(rsp, Gθ(rsp, rsm))), we followed the recommendations in [24] and maximize log(Dφ(rsp, Gθ(rsp, rsm))). The objective of the discriminator is: Ldiscr(θ, φ) = −Ersp,y[logDφ(rsp, y)]− Ersp,rsm [log(1−Dφ(rsp, Gθ(rsp, rsm)))]. (6) When Gθ is being trained, it tries to optimize θ to reduce the distance between generated images Gθ(rsp, rsm) and stimulus images y. It also tries to generate images that share similar high-frequency structure with shapes rsp to confuse Dφ and let Dφ predict Gθ(rsp, rsm) as correct. When Dφ is trained, it tries to optimize φ to distinguish the pairs of {rsp, y} from the pairs of {rsp, Gθ(rsp, rsm)}. Each time one of Gθ or Dφ is trained, the other’s parameters are fixed. 2.5 Data Augmentation Since the size of the fMRI dataset is limited, we propose to improve the image reconstruction performance by data augmentation in GAN training. We sampled the augmented images from the ImageNet dataset. For shape augmentation, the preprocess in Section 2.2 is conducted on augmented images and the contrast-defined, m×m-patch images Rsp represent as shapes of the augmented images. For semantics augmentation, the category-average semantic feature Rsm is computed as a substitute of semantic vector. 
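The objectives above translate directly into code. The following PyTorch sketch assumes a discriminator that outputs a probability in (0, 1) for a (shape, image) pair and uses the paper's weight λ_img = 100; the function names and the numerical-stability epsilon are our own additions.

```python
import torch

def generator_loss(D, fake, r_sp, y, lambda_img=100.0, eps=1e-8):
    """Eqs. (3)-(5): non-saturating adversarial term plus L1 image-space loss."""
    adv = -torch.log(D(r_sp, fake) + eps).mean()   # maximize log D(r_sp, G(r_sp, r_sm))
    img = torch.abs(y - fake).mean()               # L1 term preserves low-frequency content
    return adv + lambda_img * img

def discriminator_loss(D, fake, r_sp, y, eps=1e-8):
    """Eq. (6): real pairs (r_sp, y) versus fake pairs (r_sp, G(r_sp, r_sm))."""
    real = -torch.log(D(r_sp, y) + eps).mean()
    gen = -torch.log(1.0 - D(r_sp, fake.detach()) + eps).mean()
    return real + gen
```

Each training step alternates between these two losses, freezing the other network's parameters as described above.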
2.5 Data Augmentation

Since the size of the fMRI dataset is limited, we improve image reconstruction performance by data augmentation in GAN training. The augmented images are sampled from the ImageNet dataset. For shape augmentation, the preprocessing of Section 2.2 is applied to the augmented images, and the resulting contrast-defined m × m-patch images R_sp serve as the shapes of the augmented images. For semantic augmentation, the category-average semantic feature R_sm is computed as a substitute for the decoded semantic vector; R_sm is defined as the vector obtained by averaging the semantic features of samples annotated with the same category. The shapes and category-average semantic features generated from the augmented images are combined into {R_sp, R_sm} pairs, and these new samples are concatenated with the {r_sp, r_sm} pairs as inputs to Gθ, improving the generalization of Gθ. Note that in our method, image augmentation is conducted only with images that correspond to the same classes as the training images. For reconstruction, about 1.2k augmented natural images are randomly selected from the same image dataset as [23] (ILSVRC2012), and they have no overlap with the training or test set.

2.6 Implementation Details

We implemented the image generator using the PyTorch framework and modified the image translation model provided by [11]. The image generator consists of a U-Net generator G and a discriminator D. In both G and D, the kernel size is (4, 4), the stride is (2, 2) and the padding is 1 for all layers. The generator is composed of 8 parametrically symmetric convolutional/deconvolutional layers with LeakyReLU (slope 0.2) activations. All input images of G and D (the stimulus images and shape images) are resized to (256, 256, 3). In GAN training, minibatch stochastic gradient descent is used, and the Adam solver optimizes the parameters with momentum terms β1 = 0.9 and β2 = 0.999. The initial learning rate is 2 × 10^-4 and the batch size is 10. The weights of the individual loss terms affect the quality of the reconstructed images; in our experiments we set λ_img = 100 to balance sharpness against similarity to the stimulus images. The image generator is trained for 200 epochs in total, with learning rate decay starting at epoch 120.

3 Results

To evaluate the quality of the reconstructed images, we conduct both visual and quantitative comparisons. For the quantitative comparison, pairwise similarity comparison analysis, introduced in [24], is used to measure the quality of the reconstructed images. Each reconstructed image is compared with two candidate images (the ground-truth image and a randomly selected test image) to test whether its correlation with the ground-truth image is higher. In our experiments the structural similarity index (SSIM) [27] is used as the correlation measure; SSIM measures the similarity of local structure between the reconstructed and original images over spatially close pixels [24].
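The pairwise similarity comparison described above can be sketched as follows, using scikit-image's SSIM. This is a minimal illustration under the assumption of grayscale float images in [0, 1]; for simplicity it enumerates all distractor images exhaustively rather than drawing one at random per trial.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def pairwise_accuracy(recons, truths):
    """Fraction of (reconstruction, distractor) pairs in which the reconstruction
    is more similar (by SSIM) to its own ground truth than to the distractor."""
    wins, total = 0, 0
    for i, rec in enumerate(recons):
        s_true = ssim(rec, truths[i], data_range=1.0)
        for j, other in enumerate(truths):
            if j == i:
                continue
            wins += s_true > ssim(rec, other, data_range=1.0)
            total += 1
    return wins / total

# recons, truths: lists of 2-D numpy float arrays (grayscale images) of equal size.
```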
3.1 Comparison of Image Reconstruction Performance

Here we compare the image reconstruction performance with existing approaches. The competitors include [1], [24], and [23]. For the visual comparison, we directly use the reconstructed images reported in the papers of [1], [24], and [23], respectively. For the quantitative comparison, we use the reported pairwise similarity with SSIM for [24]. For [1], we run the code published along with the paper and use the same data augmentation images as our approach. All pairwise similarity results are averaged over five runs to mitigate the effect of randomness. Samples of reconstructed images are presented in Figure 3, in comparison with existing approaches. Similar to [23], the test fMRI samples corresponding to the same category are averaged across trials to improve the fMRI signals' signal-to-noise ratio (SNR). The results are reconstructed from the test fMRI recordings of three subjects (150 samples in total), and the performance of this model is compared with the leading methods on the same dataset [23].

In the visual comparison, we compare our reconstructed images with the methods of [23, 24] and [1] in Figure 3a. Owing to the U-Net model trained with semantic information, our model's reconstructed images are vivid and close to the real stimulus images in color. Also, under the constraint of the shape conditions, the reconstructed images share similar structures with the original images. In the quantitative comparison, we conduct the pairwise similarity comparison based on SSIM against the existing methods of [1] and [24]. The comparison of the different approaches on the three subjects' fMRI recordings is displayed in Figure 3b. Results show that our method performs slightly better than [1] (ours 65.3% vs. 64.3% on average) and outperforms [24] (62.9% on average).

3.2 Decoding Performance of Different ROIs

In this experiment, we evaluate the decoding performance of shape/semantic information from different ROIs (regions of interest). Forty samples from the original training set are reserved for validation and the rest are used to train the decoders. To compare the semantic representation performance of different ROIs, the semantic decoders (DNN models) are trained on fMRI signals from different visual areas individually. The trained DNN models are used to decode semantic features from the validation fMRI samples and identify their corresponding categories. To facilitate the comparison, we use 10-category rough labels in this section (see supplementary materials). The identification accuracy of semantic representations decoded from different ROIs is compared in Figure 4a. Results show that semantic representations extracted from the fMRI data in HVC outperform those extracted from other areas such as LVC (by 14.5%), suggesting that voxels in more anterior areas like HVC show a high correlation with abstract features.

To compare the decoding performance of shape features from different ROIs, we train shape decoders on the different visual areas respectively. The similarity between the decoded shapes and the shapes of the stimulus images is measured by the pairwise similarity comparison based on SSIM in Figure 4b. Results show that decoding shapes from fMRI data in LVC performs better than in other areas like HVC (by 19.3%), indicating that signals in LVC respond strongly to low-level image features and details. The finding that improved performance can be achieved when separate decoding models are trained for low-/high-level features on the lower/higher visual areas respectively is also in line with previous studies [17, 9]. In our experiments, models trained on the whole visual cortex (VC) perform slightly worse than those trained only on LVC/HVC in the shape/semantic decoding tasks, probably because of interference from low-correlation visual areas in VC (such as including higher visual areas when decoding low-level features like shapes). Note that since, in theory, the information processed in HVC should also be contained in LVC in the form of low-level features, semantic decoding from LVC signals may perform better when a deeper model is used.

3.3 Effectiveness of Semantics

We conduct an ablation study to evaluate the necessity of introducing semantics in our model.
For comparison, two different training settings are used for reconstruction: reconstructing with and without semantic features. For the model without semantics, we remove the semantic decoder and replace the image generator with a standard pix2pix model trained to translate shapes to images directly. As shown in Figure 5a, using the image translation model to reconstruct images from shapes alone performs well on part of the samples, which are similar to the results reconstructed with semantic information. These successful cases without semantic information usually depend on effective and clear shape decoding. However, the SNR of most fMRI signals is low [1] and many decoded shapes are similar to each other. In Figure 5b, the GAN model cannot make the right decisions from these noisy or similar shapes alone (left-hand side of Figure 5b), which causes the reconstructed images to be rendered with unrelated details (such as wrong colors). Images reconstructed from the same shapes with semantic information are shown on the right-hand side of Figure 5b; the colors of the generated images are corrected under the guidance of the semantic information. Quantitative results are shown in Figure 5c. The images reconstructed with semantics perform better than those without semantics (65.3% vs. 62.5%). The results indicate that by reconstructing with the categorical information carried by the semantic features, our model improves the reconstruction performance visually and reconstructs images more accurately.

3.4 Effectiveness of Data Augmentation

To evaluate the improvement in reconstruction quality brought by data augmentation, we train models with and without augmentation and compare them on the test set. One model is trained on the augmented dataset and the other is trained solely on the original training set as a contrast. The results are shown in Figure 6. In the visual comparison, the images reconstructed with augmented data look more natural and closer to the ground-truth images than those reconstructed only from the original training set (Figure 6a). In the quantitative comparison, the model trained with data augmentation performs slightly better than that without augmentation (65.3% vs. 63.6%). By adding more images to the GAN training phase, the limitation of the small dataset size can be compensated and the model learns the distribution over a wider range of natural images, contributing to the improvement in reconstruction.

4 Conclusion

In this paper, we demonstrate the feasibility of reconstructing stimulus images from fMRI recordings by decoding shape and semantic features separately and merging the shape and semantic information into natural-looking images with a GAN. This 'divide and conquer' strategy simplifies the fMRI decoding and image reconstruction task effectively. Results show that the proposed Shape-Semantic GAN improves the reconstruction similarity and image quality.

Broader Impact

The proposed Shape-Semantic GAN method provides a novel solution to visual reconstruction from brain activity and presents a potential brain-reading technique. This method can help people understand human perception and thinking, and may help promote the development of neuroscience. However, the development of such brain-reading methods may intrude on the privacy of people's thoughts, and may cause people to worry about freedom of thought.
Acknowledgment

This work was partly supported by grants from the National Key Research and Development Program of China (2018YFA0701400), the National Natural Science Foundation of China (61906166, U1909202, 61925603, 61673340), and the Key Research and Development Program of Zhejiang Province in China (2020C03004).
1. What is the main contribution of the paper in the field of image reconstruction?
2. What are the strengths of the proposed approach, particularly in terms of leveraging neuroscience insights?
3. What are the weaknesses of the paper regarding comparisons with other methods?
4. How does the reviewer assess the clarity and quality of the paper's content?
Summary and Contributions: I have read the author response, read the other reviews, participated in the discussion with the reviewers and area chair. My score remains the same. The authors propose an approach for reconstructing stimulus images used in an fMRI experiment, from the resulting imaging data. The approach follows prior work in separating the processes of decoding a representation of the imaging content and generating a plausible candidate, given that the latter allows other information besides the imaging data (e.g. priors on stimuli) to be brought in, and training of the generation model without imaging data. The approach differs from preceding ones by explicitly separating the decoding of shape and semantic representations, to be used as input for the generator. The authors demonstrate state-of-the-art reconstruction performance, relative to competing approaches.

Strengths:
- generally clear paper
- leverages insights from neuroscience about types of information represented in different areas to improve decoding
- shows that visual and semantic information about an image can be combined effectively into a GAN (and decoded vectors are good enough for that)
- comparison of decoder effectiveness for different regions of interest has neuroscience value

Weaknesses:
- the comparison procedure with other approaches needs to be improved (details provided in Correctness)
- no account taken of other ways of representing semantic information (but these would likely just improve results, which are already quite good)
NIPS
Title: Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN
(Corresponding authors: Yu Qi and Gang Pan)

Abstract

Reconstructing perceived images from fMRI recordings is an absorbing research area in neuroscience and provides a potential brain-reading technology. The challenge lies in the fact that visual encoding in the brain is highly complex and not fully revealed. Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately. Specifically, we decode shape and semantic representations from the lower and higher visual cortex respectively, and merge the shape and semantic information into images by a generative adversarial network (Shape-Semantic GAN). This 'divide and conquer' strategy captures visual information more accurately. Experiments demonstrate that Shape-Semantic GAN improves the reconstruction similarity and image quality, and achieves state-of-the-art image reconstruction performance.

1 Introduction

Decoding visual information and reconstructing stimulus images from brain activity is a meaningful and attractive task in neural decoding. fMRI signals, which record variations in the blood-oxygen-level-dependent (BOLD) response, can reveal the correlation between brain activity and different visual stimuli by monitoring blood oxygen content. Implementing an fMRI-based image reconstruction method can help us understand the visual mechanisms of the brain and provide a way to 'read minds'.

Previous studies. According to previous studies, a mapping between activity in the visual cortex and the visual stimulus is assumed to exist [19], and perceived images have been shown to be decodable from fMRI recordings [29, 16, 6]. Early approaches estimate this mapping using linear models such as linear regression [16, 6, 7, 17, 18]. These approaches usually first extract specific features from the images, for instance multi-scale local image bases [16] or Gabor filter features [29], and then learn a linear mapping from fMRI signals to image features. Linear methods mostly focus on reconstructing low-level features, which is insufficient for reconstructing complex images such as natural images. After the homogeneity between the hierarchical representations of the brain and deep neural networks (DNNs) was revealed [9], methods based on this finding achieved strong reconstruction performance [23, 28]. Shen et al. [23] used convolutional neural network (CNN) models to extract image features and learned the mapping from fMRI signals to the CNN-based image features, which successfully reconstructed natural images. More recently, the development of DNNs has made it possible to learn nonlinear mappings from brain signals to stimulus images in an end-to-end manner [24, 1, 30, 14, 3]. DNN-based approaches, such as encoder-decoder models [1, 26] and Generative Adversarial Network (GAN) based models [24, 1, 30, 14, 3], have remarkably improved reconstruction performance. Shen et al. [24] proposed a DNN-based decoder which effectively learned nonlinear mappings from fMRI signals to the perceived images. Beliy et al. [1] learned a bidirectional mapping between fMRI signals and stimulus images using an unsupervised GAN. These recent approaches achieve higher image quality and can reconstruct more natural-looking images than linear methods.
Learning a mapping from fMRI recordings to the corresponding stimulus images is a challenging problem. The difficulty mostly lies in the fact that brain activity in the visual cortex is complex and not fully revealed. Studies have shown that there is a hierarchical increase in the complexity of representations in the visual cortex [5], and [17] has demonstrated that exploiting information from different visual areas can help improve reconstruction performance. Simple decoding models that do not consider this hierarchical information may be insufficient for accurate reconstruction.

The hierarchical structure of information encoding in visual areas has been widely studied [8, 2, 5]. On the one hand, activity in the early visual areas shows a high response to low-level image features such as shapes and orientations [10, 20, 25, 15]. On the other hand, anterior visual areas are mostly involved in high-level information processing, and activity in these areas shows a high correlation with the semantic content of stimulus images [9, 5]. Such high-level image features are more categorical and invariant than low-level features in identification or reconstruction [2]. The hierarchical processing in the visual cortex inspired us to decode the low-level and high-level image features from the lower visual cortex (LVC) and higher visual cortex (HVC) separately [9].

In this study, we propose a novel method to realize image reconstruction from fMRI signals by decomposing the decoding task into hierarchical subtasks: shape/semantic decoding in the lower/higher visual cortex respectively (Figure 1). In shape decoding, we propose a linear model to predict the outline of the core object from the fMRI signals of the lower visual cortex. In semantic decoding, we propose to learn effective features with a DNN model to represent high-level information from higher visual cortex activity. Finally, the shape and semantic features are combined as input to a GAN to generate natural-looking images under the shape and semantic conditions. Data augmentation is employed to supplement the limited fMRI data and improve reconstruction quality. Experiments are conducted to evaluate the image reconstruction performance of our method in comparison with state-of-the-art approaches. Results show that the Shape-Semantic GAN model outperforms the leading methods. The main contributions of this work can be summarized as follows:

• Instead of directly using end-to-end models to predict perceived images from fMRI signals, we propose to break the complex visual signals into multi-level components and decode each component separately. This 'divide and conquer' approach can extract visual information accurately.
• We propose a linear-model-based shape decoder and a DNN-based semantic decoder, which are capable of decoding shape and semantic information from the lower and higher visual cortex respectively.
• We propose a GAN model to merge the decoded shape and semantic information into images, which can generate natural-looking images given shape and semantic conditions. The performance of GAN-based image generation can be further improved by a data augmentation technique.

2 Methods

The proposed framework is composed of three key components: a shape decoder, a semantic decoder and a GAN image generator. The framework of our approach is illustrated in Figure 1. Let x denote the fMRI recordings and y the corresponding images perceived by the subjects during the experiments.
Our purpose is to reconstruct each subject's perceived images y from the corresponding fMRI data x. In our method, we use the shape decoder C to reconstruct the shapes r_sp of the stimulus images and the semantic decoder S to extract semantic features r_sm from x. The image generator G, implemented by a GAN, is introduced to reconstruct the stimulus images G(r_sp, r_sm) in the final stage of decoding.

2.1 Dataset

We make use of a publicly available benchmark dataset from [23]. In this dataset, brain activity data were collected in the form of functional images covering the whole brain. The corresponding stimulus images were selected from ImageNet, including 1200 training images and 50 test images from 150 and 50 categories respectively. The training and test images have no overlap with each other, and each image has 5 (training) or 24 (test) fMRI recordings. The fMRI signals contain information from different visual areas. The early visual areas V1, V2 and V3 were defined by standard retinotopic mapping procedures [4, 22, 23], and V1, V2 and V3 were concatenated into an area named the lower visual cortex [9]. The higher visual cortex is composed of regions including the parahippocampal place area (PPA), lateral occipital complex (LOC) and fusiform face area (FFA), defined as in [9].

2.2 Shape Decoder

In order to obtain the outline of the visual stimulus, we present a shape decoder C that extracts low-level image cues from the lower visual cortex using linear models. Using a simple model to obtain low-level visual features, which has been demonstrated to be feasible by previous studies [29], avoids the overfitting risk of more complex models. The shape decoder C consists of three base decoders trained for V1, V2 and V3 individually and a combiner that merges the results of the base shape decoders. The process of shape decoding is described in Figure 2. The stimulus images are first preprocessed by shape detection and feature extraction before shape decoder training.

Shape detection. First, image matting is conducted on the stimulus images to extract the core objects [13] and remove the interference of other parts of the images. The objects in the stimulus images are extracted based on saliency detection [12] and manual annotation. The results are binarized to eliminate the influence of minor variance in the images and to emphasize objects over background. With this preprocessing, the core objects are extracted and details that help little in shape decoding (such as colors or textures) are eliminated.

Feature extraction. Second, square image patches are used for feature extraction. Pixel values located in non-overlapping m × m pixel square patches are averaged to give the patch's value. By representing shapes as contrast-defined patch images, the amount of computation is reduced and invariance to small distortions is improved. In our model m = 8 is selected.

Model Training. Because the V1, V2 and V3 areas each carry a representation of the visual space, we train a base decoder for each of them and use a linear weighting combiner to combine the decoded shapes. The values of the image patches are normalized to [0, 1] and flattened into one-dimensional vectors p. The decoded shape vectors are p*_k = c_k(x_k), where c_k is the base shape decoder whose parameters η_k are optimized by

η_k = argmin_{η_k} ‖c_k(x_k) − p_k‖, k = V1, V2, V3, (1)

where η_k denotes the weights of the base shape decoder c_k and k denotes the visual area that the samples belong to.
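As a concrete illustration of the base decoders in Eq. (1) and the per-pixel combiner in Eq. (2), below is a minimal sketch using scikit-learn and NumPy. The use of ordinary least squares for both stages, as well as the variable names and data layout, are assumptions made for illustration rather than the paper's exact fitting procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_shape_decoder(X_by_area, P):
    """X_by_area: dict mapping 'V1'/'V2'/'V3' to (n_samples, n_voxels) fMRI arrays.
    P: (n_samples, n_pixels) flattened contrast-defined patch shapes in [0, 1]."""
    # Eq. (1): one linear base decoder c_k per visual area.
    base = {k: LinearRegression().fit(X, P) for k, X in X_by_area.items()}
    preds = np.stack([base[k].predict(X_by_area[k]) for k in base], axis=-1)

    # Eq. (2): per-pixel combining weights over the base predictions (least squares).
    n_areas, n_pixels = preds.shape[-1], P.shape[1]
    W = np.zeros((n_areas, n_pixels))
    for j in range(n_pixels):
        W[:, j], *_ = np.linalg.lstsq(preds[:, j, :], P[:, j], rcond=None)
    return base, W

def decode_shape(base, W, X_by_area):
    """Apply the base decoders and combine them pixel-wise into r_sp."""
    preds = np.stack([base[k].predict(X_by_area[k]) for k in base], axis=-1)
    return np.einsum('npk,kp->np', preds, W)   # (n_samples, n_pixels); reshape to the patch grid
```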
1. What is the focus and contribution of the paper on image reconstruction?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness and improvements over prior methods?
3. Are there any concerns or weaknesses in the paper, such as gaps or missteps in the research?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions: This paper used a new method for separately decoding shape and semantic information about images from the corresponding fMRI responses, and then combining the shape and semantic information together in order to reconstruct the image. This method seems highly effective, much more so than previous "all-in-one" methods.

Strengths: The method developed in this paper seems highly effective, a clear improvement upon earlier methods. The additional tests done by the authors (examining the effects of combining shape and semantic information and of data augmentation) seem to rigorously show why this method works so well. This is a solid contribution.

Weaknesses: I cannot identify any major weaknesses in this paper. The results are solid and seem thoroughly explored. There is no major gap or mis-step.
Title Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN Abstract Reconstructing seeing images from fMRI recordings is an absorbing research area in neuroscience and provides a potential brain-reading technology. The challenge lies in that visual encoding in brain is highly complex and not fully revealed. Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately. Specifically, we decode shape and semantic representations from the lower and higher visual cortex respectively, and merge the shape and semantic information to images by a generative adversarial network (Shape-Semantic GAN). This ’divide and conquer’ strategy captures visual information more accurately. Experiments demonstrate that Shape-Semantic GAN improves the reconstruction similarity and image quality, and achieves the state-of-the-art image reconstruction performance. 1 Introduction Decoding visual information and reconstructing stimulus images from brain activities is a meaningful and attractive task in neural decoding. The fMRI signals, which record the variations in blood oxygen level dependent (BOLD), can reveal the correlation between brain activities and different visual stimuli by monitoring blood oxygen content. Implementing an fMRI-based image reconstruction method can help us understand the visual mechanisms of brain and provide a way to ’read mind’. Previous studies. According to previous studies, the mapping between activities in visual cortex and visual stimulus is supposed to exist [19], and the perceived images are proved to be decodable from fMRI recordings [29, 16, 6]. Early approaches estimate the mapping using linear models such as linear regression [16, 6, 7, 17, 18]. These approaches usually firstly extract specific features from the images, for instance multi-scale local image bases [16] and features of gabor filters [29], then learn a linear mapping from fMRI signals to image features. Linear methods mostly focus on reconstructing low-level features, which is insufficient for reconstructing complex images, such as natural images. After the homogeneity between the hierarchical representations of the brain and deep neural networks (DNNs) was revealed [9], methods based on this finding have achieved great reconstruction performance [23, 28]. Shen et al.[23] used convolutional neural networks (CNN) models to extract image features and learned the mapping from fMRI signals to the CNN-based image features, which successfully reconstructed natural images. Recently, the development of DNN makes it possible to learn nonlinear mappings from brain signals to stimulus images in an end-to-end manner [24, 1, 30, 14, 3]. The DNN-based approaches have remarkably improved the reconstruction performance, such as Encoder-Decoder models [1, 26] and Generative Adversarial Network (GAN) ∗Corresponding authors: Yu Qi and Gang Pan 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. based models [24, 1, 30, 14, 3]. Shen et al. et al. [24] proposed a DNN-based decoder which learned nonlinear mappings from fMRI signals to the seeing images effectively. Beliy et al. et al. [1] learned a bidirectional mapping between fMRI signals and stimulus images using an unsupervised GAN. These recent approaches achieved higher image quality and can reconstruct more natural-looking images compared with linear methods. 
Learning a mapping from fMRI recordings to the corresponding stimulus images is a challenging problem. The difficulty mostly lies in that brain activity in the visual cortex is complex and not fully revealed. Studies have shown that there exists a hierarchical increase in the complexity of representations in visual cortex[5], and study [17] has demonstrated that exploiting information from different visual areas can help improve the reconstruction performance. Simple decoding models without considering the hierarchical information may be insufficient for accurate reconstruction. The hierarchical structure of information encoding in visual areas have been widely studied [8, 2, 5]. On the one hand, activities in the early visual areas show high response to low-level image features like shapes and orientations [10, 20, 25, 15]. On the other hand, anterior visual areas are mostly involved in high-level information processing, and activities in such visual areas show high correlation with the semantic content of stimulus images [9, 5]. Such high-level image features are more categorical and invariant than low-level features in identification or reconstruction [2]. The hierarchical processing in the visual cortex inspired us to decode the low-level and high-level image features from lower visual cortex (LVC) and higher visual cortex (HVC) separately [9]. In this study, we propose a novel method to realize image reconstruction from fMRI signals by decomposing the decoding task into hierarchical subtasks: shape/semantic decoding in lower/higher visual cortex respectively (Figure 1). In shape decoding, we propose a linear model to predict the outline of the core object from the fMRI signals of lower visual cortex. In semantic decoding, we propose to learn effective features with a DNN model to represent high-level information from higher visual cortex activities. Finally, the shape and semantic features are combined as the input to a GAN to generate natural-looking images with the shape and semantic conditions. Data augmentation is employed to supplement the limited fMRI data and improve the reconstruction quality. Experiments are conducted to evaluate the image reconstruction performance of our method in comparison with the state-of-the-art approaches. Results show that the Shape-Semantic GAN model outperforms the leading methods. The main contributions of this work can be summarized as follows: • Instead of directly using end-to-end models to predict seeing images from fMRI signals, we propose to break the complex visual signals into multi-level components and decode each component separately. This ’divide and conquer’ approach can extract visual information accurately. • We propose a linear model based shape decoder and a DNN based semantic decoder, which are capable of decoding shape and semantic information from the lower and higher visual cortex respectively. • We propose a GAN model to merge the decoded shape and semantic information to images, which can generate natural-looking images given shape and semantic conditions. The performance of GAN-based image generation can be further improved by data augmentation technique. 2 Methods The proposed framework is composed of three key components: a shape decoder, a semantic decoder and a GAN image generator. The framework of our approach is illustrated in Figure 1. Let x denote the fMRI recordings and y be the corresponding images perceived by the subjects during experiments. 
Our purpose is to reconstruct each subject’s perceived images y from the corresponding fMRI data x. In our method, we use the shape decoder C to reconstruct the stimulus images’ shapes rsp and the semantic decoder S to extract semantic features rsm from x. The image generator G implemented by GAN is introduced to reconstruct the stimulus images G(rsp, rsm) in the final stage of decoding. 2.1 Dataset We make use of a publicly available benchmark dataset from [23]. In this dataset brain activity data were collected in the form of functional images covering the whole brain. The corresponding stimulus images were selected from ImageNet including 1200 training images and 50 test images from 150 and 50 categories separately. Images in both of the training and test set have no overlap with each other. And each image has 5 or 24 fMRI recordings for training or test respectively. The fMRI signals contain information from different visual areas. Early visual areas like V1,V2 and V3 were defined by the standard retinatopic mapping procedures [4, 22, 23], and V1,V2 and V3 were concatenated as an area named lower visual cortex [9]. Higher visual cortex is composed of regions including parahippocampal place area (PPA), lateral occipital complex (LOC) and fusiform face area (FFA) defined in [9]. 2.2 Shape Decoder In order to obtain the outline of the visual stimulus, we present a shape decoder C to extract low-level image cues from the lower visual cortex based on linear models. Using a simple model to obtain low-level visual features, which has been demonstrated feasible by previous studies [29], can avoid the overfitting risk of complex models. Shape decoder C consists of three base decoders trained for V1, V2, V3 individually and a combiner to merge the results of base shape decoders. The process of shape decoding is described in Figure 2. The stimulus images are firstly preprocessed by shape detection and feature extraction before shape decoder training. Shape detection. First, image matting is conducted on the stimulus images to extract the core objects [13] and remove the interference of other parts in the images. The objects in the stimulus images are extracted based on saliency detection [12] and manual annotation. The results are binarized to eliminate the influence of minor variance in images and emphasize objects over background. By such preprocessing method, the core objects are extracted and some details (like colors or textures) that help little in shape decoding are eliminated. Features extraction. Second, square image patches are presented for feature extraction. Pixel values located in non-overlapping m×m pixels square patches are averaged as the image patch’s value. By representing shapes in contrast-defined patch images, the amount of calculation can be reduced and improve the invariance to small distortions. In our model m = 8 is selected. Model Training. Because V1, V2 and V3 areas have representations for the visual space respectively, we train the base decoders for each of them and use a linear weighting combiner to combine the decoded shapes. The values of the image patches are normalized to [0, 1] and flattened to onedimensional vectors p. The decoded shape vectors p∗k = ck(xk), where ck(xk) is the base shape decoder whose parameter ηk is optimized by: ηk = argmin ηk ‖ck(xk)− pk‖, k = V 1, V 2, V 3, (1) where ηk denotes the weights of base shape decoder ck and k denotes the visual area that the samples belong to. 
The base decoder ck is implemented by linear regression and the models are trained for fMRI recordings in V1, V2 and V3 individually. Then a combiner is trained to combine the predicted results p∗V 1, p ∗ V 2 and p ∗ V 3: rsp(i, j) = ∑ k wkijp ∗ k(i, j), k = V 1, V 2, V 3, (2) where rsp refers to the predicted shapes computed by the combiner, and rsp(i, j) is the pixel value at position (i, j). The results rsp predicted by the combiner are resized to the same size as the stimulus images (256× 256 pixels). wij is the weight of the combiner for pixel at position (i, j). The combining weight wij is computed independently for each pixel. 2.3 Semantic Decoder To render semantically meaningful details on shapes, a semantic decoder is used to provide categorical information. Although images can be rendered only based on shapes with a pre-trained GAN model, in practice we find that the results are not always acceptable because of the lack of conditions. The mapping from shapes to real images is not unique in many cases (e.g. a circular shape can be translated into a football/crystal ball/golf ball and all of these translations are correct judged by the discriminator). Besides, noise retained in shapes will interfere with the reconstruction quality in the absence of other conditions. Therefore reconstructing only on the shape condition is not sufficient. A semantic context, which is used to guide the GAN model with the image’s category, can be helpful when incorporated with the shape features in training phase. The input to the semantic decoder is fMRI signals in HVC. HVC covers the regions of LOC, FFA and PPA, whose voxels show significantly high response to the high-level features such as objects, faces or scenes respectively [9]. As showed in Figure 1, a lightweight DNN model is introduced to generate semantic features. The DNN model consists of one input layer (the same size as the input’s number of voxels), two hidden layers and one output layer. Tanh activation function is introduced between the hidden layers and sigmoid activation function is used for classification. When training the DNN model, the fMRI recordings in HVC are identified by the model to infer their corresponding stimulus images’ categories. After the training phase, the DNN model works as a semantic decoder. Note that the penultimate layer of DNN performs as a semantic space supporting the classification task at the output layer [28], the features in such layer are adopted as the semantic representation of fMRI signals in our method. 2.4 Image Generator To reconstruct images looking more realistic and filled with meaningful details, an encoder-decoder GAN, referring to the image translation methods [11], is introduced in the final stage of image reconstruction. In image reconstruction, there exists lots of low-level features (like contours) shared between the input shapes and the output natural images, which need to be passed across the decoder directly to reconstruct images with accurate shapes. Therefore, we propose the U-Net [21], an encoder-decoder structure, with skip connections. Traditional encoder-decoder model passes information through a bottleneck structure to extract high-level features, while low-level features such as shapes and textures can be lost. And few shape features retained in the output can cause deformation in reconstruction. 
By using the U-Net structure, more low-level features can be passed from the input space to the reconstruction space with the help of skip connections without the limitation of the bottleneck. The generator is composed of a pair of symmetrical encoder and decoder. The encoder and decoder have eight convolutional or deconvolutional layers with symmetrical parameters and no downsampling or up-sampling is used. The input layer takes the 256× 256 pixel images (shape images) as input. The bottleneck between encoder and decoder represents the high-level features extracted by convolutional layers in encoder, which is modified to take both of the semantic features rsm and the high-level features as input to the decoder. In this way the generator will be optimized under the constraint of semantic and shape conditions. The discriminator takes the shape rsp and the output of generator together as input and predicts the similarity of high-frequency structures between these two domains, using this similarity to guide the generator training. Let Gθ denotes the U-Net generator and Dφ denotes the discriminator. The generator Gθ and discriminator Dφ have parameters named θ and φ, which are optimized by minimizing the loss function L(θ, φ). The objective of the conditional GAN is composed of two components, which can be described as L(θ, φ) = Ladv(θ, φ) + λimgLimg(θ), (3) where Ladv(θ, φ) and Limg(θ) denote the adversarial loss and image space loss, and λimg define the weight of the image space loss Limg in L(θ, φ). As is inferred in [11], L1 loss is able to accurately capture the low frequencies. The GAN discriminator is designed to model the high-frequency structures. By combining these two terms in the loss function, blurred reconstructions will not be tolerated by the discriminator and low-frequency visual features can also be retained at the same time. The adversarial loss and the image space loss used in optimizing the generator can be expressed as Ladv(θ, φ) = −Ersp,rsm [log(Dφ(rsp, Gθ(rsp, rsm)))], (4) Limg(θ) = Ersp,rsm,y[|y −Gθ(rsp, rsm)|], (5) where rsp, rsm and y refer to shapes, semantic features and stimulus images. During the training phase, gradient descent is computed on Gθ and Dφ alternately. Instead of directly training Gθ to minimize log(1−Dφ(rsp, Gθ(rsp, rsm))), we followed the recommendations in [24] and maximize log(Dφ(rsp, Gθ(rsp, rsm))). The objective of the discriminator is: Ldiscr(θ, φ) = −Ersp,y[logDφ(rsp, y)]− Ersp,rsm [log(1−Dφ(rsp, Gθ(rsp, rsm)))]. (6) When Gθ is being trained, it tries to optimize θ to reduce the distance between generated images Gθ(rsp, rsm) and stimulus images y. It also tries to generate images that share similar high-frequency structure with shapes rsp to confuse Dφ and let Dφ predict Gθ(rsp, rsm) as correct. When Dφ is trained, it tries to optimize φ to distinguish the pairs of {rsp, y} from the pairs of {rsp, Gθ(rsp, rsm)}. Each time one of Gθ or Dφ is trained, the other’s parameters are fixed. 2.5 Data Augmentation Since the size of the fMRI dataset is limited, we propose to improve the image reconstruction performance by data augmentation in GAN training. We sampled the augmented images from the ImageNet dataset. For shape augmentation, the preprocess in Section 2.2 is conducted on augmented images and the contrast-defined, m×m-patch images Rsp represent as shapes of the augmented images. For semantics augmentation, the category-average semantic feature Rsm is computed as a substitute of semantic vector. 
2.5 Data Augmentation

Since the size of the fMRI dataset is limited, we propose to improve the image reconstruction performance through data augmentation in GAN training. The augmented images are sampled from the ImageNet dataset. For shape augmentation, the preprocessing of Section 2.2 is applied to the augmented images, and the contrast-defined, m×m-patch images Rsp serve as the shapes of the augmented images. For semantic augmentation, the category-average semantic feature Rsm is computed as a substitute for the decoded semantic vector. Rsm is defined as the vector obtained by averaging the semantic features of samples annotated with the same category. By combining the shapes and category-average semantic features of the augmented images into {Rsp, Rsm} pairs, the new samples are concatenated with the {rsp, rsm} pairs as inputs to Gθ, which eventually improves the generality of Gθ. Note that in our method the image augmentation is only conducted within images that correspond to the same classes as the training images. For reconstruction, about 1.2k augmented natural images are randomly selected from the same image dataset as [23] (ILSVRC2012), and they have no overlap with the training or test set.

2.6 Implementation Details

We implement the image generator using the PyTorch framework, modifying the image translation model provided by [11]. The image generator consists of a U-Net generator G and a discriminator D. In both G and D, the layers use a kernel size of (4, 4), a stride of (2, 2) and a padding of 1. The generator is composed of 8 parametrically symmetric convolutional/deconvolutional layers with LeakyReLU (slope 0.2) activations. All input images of G and D (the stimulus images and shape images) are resized to (256, 256, 3). GAN training uses mini-batches with the Adam solver, with β1 = 0.9 and β2 = 0.999. The initial learning rate is 2 × 10^{-4} and the batch size is 10. The weights of the individual loss terms affect the quality of the reconstructed images; in our experiments we set λimg = 100 to balance the sharpness of the results against their similarity to the stimulus images. The image generator is trained for 200 epochs in total, with the learning rate decayed at epoch 120.

3 Results

To evaluate the quality of the reconstructed images, we conduct both visual and quantitative comparisons. For the quantitative comparison, the pairwise similarity comparison analysis introduced in [24] is used to measure reconstruction quality: one reconstructed image is compared with two candidate images (the ground-truth image and a randomly selected test image) to test whether its correlation with the ground-truth image is higher. In our experiments, the structural similarity index (SSIM) [27] is used as the correlation measure; SSIM measures the similarity of local structure between the reconstructed and original images over spatially close pixels [24].
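A hedged sketch of this pairwise similarity protocol (not the authors' evaluation code) is given below; it uses scikit-image's SSIM to score one reconstruction against the ground truth and a random distractor, and the function and variable names are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # assumes scikit-image >= 0.19

def pairwise_correct(recon, ground_truth, distractor):
    """True if the reconstruction is closer (by SSIM) to its ground-truth
    stimulus than to a randomly chosen other test image."""
    s_gt = ssim(recon, ground_truth, channel_axis=-1, data_range=255)
    s_other = ssim(recon, distractor, channel_axis=-1, data_range=255)
    return s_gt > s_other

def pairwise_accuracy(recons, stimuli, seed=0):
    """Average pairwise comparison accuracy over a list of reconstructions."""
    rng = np.random.default_rng(seed)
    n, correct = len(recons), 0
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random distractor index
        correct += pairwise_correct(recons[i], stimuli[i], stimuli[j])
    return correct / n
```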
3.1 Comparison of Image Reconstruction Performance

Here we compare the image reconstruction performance with existing approaches. The competitors are [1], [24] and [23]. For the visual comparison, we directly use the reconstructed images reported in the papers of [1], [24] and [23], respectively. For the quantitative comparison, we use the reported pairwise similarity with SSIM for [24]; for [1], we run the code published with the paper and use the same data augmentation images as our approach. All pairwise similarity results are averaged over five runs to mitigate the effect of randomness. Samples of reconstructed images are presented in Figure 3, in comparison with existing approaches. Similar to [23], the test fMRI samples corresponding to the same category are averaged across trials to improve the fMRI signals' signal-to-noise ratio (SNR).

The results are reconstructed from the test fMRI recordings of three subjects (150 samples in total), and the performance of the model is compared with the leading methods on the same dataset [23]. For the visual comparison, we compare our reconstructed images with the methods of [23, 24] and [1] in Figure 3a. Owing to the U-Net model trained with semantic information, our model's reconstructed images are vivid and close to the real stimulus images in color. Also, under the constraint of the shape conditions, the reconstructed images share similar structures with the original images. For the quantitative comparison, we conduct the pairwise similarity comparison based on SSIM against the existing methods of [1] and [24]. The comparison of the different approaches on the three subjects' fMRI recordings is displayed in Figure 3b. Results show that our method performs slightly better than [1] (ours 65.3% vs. 64.3% on average) and outperforms [24] (62.9% on average).

3.2 Decoding Performance of Different ROIs

In this experiment, we evaluate the decoding performance of shape/semantic information from different ROIs (regions of interest). Forty samples from the original training set are reserved for validation and the rest are used for training the decoders in this experiment. To compare the semantic representation performance of different ROIs, the semantic decoders (DNN models) are trained on fMRI signals from different visual areas individually. The trained DNN models are used to decode semantic features from the validation fMRI samples and identify their corresponding categories. To facilitate the comparison, we use 10-category rough labels in this section (see supplementary materials). The identification accuracy of semantic representations decoded from different ROIs is compared in Figure 4a. Results show that semantic representations extracted from the fMRI data in HVC outperform those extracted from other areas such as LVC (by 14.5%), suggesting that voxels in more anterior areas like HVC correlate highly with abstract features. To compare the decoding performance of shape features from different ROIs, we train shape decoders on the different visual areas respectively. The similarity between the decoded shapes and the shapes of the stimulus images is measured by the pairwise similarity comparison based on SSIM in Figure 4b. Results show that decoding shapes from fMRI data in LVC performs better than from other areas like HVC (by 19.3%), indicating that signals in LVC respond strongly to low-level image features and details. The finding that improved performance can be achieved when separate decoding models are trained for low-/high-level features on lower/higher visual areas, respectively, is also in line with previous studies [17, 9]. In our experiments, models trained on the whole visual cortex (VC) perform slightly worse than those trained only on LVC/HVC in the shape/semantic decoding tasks, probably because of interference caused by low-correlation visual areas in VC (such as introducing higher visual areas when decoding low-level features like shapes). Note that, since the information processed in HVC should in theory also be contained in LVC in the form of low-level features, it can be inferred that semantic decoding from LVC signals may perform better when using a deeper model.

3.3 Effectiveness of Semantics

We conduct an ablation study to evaluate the necessity of introducing semantics in our model.
For comparison, two training settings are used for reconstruction: with and without semantic features. For the model without semantics, we remove the semantic decoder and replace the image generator with a standard pix2pix model, which is trained to translate shapes to images directly. As shown in Figure 5a, using the image translation model to reconstruct images from shapes alone works well on part of the samples, which are similar to the results reconstructed with semantic information. These successful cases without semantic information usually depend on effective and clear shape decoding. However, the SNR of most fMRI signals is low [1] and many decoded shapes are similar. In Figure 5b, the GAN model cannot make the right decisions from these noisy or similar shapes alone (left-hand side of Figure 5b), which causes the reconstructed images to be rendered with uncorrelated details (such as colors). Images reconstructed from the same shapes with semantic information are shown on the right-hand side of Figure 5b; the colors of the generated images are corrected under the guidance of the semantic information. Quantitative results are shown in Figure 5c: images reconstructed with semantics perform better than those without (65.3% vs. 62.5%). The results indicate that, by reconstructing with the categorical information carried by the semantic features, our model improves the reconstruction performance visually and reconstructs images more accurately.

3.4 Effectiveness of Data Augmentation

To evaluate the improvement in reconstruction quality brought by data augmentation, we train models with and without augmentation and compare them on the test set. One model is trained on the augmented dataset and the other is trained solely on the original training set as a contrast. The results are shown in Figure 6. In the visual comparison, the images reconstructed with augmented data look more natural and closer to the ground-truth images than those reconstructed from the original training set only (Figure 6a). In the quantitative comparison, the model trained with data augmentation performs slightly better than the one without augmentation (65.3% vs. 63.6%). By adding more images to the GAN training phase, the shortcoming of the limited dataset size is compensated and the model learns the distribution over more natural images, contributing to the improvement in reconstruction.

4 Conclusion

In this paper, we demonstrate the feasibility of reconstructing stimulus images from fMRI recordings by decoding shape and semantic features separately and merging the shape and semantic information into natural-looking images with a GAN. This 'divide and conquer' strategy effectively simplifies the fMRI decoding and image reconstruction task. Results show that the proposed Shape-Semantic GAN improves reconstruction similarity and image quality.

Broader Impact

The proposed Shape-Semantic GAN method provides a novel solution to visual reconstruction from brain activity and presents a potential brain-reading technique. This method can help people understand human perception and thinking, and may help promote the development of neuroscience. However, the development of such brain-reading methods may invade the privacy of the information within people's minds, and may cause people to worry about freedom of thought.
Acknowledgment

This work was partly supported by grants from the National Key Research and Development Program of China (2018YFA0701400), the National Natural Science Foundation of China (61906166, U1909202, 61925603, 61673340), and the Key Research and Development Program of Zhejiang Province in China (2020C03004).
1. What is the main contribution of the paper, and how does it relate to the hierarchical processing in the human vision system?
2. What are the strengths and weaknesses of the proposed method for reconstructing images from fMRI signals?
3. How does the U-Net structure in the generator contribute to the decoding performance, and what is the theoretical justification for its use?
4. Why are HVC signals necessary for the decoding problem, and how do they complement the LVC signals?
5. How does the classification network trained for semantic feature extraction potentially affect the results, and what could be an alternative approach?
6. What additional details are missing or unclear in the paper, such as data augmentation protocols, training-testing set splits, and preprocessing of fMRI data?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper proposes a modular method of reconstructing stimuli images from the corresponding fMRI signals. The method utilizes separate models to decode shape from lower visual cortex (LVC) signals and semantic features from higher visual cortex (HVC) signals. The shape information is extracted using three decoders of varying resolution to simulate the processing in the V1, V2, and V3 areas of the lower visual cortex. The semantic information is extracted from the penultimate layer of a deep neural network (DNN) trained to classify the HVC signals. The shape and semantic information are combined in a generative adversarial network (GAN) where the generator takes a U-Net structure. The output of the generator produces an image resembling the original stimuli image. Additionally, the paper provides studies into the benefits of data augmentation when training the GAN.

Strengths
The paper contributes a novel method for reconstructing images, which attempts to replicate the hierarchical processing within the brain. The authors adequately justify and explain their theoretical grounding with several citations demonstrating the hierarchical structure of the human vision system. The roles of the LVC and HVC are heavily cited, and the paper provides its own study (Section 3.4 Decoding Effects from Different ROIs) to show the roles translate to their model. While the results are still not at the quality for their proposed "brain-reading" application, the paper shows a significant step towards a system with such potential capabilities. The main contribution of this paper is a method for efficiently extracting features for better decoding performance.

Weaknesses
(a) The paper falls short in providing a clear theoretical backing for using the U-Net structure in the generator. The explanation in Section 2.4 seems to suggest that the U-Net structure was selected due to the benefits of its bottleneck passing low-level features, but later states that it is only passing high-level features. Clarification of this section is needed. --> Since the author already clarified the advantage of using U-Net structure, I think this weakness can be removed now.
(b) The paper also lacks a theoretical justification for the necessity of the HVC signals. As the brain encodes the LVC signals into the HVC signals, the information from HVC signals should also be contained in the LVC signals. Why, then, do we need the high level representation for the decoding problem? Could the semantic features not be extracted straight from the LVC signals where there is more information? The reason why HVC features are needed here seems to be the way in which you construct the LVC features: since there are only “shape” features constructed from LVC, which do not contain the color or semantic features, HVC signals are required in this case. A justification for using the LVC signals for only “shape” features and not semantic features as well would better solidify the theoretical reasoning.
(c) Moreover, the semantic feature construction uses a network trained for classification. A possible problem may occur if the classification network is well learned, since the network is trying to extract higher level features from the already high-level representation of the HVC. As features become higher level, they should become invariant to many variations within a class.
For each category the deep feature vector (i.e., penultimate hidden layer) would be very similar, and hence will not be very representative of the semantic features for different objects of the same class. For example, attributes of same-class objects, like color, may get lost. Therefore, the algorithm wouldn't really learn to decode the image, but would rather produce a prototypical color image of the object.
(d) Some necessary details are missing or unclear. For example, in the data-augmentation section, is the category-average semantic vector included in addition to the fMRI-decoded semantic vector when fed into the bottleneck of the generator? If so, this should be stated clearly. Besides, the protocol for the training-testing set split, the learning rate, and other technical details of the algorithm need to be explicitly stated.
(e) Details regarding the preprocessing of fMRI data are missing. For instance, if one subject's V1 region is larger than another subject's, how do you ensure the feature vectors or fMRI signals are of the same dimension? Do you do subject-level alignment as well? Better clarification is needed for this, though there is a reference for it.
NIPS
Title
Self-supervised surround-view depth estimation with volumetric feature fusion

Abstract
We present a self-supervised depth estimation approach using unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from the surround-view images and fusing them into a shared, unified 3D voxel space. The volumetric feature map can then be used to estimate a depth map at each surround view by projecting it into the corresponding image coordinates. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize depth maps at arbitrarily rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio-temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms prior art on the DDAD and nuScenes datasets, in particular estimating more accurate metric-scale depth and more consistent depth between neighboring views.

∗ denotes equal contribution. † This work was done at 42dot Inc. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Depth perception of the surrounding environment is one of the key components for 3D vision applications such as autonomous driving, robotics, or augmented reality. Specifically for autonomous driving, 3D perception benefits numerous downstream tasks including 3D object detection [7, 29, 46], 6D pose estimation [6, 21], object tracking [18, 41], etc. While human drivers can proactively move their heads and eyes to observe their surrounding environment, an autonomous vehicle is equipped with multiple cameras at fixed viewpoints to monitor its surroundings. The multi-camera system hence exhibits limitations inherent to the fixed camera setup: the cameras may share only a small overlap between adjacent viewpoints and rely on heterogeneous camera intrinsics.

As a prior work, Full Surround Monodepth (FSM) [15] extended a self-supervised monocular depth estimation method [13] to the multi-camera setup of autonomous vehicles. Their method exploits small image overlaps between spatio-temporally neighboring cameras as a supervision signal to learn metric-scale depth. Yet, FSM [15] individually estimates depth and camera motion for each view with a shared CNN, and it does not utilize any additional image cues from neighboring views at test time. To date, architectural design choices in the spatio-temporal domain are relatively underexplored, leaving room for improvement.

We introduce a self-supervised approach to surround-view depth estimation based on a unified volumetric feature representation. Given multiple images covering the surrounding view, our method extracts an image feature from each image, back-projects the features into 3D space (i.e., voxel coordinates), and fuses them into a unified volumetric feature map. Each volumetric feature contains 3D information at its voxel coordinate. Some volumetric features are shared between neighboring views due to their image overlaps; we present a tailored multilayer perceptron (MLP) to process the volumetric features with consideration of this superposition.
Given the volumetric features, we project the features back to image coordinates with known camera information and pass them through a depth decoder to obtain a depth map at each view. Because a volumetric feature contains 3D information at its local voxel coordinate, our method can also synthesize depth maps at arbitrarily rotated viewpoints by projecting the features into image coordinates with a target focal length and target extrinsics. This can overcome the limitation of the fixed multi-camera setup. Fig. 1 demonstrates the results of depth map synthesis at novel views: our method can seamlessly synthesize depth maps at arbitrary viewpoints from the volumetric features, with different focal lengths and camera angles. Furthermore, unlike previous work [15], we introduce canonical camera motion estimation, assuming static extrinsics between cameras. This geometric constraint leads to stabilized ego-motion learning and enhanced metric-scale depth estimation. On top of that, we adjust the intensity distribution for neighboring viewpoints, which reduces irregular photometric reconstruction errors.

We present a self-supervised learning framework that learns to estimate scale-aware depth and camera motion from unlabeled surround-view images, using 3D spatio-temporal reconstruction. We summarize our contributions as follows: (i) we introduce a novel volumetric feature representation that effectively learns to estimate surround-view depth and the canonical camera motion; to our knowledge, our method is the first approach that targets multi-camera feature fusion with self-supervised depth estimation. (ii) We demonstrate synthesizing depth maps at arbitrary views with the unified volumetric feature representation. (iii) On popular multi-camera datasets [2, 14], our method demonstrates consistent improvements compared to the state-of-the-art method [15].

2 Related Work

Monocular depth estimation. Supervised-learning-based monocular depth approaches [8, 10, 20, 22, 36] require a large amount of annotated data for supervision; as a result, their scalability is limited by the expense of acquiring such annotated data on diverse scenes. To address this limitation, self-supervised methods propose proxy learning tasks that use unlabeled temporally consecutive images [3, 14, 23, 24, 37, 47, 49, 51, 52] or stereoscopic pairs [11–13, 25, 34, 35] during training. For supervision signals, these approaches minimize image reconstruction errors between reference images and synthesized images generated from the output depths and temporally (or spatially) neighboring images. Yet, these methods output depth maps only up to scale or at a fixed viewpoint. In this work, we demonstrate a surround-view depth method that outputs depth maps at arbitrary viewpoints as well as at metric scale.

Omnidirectional depth estimation. Previous work has demonstrated depth estimation on wide field-of-view (FOV) images [48] or 360◦ images [45, 44, 53] that cover the surrounding view, and efforts to find a suitable projective geometry for processing such input images have continued. Initial approaches [45, 53] directly used 360◦ images represented under equirectangular projection, but there is severe visual distortion in the top and bottom parts of such images. To avoid this distortion, a cubemap representation [4, 43, 44] has been presented, which rectifies an input 360◦ image into cube-map coordinates and uses it as the input instead. Won et al.
[48] proposed to perform stereo matching of multiple fisheye images directly in spherical coordinates. Neural Ray Surfaces [42] introduced a generic framework that jointly learns to estimate depth and motion without prior knowledge of specific camera models.

Volumetric feature representation. The basic idea of fusing multi-view features into a shared voxel space has been presented for other tasks, such as multi-view stereo [19, 26, 30, 32, 50] or 3D semantic segmentation [5, 26]. These approaches mainly focus on solving a matching task for object-level or static indoor scene reconstruction using computationally heavy 3D convolutions. Based on a similar principle, our method presents a volumetric feature representation for surround-view depth and canonical motion estimation. Compared to 3D convolution, we demonstrate that lightweight MLP layers are more efficient for fusing features from surround-view images with small overlaps. Some approaches [17, 28, 33, 38] demonstrate aggregating 2D features from surround-view images onto a 2D Bird's-eye-view (BEV) space for semantic segmentation or object detection. However, the BEV-based representation does not precisely preserve 3D information due to the abstraction of the height information. In contrast, our approach presents a voxel-based representation that is more suitable for depth and canonical pose estimation.

3 Surround-View Depth Estimation via Volumetric Feature Fusion

Given two temporally consecutive surround-view images, I^t_i and I^{t+1}_i, captured by multiple cameras Ci (i ∈ {1, 2, ..., 6} in our setup) that cover the surrounding view of a vehicle, our method estimates a metric-scale depth D^t_i for each image I^t_i from each camera Ci (but also at arbitrarily rotated viewpoints) and a canonical camera motion of the vehicle T^{t→t+1} as an auxiliary task. Our method assumes known camera intrinsics Ki and extrinsics Ei, but each camera can have different intrinsics.

3.1 Architecture overview

We propose a novel fusion module for surround-view depth estimation. Fig. 2 shows an overview of our approach, consisting of the surround-view volumetric feature fusion (Sec. 3.2), depth estimation (Sec. 3.3) and canonical motion estimation (Sec. 3.4) modules. The surround-view feature fusion module constructs a single, unified volumetric feature map V by aggregating multi-scale image feature maps F extracted from each image in the surround-view camera setup. From the volumetric feature map V, the depth estimation module retrieves a per-frame feature that represents a specific viewpoint and passes it through a depth decoder to produce a depth map for the corresponding view. Because a volumetric feature at each voxel contains its local 3D information, our method can also synthesize a depth map at an arbitrary view by projecting the volumetric feature map into the target view. The canonical motion estimation module flattens the volumetric feature map along the z-axis and predicts a global motion with respect to the canonical camera coordinate. In contrast to FSM [15], which independently estimates each camera motion, our design reasons about the camera motion globally, which not only reduces the number of unknown factors but also makes the problem simpler.

3.2 Surround-view volumetric feature fusion

Image feature encoding. We first extract image features from the surround-view images. Given a set of surround-view images, we pass each image Ii through a shared 2D image encoder to obtain an image feature map Fi for the ith view.
We use ResNet [16] as the encoder, which takes an image Ii (with a resolution of H × W) and outputs a multi-scale feature pyramid. We take the last three feature maps from the feature pyramid, resize them to a resolution of H/8 × W/8, concatenate them, and apply a 1×1 convolution for channel reduction. As a result, we obtain a single feature map Fi for each image, which conveys multi-scale, high-dimensional information about the image.

Volumetric feature encoding. We aggregate the image feature maps Fi from the surround-view images into a single, unified volumetric feature map V in a pre-defined voxel space, as illustrated in Fig. 3 (back-projecting image features into the voxel space). For each voxel, we find the corresponding pixel p(w, h) by projecting the voxel (x, y, z) into pixel coordinates. Then, we bilinearly interpolate an image feature F(p) at the projected pixel p(w, h) to handle the sub-pixel location and allocate the sampled image feature to the voxel. Because the sampled image feature F(p) contains high-level information along its pixel ray, we extract a local 3D feature at the voxel (x, y, z) by concatenating the feature with the depth value of the voxel (i.e., depth positional encoding) and passing it through an MLP. In this way, voxels traversed by the same pixel ray have individual 3D features extracted by the MLP instead of sharing the same image feature. Some voxels are associated with multiple image features due to spatial overlaps across the camera views. To merge the multiple features in the overlap regions, we use another MLP that learns to fuse the multiple per-pixel image features into a per-voxel volumetric feature.

3.3 Depth estimation

From the unified volumetric feature map, we aim to estimate a depth map for each surround-view input image, but also to synthesize depth maps at arbitrarily rotated camera viewpoints.

Transformation of the volumetric feature into a projected image feature. For each target view Ci whose depth map we want to estimate, we project the volumetric feature map V into a projected image feature map F̃i with a resolution of H/8 × W/8, using the camera extrinsics Ei of the target view. For each pixel p in the target image coordinates, we uniformly sample volumetric features along its pixel ray and concatenate the sampled features to obtain the projected image feature F̃i(p).

Depth decoder. To predict a depth map from the projected image feature map F̃i, we use a lightweight depth decoder Decdepth consisting of 4 convolutional layers: 3 convolutional layers for upsampling (H/8 × W/8 → H × W) and one convolutional layer for the depth output.
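Putting the fusion of Sec. 3.2 into code form, the following PyTorch sketch shows the voxel-to-pixel back-projection, bilinear sampling, depth positional encoding and MLP fusion under simplifying assumptions (a pinhole model, intrinsics already scaled to the feature-map resolution, a single feature scale, and hypothetical helper names such as mlp_point and mlp_fuse); it is an illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fuse_volumetric_features(feats, K, E, voxel_xyz, mlp_point, mlp_fuse):
    """Build a volumetric feature map from surround-view image features.

    feats:     (N, C, Hf, Wf) image feature maps from N cameras.
    K, E:      (N, 3, 3) intrinsics (feature-map pixel units) and
               (N, 4, 4) world-to-camera extrinsics (assumed conventions).
    voxel_xyz: (V, 3) voxel-center coordinates in the canonical frame.
    mlp_point: MLP applied to [sampled feature, voxel depth] per camera.
    mlp_fuse:  MLP merging per-camera voxel features on overlap regions.
    """
    N, C, Hf, Wf = feats.shape
    V = voxel_xyz.shape[0]
    homo = torch.cat([voxel_xyz, torch.ones(V, 1)], dim=1)            # (V, 4)
    per_cam, valid = [], []
    for i in range(N):
        cam = (E[i] @ homo.T)[:3]                                     # (3, V) camera coords
        z = cam[2].clamp(min=1e-3)
        uv = (K[i] @ cam) / z                                         # (3, V) pixel coords
        grid = torch.stack([uv[0] / (Wf - 1) * 2 - 1,                 # normalize to [-1, 1]
                            uv[1] / (Hf - 1) * 2 - 1], dim=-1)        # (V, 2)
        sampled = F.grid_sample(feats[i:i + 1], grid.view(1, 1, V, 2),
                                align_corners=True)[0, :, 0]          # (C, V) bilinear sample
        # depth positional encoding: concatenate the voxel depth along the ray
        point_feat = mlp_point(torch.cat([sampled.T, z.unsqueeze(1)], dim=1))  # (V, C')
        in_view = (cam[2] > 0) & (grid.abs() <= 1).all(dim=-1)        # voxel visible in camera i
        per_cam.append(point_feat * in_view.unsqueeze(1))
        valid.append(in_view)
    stacked = torch.stack(per_cam)                                    # (N, V, C')
    # fuse multiple per-pixel features into one per-voxel feature (overlap handling)
    return mlp_fuse(stacked.sum(0)), torch.stack(valid).any(0)
```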
Depth map synthesis at a novel view. We further extend our depth estimation module to produce depth maps for novel viewpoints not covered by the original multi-camera system. Because a volumetric feature at each voxel encodes local 3D structure information within a certain range, our method can synthesize a depth map at a desired arbitrary viewpoint by projecting the volumetric feature into that viewpoint and passing it through the depth decoder. Fig. 1 visualizes synthesized depth maps at arbitrarily rotated viewpoints: our method can seamlessly synthesize depth maps with variations in yaw, roll, or pitch angle as well as arbitrary focal lengths.

3.4 Canonical motion estimation

The prior work, FSM [15], separately estimates a relative camera motion for each camera using a shared encoder-decoder network. However, this process can be redundant because a multi-camera system attached to a vehicle shares the same canonical motion. To improve on this, we introduce a canonical pose estimation module that takes a collapsed volumetric feature map FBEV and predicts one representative canonical camera motion, as illustrated in Fig. 2. Given the estimated canonical camera motion, we can directly compute the motion of each camera using its known extrinsics. Here, we use the same architecture design to obtain a volumetric feature map (cf. Sec. 3.2) but with different trainable network weights, and use two temporally consecutive images as input.

Volumetric feature reduction. Given a surround-view volumetric feature map V ∈ R^{X×Y×Z×C}, we flatten the feature map onto a Bird's-Eye-View (BEV) shape (see Fig. 2). We collapse the Z dimension into the channel dimension C (i.e., reshaping it into a 3D tensor V′ ∈ R^{X×Y×(Z·C)}) and apply 2D convolutions to reduce the channel dimension, resulting in a 3D tensor FBEV ∈ R^{X×Y×C′}. This collapsed volumetric feature map FBEV contains information on the canonical ego-motion aggregated from the surrounding views. We then use the standard pose decoder from PoseNet [13] to estimate the canonical camera motion T^{t→t+1}.

Computing local camera poses. Assuming a static relationship between the cameras, we distribute the predicted canonical camera motion to each camera. From the given extrinsics Ei of each camera Ci and the canonical camera motion T^{t→t+1}, we compute each camera motion as

T^{t \to t+1}_i = E_i^{-1} E_1 T^{t \to t+1} E_1^{-1} E_i, (1)

where E_1 is the extrinsic of the canonical camera. Note that any viewpoint could serve as the canonical one; in this work, we define the canonical motion as the motion of the front-view camera.

3.5 Self-supervised learning

Given the camera motion and a depth map for each view, we apply our self-supervised loss to a temporal triplet of surround-view images. At test time, our method only requires images at the current time step t.

Self-supervised loss. Our self-supervised loss consists of the image reconstruction loss Limg and the depth synthesis loss Ldepth:

\mathcal{L} = \mathcal{L}_{img} + \mathcal{L}_{depth}. (2)

The image reconstruction loss penalizes the reconstruction error between the reference image I^t and the synthesized images Ĩ^{t+1} from each temporal, spatial, and spatio-temporal context (cf. Eq. (4)):

\mathcal{L}_{img} = \mathcal{L}_t + \lambda_{sp}\mathcal{L}_{sp} + \lambda_{sp\_t}\mathcal{L}_{sp\_t} + \lambda_{smooth}\mathcal{L}_{smooth}. (3)

The terms are the temporal loss Lt, the spatial loss Lsp, the spatio-temporal loss Lsp_t, and the smoothness loss Lsmooth. To measure the reconstruction error, we use a weighted sum of the intensity difference and the structural similarity [12, 13, 47]:

(1-\alpha)\,\lVert I^t - \tilde{I}^{t+1} \rVert_1 + \alpha\,\frac{1 - \mathrm{SSIM}(I^t, \tilde{I}^{t+1})}{2}, \quad \text{with } \alpha = 0.85.

The smoothness term Lsmooth follows an edge-aware 1st-order smoothness:

\mathcal{L}_{smooth} = \frac{1}{N}\sum_{p}\sum_{k \in \{x, y\}} \nabla_k D \cdot e^{-\lVert \nabla_k I \rVert_1}.
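To make the reconstruction term concrete, here is a hedged PyTorch sketch of the SSIM/L1 photometric error and the edge-aware smoothness above; the ssim_map input is assumed to come from a separate per-pixel SSIM module, and the absolute value on the depth gradient follows common self-supervised depth implementations rather than the literal formula in the text.

```python
import torch

def photometric_error(target, recon, ssim_map, alpha=0.85):
    """Per-pixel reconstruction error: (1 - a) * L1 + a * (1 - SSIM) / 2.

    target, recon: (B, 3, H, W) images; ssim_map: per-pixel SSIM of the pair,
    assumed to be computed elsewhere (e.g., by a standard SSIM module).
    """
    l1 = (target - recon).abs().mean(1, keepdim=True)
    return (1.0 - alpha) * l1 + alpha * (1.0 - ssim_map) / 2.0

def edge_aware_smoothness(depth, image):
    """Edge-aware 1st-order smoothness: |grad D| weighted by exp(-|grad I|)."""
    dx = (depth[:, :, :, :-1] - depth[:, :, :, 1:]).abs()
    dy = (depth[:, :, :-1, :] - depth[:, :, 1:, :]).abs()
    ix = (image[:, :, :, :-1] - image[:, :, :, 1:]).abs().mean(1, keepdim=True)
    iy = (image[:, :, :-1, :] - image[:, :, 1:, :]).abs().mean(1, keepdim=True)
    return (dx * torch.exp(-ix)).mean() + (dy * torch.exp(-iy)).mean()
```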
One main difference from the baseline work [15] is that we do not use the pose consistency loss, because our method outputs one representative camera motion for all views. This reduces the number of unknowns and thus encourages convergence and better accuracy.

Spatio-temporal context. The core idea of the self-supervised proxy task [15] is to exploit small overlaps between spatially and/or temporally neighboring images for matching, providing a supervisory signal and scale information for camera motion and depth. For a reference image I^t_i from camera Ci at time step t, it synthesizes reconstructed images Ĩ^t_i from (i) spatially neighboring images I^t_j (j: indices of neighboring cameras), (ii) temporally neighboring images I^{t′}_i (t′ ∈ {t+1, t−1}), and (iii) spatio-temporally neighboring images I^{t′}_j, given the predicted camera motion. The pixel-wise warping operation for image reconstruction is defined as

\Pi^{t \to t'}_{ij} = K_j\, X^{t \to t'}_{ij}\, D_i\, K_i^{-1}, (4a)

with

X^{t \to t'}_{ij} = T^{t \to t'}_i \; (t' \in \{t-1, t+1\}) for the temporal context; \; E_j E_i^{-1} for the spatial context; \; E_j E_i^{-1} T^{t \to t'}_i \; (t' \in \{t-1, t+1\}) for the spatio-temporal context, (4b)

with the estimated depth Di and camera intrinsics Ki and Kj. Here, Xij is the camera motion between the two cameras, depending on their spatio-temporal context. By minimizing the photometric difference between the reference image and the reconstructed image, the self-supervised loss guides the network to output metric-scale depths and camera motions that satisfy all contexts. A sketch of this warping is given at the end of this section.

Depth synthesis loss. We introduce a depth synthesis loss for successful depth synthesis at a novel view:

\mathcal{L}_{depth} = \lambda_{cons}\mathcal{L}_{cons} + \lambda_{depth\_smooth}\mathcal{L}_{depth\_smooth}, (5a)
\mathcal{L}_{cons} = \frac{1}{N}\sum_{p} \frac{\lvert D_i - \tilde{D}_i \rvert}{D_i + \tilde{D}_i}, (5b)
\mathcal{L}_{depth\_smooth} = \frac{1}{N}\sum_{p} (\nabla_x \tilde{D}_i + \nabla_y \tilde{D}_i). (5c)

The depth consistency loss Lcons penalizes the difference between the synthesized depth D̃i at the novel view and the depth Di at each known camera view i, adopted from Bian et al. [1]. The smoothness loss Ldepth_smooth [27] encourages 1st-order smoothness of the synthesized depth map D̃i at the novel view, so that it regularizes the set of volumetric features retrieved at the novel view and improves 3D-awareness at each voxel coordinate. For the synthesis, we augment a depth map at a novel view for each known camera view i and apply the depth synthesis loss.
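The sketch referenced above follows; it is a hedged PyTorch rendering of the pixel-wise warping in Eq. (4), where the intrinsics/extrinsics conventions (pixel coordinates, world-to-camera relative motions) and the helper name warp_to_reference are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def warp_to_reference(src_img, depth_ref, K_ref, K_src, X_ref_to_src):
    """Reconstruct the reference view by sampling a neighboring (source) image.

    depth_ref:    (B, 1, H, W) depth of the reference view D_i.
    K_ref, K_src: (B, 3, 3) intrinsics K_i, K_j.
    X_ref_to_src: (B, 4, 4) relative motion X_ij (temporal, spatial, or both),
                  composed as in Eq. (4b).
    """
    B, _, H, W = depth_ref.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).view(1, 3, -1).expand(B, -1, -1)
    # back-project reference pixels to 3D: D_i * K_i^{-1} p
    cam = torch.inverse(K_ref) @ pix * depth_ref.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)          # homogeneous coords
    src = (X_ref_to_src @ cam_h)[:, :3]                               # move into source frame
    uv = (K_src @ src) / src[:, 2:3].clamp(min=1e-3)                  # project: K_j X D K_i^{-1}
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(B, H, W, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)
```

Depending on the context in Eq. (4b), X_ref_to_src would be the per-camera ego-motion, the relative extrinsics between cameras, or their composition.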
4 Experiments

4.1 Implementation details

Dataset and evaluation protocol. We use the DDAD [14] and nuScenes [2] datasets for our experiments. Both datasets provide surround-view images from a total of 6 cameras mounted on a vehicle, and LiDAR point clouds for the depth evaluation. We train our model on each train split and report the accuracy on the test split. During training, the input images are down-sampled to a resolution of 384 × 640 for the DDAD dataset and 352 × 640 for the nuScenes dataset. We train our model for 20 epochs on the DDAD dataset and 5 epochs on the nuScenes dataset. For evaluation, we follow the same protocol as FSM [15], which evaluates depth up to 200 m for the DDAD dataset and 80 m for the nuScenes dataset. We use the conventional depth evaluation metrics proposed by Eigen et al. [8]; details of the metrics are provided in the supplementary material.

Training. We implemented our networks in PyTorch [31] and trained on four A100 GPUs. For the image encoders, we used ResNet-18 with weights pre-trained on ImageNet [39]. All experiments used the same training hyper-parameters (unless explicitly mentioned): the Adam optimizer with β1 = 0.9 and β2 = 0.999; a mini-batch size of 2 per GPU; and a learning rate [40] of 1 × 10^{-4}, decayed by a factor of 0.1 at 3/4 of the training schedule. The previous (t − 1) and subsequent (t + 1) images are used as temporal context at training time. Note that our depth estimation module does not require temporal context, and therefore temporal context is not used at test time. For our volumetric feature, we used a voxel resolution of (1 m, 1 m, 0.75 m) with spatial dimensions of (100, 100, 20) for the (x, y, z) axes, respectively. We use color jittering as data augmentation. For the depth synthesis loss, we use random rotations in the range between [−5°, −5°, −25°] and [5°, 5°, 25°] for the depth map synthesis at a novel view. In the self-supervised loss in Eq. (2), we use a depth smoothness weight λsmooth = 1 × 10^{-3}, a spatial loss weight λsp = 0.03, a spatio-temporal weight λsp_t = 0.1, a depth consistency weight λcons = 0.05, and a depth smoothness weight at novel views λdepth_smooth = 0.03. The DDAD dataset [14] includes images with different focal lengths; to train the network to output a consistent depth scale regardless of the focal length, we use focal length normalization [9], which normalizes the scale of the output depth to a default focal length.

Intensity distribution alignment. We observe that different lighting conditions in neighboring images yield artifacts in the depth output (Fig. 4c) with the self-supervised loss of the previous work [15]. To address this issue, we align the mean and variance over the overlapping regions of the reference image I and the synthesized image Ĩ:

\tilde{I}_{align} = (\tilde{I} - \tilde{\mu}) \cdot \sigma / \tilde{\sigma} + \mu, (6)

where {µ, σ²} are the mean and variance of the reference image I, and {µ̃, σ̃²} are those of the synthesized image Ĩ. The aligned image Ĩalign is then used for the loss calculation in Eq. (2). Fig. 4a briefly illustrates this process. Here, the reconstructed image can come from a spatial, temporal, or spatio-temporal context.
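A minimal sketch of the alignment in Eq. (6) follows, assuming a validity mask for the overlapping region is available; the function name and mask convention are illustrative.

```python
import torch

def align_intensity(recon, ref, mask, eps=1e-6):
    """Match the mean/variance of the reconstructed image to the reference
    over the valid (overlapping) region, per Eq. (6).

    recon, ref, mask are assumed to share the same shape; mask marks the
    pixels where the reconstruction is valid.
    """
    valid = mask.bool()
    mu, sigma = ref[valid].mean(), ref[valid].std()
    mu_t, sigma_t = recon[valid].mean(), recon[valid].std()
    return (recon - mu_t) * sigma / (sigma_t + eps) + mu
```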
4.2 Surround-view depth evaluation

We compare the accuracy of our method with state-of-the-art methods. For our closest baseline, FSM [15], we use our own reproduced version for the comparison because the implementation is not publicly available. For the reproduction of FSM, we found that the intensity distribution alignment and focal length normalization explained above were critical for stable self-supervised training. Despite our efforts, there is still a gap between our reproduced version and the reported accuracy; for a fair comparison, we conduct experiments under the same training and evaluation settings and demonstrate the accuracy gain over the reproduced baseline. Table 1 shows the evaluation on both the DDAD [14] and nuScenes [2] datasets. We also report the accuracy using per-frame median scaling for reference. On the DDAD dataset, our method shows competitive or better accuracy than the baseline FSM [15] on 5 out of 7 metrics; in particular, our method substantially improves the Sq Rel metric by around 17%. On the nuScenes dataset, our method demonstrates consistent improvement over the baseline, outperforming it on 6 out of 7 metrics, with over 24% improvement on the Sq Rel metric. Better accuracy is also observed when using median scaling for evaluation. We further provide point cloud reconstruction visualizations in Sec. D.3 of the supplementary material as clear evidence of the strength of our method.

4.3 Ablation study

Surround-view volumetric feature fusion. To validate our main contributions, we conduct an ablation study of our volumetric feature fusion and canonical motion estimation modules over the FSM baseline [15]. We train the models on the DDAD dataset [14] while keeping the original experimental setup. As shown in Table 2, both of our contributions consistently improve the accuracy over the baseline. Our volumetric feature representation embeds effective 3D information and results in better depth accuracy; the main difference from previous works [13, 15] is a few additional multilayer perceptrons with the volumetric feature representation, while keeping similar conventional encoder and decoder networks. Furthermore, we show the benefit of the volumetric feature representation for the camera motion module: reasoning globally about the camera motion benefits depth estimation. Fig. 6 further provides qualitative results for each setting in the ablation study and demonstrates the qualitative gains of each contribution.

Table 2: Ablation study on the volumetric feature fusion and canonical motion estimation modules. Both consistently improve the depth accuracy, substantially on the Abs Rel and Sq Rel metrics.

Volumetric feature fusion | Canonical motion | Abs Rel | Sq Rel | RMSE   | RMSE log | δ<1.25 | δ<1.25² | δ<1.25³
(Our reproduced baseline) |                  | 0.228   | 4.409  | 13.433 | 0.342    | 0.687  | 0.870   | 0.932
✓                         |                  | 0.222   | 4.055  | 13.474 | 0.348    | 0.682  | 0.862   | 0.928
                          | ✓                | 0.222   | 3.969  | 13.492 | 0.344    | 0.677  | 0.865   | 0.931
✓                         | ✓                | 0.218   | 3.660  | 13.327 | 0.339    | 0.674  | 0.862   | 0.932

Volumetric feature encoder. Table 3 shows the comparison of different approaches for aggregating surround-view image features into a shared voxel space. We compare three methods that are commonly used to process features in voxel space. Avg voxel simply averages the features accumulated at the same voxel coordinate. 3D conv applies two sequential 3D convolutional blocks to the volumetric feature, where each block is composed of a 3D convolutional layer, batch normalization, and LeakyReLU, in that order. Our method, using MLPs, clearly outperforms the other two. Compared to 3D convolution, we find that MLPs more effectively learn to pool or weight multiple features in the overlap regions. In Fig. 7, we further visualize the error maps of each method (Avg voxel, 3D conv, MLPs) as well as the baseline FSM [15]. The results provide clear evidence that the volumetric feature approach takes full advantage of the overlap regions compared to [15]. Furthermore, our MLP-based method shows the highest accuracy over all regions, including moving objects, compared to Avg voxel or 3D conv.

5 Conclusions

We proposed a novel volumetric feature representation for self-supervised surround-view depth estimation. The volumetric feature representation aggregates image features from surround-view images, encodes 3D information, and is then used to estimate a depth map at each view and the canonical motion between two temporally consecutive images. Furthermore, by projecting the volumetric feature to arbitrarily rotated views, our method can also synthesize depth maps at novel views, additionally with varying focal lengths. Our method outperforms the state-of-the-art surround-view depth method.
The ablation study successfully validates our contributions as well as the design choices in the volumetric feature encoding. In future work, we expect that our volumetric feature representation can benefit other computer vision tasks such as object detection, image segmentation, and motion estimation in the surround-view camera setup.

Limitation. Our volumetric feature representation is defined in a 3D cubic space, so the trade-off between memory usage and resolution must be taken into consideration. Because we focus on embedding 3D information into a novel representation, blurred regions sometimes occur in the depth map (e.g., in Fig. 1 and Fig. 6). Using cylindrical or spherical coordinates would improve memory efficiency. Furthermore, artifacts sometimes appear in the synthesized depth map in Fig. 1 where no input image feature is given. In future work, we will consider using additional regularization terms to preserve the details of the depth results as well as to reduce these artifacts.

Potential negative societal impacts. Methods for surround-view depth estimation require a large number of images for training, which may include images with privacy issues. To prevent this, the dataset should be carefully curated and used for research purposes only.
1. What is the novelty of the proposed approach in surround-view depth estimation?
2. How does the proposed method differ from other works that use a volumetric feature representation?
3. Can you provide more convincing comparisons with existing methods, particularly in terms of qualitative improvements?
4. How does the proposed method handle the trade-off between resolution and computation?
5. Can you clarify the "outer product" operation in Figure 3 and provide precise definitions of the loss terms?
6. Can you include more examples or analyses to demonstrate the advantages of the proposed approach over FSM and other methods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors propose a self-supervised approach for the task of surround-view depth estimation. 2D image features are un-projected onto a shared 3D volume, which is later queried for decoding depth maps at target views. Similarly, pose changes of the canonical frame are also decoded from a volumetric feature pooled from all 2D views. Evaluations conducted on DDAD and nuScenes show competitive results compared to FSM and others.

Strengths And Weaknesses
Strengths
Because of the use of a volumetric feature representation, features from images are aggregated in a shared space. This should help produce more consistent depth maps across views. Also because of the use of a volumetric feature representation, the proposed model has the unique capability of predicting depth maps for views not among the inputs.

Weaknesses
A few relevant works are not included or discussed sufficiently. The way 2D features are un-projected onto a shared 3D volume for aggregation has been used in many places, e.g. [1], [2], [3]. [1] Atlas: End-to-End 3D Scene Reconstruction from Posed Images, ECCV 2020. [2] Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D, ECCV 2020. [3] Learning a Multi-View Stereo Machine, NIPS 2017. In particular, [3] is very similar as it also fuses 2D features from multiple views on a shared 3D volume which is later projected to 2D views for depth estimation. The quality advantage over existing methods is not convincing: although slightly better than FSM quantitatively in some cases, the gap is usually quite minimal and sometimes non-existent. Qualitative comparisons (e.g. in Fig.6) are not showing any obvious improvements. How do the depth maps look if viewed as a point cloud? This could also provide insight into whether the proposed method has better agreement between adjacent views. Because of the volumetric feature representation, there is a trade-off between resolution and compute. This is not a concern for image-space models. A few more minor issues: The "outer product" operation in Fig.3 is not mentioned anywhere in the text and contradicts the architecture given in the supplementary. The loss terms are not given precise definitions. L.257 ("provide our attempts to reproduce and discussions in the supplementary material.") -- not found in supplementary.

Questions
It's important to address the closely related prior work mentioned above and highlight any new contributions. Also, I feel the paper needs to have a more convincing analysis of the differences compared to FSM and others, e.g. examples highlighting typical qualitative improvements.

Limitations
Adequately addressed.
NIPS
Title Self-supervised surround-view depth estimation with volumetric feature fusion Abstract We present a self-supervised depth estimation approach using a unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from surround-view images and fuse the feature maps into a shared, unified 3D voxel space. The volumetric feature map then can be used for estimating a depth map at each surround view by projecting it into an image coordinate. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize a depth map at arbitrary rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatiotemporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms the prior arts on DDAD and nuScenes datasets, especially estimating more accurate metric-scale depth and consistent depth between neighboring views. 1 Introduction Depth perception of surrounding environment is one of the key components for 3D vision applications such as autonomous driving, robotics, or augmented reality. Specifically for autonomous driving, 3D perception benefits numerous downstream tasks including 3D objects detection [7, 29, 46], 6D pose estimation [6, 21], object tracking [18, 41], etc. While human drivers can proactively move their heads and eyes to observe their surrounding environment, an autonomous vehicle equips multiple cameras with fixed viewpoints and monitors its surroundings. The multi-camera system hence exhibits limitations subjective to the fixed camera setup; the camera system might share a small portion between the adjacent viewpoints and rely on heterogeneous camera intrinsics. As a prior work, Full Surround Monodepth (FSM) [15] extended a self-supervised monocular depth estimation method [13] into a multi-camera setup of autonomous vehicles. Their method exploits small image overlaps between spatio-temporally neighboring cameras as a supervision signal to learn metric-scale depth. Yet, FSM [15] individually estimates depth and camera motion for each view with a shared CNN, and it does not utilize any additional image cues from neighboring views at test time. ∗denotes equal contribution. † This work has been done at 42dot Inc. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). To date, architectural design choices in the spatio-temporal domain are relatively less investigated, creating room for improvements. We introduce a self-supervised approach to surround-view depth estimation based on a unified volumetric feature representation. Given multiple images covering the surrounding view, our method extracts an image feature from each image, back-projects the features into 3D space (i. e., voxel coordinate), and fuses them into a unified volumetric feature map. Each volumetric feature contains 3D information at its located voxel coordinate. Some volumetric features are shared between neighboring views due to their image overlaps; we present a tailored multilayer perceptron (MLP) to process the volumetric features with consideration of the superposition. 
Given the volumetric features, we project the features back to the image coordinate with known camera information and pass through the depth decoder to obtain a depth map at each view. Because the volumetric feature contains 3D information at its local voxel coordinate, our method can also synthesize a depth map at arbitrary rotated viewpoints by projecting the features into an image coordinate with a target focal length and target extrinsics. This can overcome the limitation of fixed multi-camera setup. Fig. 1 demonstrates the results of depth map synthesis at novel views. Our method can seamlessly synthesize depth maps at arbitrary viewpoints from the volumetric features with different focal lengths and camera angles. Furthermore, unlike previous work [15], we introduces a canonical camera motion estimation, assuming static extrinsics between cameras. This geometry constraint leads to a stabilized egomotion learning and enhanced metric-scale depth estimation. On top of that, we adjust the intensity distribution for the neighboring viewpoints, which reduces irregular photometric reconstruction errors. We present a self-supervised learning framework that learns to estimate scale-aware depth and camera motion from unlabeled surround-view images, using 3D spatio-temporal reconstruction. We summarize our contributions as follows: (i) we introduce a novel volumetric feature representation that effectively learns to estimate the surround-view depth and canonical camera motion. To our knowledge, our method is the first approach that targets multi-camera feature fusion with selfsupervised depth estimation. (ii) We demonstrate synthesizing depth maps at arbitrary views with the unified volumetric feature representation. (iii) On popular multi-camera datasets [2, 14], our method demonstrates consistent improvements compared to the state-of-the-art method [15]. 2 Related Work Monocular depth estimation. Supervised-learning-based monocular depth approaches [8, 10, 20, 22, 36] require a large amount of annotated data for supervision, and as a result, it limits the scalability of their methods due to the expense of acquiring such annotated data on diverse scenes. To address the limitation, self-supervised methods propose proxy learning tasks that use unlabeled temporallyconsecutive images [3, 14, 23, 24, 37, 47, 49, 51, 52] or stereoscopic pairs [11–13, 25, 34, 35] during training time. For supervision signals, the approaches minimize image reconstruction errors between reference images and synthesized images that are generated from output depths and temporally-(or spatially-)neighboring images. Yet, the methods output depth maps only up to scale or at a fixed viewpoint. In this work, we demonstrate a surround-view depth method that outputs depth maps at arbitrary viewpoints as well as in a metric scale. Omnidirectional depth estimation. Previous work has demonstrated estimating depth on wide field-of-view (FOV) images [48] or 360◦ images [45, 44, 53] that covers surrounding view, and efforts on finding a suitable projective geometry to process those input images have been continued. Initial approaches [45, 53] directly used 360◦ images that are represented under equirectangular projection, but there exists severe visual distortion on the top and bottom parts of the images. To avoid such distortion on the 360◦ image, a cubemap representation [4, 43, 44] has been presented, which rectifies a input 360◦ image into a cube map coordinate and uses it for the input instead. Won et al. 
[48] proposed to directly perform a stereo matching of multiple fisheye images in a spherical coordinate. Neural Ray Surfaces [42] introduced a generic framework that jointly learns to estimate depth and motion without prior knowledge of specific camera models. Volumetric feature representation. The basic idea of fusing multi-view features into a shared voxel space has been presented in other tasks, such as multi-view stereo [19, 26, 30, 32, 50] or 3D semantic segmentation [5, 26]. The approaches mainly focus on solving a matching task for object-level or static indoor scene reconstruction using intractable 3D convolution. Based on the similar principle, our method presents a volumetric feature representation for surround-view depth and canonical motion estimation. Comparing to 3D convolution, we demonstrate that light-weighted MLP layers are more efficient for fusing features from surround-view images with small overlaps. Some approaches [17, 28, 33, 38] demonstrate aggregating 2D features from surround-view images onto a 2D Bird’s-eye-view (BEV) space for semantic segmentation or object detection. However, the BEV-based representation does not precisely preserve 3D information due to the abstraction of the height information. In contrast, our approach presents a voxel-based representation that is more suitable for depth and canonical pose estimation. 3 Surround-View Depth Estimation via Volumetric Feature Fusion Given two temporally consecutive surround-view images, Iti and I t+1 i , captured by multiple cameras Ci (i ∈ {1, 2, ..., 6} in our setup) that cover a surrounding view of a vehicle, our method estimates metric-scale depth Dti for each image I t i from each camera Ci (but also at an arbitrary rotated viewpoint) and a canonical camera motion of the vehicle Tt→t+1 as an auxiliary task. Our methods assumes known camera intrinsics Ki and extrinsics Ei, but each camera can have different intrinsics. 3.1 Architecture overview We propose a novel fusion module for surround-view depth estimation. Fig. 2 shows an overview of our approach, consisting of the surround-view volumetric feature fusion (Sec. 3.2), depth estimation (Sec. 3.3) and canonical motion estimation (Sec. 3.4) module. The surround-view feature fusion module constructs a single, unified volumetric feature map V by aggregating multi-scale image feature maps F that are extracted from each image in the surround-view camera setup. From the volumetric feature map V, the depth estimation module retrieves a per-frame feature that represents a specific viewpoint and passes it through a depth decoder to produce a depth map for the corresponding view. Because a volumetric feature at each voxel contains its local 3D information, our method can also synthesize a depth map at an arbitrary view by projecting the volumetric feature map into the target view. The canonical motion estimation module flattens the volumetric feature map along the z-axis and predicts a global motion referring to the canonical camera coordinate. In contrast to FSM [15] that independently estimates each camera motion, our design globally reasons a camera motion, which not only reduces the number of unknown factors but also makes the problem simpler. 3.2 Surround-view volumetric feature fusion Image feature encoding. We first extract image features from surround-view images. Given a set of surround-view images, we pass each image Ii through a shared 2D image encoder to obtain a image feature map Fi for ith view. 
We use ResNet [16] as the encoder, which takes an image I_i (with a resolution of H × W) and outputs a multi-scale feature pyramid. We take the last three feature maps from the feature pyramid, resize them to a resolution of H/8 × W/8, concatenate them, and apply a 1×1 convolution for channel reduction. As a result, we obtain a single feature map F_i for each image, where the feature map conveys multi-scale and high-dimensional information about the image.

Volumetric feature encoding. We aggregate the image feature maps F_i from the surround-view images into a single, unified volumetric feature map V in a pre-defined voxel space, as illustrated in Fig. 3. For each voxel, we find a corresponding pixel p(w, h) by projecting the voxel (x, y, z) into the pixel coordinate. Then, we bilinearly interpolate an image feature F(p) at the projected pixel p(w, h) to handle sub-pixel locations and allocate the sampled image feature to the voxel. Because the sampled image feature F(p) contains high-level information along its pixel ray, we extract a local 3D feature at the voxel (x, y, z) by concatenating the feature with the depth value of the voxel (i.e., depth positional encoding) and passing it through an MLP. In this way, voxels that are passed through by the same pixel ray have individual 3D features extracted by the MLP instead of sharing the same image feature. Some voxels are associated with multiple image features due to spatial overlaps across the camera views. To merge the multiple features in the overlap regions, we use another MLP that learns to fuse the multiple per-pixel image features into a per-voxel volumetric feature.

3.3 Depth estimation

From the unified volumetric feature map, we aim to estimate a depth map of each surround-view input image but also to synthesize depth maps at arbitrary rotated camera viewpoints.

Transformation of the volumetric feature into a projected image feature. For each target view C_i whose depth map we want to estimate, we project the volumetric feature map V into a projected image feature map F̃_i with a resolution of H/8 × W/8, using the camera extrinsics E_i of the target view. For each pixel p in the target image coordinate, we uniformly sample volumetric features along its pixel ray and concatenate the sampled features to obtain the projected image feature F̃_i(p).

Depth decoder. To predict a depth map from the projected image feature map F̃_i, we use a lightweight depth decoder Dec_depth consisting of 4 convolutional layers: 3 convolutional layers for upsampling (H/8 × W/8 → H × W) and one convolutional layer for the depth output.
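A simplified, illustrative sketch of the volumetric feature encoding is shown below. It is not our exact implementation: the MLP widths, the masked-average handling of voxels seen by several cameras before the fusion MLP, and the toy camera parameters are placeholder choices, and the ray-sampling read-out for the depth decoder is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VolumetricFusion(nn.Module):
    """Sketch of Sec. 3.2: back-project per-view 2D features into a voxel grid and fuse them."""
    def __init__(self, feat_dim=64, voxel_dim=64):
        super().__init__()
        # MLP that turns (sampled image feature, voxel depth) into a local 3D feature
        self.lift_mlp = nn.Sequential(nn.Linear(feat_dim + 1, voxel_dim), nn.ReLU(),
                                      nn.Linear(voxel_dim, voxel_dim))
        # MLP that fuses contributions from overlapping views (simplified: applied to their mean)
        self.fuse_mlp = nn.Sequential(nn.Linear(voxel_dim, voxel_dim), nn.ReLU(),
                                      nn.Linear(voxel_dim, voxel_dim))

    def forward(self, feats, K, E, voxels):
        # feats: list of N per-view feature maps, each (C, Hf, Wf)
        # K: (N, 3, 3) intrinsics scaled to the feature resolution; E: (N, 4, 4) world-to-camera
        # voxels: (M, 3) voxel centers in the vehicle (world) frame
        per_view, masks = [], []
        vox_h = torch.cat([voxels, torch.ones(voxels.shape[0], 1)], dim=1)   # homogeneous (M, 4)
        for i, feat in enumerate(feats):
            cam = (E[i] @ vox_h.T).T[:, :3]                   # voxel centers in camera frame
            z = cam[:, 2:3].clamp(min=1e-3)
            pix = (K[i] @ cam.T).T[:, :2] / z                 # perspective projection to pixels
            Hf, Wf = feat.shape[1:]
            grid = torch.stack([2 * pix[:, 0] / (Wf - 1) - 1,
                                2 * pix[:, 1] / (Hf - 1) - 1], dim=1)        # normalize to [-1, 1]
            sampled = F.grid_sample(feat[None], grid[None, None],            # bilinear sampling
                                    align_corners=True)[0, :, 0].T           # (M, C)
            lifted = self.lift_mlp(torch.cat([sampled, z], dim=1))           # depth positional encoding
            valid = ((grid.abs() <= 1).all(dim=1) & (cam[:, 2] > 0)).float()[:, None]
            per_view.append(lifted * valid)
            masks.append(valid)
        count = torch.stack(masks).sum(dim=0).clamp(min=1)
        mean_feat = torch.stack(per_view).sum(dim=0) / count                 # masked average over views
        return self.fuse_mlp(mean_feat)                                      # (M, voxel_dim)

# Toy usage with random tensors (the real grid is 100x100x20 voxels, cf. Sec. 4.1)
fusion = VolumetricFusion()
feats = [torch.randn(64, 48, 80) for _ in range(6)]
K = torch.eye(3).repeat(6, 1, 1) * 40
K[:, 2, 2] = 1
K[:, 0, 2], K[:, 1, 2] = 40, 24
E = torch.eye(4).repeat(6, 1, 1)
voxels = torch.rand(1000, 3) * 10
V = fusion(feats, K, E, voxels)   # (1000, 64) per-voxel volumetric features
```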
Depth map synthesis at a novel view. We further extend our depth estimation module to produce a depth map for a novel viewpoint that is not covered by the original multi-camera system. Because a volumetric feature at each voxel encodes local 3D structure information in a certain range, our method can synthesize a depth map at a desired arbitrary viewpoint by projecting the volumetric feature into that viewpoint and passing it through the depth decoder. Fig. 1 visualizes synthesized depth maps at arbitrary rotated viewpoints. Our method can seamlessly synthesize depth maps with variations of yaw, roll, or pitch angles as well as arbitrary focal lengths.

3.4 Canonical motion estimation

The prior work, FSM [15], separately estimates a relative camera motion for each camera using a shared encoder-decoder network. However, this process can be redundant because a multi-camera system attached to a vehicle shares the same canonical motion. To improve on this, we introduce a canonical pose estimation module that takes a collapsed volumetric feature map F_BEV and predicts one representative canonical camera motion, as illustrated in Fig. 2. Given the estimated canonical camera motion, we can directly compute the motion of each camera from its known camera extrinsics. Here, we use the same architecture design to obtain a volumetric feature map (cf. Sec. 3.2) but with different trainable network weights, and use two temporally consecutive images as the input.

Volumetric feature reduction. Given a surround-view volumetric feature map V ∈ R^{X×Y×Z×C}, we flatten the feature map onto a Bird's-Eye-View (BEV) shape (see Fig. 2). We collapse the Z dimension into the channel dimension C (i.e., reshaping it into a 3D tensor V′ ∈ R^{X×Y×(Z·C)}) and apply 2D convolutions to reduce the channel dimension, resulting in a 3D tensor F_BEV ∈ R^{X×Y×C′}. This collapsed volumetric feature map F_BEV contains information on the canonical ego-motion aggregated from the surrounding views. We then use the standard pose decoder from PoseNet [13] to estimate the canonical camera motion T^{t→t+1}.

Computing local camera poses. Assuming a static relationship between the cameras, we distribute the predicted canonical camera motion to each camera. From the given extrinsics E_i of each camera C_i and the canonical camera motion T^{t→t+1}, we compute each camera motion as:

T_i^{t→t+1} = E_i^{-1} E_1 T^{t→t+1} E_1^{-1} E_i, (1)

where E_1 is the extrinsics of the canonical camera. Note that any viewpoint could serve as the canonical one. In this work, we define the canonical motion as the motion of the front-view camera.

3.5 Self-supervised learning

Given the camera motion and the depth map of each view, we apply our self-supervised loss on a temporal triplet of surround-view images. At test time, our method only requires images at the current time step t.

Self-supervised loss. Our self-supervised loss consists of the image reconstruction loss L_img and the depth synthesis loss L_depth:

L = L_img + L_depth. (2)

The image reconstruction loss penalizes the reconstruction error between the reference image I^t and the synthesized images Ĩ^{t+1} from each temporal, spatial, and spatio-temporal context (cf. Eq. (4)),

L_img = L_t + λ_sp L_sp + λ_sp_t L_sp_t + λ_smooth L_smooth. (3)

The terms stand for the temporal L_t, spatial L_sp, spatio-temporal L_sp_t, and smoothness loss L_smooth. To measure the reconstruction error, we use a weighted sum of intensity difference and structural similarity [12, 13, 47]: (1 − α) ‖I^t − Ĩ^{t+1}‖_1 + α (1 − SSIM(I^t, Ĩ^{t+1})) / 2, with α = 0.85. The smoothness term L_smooth follows an edge-aware 1st-order smoothness: L_smooth = (1/N) Σ_p Σ_{k∈{x,y}} ∇_k D · e^{−‖∇_k I‖_1}.
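A minimal sketch of how the canonical motion is distributed to an individual camera via Eq. (1) is given below; the toy extrinsics are illustrative values only, and the convention for E_i (world-to-camera vs. camera-to-world) follows whatever is used for Eq. (1), since the sketch simply applies the composition as written.

```python
import torch

def camera_motion_from_canonical(T_canonical, E_i, E_1):
    """Eq. (1): distribute the canonical motion T^{t->t+1} to camera i.

    T_canonical: (4, 4) canonical (front-camera) motion T^{t->t+1}
    E_i: (4, 4) extrinsics of camera i; E_1: (4, 4) extrinsics of the canonical camera.
    Returns T_i^{t->t+1} = E_i^{-1} E_1 T^{t->t+1} E_1^{-1} E_i.
    """
    return torch.linalg.inv(E_i) @ E_1 @ T_canonical @ torch.linalg.inv(E_1) @ E_i

# Toy example: a forward translation of the rig expressed in a side camera's own frame.
T = torch.eye(4); T[2, 3] = 1.0            # canonical camera moves 1 m along its z-axis
E_1 = torch.eye(4)                          # front camera defines the canonical frame
E_side = torch.eye(4); E_side[0, 3] = 0.5   # a side camera offset by 0.5 m (illustrative)
print(camera_motion_from_canonical(T, E_side, E_1))
```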
One main difference from the baseline work [15] is that we do not use the pose consistency loss, because our method outputs one representative camera motion for all views. This reduces the number of unknowns and thus encourages convergence and better accuracy.

Spatio-temporal context. The core idea of the self-supervised proxy task [15] is to exploit small overlaps between spatially and/or temporally neighboring images for matching, providing a supervisory signal and scale information for camera motion and depth. For a reference image I_i^t from camera C_i at time step t, it synthesizes reconstructed images Ĩ_i^t from (i) spatially neighboring images I_j^t (j: indices of neighboring cameras), (ii) temporally neighboring images I_i^{t′} (t′ ∈ {t+1, t−1}), and (iii) spatio-temporally neighboring images I_j^{t′}, given the predicted camera motion. A pixel-wise warping operation for the image reconstruction is defined as:

Π_{ij}^{t→t′} = K_j X_{ij}^{t→t′} D_i K_i^{-1}, (4a)

with X_{ij}^{t→t′} = T_i^{t→t′} (t′ ∈ {t−1, t+1}) for the temporal context, E_j E_i^{-1} for the spatial context, and E_j E_i^{-1} T_i^{t→t′} (t′ ∈ {t−1, t+1}) for the spatio-temporal context, (4b)

with estimated depth D_i and camera intrinsics K_i and K_j. Here, X_{ij} is the camera motion between the two cameras depending on their spatio-temporal context. By minimizing the photometric difference between the reference image and the reconstructed image, the self-supervised loss guides the network to output metric-scale depths and camera motions that satisfy all contexts.

Depth synthesis loss. We introduce a depth synthesis loss for successful depth synthesis at a novel view:

L_depth = λ_cons L_cons + λ_depth_smooth L_depth_smooth, (5a)
L_cons = (1/N) Σ_p |D_i − D̃_i| / (D_i + D̃_i), (5b)
L_depth_smooth = (1/N) Σ_p (∇_x D̃_i + ∇_y D̃_i). (5c)

The depth consistency loss L_cons penalizes the depth difference between the synthesized depth D̃_i at the novel view and the depth D_i at each known camera view i, adopted from Bian et al. [1]. The smoothness loss L_depth_smooth [27] encourages 1st-order smoothness of the synthesized depth map D̃_i at the novel view, so that it regularizes the set of volumetric features retrieved at the novel view and improves 3D-awareness at each voxel coordinate. For the synthesis, we augment a depth map at a novel view for each known camera view i and apply the depth synthesis loss.
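An illustrative sketch of the warping operation in Eq. (4) and the depth consistency term in Eq. (5b) is shown below. It is a simplified stand-in rather than our training code: the toy intrinsics, the relative motion, and the border padding mode are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def warp_to_reference(img_j, depth_i, K_i, K_j, X_ij):
    """Sketch of Eq. (4): reconstruct reference view i by sampling from image j.

    img_j: (1, 3, H, W) source image; depth_i: (1, 1, H, W) depth of the reference view i
    K_i, K_j: (3, 3) intrinsics; X_ij: (4, 4) relative motion between the two views
    """
    _, _, H, W = depth_i.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)   # homogeneous pixels
    cam_i = torch.linalg.inv(K_i) @ pix * depth_i.reshape(1, -1)             # back-project with D_i
    cam_i_h = torch.cat([cam_i, torch.ones(1, cam_i.shape[1])], dim=0)
    cam_j = (X_ij @ cam_i_h)[:3]                                             # move into view j
    proj = K_j @ cam_j
    proj = proj[:2] / proj[2:3].clamp(min=1e-3)                              # project with K_j
    grid = torch.stack([2 * proj[0] / (W - 1) - 1,
                        2 * proj[1] / (H - 1) - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(img_j, grid, padding_mode="border", align_corners=True)

def depth_consistency_loss(d_known, d_novel):
    """Eq. (5b): scale-normalized consistency between a known-view depth and a synthesized one."""
    return ((d_known - d_novel).abs() / (d_known + d_novel)).mean()

# Toy usage with random tensors
img_j = torch.rand(1, 3, 64, 96)
depth_i = torch.rand(1, 1, 64, 96) * 20 + 1
K = torch.tensor([[50., 0., 48.], [0., 50., 32.], [0., 0., 1.]])
X = torch.eye(4); X[0, 3] = 0.2                       # small lateral motion (illustrative)
rec_i = warp_to_reference(img_j, depth_i, K, K, X)    # compared photometrically with I_i
print(depth_consistency_loss(depth_i, depth_i * 1.05))
```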
4 Experiments

4.1 Implementation details

Dataset and evaluation protocol. We use the DDAD [14] and nuScenes [2] datasets for our experiments. Both datasets provide surround-view images from a total of 6 cameras mounted on a vehicle and LiDAR point clouds for the depth evaluation. We train our model on each train split and report the accuracy on the test split. During training, the input images are down-sampled to a resolution of 384 × 640 for the DDAD dataset and 352 × 640 for the nuScenes dataset. We train our model on the DDAD dataset for 20 epochs and on the nuScenes dataset for 5 epochs. For evaluation, we follow the same protocol as FSM [15], which evaluates depth up to 200 m for the DDAD dataset and 80 m for the nuScenes dataset. We use the conventional depth evaluation metrics proposed by Eigen et al. [8]. We provide details of the metrics in the supplementary material.

Training. We implemented our networks in PyTorch [31] and trained on four A100 GPUs. For the image encoders, we used ResNet-18 with weights pre-trained on ImageNet [39]. All experiments used the same training hyper-parameters (unless explicitly mentioned): Adam optimizer with β1 = 0.9 and β2 = 0.999; a mini-batch size of 2 per GPU and a learning rate [40] of 1 × 10^{-4}, decaying at 3/4 of the entire training schedule by a factor of 0.1; the previous (t − 1) and subsequent (t + 1) images are used as temporal context at training time. Note that our depth estimation module does not require temporal contexts, and therefore temporal contexts are not used at test time. For our volumetric feature, we used a voxel resolution of (1 m, 1 m, 0.75 m) with spatial dimensions of (100, 100, 20) for the (x, y, z) axes, respectively. We use color jittering as data augmentation. For the depth synthesis loss, we use random rotations in the range between [-5°, -5°, -25°] and [5°, 5°, 25°] for the depth map synthesis at a novel view. In the self-supervised loss in Eq. (2), we use a depth smoothness weight λ_smooth = 1 × 10^{-3}, spatial loss weight λ_sp = 0.03, spatio-temporal weight λ_sp_t = 0.1, depth consistency weight λ_cons = 0.05, and depth smoothness weight at novel views λ_depth_smooth = 0.03. The DDAD dataset [14] includes images with different focal lengths; in order to train the network to output a consistent depth scale regardless of the focal length, we use focal length normalization [9], which normalizes the scale of the output depth to a default focal length.

Intensity distribution alignment. We observe that different lighting conditions in neighboring images yield artifacts in the depth output (Fig. 4c) with the self-supervised loss of the previous work [15]. To address the issue, we align the mean and variance on the overlapped regions of the reference image I and the synthesized image Ĩ:

Ĩ_align = (Ĩ − µ̃) · σ/σ̃ + µ, (6)

where {µ, σ²} are the mean and variance of the reference image I, and {µ̃, σ̃²} are those of the synthesized image Ĩ. The aligned image Ĩ_align is then used for the loss calculation in Eq. (2). Fig. 4a briefly illustrates this process. Here, the reconstructed image can come from either a spatial, temporal, or spatio-temporal context.

4.2 Surround-view depth evaluation

We compare the accuracy of our method with state-of-the-art methods. Especially for our closest baseline FSM [15], we use our own reproduced version of FSM for the comparison because the implementation is not provided. For the reproduction of FSM [15], we found that the intensity distribution alignment and focal length normalization explained above were critical for stable self-supervised training. Despite our efforts, there is still a gap between our reproduced version and the reported accuracy; for a fair comparison, we conduct experiments under the same training and evaluation settings and demonstrate an accuracy gain over the reproduced baseline. Table 1 presents the evaluation on both the DDAD [14] and nuScenes [2] datasets. We also report the accuracy using per-frame median scaling for reference. On the DDAD dataset, our method shows competitive or better accuracy compared to the baseline method FSM [15] on 5 out of 7 metrics. In particular, our method substantially improves the accuracy on the Sq Rel metric by around 17%. On the nuScenes dataset, our method demonstrates consistent improvement over the baseline, outperforming it on 6 out of 7 metrics, with over 24% improvement on the Sq Rel metric. Better accuracy is also observed when using median scaling for evaluation.
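Two of the ingredients above are easy to pin down in a small sketch: the intensity distribution alignment of Eq. (6), applied on the overlapped region before computing the reconstruction error, and the conventional depth metrics of Eigen et al. [8] reported in Table 1 (with the optional per-frame median scaling). The code below summarizes the standard definitions; the mask handling and numerical epsilons are our own placeholder choices.

```python
import numpy as np

def align_intensity(synth, ref, overlap_mask):
    """Eq. (6): match the mean/variance of the synthesized image to the reference on the overlap."""
    mu, sigma = ref[overlap_mask].mean(), ref[overlap_mask].std()
    mu_s, sigma_s = synth[overlap_mask].mean(), synth[overlap_mask].std()
    return (synth - mu_s) * sigma / (sigma_s + 1e-8) + mu

def depth_metrics(pred, gt, median_scale=False, max_depth=200.0):
    """Conventional Eigen et al. depth metrics with optional per-frame median scaling."""
    valid = (gt > 0) & (gt < max_depth)          # evaluate only where ground truth exists
    pred, gt = pred[valid], gt[valid]
    if median_scale:                             # reported separately, for reference only
        pred = pred * np.median(gt) / np.median(pred)
    thresh = np.maximum(gt / pred, pred / gt)
    return {"abs_rel": np.mean(np.abs(pred - gt) / gt),
            "sq_rel": np.mean((pred - gt) ** 2 / gt),
            "rmse": np.sqrt(np.mean((pred - gt) ** 2)),
            "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
            "a1": np.mean(thresh < 1.25),
            "a2": np.mean(thresh < 1.25 ** 2),
            "a3": np.mean(thresh < 1.25 ** 3)}

# Toy usage
gt = np.random.uniform(1.0, 80.0, size=(352, 640))
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)
print(depth_metrics(pred, gt, median_scale=True))
```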
We further demonstrate point cloud reconstruction visualizations in Sec. D.3 of the supplementary material to provide clear evidence of the strength of our method.

4.3 Ablation study

Surround-view volumetric feature fusion. To validate our main contributions, we conduct an ablation study of our volumetric feature fusion and canonical motion estimation modules over the FSM baseline [15]. We train the model on the DDAD dataset [14] while keeping the original experiment setup. As shown in Table 2, both of our contributions consistently improve the accuracy over the baseline. Our volumetric feature representation embeds effective 3D information and results in better depth accuracy; the main difference from previous works [13, 15] is a few additional multilayer perceptrons with the volumetric feature representation, while keeping similar conventional encoder and decoder networks. Furthermore, we show the benefit of the volumetric feature representation for the camera motion module; globally reasoning about the camera motion benefits the depth estimation. Fig. 6 further provides qualitative results of each setting in the ablation study and demonstrates the qualitative gains of each contribution.

Table 2: Ablation study on our volumetric feature fusion and canonical motion estimation modules. Both consistently improve the depth accuracy, substantially on the Abs Rel and Sq Rel metrics. Columns: volumetric feature fusion / canonical motion, then Abs Rel, Sq Rel, RMSE, RMSE log, δ<1.25, δ<1.25², δ<1.25³.
(Our reproduced baseline)  0.228  4.409  13.433  0.342  0.687  0.870  0.932
✓                          0.222  4.055  13.474  0.348  0.682  0.862  0.928
✓                          0.222  3.969  13.492  0.344  0.677  0.865  0.931
✓ ✓                        0.218  3.660  13.327  0.339  0.674  0.862  0.932

Volumetric feature encoder. Table 3 shows the comparison of different approaches to aggregate surround-view image features into a shared voxel space. We compare three different methods that are commonly used to process features in the voxel space. Avg voxel simply averages the features that are accumulated at the same voxel coordinate. 3D conv applies two sequential 3D convolutional blocks to the volumetric feature; each 3D convolutional block is composed of a 3D convolutional layer, batch normalization, and LeakyReLU, in consecutive order. Our method, using MLPs, clearly outperforms the other two methods. Compared to 3D convolution, we find that MLPs more effectively learn to pool or weight multiple features in overlap regions. In Fig. 7, we further visualize error maps of each method (Avg voxel, 3D conv, MLPs) including the baseline FSM [15]. The results provide clear evidence that the volumetric feature approach takes full advantage of the overlap regions compared to [15]. Furthermore, our MLP-based method shows the highest accuracy across all regions, including moving objects, compared to Avg voxel or 3D conv.
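For concreteness, the three aggregation choices compared in Table 3 can be contrasted with the following simplified sketch. It only illustrates the design space: layer sizes are placeholders, and here the 3D-convolution and MLP variants are applied to an already-averaged volume, which is a simplification of how the per-view features are actually fused.

```python
import torch
import torch.nn as nn

def avg_voxel(per_view, mask):
    """'Avg voxel': average the features accumulated at the same voxel coordinate."""
    return (per_view * mask).sum(0) / mask.sum(0).clamp(min=1)

conv3d_agg = nn.Sequential(                     # '3D conv': two 3D convolutional blocks
    nn.Conv3d(64, 64, 3, padding=1), nn.BatchNorm3d(64), nn.LeakyReLU(),
    nn.Conv3d(64, 64, 3, padding=1), nn.BatchNorm3d(64), nn.LeakyReLU())

mlp_agg = nn.Sequential(                        # 'MLPs': per-voxel MLP on the aggregated feature
    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

per_view = torch.randn(6, 64, 25, 25, 5)        # toy per-view voxel features (real grid: 100x100x20)
mask = (torch.rand(6, 1, 25, 25, 5) > 0.5).float()

avg = avg_voxel(per_view, mask)                               # (64, 25, 25, 5)
with_conv = conv3d_agg(avg[None])                             # (1, 64, 25, 25, 5)
with_mlp = mlp_agg(avg.permute(1, 2, 3, 0).reshape(-1, 64))   # (25*25*5, 64)
print(avg.shape, with_conv.shape, with_mlp.shape)
```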
5 Conclusions

We proposed a novel volumetric feature representation for self-supervised surround-view depth estimation. The volumetric feature representation aggregates image features from surround-view images, encodes 3D information, and is then used to estimate a depth map at each view and the canonical motion between two temporally consecutive frames. Furthermore, by projecting the volumetric feature at arbitrary rotated views, our method can also synthesize a depth map at a novel view, additionally with varying focal lengths. Our method outperforms the state-of-the-art surround-view depth method. The ablation study successfully validates our contributions as well as our design choices for the volumetric feature encoding. In future work, we expect our volumetric feature representation to benefit other computer vision tasks such as object detection, image segmentation, and motion estimation in the surround-view camera setup.

Limitation. Our volumetric feature representation is defined in a 3D cubic space; it needs to take the trade-off between memory usage and resolution into consideration. Because we focus on embedding 3D information into a novel representation, blurred regions sometimes occur in the depth map (e.g., in Fig. 1 and Fig. 6). Using cylindrical or spherical coordinates could improve memory efficiency. Furthermore, some artifacts occasionally appear in the synthesized depth maps in Fig. 1 where no input image feature is given. In future work, we will consider additional regularization terms to preserve details of the depth results as well as to reduce such artifacts.

Potential negative societal impacts. Methods for surround-view depth estimation require a large number of images for training, which may include images with privacy issues. To prevent this, the dataset should be carefully curated and used for research purposes only.
1. What is the focus and contribution of the paper on surround-view depth estimation?
2. What are the strengths of the proposed approach, particularly in terms of its ability to enhance depth maps from overlapped views?
3. What are the weaknesses of the paper, especially regarding the experiment section and the lack of visualizations?
4. Do you have any concerns about the proposed method's comparison to monocular methods or its ability to train on real data without ground truth depth?
5. What are the limitations of the proposed approach, and how might they be addressed in future work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a novel surround-view depth estimation method that projects features of different views into a volumetric feature space, where overlapped views can enhance each other and yield better depth maps. The pipeline also enables novel-view depth synthesis. The problem setting may be useful for autonomous driving. The experiments are insufficient.

Strengths And Weaknesses
Paper is easy to follow. The problem setting is novel. The depth map quality is on par with monocular methods. No visualizations of different methods on DDAD and nuScenes are provided. From Figure 5, the depth maps are similar to existing monocular methods in quality. In Table 2, depth fusion is not evaluated individually.

Questions
Is it possible for the proposed method to compare with monocular methods? There are many common benchmarks for monocular depths. Would the cubic space introduce some artifacts and distortions compared to spherical spaces? As a self-supervised method, I assume there is no GT depth needed? Is that so? Is it possible to train on real data without GT?

Limitations
Yes.
NIPS
Title Self-supervised surround-view depth estimation with volumetric feature fusion

Abstract We present a self-supervised depth estimation approach using unified volumetric feature fusion for surround-view images. Given a set of surround-view images, our method constructs a volumetric feature map by extracting image feature maps from the surround-view images and fusing the feature maps into a shared, unified 3D voxel space. The volumetric feature map can then be used to estimate a depth map at each surround view by projecting it into an image coordinate. A volumetric feature contains 3D information at its local voxel coordinate; thus our method can also synthesize a depth map at arbitrary rotated viewpoints by projecting the volumetric feature map into the target viewpoints. Furthermore, assuming static camera extrinsics in the multi-camera system, we propose to estimate a canonical camera motion from the volumetric feature map. Our method leverages 3D spatio-temporal context to learn metric-scale depth and the canonical camera motion in a self-supervised manner. Our method outperforms the prior art on the DDAD and nuScenes datasets, in particular estimating more accurate metric-scale depth and more consistent depth between neighboring views.

1 Introduction

Depth perception of the surrounding environment is one of the key components for 3D vision applications such as autonomous driving, robotics, or augmented reality. Specifically for autonomous driving, 3D perception benefits numerous downstream tasks including 3D object detection [7, 29, 46], 6D pose estimation [6, 21], object tracking [18, 41], etc. While human drivers can proactively move their heads and eyes to observe their surrounding environment, an autonomous vehicle is equipped with multiple cameras at fixed viewpoints to monitor its surroundings. The multi-camera system hence exhibits limitations subject to the fixed camera setup: adjacent viewpoints may share only a small overlap, and the cameras may have heterogeneous intrinsics. As prior work, Full Surround Monodepth (FSM) [15] extended a self-supervised monocular depth estimation method [13] to the multi-camera setup of autonomous vehicles. Their method exploits small image overlaps between spatio-temporally neighboring cameras as a supervision signal to learn metric-scale depth. Yet, FSM [15] individually estimates depth and camera motion for each view with a shared CNN, and it does not utilize any additional image cues from neighboring views at test time.

∗denotes equal contribution. † This work has been done at 42dot Inc. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

To date, architectural design choices in the spatio-temporal domain have been relatively less investigated, leaving room for improvement. We introduce a self-supervised approach to surround-view depth estimation based on a unified volumetric feature representation. Given multiple images covering the surrounding view, our method extracts an image feature from each image, back-projects the features into 3D space (i.e., a voxel coordinate), and fuses them into a unified volumetric feature map. Each volumetric feature contains 3D information at its voxel coordinate. Some volumetric features are shared between neighboring views due to their image overlaps; we present a tailored multilayer perceptron (MLP) to process the volumetric features with consideration of this superposition.
1. What is the focus and contribution of the paper on self-supervised depth estimation?
2. What are the strengths of the proposed approach, particularly in its ability to achieve improved results on certain datasets?
3. What are the weaknesses of the paper, especially regarding the construction of the unified volumetric feature?
4. Do you have any concerns or questions about the method's ability to handle multi-feature collisions or the use of different MLPs for fusing overlap and non-overlap features?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper introduces a self-supervised depth estimation method based on a unified volumetric feature representation encoded from surround-view images. The proposed method consists of three parts. First, the surround-view feature fusion module generates a unified volumetric feature from the extracted multi-view 2D image features. Second, the depth fusion module reconstructs the depth map given an input camera viewpoint. Last, the global motion of the canonical camera is estimated by assuming static camera extrinsics. Experiments on the DDAD and nuScenes datasets achieved improved results over existing methods.

Strengths And Weaknesses
Strengths: The idea of using a volumetric feature to encode information from surround-view images seems interesting and logical. With the construction of the volumetric feature and depth fusion, the proposed method achieves improved results on the DDAD and nuScenes datasets. An ablation study has been provided to verify the effectiveness of depth fusion and canonical motion estimation.
Weaknesses: The core of this method is the construction of the unified volumetric feature to estimate the surround-view depth and canonical camera motion. However, some details for constructing the unified volumetric feature are missing. This method simply copies the image feature to all voxels passed through by the corresponding pixel ray. In this case, some voxels may have been passed through by multiple pixel rays. It is unclear how to handle multi-feature collisions. Related to the first question, why are different MLPs used to fuse the overlap features and non-overlap features in the volumetric feature (see Line 132)? No experiments are shown to verify this strategy. Since this method encodes the feature in a 3D volume, the effect of volume resolution on memory consumption and depth estimation should be discussed. Overall I think the proposed idea is interesting, but I have some questions for the volumetric feature construction part. I would give a borderline accept rating at this stage.

Questions
More details for the construction of the volumetric feature. Effect of volume resolution on memory consumption and depth estimation.

Limitations
The authors have discussed the limitation and potential negative societal impact of their work.
NIPS
Title Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos

Abstract The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step the probability assigned to action γ is multiplied by (1 − εC(γ)) > 0, where C(γ) is the "cost" of action γ, and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates ε and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action γ is multiplied by (1 − ε)^{C(γ)}, even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior.

1 Introduction

The Multiplicative Weights Update (MWU) is a ubiquitous meta-algorithm with numerous applications in different fields [2]. It is particularly useful in game theory due to its regret-minimizing properties [24, 11]. It is typically introduced in two nearly identical variants, the one in which at each step the probability assigned to action γ is multiplied by (1 − εC(γ)) and the one in which it is multiplied by (1 − ε)^{C(γ)}, where C(γ) is the cost of action γ. We will refer to the first as the linear variant, MWU`, and the second as the exponential, MWUe (also known as Hedge). In the literature there is little distinction between these two variants as both carry the same advantageous regret-minimizing property. It is also well known that in order to achieve sublinear regret, the learning rate ε must be decreasing as time progresses. This constraint raises a natural question: Are there interesting classes of games where MWU behaves well without the need to fine-tune its learning rate? A natural setting to test the learning behavior of MWU with constant learning rates is the well-studied class of congestion games. Unfortunately, even for the simplest instances of congestion games MWUe fails to converge to equilibria.

∗Gerasimos Palaiopanos would like to acknowledge a SUTD Presidential fellowship. †Ioannis Panageas would like to acknowledge a MIT-SUTD postdoctoral fellowship. Part of this work was completed while Ioannis Panageas was a PhD student at Georgia Institute of Technology and a visiting scientist at the Simons Institute for the Theory of Computing. ‡Georgios Piliouras would like to acknowledge SUTD grant SRG ESD 2015 097, MOE AcRF Tier 2 Grant 2016-T2-1-170 and a NRF Fellowship. Part of this work was completed while Georgios Piliouras was a visiting scientist at the Simons Institute for the Theory of Computing. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
For example, even in the simplest case of two balls two bins games,4 MWUe with ε = 1 − e^{-10} is shown to converge to a limit cycle of period 2 for infinitely many initial conditions (Theorem 4.1). If the cost functions of the two edges are not identical, then we create instances of two-player load balancing games such that MWUe has periodic orbits of length k for all k > 0, as well as uncountably many initial conditions which never settle on any periodic orbit but instead exhibit an irregular behavior known as Li-Yorke chaos (Theorem 4.2, see Corollary 4.3). The source of these problems is exactly the large, fixed learning rate ε, e.g., ε ≈ 1 for costs in [0, 1]. Intuitively, the key aspect of the problem can be captured by (simultaneous) best response dynamics. If both agents start from the same edge and best-respond simultaneously they will land on the second edge, which now has a load of two. In the next step they will both jump back to the first edge, and this motion continues perpetually. Naturally, MWUe dynamics are considerably more intricate as they evolve over mixed strategies and allow for more complicated non-equilibrium behavior, but the key insight is correct. Each agent has the right goal, to decrease his own cost and hence the potential of the game; however, as they pursue this goal too aggressively they cancel each other's gains and lead to unpredictable non-converging behavior. In a sense, the cautionary tales above agree with our intuition. Large, constant learning rates nullify the known performance guarantees of MWU. We should expect erratic behavior in such cases. The typical way to circumvent these problems is through careful monitoring and possibly successive halving of the parameter ε, a standard technique in the MWU literature. In this paper, we explore an alternative, cleaner, and surprisingly elegant solution to this problem. We show that applying MWU`, the linear variant of MWU, suffices to guarantee convergence in all congestion games.

Our key contributions. Our key result is the proof of convergence of MWU` in congestion games. The main technical contribution is a proof that the potential of the mixed state is always strictly decreasing along any nontrivial trajectory (Theorem 3.1). This result holds for all congestion games, irrespective of the number of agents or the size and topology of the strategy sets. Moreover, each agent i may be applying a different learning rate ε_i, which will be constant along the dynamics (ε_i does not depend on the number of iterations T of the dynamics and is therefore bounded away from zero as T → ∞; this is not the case for most of the results in the literature). The only restriction on the set of allowable learning rates ε_i is that for each agent the multiplicative factor (1 − ε_i C_i(s)) should be positive for all strategy outcomes s.5 Arguing convergence to equilibria for all initial conditions (Theorem 3.4) and, further, convergence to Nash equilibria for all interior initial conditions (Theorem 3.8) follows. Proving that the potential always decreases (Theorem 3.1) hinges upon discovering a novel interpretation of MWU dynamics. Specifically, we show that the class of dynamical systems derived by applying MWU` in congestion games is a special case of a convergent class of dynamical systems introduced by Baum and Eagon [5] (see Theorem 2.4). The most well known member of this class is the classic Baum-Welch algorithm, the standard instantiation of the Expectation-Maximization (EM) algorithm for hidden Markov models (HMM). Effectively, the proof of convergence of both these systems boils down to a proof of membership to the same class of Baum-Eagon systems (see section 2.3 for more details on these connections).
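For illustration, the two update rules can be simulated on the normalized two balls two bins game (each bin has cost load/2) with the following minimal sketch; the agents update on expected costs given the opponent's mixed strategy, and the particular starting point and horizon are arbitrary choices made for the example, not prescriptions from the analysis.

```python
import numpy as np

def run_mwu(update, eps, steps=40, p0=0.7):
    """Two agents, two bins, cost of a bin = load/2 (in [0, 1]); both start with prob p0 on bin 1.

    p[i] is the probability that agent i assigns to bin 1. The expected cost of bin 1 for
    agent i is (1 + p_other)/2 and of bin 2 is (2 - p_other)/2. Each step multiplies the
    weight of every bin by the chosen update factor and renormalizes.
    """
    p = np.array([p0, p0])
    traj = [p.copy()]
    for _ in range(steps):
        c1 = (1 + p[::-1]) / 2                 # expected cost of bin 1 for each agent
        c2 = (2 - p[::-1]) / 2                 # expected cost of bin 2 for each agent
        w1, w2 = update(p, c1, eps), update(1 - p, c2, eps)
        p = w1 / (w1 + w2)
        traj.append(p.copy())
    return np.array(traj)

mwu_linear = lambda w, c, eps: w * (1 - eps * c)   # MWU` : multiply by (1 - eps*C)
mwu_exp = lambda w, c, eps: w * (1 - eps) ** c     # MWUe (Hedge): multiply by (1 - eps)^C

eps = 1 - np.exp(-10)
print(run_mwu(mwu_linear, eps)[-4:])   # the iterates settle at a Nash equilibrium of the game
print(run_mwu(mwu_exp, eps)[-4:])      # the last iterates alternate between two points
```

Running this sketch, the linear variant settles at an equilibrium while the exponential variant with ε = 1 − e^{-10} ends up alternating between two mixed strategies, consistent with the period-2 limit cycle behavior discussed next.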
The most well known member of this class is the classic Baum-Welch algorithm, the standard instantiation of the Expectation-Maximization (EM) algorithm for hidden Markov models (HMM). Effectively, the proof of convergence of both these systems boils down to a proof of membership to the same class of Baum-Eagon systems (see section 2.3 for more details on these connections). In the second part we provide simple congestion games where MWUe provably fails to converge. The first main technical contribution of this section is proving convergence to a limit cycle, specifically a periodic orbit of length two, for the simplest case of two balls two bins games for infinitely many initial conditions (Theorem 4.1). Moreover, after normalizing costs to lie in [0, 1], i.e. c(x) = x/2, we prove that almost all symmetric non-equilibrium initial conditions converge to a unique limit cycle when both agents use learning rate = 1−e−10. In contrast, since 1− ·C(s) ≥ 1−(1−e−10)1 = e−10 > 0, MWU` successfully converges to equilibrium. In other words, for the same learning rates, MWUe exhibits chaotic behavior whereas MWU` converges to Nash equilibrium. Establishing chaotic behavior for the case of edges with different cost functions is rather straightforward in comparison (Theorem 4.2). The key step is to exploit symmetries in the system to reduce it to a single dimensional one and then establish the existence of a periodic orbit of length three. The existence of periodic orbits of any length as well as chaotic orbits then follows from the Li-Yorke theorem 2.3 [30] (see section 2.2 for background on chaos and dynamical systems). Finally, for any learning rate 1 > > 0, we construct n-player games so that MWUe has chaotic behavior for uncountably many starting points. 4n balls n bin games are symmetric load balancing games with n agent and n edges/elements each with a cost function of c(x)=x. We normalize costs equal to c(x) = x/n so that they lie in [0, 1]. 5This is an absolutely minimal restriction so that the denominator of MWU` cannot become equal to zero. Related work and Extensions/Implications of our results. Connections to learning in games and price of anarchy: Several recent papers, e.g., [40, 22] focus on proving welfare guarantees of no-regret dynamics in games exploiting connections to (robust) price of anarchy literature [37] by establishing fast convergence of the time average behavior to (approximate) coarse correlate equilibria. Although these approaches are rather powerful they are not always applicable. For example, it is well known that when we consider the makespan (i.e. the load of the most congested machine) instead of the social/total cost there can be an exponential gap between the performance of coarse correlated equilibria and Nash equilibria. For example the price of anarchy for the makespan objective for n balls n bins games is O(log(n)/ log log(n)) whereas for the worst no regret algorithm it can be Ω( √ n) [9]. Moreover, even if we focus on the social cost, the price of anarchy guarantees do not carry over if we perform affine transformation to the cost functions (e.g. if there exist users of different tiers/types that the system designer wants to account for in a differential manner). In contrast, our convergence results are robust to any affine cost transformation. In fact, our results apply for all weighted potential games [32] (Remark 3.5). 
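As a concrete illustration of the contrast described earlier (MWUe cycling while MWU` converges in the two balls two bins game), the following minimal Python sketch simulates both update rules on the two-edge load balancing game with normalized cost c(l) = l/2 and learning rate ε = 1 − e^{-10}; the starting point 0.3 and the function names are illustrative choices rather than anything taken from the paper.

import math

# A large admissible constant learning rate: expected costs lie in [1/2, 1] here,
# so 1 - eps * cost >= exp(-10) > 0 and the linear update stays well defined.
eps = 1 - math.exp(-10)

def expected_costs(x_other):
    # Two-edge load balancing with c(l) = l / 2; x_other is the probability
    # that the opponent picks edge 1.
    c1 = (1 + x_other) / 2
    c2 = (2 - x_other) / 2
    return c1, c2

def step_linear(x, x_other):
    # One MWU_l step: multiply by (1 - eps * cost), then renormalize.
    c1, c2 = expected_costs(x_other)
    c_hat = x * c1 + (1 - x) * c2
    return x * (1 - eps * c1) / (1 - eps * c_hat)

def step_exponential(x, x_other):
    # One MWU_e (Hedge) step: multiply by (1 - eps) ** cost, then renormalize.
    c1, c2 = expected_costs(x_other)
    w1 = x * (1 - eps) ** c1
    w2 = (1 - x) * (1 - eps) ** c2
    return w1 / (w1 + w2)

# Symmetric initial condition x0 = y0 = 0.3, so both agents stay identical forever.
x_lin = x_exp = 0.3
for t in range(40):
    x_lin = step_linear(x_lin, x_lin)
    x_exp = step_exponential(x_exp, x_exp)
    if t >= 36:
        print(t, round(x_lin, 4), round(x_exp, 4))
# Typical output: x_lin settles at 0.5, the fully mixed Nash equilibrium, while
# x_exp keeps alternating between two values, approaching the period-2 limit cycle.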
Connections to distributed computation and adversarial agent scheduling: A rather realistic concern about results on learning in games has to do with their sensitivity to the ordering of the moves of the agent dynamics. For example, better-response dynamics in congestion games are guaranteed to converge only if in every round, exactly one agent deviates to a better strategy. A series of recent papers has established strong non-termination (cycling) results for large classes of bounded recall dynamics with a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control [26, 19, 34, 25]. In the case of games, these results translate to corollaries such as: “If there are two or more pure Nash equilibria in a game with unique best responses, then all bounded-recall self-independent dynamics6 for which those equilibria are fixed points can fail to converge in asynchronous environments." Even the simplest 2 balls 2 bins game satisfies these properties (two pure Nash and unique best responses) which shows the strength of this impossibility result. In contrast, our convergence result holds for any adversarial scheduling with the minimal fairness assumption that given any mixed state at least one agent who is not best responding eventually will be given the possibility to update their behavior, answering open questions in [26, 25]. In fact, our convergence result is in a sense the strongest possible, no matter how many agents get to update their behavior (as long as one of them does) then the potential of the game will strictly decrease (Corollary 3.6). Connections to complexity theory: Whereas the complexity of computing both mixed Nash equilibria in general games (PPAD-complete [17]) as well as the complexity of finding pure Nash equilibria in congestion games (PLS-complete [20]) have both been completely characterized and are thus unlikely to admit an efficient time algorithm, the complexity of computing mixed Nash equilibria in congestion games has withstood so far an exhaustive characterization. Naturally, it lies on the intersection of both PPAD and PLS, known as CLS [18]. Such an equilibrium can be found both via an end-of-line type of argument as well as a local search type of argument, but it is still not known if it is CLS-complete. Given the active interest for producing CLS-complete problems [16, 21] our constructive/convergence proof may help shed light on this open question. Chaos for arbitrary small learning rates : Although our example of chaotic behavior uses a very high learning rate = 1− e−10, it should be noted that for any learning rate (e.g. = e−10), as well as for any number of agents n, we can create congestion games with n agents where MWUe exhibits chaotic behavior (Corollary 4.3). Congestion/potential games: Congestion games are amongst the most well known and thoroughly studied class of games. Proposed in [36] and isomorphic to potential games [32], they have been successfully employed in myriad modeling problems. Despite the numerous positive convergence results for concurrent dynamics in congestion games, e.g., [33, 23, 7, 1, 6, 28, 10, 13, 12, 31], we know of no prior work establishing such a deterministic convergence result of the day-to-day agent behavior to exact Nash equilibria for general atomic congestion games. MWU has also been studied in congestion games. In [29] randomized variants of the exponential version of the MWU are shown to converge w.h.p. 
to pure Nash equilibria as long as the learning rate is small enough. In contrast our positive results for linear MWU` hold deterministically and for all learning rates. Recently, [14] showed that if the Hedge algorithm is run with a suitably decreasing learning factor , the sequence 6A dynamic is called self-independent if the agent’s response does not depend on his actions. of play converges to a Nash equilibrium with probability 1 (in the bandit case). The result and the techniques are orthogonal to ours, since we assume fixed learning rates. Non-convergent dynamics: Outside the class of congestion games, there exist several negative results in the literature concerning the non-convergence of MWU and variants thereof. In particular, in [15] it was shown that the multiplicative updates algorithm fails to find the unique Nash equilibrium of the 3× 3 Shapley game. Similar non-convergent results have been proven for perturbed zero-sum games [4], as well as for the continuous time version of MWU, the replicator dynamics [27, 35]. The possibility of applying Li-Yorke type arguments for MWU in congestion games with two agents was inspired by a remark in [3] for the case of continuum of agents. Our paper is the first to our knowledge where non-convergent MWU behavior in congestion games is formally proven capturing both limit cycles and chaos and we do so in the minimal case of two balls two bin games. 2 Preliminaries Notation. We use boldface letters, e.g., x, to denote column vectors (points). For a function f : Rm → Rm, by fn we denote the composition of f with itself n times, namely f ◦ f ◦ · · · ◦ f︸ ︷︷ ︸ n times . 2.1 Congestion Games A congestion game [36] is defined by the tuple (N ;E; (Si)i∈N ; (ce)e∈E) where N is the set of agents, N = |N |, E is a set of resources (also known as edges or bins or facilities) and each player i has a set Si of subsets of E (Si ⊆ 2E) and |Si| ≥ 1. Each strategy si ∈ Si is a set of edges and ce is a positive cost (latency) function associated with facility e. We use small greek characters like γ, δ to denote different strategies/paths. For a strategy profile s = (s1, s2, . . . , sN ), the cost of player i is given by ci(s) = ∑ e∈si ce(`e(s)), where `e(s) is the number of players using e in s (the load of edge e). The potential function is defined to be Φ(s) = ∑ e∈E ∑`e(s) j=1 ce(j). For each i ∈ N and γ ∈ Si, piγ denotes the probability player i chooses strategy γ. We denote by ∆(Si) = {p ≥ 0 : ∑ γ piγ = 1} the set of mixed (randomized) strategies of player i and ∆ = ×i∆(Si) the set of mixed strategies of all players. We use ciγ = Es−i∼p−ici(γ, s−i) to denote the expected cost of player i given that he chooses strategy γ and ĉi = ∑ δ∈Si piδciδ to denote his expected cost. 2.2 Dynamical Systems and Chaos Let x(t+1) = f(x(t)) be a discrete time dynamical system with update rule f : Rm → Rm. The point z is called a fixed point of f if f(z) = z. A sequence (f t(x(0)))t∈N is called a trajectory or orbit of the dynamics with x(0) as starting point. A common technique to show that a dynamical system converges to a fixed point is to construct a function P : Rm → R such that P (f(x)) > P (x) unless x is a fixed point. We call P a Lyapunov or potential function. Definition 2.1. C = {z1, . . . , zk} is called a periodic orbit of length k if zi+1 = f(zi) for 1 ≤ i ≤ k − 1 and f(zk) = z1. Each point z1, . . . , zk is called periodic point of period k. If the dynamics converges to some periodic orbit, we also use the term limit cycle. 
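The cost and potential definitions of Section 2.1 can be checked mechanically on a toy instance. The sketch below is a small illustration in Python: the two-player, two-edge example and its cost functions (c_a(l) = l and c_b(l) = 2l) are an arbitrary choice made here, not an instance from the paper, and the assertion verifies the exact-potential property that a unilateral deviation changes the deviator's cost by exactly the change in Φ.

from itertools import product

# A tiny congestion game: 2 players, edges 'a' and 'b', each strategy is a set of edges.
edges = {'a': lambda l: l, 'b': lambda l: 2 * l}        # cost functions c_e(load), illustrative
strategies = {1: [{'a'}, {'b'}], 2: [{'a'}, {'b'}]}

def loads(profile):
    # profile maps player -> chosen set of edges
    return {e: sum(1 for s in profile.values() if e in s) for e in edges}

def player_cost(i, profile):
    l = loads(profile)
    return sum(edges[e](l[e]) for e in profile[i])

def potential(profile):
    # Rosenthal potential: Phi(s) = sum over edges e of sum_{j=1..load_e(s)} c_e(j)
    l = loads(profile)
    return sum(edges[e](j) for e in edges for j in range(1, l[e] + 1))

# A unilateral deviation by player 1 changes her cost by exactly the potential difference.
for s1, s2, s1_new in product(strategies[1], strategies[2], strategies[1]):
    before = {1: s1, 2: s2}
    after = {1: s1_new, 2: s2}
    assert (player_cost(1, after) - player_cost(1, before)) == (potential(after) - potential(before))
print("exact potential property verified")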
Some dynamical systems converge and their behavior can be fully understood, and some others have strange, chaotic behavior. There are many different definitions for what chaotic behavior and chaos mean. In this paper we follow the definition of chaos by Li and Yorke. Let us first give the definition of a scrambled set. Given a dynamical system with update rule f, a pair x and y is called “scrambled" if $\liminf_{n\to\infty} |f^n(x) - f^n(y)| = 0$ (the trajectories get arbitrarily close) and also $\limsup_{n\to\infty} |f^n(x) - f^n(y)| > 0$ (the trajectories move apart). A set S is called “scrambled" if ∀x, y ∈ S, the pair is “scrambled". Definition 2.2 (Li and Yorke). A discrete time dynamical system with update rule f, f : X → X continuous on a compact set X ⊂ R, is called chaotic if (a) for each k ∈ Z+, there exists a periodic point p ∈ X of period k and (b) there is an uncountably infinite set S ⊆ X that is “scrambled". Li and Yorke proved the following theorem [30] (there is another theorem of similar flavor due to Sharkovskii [38]): Theorem 2.3 (Period three implies chaos). Let J be an interval and let F : J → J be continuous. Assume there is a point a ∈ J for which the points b = F(a), c = F^2(a) and d = F^3(a) satisfy d ≤ a < b < c (or d ≥ a > b > c). Then 1. For every k = 1, 2, . . . there is a periodic point in J having period k. 2. There is an uncountable set S ⊂ J (containing no periodic points), which satisfies the following conditions: • For every p, q ∈ S with p ≠ q, $\limsup_{n\to\infty} |F^n(p) - F^n(q)| > 0$ and $\liminf_{n\to\infty} |F^n(p) - F^n(q)| = 0$. • For every point p ∈ S and periodic point q ∈ J, $\limsup_{n\to\infty} |F^n(p) - F^n(q)| > 0$. Notice that if there is a periodic point with period 3, then the hypothesis of the theorem will be satisfied. 2.3 Baum-Eagon Inequality, Baum-Welch and EM We start this subsection by stating the Baum-Eagon inequality. This inequality will be used to show that MWU` converges to fixed points and, more specifically, Nash equilibria for congestion games. Theorem 2.4 (Baum-Eagon inequality [5]). Let P(x) = P({x_{ij}}) be a polynomial with nonnegative coefficients, homogeneous of degree d in its variables {x_{ij}}. Let x = {x_{ij}} be any point of the domain D: x_{ij} ≥ 0, $\sum_{j=1}^{q_i} x_{ij} = 1$, i = 1, 2, ..., p, j = 1, 2, ..., q_i. For x = {x_{ij}} ∈ D let ℑ(x) = ℑ{x_{ij}} denote the point of D whose (i, j) coordinate is $\Im(x)_{ij} = \Big( x_{ij}\,\frac{\partial P}{\partial x_{ij}}\big|_{x} \Big) \Big/ \sum_{j'=1}^{q_i} x_{ij'}\,\frac{\partial P}{\partial x_{ij'}}\big|_{x}$. Then P(ℑ(x)) > P(x) unless ℑ(x) = x. The Baum-Welch algorithm is a classic technique used to find the unknown parameters of a hidden Markov model (HMM). An HMM describes the joint probability of a collection of “hidden" and observed discrete random variables. It relies on the assumption that the i-th hidden variable given the (i−1)-th hidden variable is independent of previous hidden variables, and that the current observation variables depend only on the current hidden state. The Baum-Welch algorithm uses the well known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors. A more detailed exposition of these ideas can be found in [8]. The probability of making a specific time series of observations of length T can be shown to be a homogeneous polynomial P of degree T with nonnegative (integer) coefficients in the model parameters. The Baum-Welch algorithm is homologous to the iterative process derived by applying the Baum-Eagon theorem to the polynomial P [5, 41]. 
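A small numerical sanity check of the Baum-Eagon growth transform of Theorem 2.4 is sketched below in Python; the degree-2 polynomial and the starting point are arbitrary illustrative choices with nonnegative coefficients, and the assertion checks that P never decreases along the iteration.

# Baum-Eagon growth transform on a single probability block (p = 1, q_1 = 3).
# P is a homogeneous degree-2 polynomial with nonnegative coefficients (illustrative choice).
def P(x):
    x1, x2, x3 = x
    return 3 * x1**2 + 2 * x1 * x2 + x2 * x3 + x3**2

def grad_P(x):
    x1, x2, x3 = x
    return [6 * x1 + 2 * x2, 2 * x1 + x3, x2 + 2 * x3]

def baum_eagon_step(x):
    g = grad_P(x)
    z = sum(xi * gi for xi, gi in zip(x, g))      # normalizer keeps the iterate on the simplex
    return [xi * gi / z for xi, gi in zip(x, g)]

x = [0.2, 0.5, 0.3]
for _ in range(10):
    x_next = baum_eagon_step(x)
    assert P(x_next) >= P(x) - 1e-12              # Theorem 2.4: P never decreases
    x = x_next
print([round(v, 4) for v in x], round(P(x), 4))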
In a nutshell, both Baum-Welch and MWU` in congestion games are special cases of the Baum-Eagon iterative process (for different polynomials P). 2.4 Multiplicative Weights Update In this section, we describe the MWU dynamics (both the linear MWU` and the exponential MWUe variants) applied in congestion games. The update rule (function) ξ : ∆ → ∆ (where p(t+1) = ξ(p(t))) for the linear variant MWU` is as follows: $p_{i\gamma}(t+1) = (\xi(p(t)))_{i\gamma} = p_{i\gamma}(t)\,\frac{1-\epsilon_i c_{i\gamma}(t)}{1-\epsilon_i \hat c_i(t)}$, ∀i ∈ N, ∀γ ∈ S_i, (1) where ε_i is a constant (it can depend on player i but not on p) chosen so that both numerator and denominator of the fraction in (1) are positive (and thus the fraction is well defined). Under the assumption that $1/\epsilon_i > 1/\beta \stackrel{\mathrm{def}}{=} \sup_{i,\,p\in\Delta,\,\gamma\in S_i} \{c_{i\gamma}\}$, it follows that $1/\epsilon_i > c_{i\gamma}$ for all i, γ and hence $1/\epsilon_i > \hat c_i$. The update rule (function) η : ∆ → ∆ (where p(t+1) = η(p(t))) for the exponential variant MWUe is as follows: $p_{i\gamma}(t+1) = (\eta(p(t)))_{i\gamma} = p_{i\gamma}(t)\,\frac{(1-\epsilon_i)^{c_{i\gamma}(t)}}{\sum_{\gamma'\in S_i} p_{i\gamma'}(t)\,(1-\epsilon_i)^{c_{i\gamma'}(t)}}$, ∀i ∈ N, ∀γ ∈ S_i, (2) where ε_i < 1 is a constant (it can depend on player i but not on p). Note that ε_i can be small when the number of agents N is large enough. Remark 2.5. Observe that ∆ is invariant under the discrete dynamics (1), (2) defined above. If p_{iγ} = 0 then p_{iγ} remains zero, and if it is positive, it remains positive (both numerator and denominator are positive); it is also true that $\sum_{\gamma\in S_i} p_{i\gamma} = 1$ for all agents i. A point p* is called a fixed point if it stays invariant under the update rule of the dynamics, namely ξ(p*) = p* or η(p*) = p*. A point p* is a fixed point of (1), (2) if for all i, γ with p*_{iγ} > 0 we have that c_{iγ} = ĉ_i. To see why, observe that if p*_{iγ}, p*_{iγ'} > 0, then c_{iγ} = c_{iγ'} and thus c_{iγ} = ĉ_i. We conclude that the sets of fixed points of both dynamics (1), (2) coincide and are supersets of the set of Nash equilibria of the corresponding congestion game. 3 Convergence of MWU` to Nash Equilibria We first prove that MWU` (1) converges to fixed points.7 Technically, we establish that the function $\Psi \stackrel{\mathrm{def}}{=} E_{s\sim p}[\Phi(s)]$ is strictly decreasing along any nontrivial (i.e. nonequilibrium) trajectory, where Φ is the potential function of the congestion game as defined in Section 2. Formally we show the following theorem: Theorem 3.1 (Ψ is decreasing). Function Ψ is decreasing w.r.t. time, i.e., Ψ(p(t+1)) ≤ Ψ(p(t)), where equality Ψ(p(t+1)) = Ψ(p(t)) holds only at fixed points. We define the function $Q(p) \stackrel{\mathrm{def}}{=} \underbrace{\sum_{i\in\mathcal N} (1/\epsilon_i - 1/\beta)\cdot\sum_{\gamma\in S_i} p_{i\gamma} + 1/\beta\cdot\prod_{i\in\mathcal N}\sum_{\gamma\in S_i} p_{i\gamma}}_{\text{constant term}} - \Psi(p)$, (3) and show that Q(p) is strictly increasing w.r.t. time, unless p is a fixed point. Observe that $\sum_{\gamma\in S_i} p_{i\gamma} = 1$ since p lies in ∆, but we include these terms in Q for technical reasons that will be made clear later in the section. By showing that Q is increasing with time, Theorem 3.1 trivially follows since Q = const − Ψ where $\mathrm{const} = \sum_{i\in\mathcal N} 1/\epsilon_i - (N-1)/\beta$. To show that Q(p) is strictly increasing w.r.t. time, unless p is a fixed point, we use a generalization of an inequality by Baum and Eagon [5] on the function Q. Corollary 3.2 (Generalization of Baum-Eagon). Theorem 2.4 holds even if P is non-homogeneous. We want to apply Corollary 3.2 on Q. To do so, it suffices to show that Q(p) is a polynomial with nonnegative coefficients. Lemma 3.3. Q(p) is a polynomial with respect to p_{iγ} and has nonnegative coefficients. Using Lemma 3.3 and Corollary 3.2 we show the following: Theorem 3.4. Let Q be the function defined in (3). Let also p(t) ∈ ∆ be the point MWU` (1) outputs at time t with update rule ξ. 
It holds that Q(p(t + 1)) def= Q(ξ(p(t))) > Q(p(t)) unless ξ(p(t)) = p(t) (fixed point). Namely Q is strictly increasing with respect to the number of iterations t unless MWU` is at a fixed point. 7All missing proofs can be found in the full version of this paper http://arxiv.org/abs/1703.01138. Remark 3.5 (Weighted potential games). A congestion game is a potential game because if a player deviates, the difference he experiences in his cost is exactly captured by the deviation of the global (same for all players) function Φ = ∑ e∈E ∑`e(s) j=1 ce(j). In a weighted potential game, it holds that ci(si, s−i)− ci(s′i, s−i) = wi(Φ(si, s−i)− Φ(s′i, s−i)), where wi is some constant not necessarily 1 (as in the potential games case) and vector s−i captures the strategies of all players but i. It is not hard to see that Lemma 3.3 and thus Theorems 3.4 and 3.1 hold in this particular class of games (which is a generalization of congestion games), and so do the rest of the theorems of the section. Effectively, in terms of the weighted potential games analysis, it is possible to reduce it to the standard potential games analysis as follows: Consider the system with learning rates i and cost functions wici so that the game with cost functions ci is a potential game. The only necessary condition that we ask of this system is that iwici(s) < 1 for all i (as in the standard case) so that the enumerators/denominators are positive. By reduction, we can show that for every round T , even if a subset (that depends on the round T ) of the players update their strategy according to MWU` and the rest remain fixed, the potential still decreases. Corollary 3.6 (Any subset). Assume that at time t we partition the players in two sets St, S′t so that we allow only players in St to apply MWU` dynamics, whereas the players in S′t remain fixed. It holds that the expected potential function of the game at time t decreases. As stated earlier in the section, if Q(p(t)) is strictly increasing with respect to time t unless p(t) is a fixed point, it follows that the expected potential function Ψ(p(t)) = const−Q(p(t)) is strictly decreasing unless p(t) is a fixed point and Theorem 3.1 is proved. Moreover, we can derive the fact that our dynamics converges to fixed points as a corollary of Theorem 3.1. Theorem 3.7 (Convergence to fixed points). MWU` dynamics (1) converges to fixed points. We conclude the section by strengthening the convergence result (i.e., Theorem 3.7). We show that if the initial distribution p is in the interior of ∆ then we have convergence to Nash equilibria. Theorem 3.8 (Convergence to Nash equilibria). Assume that the fixed points of (1) are isolated. Let p(0) be a point in the interior of ∆. It follows that limt→∞ p(t) = p∗ is a Nash equilibrium. Proof. We showed in Theorem 3.7 that MWU` dynamics (1) converges, hence limt→∞ p(t) exists (under the assumption that the fixed points are isolated) and is equal to a fixed point of the dynamics p∗. Also it is clear from the dynamics that ∆ is invariant, i.e., ∑ δ∈Sj pjδ(t) = 1, pjδ(t) > 0 for all j and t ≥ 0 since p(0) is in the interior of ∆. Assume that p∗ is not a Nash equilibrium, then there exists a player i and a strategy γ ∈ Si so that ciγ(p ∗) < ĉi(p ∗) (on mixed strategies p∗) and p∗iγ = 0. Fix a ζ > 0 and let Uζ = {p : ciγ(p) < ĉi(p)− ζ}. By continuity we have that Uζ is open. It is also true that p∗ ∈ Uζ for ζ small enough. Since p(t) converges to p∗ as t → ∞, there exists a time t0 so that for all t′ ≥ t0 we have that p(t′) ∈ Uζ . 
However, from the MWU` dynamics (1) we get that if p(t′) ∈ U_ζ then $1 - \epsilon_i c_{i\gamma}(t') > 1 - \epsilon_i \hat c_i(t')$ and hence $p_{i\gamma}(t'+1) = p_{i\gamma}(t')\,\frac{1-\epsilon_i c_{i\gamma}(t')}{1-\epsilon_i \hat c_i(t')} \ge p_{i\gamma}(t') > 0$, i.e., p_{iγ}(t′) is positive and increasing for t′ ≥ t_0. We reached a contradiction since p_{iγ}(t) → p*_{iγ} = 0, thus p* is a Nash equilibrium. 4 Non-Convergence of MWUe: Limit Cycle and Chaos We consider a symmetric two agent congestion game with two edges e_1, e_2. Both agents have the same two available strategies γ_1 = {e_1} and γ_2 = {e_2}. We denote by x, y the probability that the first and the second agent respectively choose strategy γ_1. For the first example, we assume that $c_{e_1}(l) = \frac{1}{2}l$ and $c_{e_2}(l) = \frac{1}{2}l$. Computing the expected costs we get that $c_{1\gamma_1} = \frac{1+y}{2}$, $c_{1\gamma_2} = \frac{2-y}{2}$, $c_{2\gamma_1} = \frac{1+x}{2}$, $c_{2\gamma_2} = \frac{2-x}{2}$. MWUe then becomes $x_{t+1} = \frac{x_t(1-\epsilon_1)^{\frac{y_t+1}{2}}}{x_t(1-\epsilon_1)^{\frac{y_t+1}{2}} + (1-x_t)(1-\epsilon_1)^{\frac{2-y_t}{2}}}$ (first player) and $y_{t+1} = \frac{y_t(1-\epsilon_2)^{\frac{x_t+1}{2}}}{y_t(1-\epsilon_2)^{\frac{x_t+1}{2}} + (1-y_t)(1-\epsilon_2)^{\frac{2-x_t}{2}}}$ (second player). We assume that ε_1 = ε_2 = ε and also that x_0 = y_0 (players start with the same mixed strategy). Due to symmetry, it follows that x_t = y_t for all t ∈ N, thus it suffices to keep track of only one variable (we have reduced the number of variables of the update rule of the dynamics to one) and the dynamics becomes $x_{t+1} = \frac{x_t(1-\epsilon)^{\frac{x_t+1}{2}}}{x_t(1-\epsilon)^{\frac{x_t+1}{2}} + (1-x_t)(1-\epsilon)^{\frac{2-x_t}{2}}}$. Finally, we choose ε = 1 − e^{−10} and we get $x_{t+1} = H(x_t) = \frac{x_t e^{-5(x_t+1)}}{x_t e^{-5(x_t+1)} + (1-x_t)e^{-5(2-x_t)}}$, i.e., we denote $H(x) = \frac{x e^{-5(x+1)}}{x e^{-5(x+1)} + (1-x)e^{-5(2-x)}}$. For the second example, we assume that $c_{e_1}(l) = \frac{1}{4}l$ and $c_{e_2}(l) = \frac{1.4}{4}l$. Computing the expected costs we get that $c_{1\gamma_1} = \frac{1+y}{4}$, $c_{1\gamma_2} = \frac{1.4(2-y)}{4}$, $c_{2\gamma_1} = \frac{1+x}{4}$, $c_{2\gamma_2} = \frac{1.4(2-x)}{4}$. MWUe then becomes $x_{t+1} = \frac{x_t(1-\epsilon_1)^{\frac{y_t+1}{4}}}{x_t(1-\epsilon_1)^{\frac{y_t+1}{4}} + (1-x_t)(1-\epsilon_1)^{\frac{1.4(2-y_t)}{4}}}$ (first player) and $y_{t+1} = \frac{y_t(1-\epsilon_2)^{\frac{x_t+1}{4}}}{y_t(1-\epsilon_2)^{\frac{x_t+1}{4}} + (1-y_t)(1-\epsilon_2)^{\frac{1.4(2-x_t)}{4}}}$ (second player). We assume that ε_1 = ε_2 = ε and also that x_0 = y_0 (players start with the same mixed strategy). Similarly, due to symmetry, it follows that x_t = y_t for all t ∈ N, thus it suffices to keep track of only one variable and the dynamics becomes $x_{t+1} = \frac{x_t(1-\epsilon)^{\frac{x_t+1}{4}}}{x_t(1-\epsilon)^{\frac{x_t+1}{4}} + (1-x_t)(1-\epsilon)^{\frac{1.4(2-x_t)}{4}}}$. Finally, we choose ε = 1 − e^{−40} and we get $x_{t+1} = G(x_t) = \frac{x_t e^{-10(x_t+1)}}{x_t e^{-10(x_t+1)} + (1-x_t)e^{-14(2-x_t)}}$, i.e., we denote $G(x) = \frac{x e^{-10(x+1)}}{x e^{-10(x+1)} + (1-x)e^{-14(2-x)}}$. We show the following three statements, the proofs of which can be found in the full version. Theorem 4.1. For all but a measure zero set S of x ∈ (0, 1) we get that $\lim_{t\to\infty} H^{2t}(x) = \rho_1$ or $\rho_2$. Moreover, H(ρ_1) = ρ_2 and H(ρ_2) = ρ_1, i.e., {ρ_1, ρ_2} is a periodic orbit. Thus, all but a measure zero set S of initial conditions converge to the limit cycle {ρ_1, ρ_2}. Finally, the initial points in S converge to the equilibrium 1/2. Theorem 4.2. There exist two player two strategy symmetric congestion games such that MWUe has periodic orbits of length n for any natural number n > 0, as well as an uncountably infinite set of “scrambled" initial conditions (Li-Yorke chaos). Using Theorem 4.2, we conclude with the following corollary. Corollary 4.3. For any 1 > ε > 0 and n, there exists an n-player congestion game G(ε) (depending on ε) so that MWUe dynamics exhibits Li-Yorke chaos for uncountably many starting points. 5 Conclusion and Future Work We have analyzed MWU` in congestion games where agents use arbitrary admissible constants as learning rates and showed convergence to exact Nash equilibria. 
We have also shown that this result is not true for the nearly homologous exponential variant MWUe, even for the simplest case of two-agent, two-strategy load balancing games. There we prove that such dynamics can provably lead to limit cycles or even chaotic behavior. For a small enough learning rate ε, the behavior of MWUe approaches that of its smooth variant, replicator dynamics, and hence convergence is once again guaranteed [29]. This means that as we increase the learning rate from near zero values we start off with a convergent system and we end up with a chaotic one. Numerical experiments establish that between the convergent region and the chaotic region there exists a range of values for ε for which the system exhibits periodic behavior. Period doubling is known as a standard route to 1-dimensional chaos (e.g. the logistic map) and is characterized by unexpected regularities such as the Feigenbaum constant [39]. Elucidating these connections is an interesting open problem. More generally, what other types of regularities can be established in these non-equilibrium systems? Another interesting question has to do with developing a better understanding of the set of conditions that result in non-converging trajectories. So far, it has been critical for our non-convergent examples that the system starts from a symmetric initial condition. Whether such irregular MWUe trajectories can be constructed for generic initial conditions, possibly in larger congestion games, is not known. Nevertheless, the non-convergent results, despite their non-generic nature, are rather useful since they imply that we cannot hope to leverage the power of Baum-Eagon techniques for MWUe. In conclusion, establishing generic (non)convergence results (e.g. for most initial conditions, most congestion games) for MWUe with constant step size is an interesting future direction.
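The transition just described (convergent for small ε, periodic in an intermediate window, irregular for ε close to 1) can be probed numerically with the reduced one-dimensional MWUe map of Section 4. The Python sketch below is only a rough diagnostic, not the authors' experiments: it counts distinct values visited after a long burn-in, and the sweep values, the initial condition and the rounding precision are arbitrary illustrative choices.

import math

def reduced_mwue(x, eps):
    # Symmetric reduced MWU_e map for the second example of Section 4
    # (edge costs l/4 and 1.4*l/4); it equals G(x) when eps = 1 - exp(-40).
    w1 = x * (1 - eps) ** ((x + 1) / 4)
    w2 = (1 - x) * (1 - eps) ** (1.4 * (2 - x) / 4)
    return w1 / (w1 + w2)

def tail_size(eps, x0=0.3, burn=5000, keep=200, digits=5):
    # Rough diagnostic: number of distinct values visited after a long burn-in.
    # 1 suggests convergence, a small count suggests a periodic orbit,
    # a large count suggests an irregular (possibly chaotic) orbit.
    x = x0
    for _ in range(burn):
        x = reduced_mwue(x, eps)
    seen = set()
    for _ in range(keep):
        x = reduced_mwue(x, eps)
        seen.add(round(x, digits))
    return len(seen)

for a in (1, 5, 10, 20, 30, 40):          # eps = 1 - e^{-a} sweeps toward 1
    eps = 1 - math.exp(-a)
    print(f"eps = 1 - e^-{a}: tail size {tail_size(eps)}")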
1. What is the main contribution of the paper in the context of congestion games? 2. What is the significance of the connection between the dynamics and Baum-Eagon inequality? 3. What are the limitations of the paper's results, particularly in terms of applicability and overhead? 4. How do the proofs and potential connections to other fields add to the appeal of the contribution?
Review
Review The paper studies the dynamics associated with the multiplicative weights meta-algorithm in the context of congestion games. The main question motivating this study is whether applying MWU with a constant learning rate yields convergence to exact Nash equilibria. By connecting the dynamics to the Baum-Eagon inequality, the authors prove a positive result for the standard MWU algorithm. Interestingly, it is shown that when a different variant of the algorithm is applied with a constant step size, it leads to limit cycles or even chaotic behavior. The applicability of this result is questionable; indeed, applying MWU using the doubling trick yields optimal rates and does not incur any overhead. Furthermore, only asymptotic bounds are obtained. However, I still find the contribution interesting and even enlightening. The proofs are elegant and possible connections to other fields such as computational complexity and distributed computing make the contribution even more appealing.
NIPS
Title Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos Abstract The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step the probability assigned to action γ is multiplied by (1− C(γ)) > 0 where C(γ) is the “cost" of action γ and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action γ is multiplied by (1− ) even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior. N/A The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step the probability assigned to action γ is multiplied by (1− C(γ)) > 0 where C(γ) is the “cost" of action γ and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action γ is multiplied by (1− )C(γ) even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior. 1 Introduction The Multiplicative Weights Update (MWU) is a ubiquitous meta-algorithm with numerous applications in different fields [2]. It is particularly useful in game theory due to its regret-minimizing properties [24, 11]. It is typically introduced in two nearly identical variants, the one in which at each step the probability assigned to action γ is multiplied by (1 − C(γ)) and the one in which it is multiplied by (1 − )C(γ) where C(γ) is the cost of action γ. We will refer to the first as the linear variant, MWU`, and the second as the exponential, MWUe (also known as Hedge). In the literature there is little distinction between these two variants as both carry the same advantageous regret-minimizing property. It is also well known that in order to achieve sublinear regret, the learning rate must be decreasing as time progresses. This constraint raises a natural question: Are there interesting classes of games where MWU behaves well without the need to fine-tune its learning rate? A natural setting to test the learning behavior of MWU with constant learning rates is the wellstudied class of congestion games. Unfortunately, even for the simplest instances of congestion games MWUe fails to converge to equilibria. For example, even in the simplest case of two balls two ∗Gerasimos Palaiopanos would like to acknowledge a SUTD Presidential fellowship. †Ioannis Panageas would like to acknowledge a MIT-SUTD postdoctoral fellowship. Part of this work was completed while Ioannis Panageas was a PhD student at Georgia Institute of Technology and a visiting scientist at the Simons Institute for the Theory of Computing. ‡Georgios Piliouras would like to acknowledge SUTD grant SRG ESD 2015 097, MOE AcRF Tier 2 Grant 2016-T2-1-170 and a NRF Fellowship. 
Part of this work was completed while Georgios Piliouras was a visiting scientist at the Simons Institute for the Theory of Computing. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. bins games,4 MWUe with = 1− e−10 is shown to converge to a limit cycle of period 2 for infinitely many initial conditions (Theorem 4.1). If the cost functions of the two edges are not identical then we create instances of two player load balancing games such that MWUe has periodic orbits of length k for all k > 0, as well as uncountable many initial conditions which never settle on any periodic orbit but instead exhibit an irregular behavior known as Li-Yorke chaos (Theorem 4.2, see Corollary 4.3). The source of these problems is exactly the large, fixed learning rate , e.g., ≈ 1 for costs in [0, 1]. Intuitively, the key aspect of the problem can be captured by (simultaneous) best response dynamics. If both agents start from the same edge and best-respond simultaneously they will land on the second edge which now has a load of two. In the next step they will both jump back to the first edge and this motion will be continued perpetually. Naturally, MWUe dynamics are considerably more intricate as they evolve over mixed strategies and allow for more complicated non-equilibrium behavior but the key insight is correct. Each agent has the right goal, decrease his own cost and hence the potential of the game, however, as they pursue this goal too aggressively they cancel each other’s gains and lead to unpredictable non-converging behavior. In a sense, the cautionary tales above agree with our intuition. Large, constant learning rates nullify the known performance guarantees of MWU. We should expect erratic behavior in such cases. The typical way to circumvent these problems is through careful monitoring and possibly successive halving of the parameter, a standard technique in the MWU literature. In this paper, we explore an alternative, cleaner, and surprisingly elegant solution to this problem. We show that applying MWU`, the linear variant of MWU, suffices to guarantee convergence in all congestion games. Our key contributions. Our key result is the proof of convergence of MWU` in congestion games. The main technical contribution is a proof that the potential of the mixed state is always strictly decreasing along any nontrivial trajectory (Theorem 3.1). This result holds for all congestion games, irrespective of the number of agents or the size, topology of the strategy sets. Moreover, each agent i may be applying different learning rates i which will be constant along the dynamics ( i does not depend on the number of iterations T of the dynamics and therefore is bounded away from zero as T →∞; this is not the case for most of the results in the literature). The only restriction on the set of allowable learning rates i is that for each agent the multiplicative factor (1 − iCi(s)) should be positive for all strategy outcomes s.5 Arguing convergence to equilibria for all initial conditions (Theorem 3.4) and further, convergence to Nash equilibria for all interior initial conditions (Theorem 3.8) follows. Proving that the potential always decreases (Theorem 3.1) hinges upon discovering a novel interpretation of MWU dynamics. Specifically, we show that the class of dynamical systems derived by applying MWU` in congestion games is a special case of a convergent class of dynamical systems introduced by Baum and Eagon [5] (see Theorem 2.4). 
The most well known member of this class is the classic Baum-Welch algorithm, the standard instantiation of the Expectation-Maximization (EM) algorithm for hidden Markov models (HMM). Effectively, the proof of convergence of both these systems boils down to a proof of membership to the same class of Baum-Eagon systems (see section 2.3 for more details on these connections). In the second part we provide simple congestion games where MWUe provably fails to converge. The first main technical contribution of this section is proving convergence to a limit cycle, specifically a periodic orbit of length two, for the simplest case of two balls two bins games for infinitely many initial conditions (Theorem 4.1). Moreover, after normalizing costs to lie in [0, 1], i.e. c(x) = x/2, we prove that almost all symmetric non-equilibrium initial conditions converge to a unique limit cycle when both agents use learning rate = 1−e−10. In contrast, since 1− ·C(s) ≥ 1−(1−e−10)1 = e−10 > 0, MWU` successfully converges to equilibrium. In other words, for the same learning rates, MWUe exhibits chaotic behavior whereas MWU` converges to Nash equilibrium. Establishing chaotic behavior for the case of edges with different cost functions is rather straightforward in comparison (Theorem 4.2). The key step is to exploit symmetries in the system to reduce it to a single dimensional one and then establish the existence of a periodic orbit of length three. The existence of periodic orbits of any length as well as chaotic orbits then follows from the Li-Yorke theorem 2.3 [30] (see section 2.2 for background on chaos and dynamical systems). Finally, for any learning rate 1 > > 0, we construct n-player games so that MWUe has chaotic behavior for uncountably many starting points. 4n balls n bin games are symmetric load balancing games with n agent and n edges/elements each with a cost function of c(x)=x. We normalize costs equal to c(x) = x/n so that they lie in [0, 1]. 5This is an absolutely minimal restriction so that the denominator of MWU` cannot become equal to zero. Related work and Extensions/Implications of our results. Connections to learning in games and price of anarchy: Several recent papers, e.g., [40, 22] focus on proving welfare guarantees of no-regret dynamics in games exploiting connections to (robust) price of anarchy literature [37] by establishing fast convergence of the time average behavior to (approximate) coarse correlate equilibria. Although these approaches are rather powerful they are not always applicable. For example, it is well known that when we consider the makespan (i.e. the load of the most congested machine) instead of the social/total cost there can be an exponential gap between the performance of coarse correlated equilibria and Nash equilibria. For example the price of anarchy for the makespan objective for n balls n bins games is O(log(n)/ log log(n)) whereas for the worst no regret algorithm it can be Ω( √ n) [9]. Moreover, even if we focus on the social cost, the price of anarchy guarantees do not carry over if we perform affine transformation to the cost functions (e.g. if there exist users of different tiers/types that the system designer wants to account for in a differential manner). In contrast, our convergence results are robust to any affine cost transformation. In fact, our results apply for all weighted potential games [32] (Remark 3.5). 
Connections to distributed computation and adversarial agent scheduling: A rather realistic concern about results on learning in games has to do with their sensitivity to the ordering of the moves of the agent dynamics. For example, better-response dynamics in congestion games are guaranteed to converge only if in every round, exactly one agent deviates to a better strategy. A series of recent papers has established strong non-termination (cycling) results for large classes of bounded recall dynamics with a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control [26, 19, 34, 25]. In the case of games, these results translate to corollaries such as: “If there are two or more pure Nash equilibria in a game with unique best responses, then all bounded-recall self-independent dynamics6 for which those equilibria are fixed points can fail to converge in asynchronous environments." Even the simplest 2 balls 2 bins game satisfies these properties (two pure Nash and unique best responses) which shows the strength of this impossibility result. In contrast, our convergence result holds for any adversarial scheduling with the minimal fairness assumption that given any mixed state at least one agent who is not best responding eventually will be given the possibility to update their behavior, answering open questions in [26, 25]. In fact, our convergence result is in a sense the strongest possible, no matter how many agents get to update their behavior (as long as one of them does) then the potential of the game will strictly decrease (Corollary 3.6). Connections to complexity theory: Whereas the complexity of computing both mixed Nash equilibria in general games (PPAD-complete [17]) as well as the complexity of finding pure Nash equilibria in congestion games (PLS-complete [20]) have both been completely characterized and are thus unlikely to admit an efficient time algorithm, the complexity of computing mixed Nash equilibria in congestion games has withstood so far an exhaustive characterization. Naturally, it lies on the intersection of both PPAD and PLS, known as CLS [18]. Such an equilibrium can be found both via an end-of-line type of argument as well as a local search type of argument, but it is still not known if it is CLS-complete. Given the active interest for producing CLS-complete problems [16, 21] our constructive/convergence proof may help shed light on this open question. Chaos for arbitrary small learning rates : Although our example of chaotic behavior uses a very high learning rate = 1− e−10, it should be noted that for any learning rate (e.g. = e−10), as well as for any number of agents n, we can create congestion games with n agents where MWUe exhibits chaotic behavior (Corollary 4.3). Congestion/potential games: Congestion games are amongst the most well known and thoroughly studied class of games. Proposed in [36] and isomorphic to potential games [32], they have been successfully employed in myriad modeling problems. Despite the numerous positive convergence results for concurrent dynamics in congestion games, e.g., [33, 23, 7, 1, 6, 28, 10, 13, 12, 31], we know of no prior work establishing such a deterministic convergence result of the day-to-day agent behavior to exact Nash equilibria for general atomic congestion games. MWU has also been studied in congestion games. In [29] randomized variants of the exponential version of the MWU are shown to converge w.h.p. 
to pure Nash equilibria as long as the learning rate is small enough. In contrast our positive results for linear MWU` hold deterministically and for all learning rates. Recently, [14] showed that if the Hedge algorithm is run with a suitably decreasing learning factor , the sequence 6A dynamic is called self-independent if the agent’s response does not depend on his actions. of play converges to a Nash equilibrium with probability 1 (in the bandit case). The result and the techniques are orthogonal to ours, since we assume fixed learning rates. Non-convergent dynamics: Outside the class of congestion games, there exist several negative results in the literature concerning the non-convergence of MWU and variants thereof. In particular, in [15] it was shown that the multiplicative updates algorithm fails to find the unique Nash equilibrium of the 3× 3 Shapley game. Similar non-convergent results have been proven for perturbed zero-sum games [4], as well as for the continuous time version of MWU, the replicator dynamics [27, 35]. The possibility of applying Li-Yorke type arguments for MWU in congestion games with two agents was inspired by a remark in [3] for the case of continuum of agents. Our paper is the first to our knowledge where non-convergent MWU behavior in congestion games is formally proven capturing both limit cycles and chaos and we do so in the minimal case of two balls two bin games. 2 Preliminaries Notation. We use boldface letters, e.g., x, to denote column vectors (points). For a function f : Rm → Rm, by fn we denote the composition of f with itself n times, namely f ◦ f ◦ · · · ◦ f︸ ︷︷ ︸ n times . 2.1 Congestion Games A congestion game [36] is defined by the tuple (N ;E; (Si)i∈N ; (ce)e∈E) where N is the set of agents, N = |N |, E is a set of resources (also known as edges or bins or facilities) and each player i has a set Si of subsets of E (Si ⊆ 2E) and |Si| ≥ 1. Each strategy si ∈ Si is a set of edges and ce is a positive cost (latency) function associated with facility e. We use small greek characters like γ, δ to denote different strategies/paths. For a strategy profile s = (s1, s2, . . . , sN ), the cost of player i is given by ci(s) = ∑ e∈si ce(`e(s)), where `e(s) is the number of players using e in s (the load of edge e). The potential function is defined to be Φ(s) = ∑ e∈E ∑`e(s) j=1 ce(j). For each i ∈ N and γ ∈ Si, piγ denotes the probability player i chooses strategy γ. We denote by ∆(Si) = {p ≥ 0 : ∑ γ piγ = 1} the set of mixed (randomized) strategies of player i and ∆ = ×i∆(Si) the set of mixed strategies of all players. We use ciγ = Es−i∼p−ici(γ, s−i) to denote the expected cost of player i given that he chooses strategy γ and ĉi = ∑ δ∈Si piδciδ to denote his expected cost. 2.2 Dynamical Systems and Chaos Let x(t+1) = f(x(t)) be a discrete time dynamical system with update rule f : Rm → Rm. The point z is called a fixed point of f if f(z) = z. A sequence (f t(x(0)))t∈N is called a trajectory or orbit of the dynamics with x(0) as starting point. A common technique to show that a dynamical system converges to a fixed point is to construct a function P : Rm → R such that P (f(x)) > P (x) unless x is a fixed point. We call P a Lyapunov or potential function. Definition 2.1. C = {z1, . . . , zk} is called a periodic orbit of length k if zi+1 = f(zi) for 1 ≤ i ≤ k − 1 and f(zk) = z1. Each point z1, . . . , zk is called periodic point of period k. If the dynamics converges to some periodic orbit, we also use the term limit cycle. 
Some dynamical systems converge and their behavior can be fully understood and some others have strange, chaotic behavior. There are many different definitions for what chaotic behavior and chaos means. In this paper we follow the definition of chaos by Li and Yorke. Let us first give the definition of a scrambled set. Given a dynamical system with update rule f , a pair x and y is called “scrambled" if limn→∞ inf |fn(x) − fn(y)| = 0 (the trajectories get arbitrarily close) and also limn→∞ sup |fn(x)− fn(y)| > 0 (the trajectories move apart). A set S is called “scrambled" if ∀x, y ∈ S, the pair is “scrambled". Definition 2.2 (Li and Yorke). A discrete time dynamical system with update rule f , f : X → X continuous on a compact set X ⊂ R is called chaotic if (a) for each k ∈ Z+, there exists a periodic point p ∈ X of period k and (b) there is an uncountably infinite set S ⊆ X that is “scrambled". Li and Yorke proved the following theorem [30] (there is another theorem of similar flavor due to Sharkovskii [38]): Theorem 2.3 (Period three implies chaos). Let J be an interval and let F : J → J be continuous. Assume there is a point a ∈ J for which the points b = F (a), c = F 2(a) and d = F 3(a), satisfy d ≤ a < b < c (or d ≥ a > b > c). Then 1. For every k = 1, 2, . . . there is a periodic point in J having period k. 2. There is an uncountable set S ⊂ J (containing no periodic points), which satisfies the following conditions: • For every p, q ∈ S with p 6= q, lim n→∞ sup |Fn(p)− Fn(q)| > 0 and lim n→∞ inf |Fn(p)− Fn(q)| = 0. • For every point p ∈ S and periodic point q ∈ J , lim n→∞ sup |Fn(p)− Fn(q)| > 0. Notice that if there is a periodic point with period 3, then the hypothesis of the theorem will be satisfied. 2.3 Baum-Eagon Inequality, Baum-Welch and EM We start this subsection by stating the Baum-Eagon inequality. This inequality will be used to show that MWU` converges to fixed points and more specifically Nash equilibria for congestion games. Theorem 2.4 (Baum-Eagon inequality [5]). Let P (x) = P ({xij}) be a polynomial with nonnegative coefficients homogeneous of degree d in its variables {xij}. Let x = {xij} be any point of the domain D : xij ≥ 0, ∑qi j=1 xij = 1, i = 1, 2, ..., p, j = 1, 2, ..., qi. For x = {xij} ∈ D let =(x) = ={xij} denote the point of D whose i, j coordinate is =(x)ij = ( xij ∂P ∂xij ∣∣∣∣ (x) )/ qi∑ j′=1 xij′ ∂P ∂xij′ ∣∣∣∣ (x) Then P (=(x)) > P (x) unless =(x) = x. The Baum-Welch algorithm is a classic technique used to find the unknown parameters of a hidden Markov model (HMM). A HMM describes the joint probability of a collection of “hidden" and observed discrete random variables. It relies on the assumption that the i-th hidden variable given the (i− 1)-th hidden variable is independent of previous hidden variables, and the current observation variables depend only on the current hidden state. The Baum-Welch algorithm uses the well known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors. More detailed exposition of these ideas can be found here [8]. The probability of making a specific time series of observations of length T can be shown to be a homogeneous polynomial P of degree T with nonnegative (integer) coefficients of the model parameters. Baum-Welch algorithm is homologous to the iterative process derived by applying the Baum-Eagon theorem to polynomial P [5, 41]. 
In a nutshell, both Baum-Welch and MWU` in congestion games are special cases of the Baum-Eagon iterative process (for different polynomials P ). 2.4 Multiplicative Weights Update In this section, we describe the MWU dynamics (both the linear MWU`, and the exponential MWUe variants) applied in congestion games. The update rule (function) ξ : ∆ → ∆ (where p(t+ 1) = ξ(p(t))) for the linear variant MWU` is as follows: piγ(t+ 1) = (ξ(p(t)))iγ = piγ(t) 1− iciγ(t) 1− iĉi(t) , ∀i ∈ N ,∀γ ∈ Si, (1) where i is a constant (can depend on player i but not on p) so that both enumerator and denominator of the fraction in (1) are positive (and thus the fraction is well defined). Under the assumption that 1/ i > 1 β def = supi,p∈∆,γ∈Si {ciγ}, it follows that 1/ i > ciγ for all i, γ and hence 1/ i > ĉi. The update rule (function) η : ∆ → ∆ (where p(t + 1) = η(p(t))) for the exponential variant MWUe is as follows: piγ(t+ 1) = (η(p(t)))iγ = piγ(t) (1− i)ciγ(t)∑ γ′∈Si piγ′(t)(1− i) ciγ′ (t) , ∀i ∈ N ,∀γ ∈ Si, (2) where i < 1 is a constant (can depend on player i but not on p). Note that i can be small when the number of agents N is large enough. Remark 2.5. Observe that ∆ is invariant under the discrete dynamics (1), (2) defined above. If piγ = 0 then piγ remains zero, and if it is positive, it remains positive (both numerator and denominator are positive) and also is true that ∑ γ∈Si piγ = 1 for all agents i. A point p ∗ is called a fixed point if it stays invariant under the update rule of the dynamics, namely ξ(p∗) = p∗ or η(p∗) = p∗. A point p∗ is a fixed point of (1), (2) if for all i, γ with p∗iγ > 0 we have that ciγ = ĉi. To see why, observe that if p∗iγ , p ∗ iγ′ > 0, then ciγ = ciγ′ and thus ciγ = ĉi. We conclude that the set of fixed points of both dynamics (1), (2) coincide and are supersets of the set of Nash equilibria of the corresponding congestion game. 3 Convergence of MWU` to Nash Equilibria We first prove that MWU` (1) converges to fixed points7. Technically, we establish that function Ψ def = Es∼p [Φ(s)] is strictly decreasing along any nontrivial (i.e. nonequilibrium) trajectory, where Φ is the potential function of the congestion game as defined in Section 2. Formally we show the following theorem: Theorem 3.1 (Ψ is decreasing). Function Ψ is decreasing w.r.t. time, i.e., Ψ(p(t+ 1)) ≤ Ψ(p(t)) where equality Ψ(p(t+ 1)) = Ψ(p(t)) holds only at fixed points. We define the function Q(p) def = ∑ i∈N (1/ i − 1/β) ·∑ γ∈Si piγ + 1/β ·∏ i∈N ∑ γ∈Si piγ ︸ ︷︷ ︸ constant term −Ψ(p), (3) and show that Q(p) is strictly increasing w.r.t time, unless p is a fixed point. Observe that∑ γ∈Si piγ = 1 since p lies in ∆, but we include this terms in Q for technical reasons that will be made clear later in the section. By showing that Q is increasing with time, Theorem 3.1 trivially follows since Q = const − Ψ where const = ∑ i∈N 1/ i − 1/β(N − 1). To show that Q(p) is strictly increasing w.r.t time, unless p is a fixed point, we use a generalization of an inequality by Baum and Eagon [5] on function Q. Corollary 3.2 (Generalization of Baum-Eagon). Theorem 2.4 holds even if P is non-homogeneous. We want to apply Corollary 3.2 on Q. To do so, it suffices to show that Q(p) is a polynomial with nonnegative coefficients. Lemma 3.3. Q(p) is a polynomial with respect to piγ and has nonnegative coefficients. Using Lemma 3.3 and Corollary 3.2 we show the following: Theorem 3.4. Let Q be the function defined in (3). Let also p(t) ∈ ∆ be the point MWU` (1) outputs at time t with update rule ξ. 
It holds that Q(p(t + 1)) def= Q(ξ(p(t))) > Q(p(t)) unless ξ(p(t)) = p(t) (fixed point). Namely Q is strictly increasing with respect to the number of iterations t unless MWU` is at a fixed point. 7All missing proofs can be found in the full version of this paper http://arxiv.org/abs/1703.01138. Remark 3.5 (Weighted potential games). A congestion game is a potential game because if a player deviates, the difference he experiences in his cost is exactly captured by the deviation of the global (same for all players) function Φ = ∑ e∈E ∑`e(s) j=1 ce(j). In a weighted potential game, it holds that ci(si, s−i)− ci(s′i, s−i) = wi(Φ(si, s−i)− Φ(s′i, s−i)), where wi is some constant not necessarily 1 (as in the potential games case) and vector s−i captures the strategies of all players but i. It is not hard to see that Lemma 3.3 and thus Theorems 3.4 and 3.1 hold in this particular class of games (which is a generalization of congestion games), and so do the rest of the theorems of the section. Effectively, in terms of the weighted potential games analysis, it is possible to reduce it to the standard potential games analysis as follows: Consider the system with learning rates i and cost functions wici so that the game with cost functions ci is a potential game. The only necessary condition that we ask of this system is that iwici(s) < 1 for all i (as in the standard case) so that the enumerators/denominators are positive. By reduction, we can show that for every round T , even if a subset (that depends on the round T ) of the players update their strategy according to MWU` and the rest remain fixed, the potential still decreases. Corollary 3.6 (Any subset). Assume that at time t we partition the players in two sets St, S′t so that we allow only players in St to apply MWU` dynamics, whereas the players in S′t remain fixed. It holds that the expected potential function of the game at time t decreases. As stated earlier in the section, if Q(p(t)) is strictly increasing with respect to time t unless p(t) is a fixed point, it follows that the expected potential function Ψ(p(t)) = const−Q(p(t)) is strictly decreasing unless p(t) is a fixed point and Theorem 3.1 is proved. Moreover, we can derive the fact that our dynamics converges to fixed points as a corollary of Theorem 3.1. Theorem 3.7 (Convergence to fixed points). MWU` dynamics (1) converges to fixed points. We conclude the section by strengthening the convergence result (i.e., Theorem 3.7). We show that if the initial distribution p is in the interior of ∆ then we have convergence to Nash equilibria. Theorem 3.8 (Convergence to Nash equilibria). Assume that the fixed points of (1) are isolated. Let p(0) be a point in the interior of ∆. It follows that limt→∞ p(t) = p∗ is a Nash equilibrium. Proof. We showed in Theorem 3.7 that MWU` dynamics (1) converges, hence limt→∞ p(t) exists (under the assumption that the fixed points are isolated) and is equal to a fixed point of the dynamics p∗. Also it is clear from the dynamics that ∆ is invariant, i.e., ∑ δ∈Sj pjδ(t) = 1, pjδ(t) > 0 for all j and t ≥ 0 since p(0) is in the interior of ∆. Assume that p∗ is not a Nash equilibrium, then there exists a player i and a strategy γ ∈ Si so that ciγ(p ∗) < ĉi(p ∗) (on mixed strategies p∗) and p∗iγ = 0. Fix a ζ > 0 and let Uζ = {p : ciγ(p) < ĉi(p)− ζ}. By continuity we have that Uζ is open. It is also true that p∗ ∈ Uζ for ζ small enough. Since p(t) converges to p∗ as t → ∞, there exists a time t0 so that for all t′ ≥ t0 we have that p(t′) ∈ Uζ . 
However, from MWU` dynamics (1) we get that if p(t′) ∈ Uζ then 1 − iciγ(t′) > 1 − iĉi(t′) and hence piγ(t′ + 1) = piγ(t′) 1− iciγ(t ′) 1− iĉi(t′) ≥ piγ(t ′) > 0, i.e., piγ(t′) is positive and increasing with t′ ≥ t0. We reached a contradiction since piγ(t) → p∗iγ = 0, thus p∗ is a Nash equilibrium. 4 Non-Convergence of MWUe: Limit Cycle and Chaos We consider a symmetric two agent congestion game with two edges e1, e2. Both agents have the same two available strategies γ1 = {e1} and γ2 = {e2}. We denote x, y the probability that the first and the second agent respectively choose strategy γ1. For the first example, we assume that ce1(l) = 1 2 · l and ce2(l) = 1 2 · l. Computing the expected costs we get that c1γ1 = 1+y 2 , c1γ2 = 2−y 2 , c2γ1 = 1+x 2 , c2γ2 = 2−x 2 . MWUe then becomes xt+1 = xt (1− 1) (yt+1) 2 xt(1− 1) yt+1 2 +(1−xt)(1− 1) 2−yt 2 (first player) and yt+1 = yt (1− 2) xt+1 2 yt(1− 2) xt+1 2 +(1−yt)(1− 2) 2−xt 2 (sec- ond player). We assume that 1 = 2 and also that x0 = y0 (players start with the same mixed strategy. Due to symmetry, it follows that xt = yt for all t ∈ N, thus it suffices to keep track only of one variable (we have reduced the number of variables of the update rule of the dynamics to one) and the dynamics becomes xt+1 = xt (1− ) xt+1 2 xt(1− ) xt+1 2 +(1−xt)(1− ) 2−xt 2 . Finally, we choose = 1 − e−10 and we get xt+1 = H(xt) = xt e−5(xt+1) xte−5(xt+1) + (1− xt)e−5(2−xt) , i.e., we denote H(x) = xe −5(x+1) xe−5(x+1)+(1−x)e−5(2−x) . For the second example, we assume that ce1(l) = 1 4 · l and ce2(l) = 1.4 4 · l. Computing the expected costs we get that c1γ1 = 1+y 4 , c1γ2 = 1.4(2−y) 4 , c2γ1 = 1+x 4 , c2γ2 = 1.4(2−x) 4 . MWUe then becomes xt+1 = xt (1− 1) (yt+1) 4 xt(1− 1) yt+1 4 +(1−xt)(1− 1) 1.4(2−yt) 4 (first player) and yt+1 = yt (1− 2) xt+1 4 yt(1− 2) xt+1 4 +(1−yt)(1− 2) 1.4(2−xt) 4 (second player). We assume that 1 = 2 and also that x0 = y0 (players start with the same mixed strategy. Similarly, due to symmetry, it follows that xt = yt for all t ∈ N, thus it suffices to keep track only of one variable and the dynamics becomes xt+1 = xt (1− ) xt+1 4 xt(1− ) xt+1 4 +(1−xt)(1− ) 1.4(2−xt) 4 . Finally, we choose = 1− e−40 and we get xt+1 = G(xt) = xt e−10(xt+1) xte−10(xt+1) + (1− xt)e−14(2−xt) , i.e., we denote G(x) = xe −10(x+1) xe−10(x+1)+(1−x)e−14(2−x) . We show the following three statements, the proofs of which can be found in the full version. Theorem 4.1. For all but a measure zero set S of x ∈ (0, 1) we get that limt→∞H2t(x) = ρ1 or ρ2. Moreover, H(ρ1) = ρ2 and H(ρ2) = ρ1, i.e., {ρ1, ρ2} is a periodic orbit. Thus, all but a measure zero set S of initial conditions converge to the limit cycle {ρ1, ρ2}. Finally, the initial points in S converge to the equilibrium 12 . Theorem 4.2. There exist two player two strategy symmetric congestion games such that MWUe has periodic orbits of length n for any natural number n > 0 and as well as an uncountably infinite set of “scrambled" initial conditions (Li-Yorke chaos). Using Theorem 4.2, we conclude with the following corollary. Corollary 4.3. For any 1 > > 0 and n, there exists a n-player congestion game G( ) (depending on ) so that MWUe dynamics exhibits Li-Yorke chaos for uncountably many starting points. 5 Conclusion and Future Work We have analyzed MWU` in congestion games where agents use arbitrary admissible constants as learning rates and showed convergence to exact Nash equilibria. 
We have also shown that this result is not true for the nearly homologous exponential variant MWUe, even for the simplest case of two-agent, two-strategy load balancing games. There we prove that such dynamics can provably lead to limit cycles or even chaotic behavior. For a small enough learning rate ε, the behavior of MWUe approaches that of its smooth variant, replicator dynamics, and hence convergence is once again guaranteed [29]. This means that as we increase the learning rate from near zero values we start off with a convergent system and we end up with a chaotic one. Numerical experiments establish that between the convergent region and the chaotic region there exists a range of values for ε for which the system exhibits periodic behavior. Period doubling is known as a standard route to 1-dimensional chaos (e.g. the logistic map) and is characterized by unexpected regularities such as the Feigenbaum constant [39]. Elucidating these connections is an interesting open problem. More generally, what other types of regularities can be established in these non-equilibrium systems? Another interesting question has to do with developing a better understanding of the set of conditions that result in non-converging trajectories. So far, it has been critical for our non-convergent examples that the system starts from a symmetric initial condition. Whether such irregular MWUe trajectories can be constructed for generic initial conditions, possibly in larger congestion games, is not known. Nevertheless, the non-convergent results, despite their non-generic nature, are rather useful since they imply that we cannot hope to leverage the power of Baum-Eagon techniques for MWUe. In conclusion, establishing generic (non)convergence results (e.g. for most initial conditions, most congestion games) for MWUe with constant step size is an interesting future direction.
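To make the non-convergence examples above easy to reproduce, here is a minimal Python sketch (standard library only) of the two one-dimensional maps H and G derived in Section 4. The starting point 0.3 and the iteration counts are arbitrary illustrative choices; the printed tails are meant to show the period-2 oscillation of H (Theorem 4.1) versus the irregular wandering of G (Theorem 4.2).

```python
import math

def H(x):
    # Symmetric game with c_e(l) = l/2 for both edges and eps = 1 - e^{-10}.
    a = x * math.exp(-5.0 * (x + 1.0))
    b = (1.0 - x) * math.exp(-5.0 * (2.0 - x))
    return a / (a + b)

def G(x):
    # Second example: c_{e1}(l) = l/4, c_{e2}(l) = 1.4*l/4 and eps = 1 - e^{-40}.
    a = x * math.exp(-10.0 * (x + 1.0))
    b = (1.0 - x) * math.exp(-14.0 * (2.0 - x))
    return a / (a + b)

def iterate(f, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(f(xs[-1]))
    return xs

print([round(v, 4) for v in iterate(H, 0.3, 60)[-6:]])  # alternates between two values rho1, rho2
print([round(v, 4) for v in iterate(G, 0.3, 60)[-6:]])  # shows no apparent regularity
```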
1. What are the main contributions and strengths of the paper regarding convergence results in congestion games? 2. Do you have any concerns or questions about the Baum-Eagon inequality's role in proving the monotonicity of the potential function? 3. How does the paper's result generalize to congestion games with an infinite number of agents and pure strategies? 4. Can you provide more details on why the assumption that individual earning rates $\epsilon_i$ are bounded above is crucial for the Baum-Eagon inequality to work? 5. What are your thoughts on the proof of Theorem 3.7, and do you think it could be improved or made more straightforward? 6. In Remark 3.5, the authors claim that Lemma 3.3 and thus Theorem 3.4 and 3.1 hold in the extension of weighted potential games. Do you agree with this statement, and how might one address potential issues related to non-negativity of coefficients in $Q(p)$? 7. Should the constant in $Q(p)$ be adjusted to account for the missing term $(|N|-1) 1/\beta$, as mentioned in the review? If so, how would this affect the overall result?
Review
Review The paper revisits the convergence result of multiplicative weights update (MWU) in a congestion game, due to Kleinberg, Piliouras, and Tardos (STOC 2009), and establishes a connection between MWU and the Baum-Welch algorithm in a neat and elegant style. By showing the monotonicity of the potential function in a congestion game, the authors prove that MWU with the linear updating rule converges to the set of fixed points, which is a superset of all the Nash equilibria of the congestion game. The Baum-Eagon inequality offers a new interpretation of the dynamics of the linear updating rule in MWU. Their results in congestion games are quite general, and so are the conditions on initialization and isolated Nash equilibria in a congestion game with a finite set of agents and pure strategies. The results in this paper hold for any congestion game irrespective of the topology of the strategy sets, by the nature of their game. However, one should emphasize the assumption that the individual learning rates $\epsilon_i$ are bounded above, in order for the Baum-Eagon inequality to work in their context. A more elaborate analysis of the upper bounds would be nice, since for any $i$, this paper requires that $$\frac{1}{\epsilon_i} > \sup_{i,\mathbf{p}\in \Delta,\gamma\in S_i} \{c_{i\gamma}\}\geq \max_{e} c_e(|N|),$$ which can be huge when the number of agents is large enough. Although Theorem 3.7 holds, the proof presented by the authors is flawed. To prove that any point $\mathbf{y} \in \Omega$ is a fixed point of $\zeta$, one could guess from the argument in the paper that the authors intend to show that $\Phi(\mathbf{y}) = \Phi(\zeta(\mathbf{y}))$, and thus, by monotonicity of $\Phi$, $\zeta(\mathbf{y})=\mathbf{y}$. However, they applied the notation $\mathbf{y}(t)$, even though $\mathbf{y}$ is, according to their definition, the limit point of an orbit $\mathbf{p}(t)$ as $t$ goes to infinity. The subsequent argument is also hard to follow. An easy way to prove Theorem 3.7 is by contradiction. Suppose that $\mathbf{y}$ is not a fixed point; then by definition (line 221), there exist some $i,\gamma$ such that $y_{i\gamma} > 0$ and $c_{i\gamma}\neq \hat{c}_i$. If $c_{i\gamma} > \hat{c}_i$, then by convergence and continuity of the mapping, $\exists t_0$ and $\epsilon_0 > 0$ such that $\forall t > t_0$, $p_{i\gamma}(t) \geq \epsilon_0 > 0$ and $c_{i\gamma}(t) \geq \hat{c}_i(t)+ \epsilon_0$. Then it is not hard to see that $$p_{i\gamma}(t+1) = p_{i\gamma}(t) \frac{\frac{1}{\epsilon_i} - c_{i\gamma}(t) }{\frac{1}{\epsilon_i} - \hat{c}_{i}(t)} \leq p_{i\gamma}(t) (1- \epsilon_0 \epsilon_i),$$ which yields $\lim_{t\to \infty} p_{i\gamma}(t) = y_{i\gamma}=0$, contradicting $y_{i\gamma} > 0$. If $c_{i\gamma} < \hat{c}_i$, then by convergence and continuity of the mapping, $\exists t_0$ and $\epsilon_0 > 0$ such that $\forall t > t_0$, $p_{i\gamma}(t) \geq \epsilon_0 > 0$ and $c_{i\gamma}(t) + \epsilon_0 < \hat{c}_i(t)$. Then it is not hard to see that $$p_{i\gamma}(t+1) = p_{i\gamma}(t) \frac{\frac{1}{\epsilon_i} - c_{i\gamma}(t) }{\frac{1}{\epsilon_i} - \hat{c}_{i}(t)} \geq p_{i\gamma}(t) (1+ \epsilon_0),$$ so $\lim_{t\to \infty} p_{i\gamma}(t)$ does not lie in the simplex. In Remark 3.5, the authors claim that Lemma 3.3, and thus Theorems 3.4 and 3.1, hold in the extension to weighted potential games. However, an underlying step in the proof of Lemma 3.3 might not hold without adding extra conditions: the non-negativity of the coefficients of $Q(p)$ in a weighted potential game.
Similar to the proof of Lemma 3.3, in the weighted case, $\frac{\partial Q}{\partial p_{i\gamma}} = \frac{1}{\epsilon_i} - \frac{c_{i\gamma}}{w_i}$. Whether the RHS is non-negative cannot be deduced from the definition of $\epsilon_i$ alone, nor from the fact that $w_i \in [0,1]$. On line 236, the constant in $Q(p)$ should instead be $Q(p)=const -\Phi$ with $const = \sum_{i\in N} 1/{\epsilon_i} + (|N|-1) 1/\beta$.
NIPS
Title Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos

Abstract The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm that works as follows: A distribution is maintained on a certain set, and at each step the probability assigned to action γ is multiplied by (1 − εC(γ)) > 0, where C(γ) is the “cost” of action γ, and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates ε and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action γ is multiplied by (1 − ε)^{C(γ)}, even for the simplest case of two-agent, two-strategy load balancing games, where such dynamics can provably lead to limit cycles or even chaotic behavior.

1 Introduction The Multiplicative Weights Update (MWU) is a ubiquitous meta-algorithm with numerous applications in different fields [2]. It is particularly useful in game theory due to its regret-minimizing properties [24, 11]. It is typically introduced in two nearly identical variants, the one in which at each step the probability assigned to action γ is multiplied by (1 − εC(γ)) and the one in which it is multiplied by (1 − ε)^{C(γ)}, where C(γ) is the cost of action γ. We will refer to the first as the linear variant, MWU`, and the second as the exponential, MWUe (also known as Hedge). In the literature there is little distinction between these two variants as both carry the same advantageous regret-minimizing property. It is also well known that in order to achieve sublinear regret, the learning rate ε must be decreasing as time progresses. This constraint raises a natural question: Are there interesting classes of games where MWU behaves well without the need to fine-tune its learning rate? A natural setting to test the learning behavior of MWU with constant learning rates is the well-studied class of congestion games. Unfortunately, even for the simplest instances of congestion games MWUe fails to converge to equilibria. For example, even in the simplest case of two balls two ∗Gerasimos Palaiopanos would like to acknowledge a SUTD Presidential fellowship. †Ioannis Panageas would like to acknowledge a MIT-SUTD postdoctoral fellowship. Part of this work was completed while Ioannis Panageas was a PhD student at Georgia Institute of Technology and a visiting scientist at the Simons Institute for the Theory of Computing. ‡Georgios Piliouras would like to acknowledge SUTD grant SRG ESD 2015 097, MOE AcRF Tier 2 Grant 2016-T2-1-170 and a NRF Fellowship.
Part of this work was completed while Georgios Piliouras was a visiting scientist at the Simons Institute for the Theory of Computing. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. bins games,4 MWUe with = 1− e−10 is shown to converge to a limit cycle of period 2 for infinitely many initial conditions (Theorem 4.1). If the cost functions of the two edges are not identical then we create instances of two player load balancing games such that MWUe has periodic orbits of length k for all k > 0, as well as uncountable many initial conditions which never settle on any periodic orbit but instead exhibit an irregular behavior known as Li-Yorke chaos (Theorem 4.2, see Corollary 4.3). The source of these problems is exactly the large, fixed learning rate , e.g., ≈ 1 for costs in [0, 1]. Intuitively, the key aspect of the problem can be captured by (simultaneous) best response dynamics. If both agents start from the same edge and best-respond simultaneously they will land on the second edge which now has a load of two. In the next step they will both jump back to the first edge and this motion will be continued perpetually. Naturally, MWUe dynamics are considerably more intricate as they evolve over mixed strategies and allow for more complicated non-equilibrium behavior but the key insight is correct. Each agent has the right goal, decrease his own cost and hence the potential of the game, however, as they pursue this goal too aggressively they cancel each other’s gains and lead to unpredictable non-converging behavior. In a sense, the cautionary tales above agree with our intuition. Large, constant learning rates nullify the known performance guarantees of MWU. We should expect erratic behavior in such cases. The typical way to circumvent these problems is through careful monitoring and possibly successive halving of the parameter, a standard technique in the MWU literature. In this paper, we explore an alternative, cleaner, and surprisingly elegant solution to this problem. We show that applying MWU`, the linear variant of MWU, suffices to guarantee convergence in all congestion games. Our key contributions. Our key result is the proof of convergence of MWU` in congestion games. The main technical contribution is a proof that the potential of the mixed state is always strictly decreasing along any nontrivial trajectory (Theorem 3.1). This result holds for all congestion games, irrespective of the number of agents or the size, topology of the strategy sets. Moreover, each agent i may be applying different learning rates i which will be constant along the dynamics ( i does not depend on the number of iterations T of the dynamics and therefore is bounded away from zero as T →∞; this is not the case for most of the results in the literature). The only restriction on the set of allowable learning rates i is that for each agent the multiplicative factor (1 − iCi(s)) should be positive for all strategy outcomes s.5 Arguing convergence to equilibria for all initial conditions (Theorem 3.4) and further, convergence to Nash equilibria for all interior initial conditions (Theorem 3.8) follows. Proving that the potential always decreases (Theorem 3.1) hinges upon discovering a novel interpretation of MWU dynamics. Specifically, we show that the class of dynamical systems derived by applying MWU` in congestion games is a special case of a convergent class of dynamical systems introduced by Baum and Eagon [5] (see Theorem 2.4). 
The most well known member of this class is the classic Baum-Welch algorithm, the standard instantiation of the Expectation-Maximization (EM) algorithm for hidden Markov models (HMM). Effectively, the proof of convergence of both these systems boils down to a proof of membership to the same class of Baum-Eagon systems (see section 2.3 for more details on these connections). In the second part we provide simple congestion games where MWUe provably fails to converge. The first main technical contribution of this section is proving convergence to a limit cycle, specifically a periodic orbit of length two, for the simplest case of two balls two bins games for infinitely many initial conditions (Theorem 4.1). Moreover, after normalizing costs to lie in [0, 1], i.e. c(x) = x/2, we prove that almost all symmetric non-equilibrium initial conditions converge to a unique limit cycle when both agents use learning rate = 1−e−10. In contrast, since 1− ·C(s) ≥ 1−(1−e−10)1 = e−10 > 0, MWU` successfully converges to equilibrium. In other words, for the same learning rates, MWUe exhibits chaotic behavior whereas MWU` converges to Nash equilibrium. Establishing chaotic behavior for the case of edges with different cost functions is rather straightforward in comparison (Theorem 4.2). The key step is to exploit symmetries in the system to reduce it to a single dimensional one and then establish the existence of a periodic orbit of length three. The existence of periodic orbits of any length as well as chaotic orbits then follows from the Li-Yorke theorem 2.3 [30] (see section 2.2 for background on chaos and dynamical systems). Finally, for any learning rate 1 > > 0, we construct n-player games so that MWUe has chaotic behavior for uncountably many starting points. 4n balls n bin games are symmetric load balancing games with n agent and n edges/elements each with a cost function of c(x)=x. We normalize costs equal to c(x) = x/n so that they lie in [0, 1]. 5This is an absolutely minimal restriction so that the denominator of MWU` cannot become equal to zero. Related work and Extensions/Implications of our results. Connections to learning in games and price of anarchy: Several recent papers, e.g., [40, 22] focus on proving welfare guarantees of no-regret dynamics in games exploiting connections to (robust) price of anarchy literature [37] by establishing fast convergence of the time average behavior to (approximate) coarse correlate equilibria. Although these approaches are rather powerful they are not always applicable. For example, it is well known that when we consider the makespan (i.e. the load of the most congested machine) instead of the social/total cost there can be an exponential gap between the performance of coarse correlated equilibria and Nash equilibria. For example the price of anarchy for the makespan objective for n balls n bins games is O(log(n)/ log log(n)) whereas for the worst no regret algorithm it can be Ω( √ n) [9]. Moreover, even if we focus on the social cost, the price of anarchy guarantees do not carry over if we perform affine transformation to the cost functions (e.g. if there exist users of different tiers/types that the system designer wants to account for in a differential manner). In contrast, our convergence results are robust to any affine cost transformation. In fact, our results apply for all weighted potential games [32] (Remark 3.5). 
Connections to distributed computation and adversarial agent scheduling: A rather realistic concern about results on learning in games has to do with their sensitivity to the ordering of the moves of the agent dynamics. For example, better-response dynamics in congestion games are guaranteed to converge only if in every round, exactly one agent deviates to a better strategy. A series of recent papers has established strong non-termination (cycling) results for large classes of bounded recall dynamics with a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control [26, 19, 34, 25]. In the case of games, these results translate to corollaries such as: “If there are two or more pure Nash equilibria in a game with unique best responses, then all bounded-recall self-independent dynamics6 for which those equilibria are fixed points can fail to converge in asynchronous environments." Even the simplest 2 balls 2 bins game satisfies these properties (two pure Nash and unique best responses) which shows the strength of this impossibility result. In contrast, our convergence result holds for any adversarial scheduling with the minimal fairness assumption that given any mixed state at least one agent who is not best responding eventually will be given the possibility to update their behavior, answering open questions in [26, 25]. In fact, our convergence result is in a sense the strongest possible, no matter how many agents get to update their behavior (as long as one of them does) then the potential of the game will strictly decrease (Corollary 3.6). Connections to complexity theory: Whereas the complexity of computing both mixed Nash equilibria in general games (PPAD-complete [17]) as well as the complexity of finding pure Nash equilibria in congestion games (PLS-complete [20]) have both been completely characterized and are thus unlikely to admit an efficient time algorithm, the complexity of computing mixed Nash equilibria in congestion games has withstood so far an exhaustive characterization. Naturally, it lies on the intersection of both PPAD and PLS, known as CLS [18]. Such an equilibrium can be found both via an end-of-line type of argument as well as a local search type of argument, but it is still not known if it is CLS-complete. Given the active interest for producing CLS-complete problems [16, 21] our constructive/convergence proof may help shed light on this open question. Chaos for arbitrary small learning rates : Although our example of chaotic behavior uses a very high learning rate = 1− e−10, it should be noted that for any learning rate (e.g. = e−10), as well as for any number of agents n, we can create congestion games with n agents where MWUe exhibits chaotic behavior (Corollary 4.3). Congestion/potential games: Congestion games are amongst the most well known and thoroughly studied class of games. Proposed in [36] and isomorphic to potential games [32], they have been successfully employed in myriad modeling problems. Despite the numerous positive convergence results for concurrent dynamics in congestion games, e.g., [33, 23, 7, 1, 6, 28, 10, 13, 12, 31], we know of no prior work establishing such a deterministic convergence result of the day-to-day agent behavior to exact Nash equilibria for general atomic congestion games. MWU has also been studied in congestion games. In [29] randomized variants of the exponential version of the MWU are shown to converge w.h.p. 
to pure Nash equilibria as long as the learning rate is small enough. In contrast our positive results for linear MWU` hold deterministically and for all learning rates. Recently, [14] showed that if the Hedge algorithm is run with a suitably decreasing learning factor , the sequence 6A dynamic is called self-independent if the agent’s response does not depend on his actions. of play converges to a Nash equilibrium with probability 1 (in the bandit case). The result and the techniques are orthogonal to ours, since we assume fixed learning rates. Non-convergent dynamics: Outside the class of congestion games, there exist several negative results in the literature concerning the non-convergence of MWU and variants thereof. In particular, in [15] it was shown that the multiplicative updates algorithm fails to find the unique Nash equilibrium of the 3× 3 Shapley game. Similar non-convergent results have been proven for perturbed zero-sum games [4], as well as for the continuous time version of MWU, the replicator dynamics [27, 35]. The possibility of applying Li-Yorke type arguments for MWU in congestion games with two agents was inspired by a remark in [3] for the case of continuum of agents. Our paper is the first to our knowledge where non-convergent MWU behavior in congestion games is formally proven capturing both limit cycles and chaos and we do so in the minimal case of two balls two bin games. 2 Preliminaries Notation. We use boldface letters, e.g., x, to denote column vectors (points). For a function f : Rm → Rm, by fn we denote the composition of f with itself n times, namely f ◦ f ◦ · · · ◦ f︸ ︷︷ ︸ n times . 2.1 Congestion Games A congestion game [36] is defined by the tuple (N ;E; (Si)i∈N ; (ce)e∈E) where N is the set of agents, N = |N |, E is a set of resources (also known as edges or bins or facilities) and each player i has a set Si of subsets of E (Si ⊆ 2E) and |Si| ≥ 1. Each strategy si ∈ Si is a set of edges and ce is a positive cost (latency) function associated with facility e. We use small greek characters like γ, δ to denote different strategies/paths. For a strategy profile s = (s1, s2, . . . , sN ), the cost of player i is given by ci(s) = ∑ e∈si ce(`e(s)), where `e(s) is the number of players using e in s (the load of edge e). The potential function is defined to be Φ(s) = ∑ e∈E ∑`e(s) j=1 ce(j). For each i ∈ N and γ ∈ Si, piγ denotes the probability player i chooses strategy γ. We denote by ∆(Si) = {p ≥ 0 : ∑ γ piγ = 1} the set of mixed (randomized) strategies of player i and ∆ = ×i∆(Si) the set of mixed strategies of all players. We use ciγ = Es−i∼p−ici(γ, s−i) to denote the expected cost of player i given that he chooses strategy γ and ĉi = ∑ δ∈Si piδciδ to denote his expected cost. 2.2 Dynamical Systems and Chaos Let x(t+1) = f(x(t)) be a discrete time dynamical system with update rule f : Rm → Rm. The point z is called a fixed point of f if f(z) = z. A sequence (f t(x(0)))t∈N is called a trajectory or orbit of the dynamics with x(0) as starting point. A common technique to show that a dynamical system converges to a fixed point is to construct a function P : Rm → R such that P (f(x)) > P (x) unless x is a fixed point. We call P a Lyapunov or potential function. Definition 2.1. C = {z1, . . . , zk} is called a periodic orbit of length k if zi+1 = f(zi) for 1 ≤ i ≤ k − 1 and f(zk) = z1. Each point z1, . . . , zk is called periodic point of period k. If the dynamics converges to some periodic orbit, we also use the term limit cycle. 
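As a concrete companion to the Section 2.1 definitions, the following small Python sketch encodes a congestion game as per-edge cost functions and per-player strategy sets, and evaluates the loads ℓ_e(s), the player costs c_i(s), and the potential Φ(s) on the two balls two bins instance used later in Section 4. The data-structure choices (dictionaries and frozensets) are ours, not the paper's.

```python
from itertools import product

edges = ("e1", "e2")
cost = {"e1": lambda l: l / 2.0, "e2": lambda l: l / 2.0}   # c_e(l) = l/2
strategy_sets = [[frozenset({"e1"}), frozenset({"e2"})]] * 2  # both players: gamma1 or gamma2

def loads(profile):
    """l_e(s): number of players whose chosen strategy contains edge e."""
    return {e: sum(1 for s_i in profile if e in s_i) for e in edges}

def player_cost(i, profile):
    """c_i(s) = sum_{e in s_i} c_e(l_e(s))."""
    l = loads(profile)
    return sum(cost[e](l[e]) for e in profile[i])

def potential(profile):
    """Phi(s) = sum_e sum_{j=1}^{l_e(s)} c_e(j)."""
    l = loads(profile)
    return sum(cost[e](j) for e in edges for j in range(1, l[e] + 1))

for profile in product(*strategy_sets):
    print([sorted(s) for s in profile],
          [player_cost(i, profile) for i in range(2)],
          potential(profile))
```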
Some dynamical systems converge and their behavior can be fully understood and some others have strange, chaotic behavior. There are many different definitions for what chaotic behavior and chaos means. In this paper we follow the definition of chaos by Li and Yorke. Let us first give the definition of a scrambled set. Given a dynamical system with update rule f , a pair x and y is called “scrambled" if limn→∞ inf |fn(x) − fn(y)| = 0 (the trajectories get arbitrarily close) and also limn→∞ sup |fn(x)− fn(y)| > 0 (the trajectories move apart). A set S is called “scrambled" if ∀x, y ∈ S, the pair is “scrambled". Definition 2.2 (Li and Yorke). A discrete time dynamical system with update rule f , f : X → X continuous on a compact set X ⊂ R is called chaotic if (a) for each k ∈ Z+, there exists a periodic point p ∈ X of period k and (b) there is an uncountably infinite set S ⊆ X that is “scrambled". Li and Yorke proved the following theorem [30] (there is another theorem of similar flavor due to Sharkovskii [38]): Theorem 2.3 (Period three implies chaos). Let J be an interval and let F : J → J be continuous. Assume there is a point a ∈ J for which the points b = F (a), c = F 2(a) and d = F 3(a), satisfy d ≤ a < b < c (or d ≥ a > b > c). Then 1. For every k = 1, 2, . . . there is a periodic point in J having period k. 2. There is an uncountable set S ⊂ J (containing no periodic points), which satisfies the following conditions: • For every p, q ∈ S with p 6= q, lim n→∞ sup |Fn(p)− Fn(q)| > 0 and lim n→∞ inf |Fn(p)− Fn(q)| = 0. • For every point p ∈ S and periodic point q ∈ J , lim n→∞ sup |Fn(p)− Fn(q)| > 0. Notice that if there is a periodic point with period 3, then the hypothesis of the theorem will be satisfied. 2.3 Baum-Eagon Inequality, Baum-Welch and EM We start this subsection by stating the Baum-Eagon inequality. This inequality will be used to show that MWU` converges to fixed points and more specifically Nash equilibria for congestion games. Theorem 2.4 (Baum-Eagon inequality [5]). Let P (x) = P ({xij}) be a polynomial with nonnegative coefficients homogeneous of degree d in its variables {xij}. Let x = {xij} be any point of the domain D : xij ≥ 0, ∑qi j=1 xij = 1, i = 1, 2, ..., p, j = 1, 2, ..., qi. For x = {xij} ∈ D let =(x) = ={xij} denote the point of D whose i, j coordinate is =(x)ij = ( xij ∂P ∂xij ∣∣∣∣ (x) )/ qi∑ j′=1 xij′ ∂P ∂xij′ ∣∣∣∣ (x) Then P (=(x)) > P (x) unless =(x) = x. The Baum-Welch algorithm is a classic technique used to find the unknown parameters of a hidden Markov model (HMM). A HMM describes the joint probability of a collection of “hidden" and observed discrete random variables. It relies on the assumption that the i-th hidden variable given the (i− 1)-th hidden variable is independent of previous hidden variables, and the current observation variables depend only on the current hidden state. The Baum-Welch algorithm uses the well known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors. More detailed exposition of these ideas can be found here [8]. The probability of making a specific time series of observations of length T can be shown to be a homogeneous polynomial P of degree T with nonnegative (integer) coefficients of the model parameters. Baum-Welch algorithm is homologous to the iterative process derived by applying the Baum-Eagon theorem to polynomial P [5, 41]. 
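The growth transform of Theorem 2.4 can also be checked numerically on a toy instance. The sketch below uses a hand-picked polynomial of our own, P(x1, x2) = x1^2 x2 + 3 x1 x2^2 (homogeneous of degree 3 with nonnegative coefficients, one block of two variables summing to one), applies the map =(x) repeatedly, and prints P, which should increase strictly until a fixed point is reached.

```python
def P(x):
    # Toy homogeneous degree-3 polynomial with nonnegative coefficients.
    x1, x2 = x
    return x1**2 * x2 + 3.0 * x1 * x2**2

def grad_P(x):
    x1, x2 = x
    return (2.0 * x1 * x2 + 3.0 * x2**2, x1**2 + 6.0 * x1 * x2)

def baum_eagon_step(x):
    """x_j <- x_j * dP/dx_j / sum_{j'} x_{j'} * dP/dx_{j'}  (single block)."""
    g = grad_P(x)
    z = sum(xi * gi for xi, gi in zip(x, g))
    return tuple(xi * gi / z for xi, gi in zip(x, g))

x = (0.9, 0.1)
for t in range(10):
    print(t, round(P(x), 6), tuple(round(v, 4) for v in x))
    x = baum_eagon_step(x)
```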
In a nutshell, both Baum-Welch and MWU` in congestion games are special cases of the Baum-Eagon iterative process (for different polynomials P ). 2.4 Multiplicative Weights Update In this section, we describe the MWU dynamics (both the linear MWU`, and the exponential MWUe variants) applied in congestion games. The update rule (function) ξ : ∆ → ∆ (where p(t+ 1) = ξ(p(t))) for the linear variant MWU` is as follows: piγ(t+ 1) = (ξ(p(t)))iγ = piγ(t) 1− iciγ(t) 1− iĉi(t) , ∀i ∈ N ,∀γ ∈ Si, (1) where i is a constant (can depend on player i but not on p) so that both enumerator and denominator of the fraction in (1) are positive (and thus the fraction is well defined). Under the assumption that 1/ i > 1 β def = supi,p∈∆,γ∈Si {ciγ}, it follows that 1/ i > ciγ for all i, γ and hence 1/ i > ĉi. The update rule (function) η : ∆ → ∆ (where p(t + 1) = η(p(t))) for the exponential variant MWUe is as follows: piγ(t+ 1) = (η(p(t)))iγ = piγ(t) (1− i)ciγ(t)∑ γ′∈Si piγ′(t)(1− i) ciγ′ (t) , ∀i ∈ N ,∀γ ∈ Si, (2) where i < 1 is a constant (can depend on player i but not on p). Note that i can be small when the number of agents N is large enough. Remark 2.5. Observe that ∆ is invariant under the discrete dynamics (1), (2) defined above. If piγ = 0 then piγ remains zero, and if it is positive, it remains positive (both numerator and denominator are positive) and also is true that ∑ γ∈Si piγ = 1 for all agents i. A point p ∗ is called a fixed point if it stays invariant under the update rule of the dynamics, namely ξ(p∗) = p∗ or η(p∗) = p∗. A point p∗ is a fixed point of (1), (2) if for all i, γ with p∗iγ > 0 we have that ciγ = ĉi. To see why, observe that if p∗iγ , p ∗ iγ′ > 0, then ciγ = ciγ′ and thus ciγ = ĉi. We conclude that the set of fixed points of both dynamics (1), (2) coincide and are supersets of the set of Nash equilibria of the corresponding congestion game. 3 Convergence of MWU` to Nash Equilibria We first prove that MWU` (1) converges to fixed points7. Technically, we establish that function Ψ def = Es∼p [Φ(s)] is strictly decreasing along any nontrivial (i.e. nonequilibrium) trajectory, where Φ is the potential function of the congestion game as defined in Section 2. Formally we show the following theorem: Theorem 3.1 (Ψ is decreasing). Function Ψ is decreasing w.r.t. time, i.e., Ψ(p(t+ 1)) ≤ Ψ(p(t)) where equality Ψ(p(t+ 1)) = Ψ(p(t)) holds only at fixed points. We define the function Q(p) def = ∑ i∈N (1/ i − 1/β) ·∑ γ∈Si piγ + 1/β ·∏ i∈N ∑ γ∈Si piγ ︸ ︷︷ ︸ constant term −Ψ(p), (3) and show that Q(p) is strictly increasing w.r.t time, unless p is a fixed point. Observe that∑ γ∈Si piγ = 1 since p lies in ∆, but we include this terms in Q for technical reasons that will be made clear later in the section. By showing that Q is increasing with time, Theorem 3.1 trivially follows since Q = const − Ψ where const = ∑ i∈N 1/ i − 1/β(N − 1). To show that Q(p) is strictly increasing w.r.t time, unless p is a fixed point, we use a generalization of an inequality by Baum and Eagon [5] on function Q. Corollary 3.2 (Generalization of Baum-Eagon). Theorem 2.4 holds even if P is non-homogeneous. We want to apply Corollary 3.2 on Q. To do so, it suffices to show that Q(p) is a polynomial with nonnegative coefficients. Lemma 3.3. Q(p) is a polynomial with respect to piγ and has nonnegative coefficients. Using Lemma 3.3 and Corollary 3.2 we show the following: Theorem 3.4. Let Q be the function defined in (3). Let also p(t) ∈ ∆ be the point MWU` (1) outputs at time t with update rule ξ. 
It holds that Q(p(t + 1)) def= Q(ξ(p(t))) > Q(p(t)) unless ξ(p(t)) = p(t) (fixed point). Namely Q is strictly increasing with respect to the number of iterations t unless MWU` is at a fixed point. 7All missing proofs can be found in the full version of this paper http://arxiv.org/abs/1703.01138. Remark 3.5 (Weighted potential games). A congestion game is a potential game because if a player deviates, the difference he experiences in his cost is exactly captured by the deviation of the global (same for all players) function Φ = ∑ e∈E ∑`e(s) j=1 ce(j). In a weighted potential game, it holds that ci(si, s−i)− ci(s′i, s−i) = wi(Φ(si, s−i)− Φ(s′i, s−i)), where wi is some constant not necessarily 1 (as in the potential games case) and vector s−i captures the strategies of all players but i. It is not hard to see that Lemma 3.3 and thus Theorems 3.4 and 3.1 hold in this particular class of games (which is a generalization of congestion games), and so do the rest of the theorems of the section. Effectively, in terms of the weighted potential games analysis, it is possible to reduce it to the standard potential games analysis as follows: Consider the system with learning rates i and cost functions wici so that the game with cost functions ci is a potential game. The only necessary condition that we ask of this system is that iwici(s) < 1 for all i (as in the standard case) so that the enumerators/denominators are positive. By reduction, we can show that for every round T , even if a subset (that depends on the round T ) of the players update their strategy according to MWU` and the rest remain fixed, the potential still decreases. Corollary 3.6 (Any subset). Assume that at time t we partition the players in two sets St, S′t so that we allow only players in St to apply MWU` dynamics, whereas the players in S′t remain fixed. It holds that the expected potential function of the game at time t decreases. As stated earlier in the section, if Q(p(t)) is strictly increasing with respect to time t unless p(t) is a fixed point, it follows that the expected potential function Ψ(p(t)) = const−Q(p(t)) is strictly decreasing unless p(t) is a fixed point and Theorem 3.1 is proved. Moreover, we can derive the fact that our dynamics converges to fixed points as a corollary of Theorem 3.1. Theorem 3.7 (Convergence to fixed points). MWU` dynamics (1) converges to fixed points. We conclude the section by strengthening the convergence result (i.e., Theorem 3.7). We show that if the initial distribution p is in the interior of ∆ then we have convergence to Nash equilibria. Theorem 3.8 (Convergence to Nash equilibria). Assume that the fixed points of (1) are isolated. Let p(0) be a point in the interior of ∆. It follows that limt→∞ p(t) = p∗ is a Nash equilibrium. Proof. We showed in Theorem 3.7 that MWU` dynamics (1) converges, hence limt→∞ p(t) exists (under the assumption that the fixed points are isolated) and is equal to a fixed point of the dynamics p∗. Also it is clear from the dynamics that ∆ is invariant, i.e., ∑ δ∈Sj pjδ(t) = 1, pjδ(t) > 0 for all j and t ≥ 0 since p(0) is in the interior of ∆. Assume that p∗ is not a Nash equilibrium, then there exists a player i and a strategy γ ∈ Si so that ciγ(p ∗) < ĉi(p ∗) (on mixed strategies p∗) and p∗iγ = 0. Fix a ζ > 0 and let Uζ = {p : ciγ(p) < ĉi(p)− ζ}. By continuity we have that Uζ is open. It is also true that p∗ ∈ Uζ for ζ small enough. Since p(t) converges to p∗ as t → ∞, there exists a time t0 so that for all t′ ≥ t0 we have that p(t′) ∈ Uζ . 
However, from MWU` dynamics (1) we get that if p(t′) ∈ Uζ then 1 − ε_i c_iγ(t′) > 1 − ε_i ĉ_i(t′) and hence

p_iγ(t′ + 1) = p_iγ(t′) · [1 − ε_i c_iγ(t′)] / [1 − ε_i ĉ_i(t′)] ≥ p_iγ(t′) > 0,

i.e., p_iγ(t′) is positive and increasing for t′ ≥ t0. We reached a contradiction since p_iγ(t) → p∗_iγ = 0, thus p∗ is a Nash equilibrium.

4 Non-Convergence of MWUe: Limit Cycle and Chaos

We consider a symmetric two-agent congestion game with two edges e1, e2. Both agents have the same two available strategies γ1 = {e1} and γ2 = {e2}. We denote by x and y the probabilities that the first and the second agent, respectively, choose strategy γ1. For the first example, we assume that c_e1(l) = l/2 and c_e2(l) = l/2. Computing the expected costs we get that c_1γ1 = (1+y)/2, c_1γ2 = (2−y)/2, c_2γ1 = (1+x)/2, c_2γ2 = (2−x)/2. MWUe then becomes

x_{t+1} = x_t (1−ε_1)^{(y_t+1)/2} / [x_t (1−ε_1)^{(y_t+1)/2} + (1−x_t)(1−ε_1)^{(2−y_t)/2}]   (first player)

and

y_{t+1} = y_t (1−ε_2)^{(x_t+1)/2} / [y_t (1−ε_2)^{(x_t+1)/2} + (1−y_t)(1−ε_2)^{(2−x_t)/2}]   (second player).

We assume that ε_1 = ε_2 = ε and also that x_0 = y_0 (players start with the same mixed strategy). Due to symmetry, it follows that x_t = y_t for all t ∈ N, thus it suffices to keep track of only one variable (we have reduced the number of variables of the update rule of the dynamics to one) and the dynamics becomes

x_{t+1} = x_t (1−ε)^{(x_t+1)/2} / [x_t (1−ε)^{(x_t+1)/2} + (1−x_t)(1−ε)^{(2−x_t)/2}].

Finally, we choose ε = 1 − e^{−10} and we get

x_{t+1} = H(x_t) = x_t e^{−5(x_t+1)} / [x_t e^{−5(x_t+1)} + (1−x_t) e^{−5(2−x_t)}],

i.e., we denote H(x) = x e^{−5(x+1)} / [x e^{−5(x+1)} + (1−x) e^{−5(2−x)}].

For the second example, we assume that c_e1(l) = l/4 and c_e2(l) = 1.4 l/4. Computing the expected costs we get that c_1γ1 = (1+y)/4, c_1γ2 = 1.4(2−y)/4, c_2γ1 = (1+x)/4, c_2γ2 = 1.4(2−x)/4. MWUe then becomes

x_{t+1} = x_t (1−ε_1)^{(y_t+1)/4} / [x_t (1−ε_1)^{(y_t+1)/4} + (1−x_t)(1−ε_1)^{1.4(2−y_t)/4}]   (first player)

and

y_{t+1} = y_t (1−ε_2)^{(x_t+1)/4} / [y_t (1−ε_2)^{(x_t+1)/4} + (1−y_t)(1−ε_2)^{1.4(2−x_t)/4}]   (second player).

We assume that ε_1 = ε_2 = ε and also that x_0 = y_0 (players start with the same mixed strategy). Similarly, due to symmetry, it follows that x_t = y_t for all t ∈ N, thus it suffices to keep track of only one variable and the dynamics becomes

x_{t+1} = x_t (1−ε)^{(x_t+1)/4} / [x_t (1−ε)^{(x_t+1)/4} + (1−x_t)(1−ε)^{1.4(2−x_t)/4}].

Finally, we choose ε = 1 − e^{−40} and we get

x_{t+1} = G(x_t) = x_t e^{−10(x_t+1)} / [x_t e^{−10(x_t+1)} + (1−x_t) e^{−14(2−x_t)}],

i.e., we denote G(x) = x e^{−10(x+1)} / [x e^{−10(x+1)} + (1−x) e^{−14(2−x)}].

We show the following three statements, the proofs of which can be found in the full version.

Theorem 4.1. For all but a measure zero set S of x ∈ (0, 1) we get that lim_{t→∞} H^{2t}(x) = ρ1 or ρ2. Moreover, H(ρ1) = ρ2 and H(ρ2) = ρ1, i.e., {ρ1, ρ2} is a periodic orbit. Thus, all but a measure zero set S of initial conditions converge to the limit cycle {ρ1, ρ2}. Finally, the initial points in S converge to the equilibrium 1/2.

Theorem 4.2. There exist two-player two-strategy symmetric congestion games such that MWUe has periodic orbits of length n for any natural number n > 0, as well as an uncountably infinite set of “scrambled” initial conditions (Li-Yorke chaos).

Using Theorem 4.2, we conclude with the following corollary.

Corollary 4.3. For any 1 > ε > 0 and n, there exists an n-player congestion game G(ε) (depending on ε) so that MWUe dynamics exhibits Li-Yorke chaos for uncountably many starting points.

5 Conclusion and Future Work

We have analyzed MWU` in congestion games where agents use arbitrary admissible constants as learning rates and showed convergence to exact Nash equilibria.
We have also shown that this result is not true for the nearly homologous exponential variant MWUe, even for the simplest case of two-agent, two-strategy load balancing games. There we prove that such dynamics can provably lead to limit cycles or even chaotic behavior. For a small enough learning rate ε, the behavior of MWUe approaches that of its smooth variant, replicator dynamics, and hence convergence is once again guaranteed [29]. This means that as we increase the learning rate from near zero values we start off with a convergent system and we end up with a chaotic one. Numerical experiments establish that between the convergent region and the chaotic region there exists a range of values for ε for which the system exhibits periodic behavior. Period doubling is known as a standard route to 1-dimensional chaos (e.g. the logistic map) and is characterized by unexpected regularities such as the Feigenbaum constant [39]. Elucidating these connections is an interesting open problem. More generally, what other types of regularities can be established in these non-equilibrium systems? Another interesting question has to do with developing a better understanding of the set of conditions that result in non-converging trajectories. So far, it has been critical for our non-convergent examples that the system starts from a symmetric initial condition. Whether such irregular MWUe trajectories can be constructed for generic initial conditions, possibly in larger congestion games, is not known. Nevertheless, the non-convergent results, despite their non-generic nature, are rather useful since they imply that we cannot hope to leverage the power of Baum-Eagon techniques for MWUe. In conclusion, establishing generic (non)convergence results (e.g. for most initial conditions, most congestion games) for MWUe with constant step size is an interesting future direction.
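For contrast with the non-convergent MWUe examples, the following minimal Python sketch runs the linear dynamics (1) on the same two-agent, two-edge game with c_e(l) = l/2 and the same large constant rate ε = 1 − e^{-10}, and prints the expected potential Ψ(p) = E_{s∼p}[Φ(s)], which by Theorem 3.1 should decrease monotonically. The asymmetric starting point (0.3, 0.6) is an arbitrary choice of ours that makes the convergence to a pure Nash equilibrium visible.

```python
import math

def expected_costs(x, y):
    # Expected strategy costs for the 2-agent game with c_e(l) = l/2.
    return ((1 + y) / 2.0, (2 - y) / 2.0), ((1 + x) / 2.0, (2 - x) / 2.0)

def expected_potential(x, y):
    # Psi(p) = E_{s~p}[Phi(s)]; Phi = 1.5 if both pick the same edge, 1.0 otherwise.
    same = x * y + (1 - x) * (1 - y)
    return 1.5 * same + 1.0 * (1 - same)

def mwu_linear_step(x, y, eps):
    (c1a, c1b), (c2a, c2b) = expected_costs(x, y)
    chat1 = x * c1a + (1 - x) * c1b
    chat2 = y * c2a + (1 - y) * c2b
    x_new = x * (1 - eps * c1a) / (1 - eps * chat1)
    y_new = y * (1 - eps * c2a) / (1 - eps * chat2)
    return x_new, y_new

eps = 1 - math.exp(-10)   # admissible since all costs are at most 1
x, y = 0.3, 0.6
for t in range(15):
    print(t, round(expected_potential(x, y), 6), round(x, 4), round(y, 4))
    x, y = mwu_linear_step(x, y, eps)
```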
1. What are the key contributions and findings of the paper regarding multiplicative weight update algorithms in congestion games? 2. How do the authors approach the study of the algorithm's behavior when the parameter \eps is not updated? 3. Can you explain the difference in convergence behavior between linear and exponential updates? 4. Are there any open questions or areas for further research related to the paper's topics? 5. Do you have any minor comments or suggestions for improving the paper's presentation or references?
Review
Review This paper studies the multiplicative weights update algorithm in congestion games. The typical multiplicative weights algorithm updates the probability of an action by a factor of 1 - \eps C(\gamma) (linear update) or (1 - \eps)^C(\gamma) (exponential update). In either case, generally the parameter \eps is also updated as the algorithm progresses. In this paper, the authors study what happens when one does not update the parameter \eps. In congestion games, the authors show that the linear update always converges. On the other hand, they construct examples of congestion games where for any value of \eps in (0, 1) the exponential update does not converge but instead exhibits something called Li-Yorke chaos. The paper is very well written and uses novel techniques. Minor nits: * Provide a reference for "it is well known that to achieve sub-linear regret, the learning rate eps must be decreasing as time progresses." Also, how does sub-linear regret relate to convergence? * Should Corollary 3.2 be called a corollary? I am assuming it generalizes Baum-Eagon but follows using the same/similar proof.
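A quick way to see why the paper calls the two rules nearly identical for small \eps, yet they behave so differently with a constant step, is to compare the per-step multiplicative factors directly. The short sketch below (the cost value 0.75 is an arbitrary choice of ours) prints 1 - \eps c against (1 - \eps)^c: for small \eps they almost coincide, while for \eps close to 1, as in the paper's Section 4 examples, the exponential factor is orders of magnitude smaller.

```python
import math

def linear_factor(c, eps):
    """MWU_l multiplies the probability of an action of cost c by (1 - eps*c)."""
    return 1.0 - eps * c

def exponential_factor(c, eps):
    """MWU_e (Hedge) multiplies it by (1 - eps)**c instead."""
    return (1.0 - eps) ** c

c = 0.75  # an arbitrary cost in [0, 1]
for eps in (0.01, 0.1, 0.5, 1 - math.exp(-10)):
    print(round(eps, 6), linear_factor(c, eps), exponential_factor(c, eps))
```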
NIPS
Title A Unified Convergence Theorem for Stochastic Optimization Methods Abstract In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires to verify several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods. 1 Introduction Stochastic optimization methods are widely used to solve stochastic optimization problems and empirical risk minimization, serving as one of the foundations of machine learning. Among the many different stochastic methods, the most classic one is the stochastic gradient method (SGD), which dates back to Robbins and Monro [36]. If the problem at hand has a finite-sum structure, then another popular stochastic method is random reshuffling (RR) [20]. When the objective function has a composite form or is weakly convex (nonsmooth and nonconvex), then the stochastic proximal gradient method (prox-SGD) and stochastic model-based algorithms are the most typical approaches [18, 11]. Apart from the mentioned stochastic methods, there are many others like SGD with momentum, Adam, stochastic higher order methods, etc. In this work, our goal is to establish and understand fundamental convergence properties of these stochastic optimization methods via a novel unified convergence framework. Motivations. Suppose we apply SGD to minimize a smooth nonconvex function f . SGD generates a sequence of iterates {xk}k 0, which is a stochastic process due to the randomness of the algorithm and the utilized stochastic oracles. The most commonly seen ‘convergence result’ for SGD is the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). expected iteration complexity, which typically takes the form [17] min k=0,...,T E[krf(xk)k2] O ✓ 1 p T + 1 ◆ or E[krf(xk̄)k2] O ✓ 1 p T + 1 ◆ , (1) where T denotes the total number of iterations and k̄ is an index sampled uniformly at random from {0, . . . , T}. Note that we ignored some higher-order convergence terms and constants to ease the presentation. Complexity results are integral to understand core properties and progress of the algorithm during the first T iterations, while the asymptotic convergence behavior plays an equally important role as it characterizes whether an algorithm can eventually approach an exact stationary point or not. We refer to Appendix H for additional motivational background for studying asymptotic convergence properties of stochastic optimization methods. Here, an expected convergence result, associated with the nonconvex minimization problem minx f(x), has the form lim k!1 E[krf(xk)k] = 0. (2) Intuitively, it should be possible to derive expected convergence from the expected iteration complexity (1) by letting T ! 1. However, this is not the case as the ‘min’ operator and the sampled k̄ are not well defined or become meaningless when T goes to 1. 
The above results are stated in expectation and describe the behavior of the algorithm by averaging infinitely many runs. Though this is an important convergence measure, in practical situations the algorithm is often only run once and the last iterate is returned as a solution. This observation motivates and necessitates almost sure convergence results, which establish convergence with probability 1 for a single run of the stochastic method: lim k!1 krf(xk)k = 0 almost surely. (3) Backgrounds. Expected and almost sure convergence results have been extensively studied for convex optimization; see, e.g., [10, 34, 42, 46, 5, 41]. Almost sure convergence of SGD for minimizing a smooth nonconvex function f was provided in the seminal work [3] using very standard assumptions, i.e., Lipschitz continuous rf and bounded variance. Under the same conditions, the same almost sure convergence of SGD was established in [33] based on a much simpler argument than that of [3]. A weaker ‘lim inf’-type almost sure convergence result for SGD with AdaGrad step sizes was shown in [26]. Recently, the work [28] derives almost sure convergence of SGD under the assumptions that f and rf are Lipschitz continuous, f is coercive, f is not asymptotically flat, and the -th moment of the stochastic error is bounded with 2. This result relies on stronger assumptions than the base results in [3]. Nonetheless, it allows more aggressive diminishing step sizes if > 2. Apart from standard SGD, almost sure convergence of different respective variants for min-max problems was discussed in [22]. In terms of expected convergence, the work [6] showed limk!1 E[krf(xk)k] = 0 under the additional assumptions that f is twice continuously differentiable and the multiplication of the Hessian and gradient r2f(x)rf(x) is Lipschitz continuous. Though the convergence of SGD is well-understood and a classical topic, asymptotic convergence results of the type (2) and (3) often require a careful and separate analysis for other stochastic optimization methods — especially when the objective function is simultaneously nonsmooth and nonconvex. In fact and as outlined, a direct transition from the more common complexity results (1) to the full convergence results (2) and (3) is often not possible without further investigation. Main contributions. We provide a fundamental unified convergence theorem (see Theorem 2.1) for deriving both expected and almost sure convergence of stochastic optimization methods. Our theorem is not tailored to any specific algorithm, instead it incorporates several abstract conditions that suit a vast and general class of problem structures and algorithms. The proof of this theorem is elementary. We then apply our novel theoretical framework to several classical stochastic optimization methods to recover existing and to establish new convergence results. Specifically, we recover expected and almost sure convergence results for SGD and RR. Though these results are largely known in the literature, we derive unified and slightly stronger results under a general ABC condition [24, 23] rather than the standard bounded variance assumption. We also remove the stringent assumption used in [6] to show (2) for SGD. As a core application of our framework, we derive expected and almost sure convergence results for prox-SGD in the nonconvex setting and under the more general ABC condition and for stochastic model-based methods under very standard assumptions. 
In particular, we show that the iterates {xk}k 0 generated by prox-SGD and other stochastic model-based methods will approach the set of stationary points almost surely and in an expectation sense. These results are new to our knowledge (see also Subsection 3.5 for further discussion). The above applications illustrate the general plugin-type purpose of our unified convergence analysis framework. Based on the given recursion and certain properties of the algorithmic update, we can derive broad convergence results by utilizing our theorem, which can significantly simplify the convergence analysis of stochastic optimization methods; see Subsection 2.1 for a summary. 2 A unified convergence theorem Throughout this work, let (⌦,F , {Fk}k 0,P) be a filtered probability space and let us assume that the sequence of iterates {xk}k 0 is adapted to the filtration {Fk}k 0, i.e., each of the random vectors xk : ⌦ ! Rn is Fk-measurable. In this section, we present a unified convergence theorem for the sequence {xk}k 0 based on an abstract convergence measure . To make the abstract convergence theorem more accessible, the readers may momentarily regard and {µk}k 0 as rf and the sequence related to the step sizes, respectively. We then present the main steps for showing the convergence of a stochastic optimization method by following a step-by-step verification of the conditions in our unified convergence theorem. Theorem 2.1. Let the mapping : Rn ! Rm and the sequences {xk}k 0 ✓ Rn and {µk}k 0 ✓ R++ be given. Consider the following conditions: (P.1) The function is L -Lipschitz continuous for some L > 0, i.e., we have k (x) (y)k L kx yk for all x,y 2 Rn. (P.2) There exists a constant a > 0 such that P1 k=0 µk E[k (xk)ka] < 1. The following statements are valid: (i) Let the conditions (P.1)–(P.2) be satisfied and suppose further that (P.3) There exist constants A,B, b 0 and p1, p2, q > 0 such that E[kxk+1 xkkq] Aµp1k + Bµ p2 k E[k (x k)kb]. (P.4) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy {µk}k 0 is bounded, X1 k=0 µk = 1, and a, q 1, a b, p1, p2 q. Then, it holds that limk!1 E[k (xk)k] = 0. (ii) Let the properties (P.1)–(P.2) hold and assume further that (P.30) There exist constants A, b 0, p1, p2, q > 0 and random vectors Ak,Bk : ⌦ ! Rn such that xk+1 = xk + µp1k Ak + µ p2 k Bk and for all k, Ak,Bk are Fk+1-measurable and we have E[Ak | Fk] = 0 almost surely, E[kAkkq] A, and lim supk!1 kBkkq/(1 + k (xk)kb) < 1 almost surely. (P.40) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy µk ! 0, X1 k=0 µk = 1, X1 k=0 µ2p1k < 1, and q 2, qa b, p1 > 1 2 , p2 1. Then, it holds that limk!1 k (xk)k = 0 almost surely. The proof of Theorem 2.1 is elementary. We provide the core ideas here and defer its proof to Appendix A. Item (i) is proved by contradiction. An easy first result is lim infk!1 E[k (xk)ka] = 0. We proceed and assume that {E[k (xk)k]}k 0 does not converge to zero. Then, for some > 0, we can construct two subsequences {`t}t 0 and {ut}t 0 such that `t < ut and E[k (x`t)k] 2 , E[k (xut)ka] a, and E[k (xk)ka] > a for all `t < k < ut. Based on this construction, the conditions in the theorem, and a set of inequalities, we will eventually reach a contradiction. We notice that the Lipschitz continuity of plays a prominent role when establishing this contradiction. Our overall proof strategy is inspired by the analysis of classical trust region-type methods, see, e.g., [9, Theorem 6.4.6]. 
Let us also mention that a different strategy for the fully deterministic setting and scalar case : Rn ! R was provided in [8]. For item (ii), we first control the stochastic behavior of the error terms Ak by martingale convergence theory. We can then conduct sample-based arguments to derive the final result, which is essentially deterministic and hence, follows similar arguments to that of item (i). The major application areas of our unified convergence framework comprise stochastic optimization methods that have non-vanishing stochastic errors or that utilize diminishing step sizes. In the next subsection, we state the main steps for showing convergence of stochastic optimization methods. This also clarifies the abstract conditions listed in the theorem. 2.1 The steps for showing convergence of stochastic optimization methods In order to apply the unified convergence theorem, we have to verify the conditions stated in the theorem, resulting in three main phases below. Phase I: Verifying (P.1)–(P.2). Conditions (P.1)–(P.2) are used for both the expected and the almost sure convergence results. Condition (P.1) is a problem property and is very standard. We present the final convergence results in terms of the abstract measure . This measure can be regarded as f f⇤ in convex optimization, rf in smooth nonconvex optimization, the gradient of the Moreau envelope in weakly convex optimization, etc. In all the situations, assuming Lipschitz continuity of the convergence measure is standard and is arguably a minimal assumption in order to obtain iteration complexity and/or convergence results. Condition (P.2) is typically a result of the algorithmic property or complexity analysis. To verify this condition, one first establishes the recursion of the stochastic method, which almost always has the form E[yk+1 | Fk] (1 + k)yk µkk (xk)ka + ⇣k. Here, yk is a suitable Lyapunov function measuring the (approximate) descent property of the stochastic method, ⇣k represents the error term satisfying P1 k=0 ⇣k < 1, k is often related to the step sizes and satisfies P1 k=0 k < 1. Then, applying the supermartingale convergence theorem (see Theorem B.1), we obtain P1 k=0 µk E[k (xk)ka] < 1, i.e., condition (P.2). Since condition (P.2) is typically a consequence of the underlying algorithmic recursion, one can also derive the standard finite-time complexity bound (1) in terms of the measure E[k (xk)ka] based on it. Hence, non-asymptotic complexity results are also included implicitly in our framework as a special case. To be more specific, (P.2) implies PT k=0 µkE[k (xk)ka] M for some constant M > 0 and some total number of iterations T . This then yields min0kT E[k (xk)ka] M/ PT k=0 µk. Note that the sequence {µk}k 0 is often related to the step sizes. Thus, choosing the step sizes properly results in the standard finite-time complexity result. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. Condition (P.3) requires an upper bound on the step length of the update in terms of expectation, including upper bounds for the search direction and the stochastic error of the algorithm. It is often related to certain bounded variance-type assumptions for analyzing stochastic methods. For instance, (P.3) is satisfied under the standard bounded variance assumption for SGD, the more general ABC assumption for SGD, the bounded stochastic subgradients assumption, etc. Condition (P.4) is a standard diminishing step sizes condition used in stochastic optimization. 
Then, one can apply item (i) of Theorem 2.1 to obtain E[k (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. Condition (P.30) is parallel to (P.3). It decomposes the update into a martingale term Ak and a bounded error term Bk. We will see later that this condition holds true for many stochastic methods. Though this condition requires the update to have a certain decomposable form, it indeed can be verified by bounding the step length of the update in conditional expectation, which is similar to (P.3). Hence, (P.30) can be interpreted as a conditional version of (P.3). To see this, we can construct xk+1 = xk + µk · 1 µk xk+1 xk E[xk+1 xk | Fk] Ak +µk · 1 µk E[xk+1 xk | Fk] Bk . (4) By Jensen’s inequality, we then have E[Ak | Fk] = 0, E[kAkkq] 2qµ qk · E[kx k+1 xkkq], and kBkkq µ qk · E[kx k+1 xkkq | Fk]. Thus, once it is possible to derive E[kxk+1 xkkq | Fk] = O(µqk) in an almost sure sense, condition (P.30) is verified with p1 = p2 = 1. Condition (P.40) is parallel to (P.4) and is standard in stochastic optimization. Application of item (ii) of Theorem 2.1 then yields k (xk)k ! 0 almost surely. In the next section, we will illustrate how to show convergence for a set of classic stochastic methods by following the above three steps. 3 Applications to stochastic optimization methods 3.1 Convergence results of SGD We consider the standard SGD method for solving the smooth optimization problem minx2Rn f(x), where the iteration of SGD is given by xk+1 = xk ↵kg k. (5) Here, gk denotes a stochastic approximation of the gradient rf(xk). We assume that each stochastic gradient gk is Fk+1-measurable and that the generated stochastic process {xk}k 0 is adapted to the filtration {Fk}k 0. We consider the following standard assumptions: (A.1) The mapping rf : Rn ! Rn is Lipschitz continuous on Rn with modulus L > 0. (A.2) The objective function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn. (A.3) Each oracle gk defines an unbiased estimator of rf(xk), i.e., it holds that E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (A.4) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. We now derive the convergence of SGD below by setting ⌘ rf and µk ⌘ ↵k. Phase I: Verifying (P.1)–(P.2). (A.1) verifies condition (P.1) with L ⌘ L. We now check (P.2). Using (A.2), (A.3), and a standard analysis for SGD gives the following recursion (see Appendix C.1 for the full derivation): E[f(xk+1) f̄ | Fk] ✓ 1 + LC↵2k 2 ◆ [f(xk) f̄ ] ↵k ✓ 1 L↵k 2 ◆ krf(xk)k2 + LD↵2k 2 . (6) Taking total expectation, using (A.4), and applying the supermartingale convergence theorem (Theorem B.1) gives P1 k=0 ↵kE[krf(xk)k2] < 1. Furthermore, the sequence {E[f(xk)]}k 0 converges to some finite value. This verifies (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. For (P.3), we have by (5) and (A.3) that E[kxk+1 xkk2] ↵2kE[krf(xk)k2] + C↵2kE[f(xk) f̄ ] + D↵2k. Due to the convergence of {E[f(xk)]}k 0, there exists F such that E[f(xk) f̄ ] F for all k. Thus, condition (P.3) holds with q = 2, A = CF+ D, p1 = 2, B = 1, p2 = 2, and b = 2. Condition (P.4) is verified by (A.4) and the previous parameters choices. Therefore, we can apply Theorem 2.1 to deduce E[krf(xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. For (P.30), it follows from the update (5) that xk+1 = xk ↵k(g k rf(xk)) ↵krf(x k). 
We have $p_1 = 1$, $A_k = -(g^k - \nabla f(x^k))$, $p_2 = 1$, and $B_k = -\nabla f(x^k)$. Using (A.2), (A.3), $\mathbb{E}[f(x^k) - \bar f] \le F$, and choosing any $q = b > 0$ establishes (P.3′). As before, condition (P.4′) follows from (A.4) and the previous parameter choices. Applying Theorem 2.1 yields $\|\nabla f(x^k)\| \to 0$ almost surely. Finally, we summarize the above results in the following corollary.

Corollary 3.1. Let us consider SGD (5) for smooth nonconvex optimization problems under (A.1)–(A.4). Then, we have $\lim_{k\to\infty} \mathbb{E}[\|\nabla f(x^k)\|] = 0$ and $\lim_{k\to\infty} \|\nabla f(x^k)\| = 0$ almost surely.

3.2 Convergence results of random reshuffling

We now consider random reshuffling (RR) applied to problems with a finite-sum structure
$$\min_{x \in \mathbb{R}^n} f(x) := \frac{1}{N}\sum_{i=1}^{N} f(x, i),$$
where each component function $f(\cdot, i) : \mathbb{R}^n \to \mathbb{R}$ is supposed to be smooth. At iteration $k$, RR first generates a random permutation $\pi^{k+1}$ of the index set $\{1, \dots, N\}$. It then updates $x^k$ to $x^{k+1}$ through $N$ consecutive gradient descent-type steps by accessing and using the component gradients $\{\nabla f(\cdot, \pi_1^{k+1}), \dots, \nabla f(\cdot, \pi_N^{k+1})\}$ sequentially. Specifically, one update-loop (epoch) of RR is given by
$$\tilde x_0^k = x^k, \quad \tilde x_i^k = \tilde x_{i-1}^k - \alpha_k \nabla f(\tilde x_{i-1}^k, \pi_i^{k+1}), \quad i = 1, \dots, N, \quad x^{k+1} = \tilde x_N^k. \quad (7)$$
After one such loop, the step size $\alpha_k$ and the permutation $\pi^{k+1}$ are updated accordingly; cf. [20, 30, 32]. We make the following standard assumptions:

(B.1) For all $i \in \{1, \dots, N\}$, $f(\cdot, i)$ is bounded from below by some $\bar f$ and the gradient $\nabla f(\cdot, i)$ is Lipschitz continuous on $\mathbb{R}^n$ with modulus $L > 0$.
(B.2) The step sizes $\{\alpha_k\}_{k \ge 0}$ satisfy $\sum_{k=0}^{\infty} \alpha_k = \infty$ and $\sum_{k=0}^{\infty} \alpha_k^3 < \infty$.

A detailed derivation of the steps shown in Subsection 2.1 for RR is deferred to Appendix D.2. Based on the discussion in Appendix D.2 and on Theorem 2.1, we obtain the following results for RR.

Corollary 3.2. We consider RR (7) for smooth nonconvex optimization problems under (B.1)–(B.2). Then it holds that $\lim_{k\to\infty} \mathbb{E}[\|\nabla f(x^k)\|] = 0$ and $\lim_{k\to\infty} \|\nabla f(x^k)\| = 0$ almost surely.

3.3 Convergence of the proximal stochastic gradient method

We consider the composite-type optimization problem
$$\min_{x \in \mathbb{R}^n} \psi(x) := f(x) + \varphi(x) \quad (8)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and $\varphi : \mathbb{R}^n \to (-\infty, \infty]$ is $\tau$-weakly convex (see Appendix E.1), proper, and lower semicontinuous. In this section, we want to apply our unified framework to study the convergence behavior of the well-known proximal stochastic gradient method (prox-SGD):
$$x^{k+1} = \mathrm{prox}_{\alpha_k \varphi}(x^k - \alpha_k g^k), \quad (9)$$
where $g^k \approx \nabla f(x^k)$ is a stochastic approximation of $\nabla f(x^k)$, $\{\alpha_k\}_{k \ge 0} \subseteq \mathbb{R}_+$ is a suitable step size sequence, and $\mathrm{prox}_{\alpha_k \varphi} : \mathbb{R}^n \to \mathbb{R}^n$, $\mathrm{prox}_{\alpha_k \varphi}(x) := \mathrm{argmin}_{y \in \mathbb{R}^n}\, \varphi(y) + \frac{1}{2\alpha_k}\|x - y\|^2$ is the well-known proximity operator of $\varphi$.

3.3.1 Assumptions and preparations

We first recall several useful concepts from nonsmooth and variational analysis. For a function $h : \mathbb{R}^n \to (-\infty, \infty]$, the Fréchet (or regular) subdifferential of $h$ at the point $x$ is given by $\partial h(x) := \{g \in \mathbb{R}^n : h(y) \ge h(x) + \langle g, y - x\rangle + o(\|y - x\|) \text{ as } y \to x\}$, see, e.g., [39, Chapter 8]. If $h$ is convex, then the Fréchet subdifferential coincides with the standard (convex) subdifferential. It is well known that the associated first-order optimality condition for the composite problem (8) — $0 \in \partial \psi(x) = \nabla f(x) + \partial \varphi(x)$ — can be represented as a nonsmooth equation, [39, 21],
$$F_{\mathrm{nat}}^{\alpha}(x) := x - \mathrm{prox}_{\alpha\varphi}(x - \alpha \nabla f(x)) = 0, \quad \alpha \in (0, \tau^{-1}),$$
where $F_{\mathrm{nat}}^{\alpha}$ denotes the so-called natural residual. The natural residual $F_{\mathrm{nat}}^{\alpha}$ is a common stationarity measure for the nonsmooth problem (8) and is widely used in the analysis of proximal methods.
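To make these objects concrete, the following minimal Python sketch (an illustration on a toy problem of our own choosing, not an implementation from the paper) runs prox-SGD (9) on a small $\ell_1$-regularized least-squares instance. Here $\varphi = \lambda\|\cdot\|_1$ is convex, hence $0$-weakly convex, and Lipschitz continuous on $\mathbb{R}^n$; its proximity operator is the familiar soft-thresholding map, and the norm of the natural residual $F_{\mathrm{nat}}^{\alpha}$ is monitored as the stationarity measure. The problem sizes, regularization weight, step-size schedule, and variable names are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composite problem: f(x) = (1/(2m)) * ||A x - b||^2 (smooth part, an average over rows),
# phi(x) = lam * ||x||_1 (nonsmooth, convex, Lipschitz continuous on R^n).
m, n, lam = 200, 20, 0.1
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def full_grad(x):
    return A.T @ (A @ x - b) / m

def stoch_grad(x, batch=10):
    idx = rng.integers(0, m, size=batch)            # unbiased mini-batch gradient estimate
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

def natural_residual(x, alpha=1.0):
    """F_nat^alpha(x) = x - prox_{alpha*phi}(x - alpha * grad f(x))."""
    return x - soft_threshold(x - alpha * full_grad(x), alpha * lam)

x = np.zeros(n)
for k in range(5000):
    alpha_k = 1.0 / (k + 1) ** 0.75                                  # diminishing step sizes
    x = soft_threshold(x - alpha_k * stoch_grad(x), alpha_k * lam)   # prox-SGD step (9)
    if (k + 1) % 1000 == 0:
        print(k + 1, np.linalg.norm(natural_residual(x)))
```

On such a toy instance the printed residual norm typically decreases along the run, mirroring the qualitative behavior that the analysis in this section establishes for prox-SGD.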
We will make the following assumptions on f , ', and the stochastic oracles {gk}k 0: (C.1) The function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn, and the gradient mapping rf is Lipschitz continuous (on Rn) with modulus L > 0. (C.2) The function ' is ⌧ -weakly convex, proper, lower semicontinuous, and bounded from below on dom', i.e., we have '(x) '̄ for all x 2 dom'. (C.3) There exists L' > 0 such that '(x) '(y) L'kx yk for all x,y 2 dom'. (C.4) Each gk defines an unbiased estimator of rf(xk), i.e., we have E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (C.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Here, we again assume that the generated stochastic processes {xk}k 0 is adapted to the filtration {Fk}k 0. The assumptions (C.1), (C.2), (C.4), and (C.5) are fairly standard and broadly applicable. In particular, (C.1), (C.4), and (C.5) coincide with the conditions (A.1)–(A.4) used in the analysis of SGD. We continue with several remarks concerning condition (C.3). Remark 3.3. Assumption (C.3) requires the mapping ' to be Lipschitz continuous on its effective domain dom'. This condition holds in many important applications, e.g., when ' is chosen as a norm or indicator function. Nonconvex examples satisfying (C.2) and (C.3) include, e.g., the minimax concave penalty (MCP) function [45], the smoothly clipped absolute deviation (SCAD) [15], or the student-t loss function. We refer to [4] and Appendix E.2 for further discussion. 3.3.2 Convergence results of prox-SGD We now analyze the convergence of the random process {xk}k 0 generated by the stochastic algorithmic scheme (9). As pioneered in [11], we will use the Moreau envelope env✓ , env✓ : Rn ! R, env✓ (x) := miny2Rn (y) + 1 2✓ kx yk2, (10) as a smooth Lyapunov function to study the descent properties and convergence of prox-SGD. We first note that the conditions (C.1) and (C.2) imply ✓ 1-weak convexity of for every ✓ 2 (0, (L+ ⌧) 1]. In this case, the Moreau envelope env✓ is a well-defined and continuously differentiable function with gradient renv✓ (x) = 1✓ (x prox✓ (x)); see, e.g., [38, Theorem 31.5]. As shown in [13, 11], the norm of the Moreau envelope — krenv✓ (x)k — defines an alternative stationarity measure for problem (8) that is equivalent to the natural residual if ✓ is chosen sufficiently small. A more explicit derivation of this connection is provided in Lemma E.1. Next, we establish convergence of prox-SGD by setting ⌘ renv✓ and µk ⌘ ↵k. Our analysis is based on the following two estimates which are verified in Appendix E.4 and Appendix E.5. Lemma 3.4. Let {xk}k 0 be generated by prox-SGD and let the assumptions (C.1)–(C.4) be satisfied. Then, for ✓ 2 (0, [3L+ ⌧ ] 1) and all k with ↵k min{ 12⌧ , 1 2(✓ 1 [L+⌧ ])}, it holds that E[env✓ (xk+1) ̄ | Fk] (1 + 4C✓ 1↵2k) · [env✓ (xk) ̄] L✓↵kkrenv✓ (x k)k2 + 2↵2k(CL 2 ' + D✓ 1), (11) almost surely, where ̄ := f̄ + '̄. Lemma 3.5. Let {xk}k 0 be generated by prox-SGD and suppose that the assumptions (C.1)–(C.4) hold. Then, for ✓ 2 (0, [ 43L+ ⌧ ] 1) and all k with ↵k 12⌧ , we have almost surely E[kxk+1 xkk2 | Fk] 8(2L+ C)↵2k · [env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k. (12) Phase I: Verifying (P.1)–(P.2). In [21, Corollary 3.4], it is shown that the gradient of the Moreau envelope is Lipschitz continuous with modulus Le := max{✓ 1, (1 [L + ⌧ ]✓) 1[L+ ⌧ ]} for all ✓ 2 (0, [L+ ⌧ ] 1). Thus, condition (P.1) is satisfied. Furthermore, due to ↵k ! 
0 and choosing ✓ 2 (0, [3L + ⌧ ] 1), the estimate (11) in Lemma 3.4 holds for all k sufficiently large. Consequently, due to env✓ (x) (prox✓ (x)) ̄ and (C.5), Theorem B.1 is applicable and upon taking total expectation, {E[env✓ (xk)]}k 0 converges to some E 2 R. In addition, the sequence {env✓ (xk)}k 0 converges almost surely to some random variable e? and we have P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. This verifies condition (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Assumptions (C.1)–(C.5) and Lemma 3.5 allow us to establish the required bound stated in (P.3). Specifically, taking total expectation in (12), we have E[kxk+1 xkk2] 8(2L+ C)↵2k · E[env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k for all k sufficiently large. Due to E[env✓ (xk)] ! E, there exists F such that E[env✓ (xk) ̄] F for all k. Hence, (P.3) holds with q = 2, A = 8(2L+C)F+4(((2L+C)✓+1)L2' +D), p1 = 2, and B = 0. The property (P.4) easily follows from (C.5) and the parameter choices. Consequently, using Theorem 2.1, we can infer E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. We follow the construction in (4) and set Ak = ↵ 1k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk], and p1, p2 = 1. Clearly, we have E[Ak | Fk] = 0 and based on the previous results in Phase II, we can show E[kxk+1 xkk2] = O(↵2k) which establishes boundedness of {E[kAkk2}k 0. Similarly, for Bk and by Lemma 3.5 and Jensen’s inequality, we obtain kBkk 2 ↵ 2k E[kx k+1 xkk2 | Fk] 8(2L+ C) · [env✓ (x k) ̄] +O(1). Due to env✓ (xk) ! e? almost surely, this shows lim supk!1 kBkk2 < 1 almost surely. Hence, all requirements in (P.30) are satisfied with q = 2 and b = 0. Moreover, it is easy to see that property (P.40) also holds in this case. Overall, Theorem 2.1 implies krenv✓ (xk)k ! 0 almost surely. As mentioned, it is possible to express the obtained convergence results in terms of the natural residual Fnat = F 1nat, see, e.g., Lemma E.1. We summarize our observations in the following corollary. Corollary 3.6. Let us consider prox-SGD (9) for the composite problem (8) under (C.1)–(C.5). Then, we have limk!1 E[kFnat(xk)k] = 0 and limk!1 kFnat(xk)k = 0 almost surely. Remark 3.7. As a byproduct, Lemma 3.4 also leads to an expected iteration complexity result of prox-SGD by using the ABC condition (C.4) rather than the standard bounded variance assumption. This is a nontrivial extension of [11, Corollary 3.6]. We provide a full derivation in Appendix E.6. 3.4 Convergence of stochastic model-based methods In this section, we consider the convergence of stochastic model-based methods for nonsmooth weakly convex optimization problems minx2Rn (x) := f(x) + '(x) = E⇠⇠D[f(x, ⇠)] + '(x), (13) where both f and ' are assumed to be (nonsmooth) weakly convex functions and is lower bounded, i.e., (x) ̄ for all x 2 dom'. Classical stochastic optimization methods — including proximal stochastic subgradient, stochastic proximal point, and stochastic prox-linear methods — for solving (13) are unified by the stochastic model-based methods (SMM) [14, 11]: xk+1 = argminx2Rn fxk(x, ⇠ k) + '(x) + 1 2↵k kx xkk2, (14) where fxk(x, ⇠k) is a stochastic approximation of f around xk using the sample ⇠k; see Appendix F.1 for descriptions of three major types of SMM. Setting Fk := (⇠0, . . . , ⇠k 1), it is easy to see that {xk}k 0 is adapted to {Fk}k 0. We analyze convergence of SMM under the following assumptions. 
(D.1) The stochastic approximation function fx satisfies a one-sided accuracy property, i.e., we have E⇠[fx(x, ⇠)] = f(x) for all x 2 U and E⇠[fx(y, ⇠) f(y)] ⌧ 2 kx yk2 8 x,y 2 U, where U is an open convex set containing dom'. (D.2) The function y 7! fx(y, ⇠) + '(y) is ⌘-weakly convex for all x 2 U and almost every ⇠. (D.3) There exists L > 0 such that the stochastic approximation function fx satisfies fx(x, ⇠) fx(y, ⇠) Lkx yk 8 x,y 2 U, and almost every ⇠. (D.4) The function ' is L'-Lipschitz continuous. (D.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Assumptions (D.1), (D.2), (D.3) are standard for analyzing SMM and identical to that of [11]. (D.5) is convention for stochastic methods. Assumption (D.4) mimics (C.3); see Remark 3.3 for discussions. We now derive the convergence of SMM below by setting ⌘ renv✓ and µk ⌘ ↵k. Our derivation is based on the following two estimates, in which the proof of Lemma 3.9 is given in Appendix F.2. Lemma 3.8 (Theorem 4.3 of [11]). Let ✓ 2 (0, (⌧ + ⌘) 1) and ↵k < ✓ be given. Then, we have E[env✓ (xk+1) | Fk] env✓ (xk) (1 [⌧ + ⌘]✓)↵k 2(1 ⌘↵k) krenv✓ (x k)k2 + 2L2↵2k (1 ⌘↵k)(✓ ↵k) . Lemma 3.9. For all k with ↵k 1/(2⌘), it holds that E[kxk+1 xkk2 | Fk] (16(L+ L')2 + 8L2)↵2k. Phase I: Verifying (P.1)–(P.2). As before, [21, Corollary 3.4] implies that the mapping renv✓ is Lipschitz continuous for all ✓ 2 (0, (⌧ + ⌘) 1) Hence, condition (P.1) is satisfied. Using ↵k ! 0, we can apply Theorem B.1 to the recursion obtained in Lemma 3.8 for all k sufficiently large and it follows P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. Thus, condition (P.2) holds with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Taking total expectation in Lemma 3.9 verifies condition (P.3) with q = 2, A = (16(L+ L')2 +8L2), p1 = 2, B = 0. Moreover, condition (P.4) is true by assumption (D.5) and the previous parameters choices. Thus, applying Theorem 2.1 gives E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. As in (4), we can set Ak = ↵ 1 k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk]. Applying Lemma 3.9 and utilizing Jensen’s inequality, we have E[Ak | Fk] = 0, E[kAkk2] (4/↵2k)E[kxk+1 xkk2] 4(16(L + L')2 + 8L2) and kBkk2 16(L + L')2 + 8L2. Thus, condition (P.30) is satisfied with p1 = p2 = 1, q = 2. Assumption (D.5), together with the previous parameter choices verifies condition (P.40) and hence, applying Theorem 2.1 yields krenv✓ (xk)k ! 0 almost surely. Summarizing this discussion, we obtain the following convergence results for SMM. Corollary 3.10. We consider the family of stochastic model-based methods (14) for the optimization problem (13) under assumptions (D.1)–(D.5). Let {xk}k 0 be a generated sequence. Then, we have limk!1 E[krenv✓ (xk)k] = 0 and limk!1 krenv✓ (xk)k = 0 almost surely. Remark 3.11. The results presented in Corollary 3.10 also hold under certain extended settings. In fact, we can replace (D.3) by a slightly more general Lipschitz continuity assumption on f . Moreover, it is possible to establish convergence in the case where f is not Lipschitz continuous but has Lipschitz continuous gradient, which is particularly useful when we apply stochastic proximal point method for smooth f . A more detailed derivation and discussion of such extensions is deferred to Appendix F.3. 3.5 Related work and discussion SGD and RR. The literature for SGD is extremely rich and several connected and recent works have been discussed in Section 1. 
Our result in Corollary 3.1 unifies many of the existing convergence analyses of SGD and is based on the general ABC condition (A.3) (see [23, 24, 19] for comparison) rather than on the standard bounded variance assumption. Our expected convergence result generalizes the one in [6] using much weaker assumptions. Our results for RR are in line with the recent theoretical observations in [30, 32, 25]. In particular, Corollary 3.2 recovers the almost sure convergence result shown in [25], while the expected convergence result appears to be new. Prox-SGD and SMM. The work [11] established one of the first complexity results for prox-SGD using the Moreau envelope. Under a bounded variance assumption (C = 0 in condition (C.4)) and for general nonconvex and smooth f , the authors showed E[krenv✓ (xk̄)k2] = O((T + 1) 1/2), where xk̄ is sampled uniformly from the past T + 1 iterates x0, . . . ,xT . As mentioned, this result cannot be easily extended to the asymptotic convergence results discussed in this paper. Earlier studies of prox-SGD for nonconvex f and C = 0 include [18] where convergence of prox-SGD is established if the variance parameter D = Dk ! 0 vanishes as k ! 1. This can be achieved by progressively increasing the size of the selected mini-batches or via variance reduction techniques as in prox-SVRG and prox-SAGA, see [35]. The question whether prox-SGD can converge and whether the accumulation points of the iterates {xk}k 0 correspond to stationary points was only addressed recently in [27]. The authors use a differential inclusion approach to study convergence of prox-SGD. However, additional compact constraints x 2 X have to be introduced in the model (8) to guarantee sure boundedness of {xk}k 0 and applicability of the differential inclusion techniques. Lipschitz continuity of ' also appears as an essential requirement in [27, Theorem 5.4]. The analyses in [14, 12] establish asymptotic convergence guarantees for SMM. However, both works require a priori (almost) sure boundedness of {xk}k 0 and a density / Sard-type condition in order to show convergence. We refer to [16] for an extension of the results in [27, 12] to prox-SGD in Hilbert spaces. By contrast, our convergence framework allows to complement these differential inclusion-based results and — for the first time — fully removes any stringent boundedness assumption on {xk}k 0. Instead, our analysis relies on more transparent assumptions that are verifiable and common in stochastic optimization and machine learning. In summary, we are now able to claim: prox-SGD and SMM converge under standard stochastic conditions if ' is Lipschitz continuous. In the easier convex case, analogous results have been obtained, e.g., in [18, 1, 40]. We provide an overview of several related and representative results in Table 1 in Appendix G. 4 Conclusion In this work, we provided a novel convergence framework that allows to derive expected and almost sure convergence results for a vast class of stochastic optimization methods under state-of-the-art assumptions and in a unified way. We specified the steps on how to utilize our theorem in order to establish convergence results for a given stochastic algorithm. As concrete examples, we applied our theorem to derive asymptotic convergence guarantees for SGD, RR, prox-SGD, and SMM. To our surprise, some of the obtained results appear to be new and provide new insights into the convergence behavior of some well-known and standard stochastic methodologies. 
These applications revealed that our unified theorem can serve as a plugin-type tool with the potential to facilitate the convergence analysis of a wide class of stochastic optimization methods. Finally, it is important to investigate in which situations our convergence results in terms of the stationarity measure can be strengthened — say to almost sure convergence guarantees for the iterates $\{x^k\}_{k \ge 0}$. We plan to consider such a possible extension in future work.

Acknowledgments and Disclosure of Funding

The authors would like to thank the Area Chair and anonymous reviewers for their detailed and constructive comments, which have helped greatly to improve the quality and presentation of the manuscript. In addition, we would like to thank Michael Ulbrich for valuable feedback and comments on an earlier version of this work. X. Li was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12201534 and 72150002, by the Shenzhen Science and Technology Program under Grant No. RCBS20210609103708017, and by the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) under Grant No. AC01202101108. A. Milzarek was partly supported by the National Natural Science Foundation of China (NSFC) – Foreign Young Scholar Research Fund Project (RFIS) under Grant No. 12150410304 and by the Fundamental Research Fund – Shenzhen Research Institute of Big Data (SRIBD) Startup Fund JCYJ-AM20190601.
1. What is the main contribution of the paper regarding stochastic optimization methods?
2. What are the strengths and weaknesses of the proposed unified convergence analysis framework?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. Are there any questions or suggestions regarding the applicability and comparison of the proposed framework with other recent developments in stochastic optimization algorithms?
5. What are the limitations of the proposed framework, and how can they be addressed?
Summary Of The Paper
This manuscript provides a unified asymptotic convergence analysis framework for several centralized stochastic optimization methods such as SGD, random reshuffling, proximal SGD, and stochastic model-based methods. By introducing two sets of general conditions on the abstract convergence measure as well as on the sequence generated by stochastic optimization algorithms, the authors derive expected and almost-sure convergence to stationary points, respectively. They then apply this result to the abovementioned algorithms to obtain asymptotic convergence guarantees in the expected and almost-sure senses, which either recover existing convergence results (possibly under weaker assumptions) or generate new results.

Strengths And Weaknesses
Originality: The main novelty of this paper relies on the generality of the proposed unified convergence analysis based on the abstract stationarity measure \Phi, which seems new to the reviewer. The major comments are summarized as follows:

Strengths:
- By introducing general conditions on the abstract stationarity measure \Phi, the theoretical analysis enjoys high flexibility to recover or complete the expected and almost-sure asymptotic convergence results of existing stochastic optimization methods under several problem settings, and thus has the potential to simplify algorithm design and analysis.
- The paper is well written and easy to follow. The convergence analysis for the SGD, RR, prox-SGD, and SMM methods is clear and reveals some insights among these methods.

Weaknesses:
- On technical novelty: Since the considered stochastic optimization methods have been well studied and the proof techniques used in this paper are standard, the technical contribution is limited to some extent.
- On the obtained rate results: For non-convex problems, convergence to a stationary point of the objective function is not very informative, since the result cannot declare any optimality. From this perspective, the obtained asymptotic convergence results are even weaker than the existing iteration complexity results, which further show non-asymptotic convergence rates under certain measures.

Questions
- On the generality of the proposed framework: To show the generality of the proposed unified theorem, the reviewer would like to see whether it can be applied to other recently developed stochastic optimization algorithms that employ variance-reduction techniques and varying sizes of mini-batches.
- On the significance of the contribution: It is not very clear how the proposed framework improves the existing analysis. The reviewer would thus suggest adding a table that compares the convergence results of existing stochastic algorithms with those obtained from the proposed framework, with respect to different assumptions and problem settings, to illustrate the generality and reveal some insights, if any.

Limitations
The generality of the proposed framework is not very clear to the reviewer. The authors are thus suggested to make clear the scope of stochastic optimization algorithms that can be incorporated into this unified convergence analysis.
NIPS
Title A Unified Convergence Theorem for Stochastic Optimization Methods Abstract In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires to verify several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods. 1 Introduction Stochastic optimization methods are widely used to solve stochastic optimization problems and empirical risk minimization, serving as one of the foundations of machine learning. Among the many different stochastic methods, the most classic one is the stochastic gradient method (SGD), which dates back to Robbins and Monro [36]. If the problem at hand has a finite-sum structure, then another popular stochastic method is random reshuffling (RR) [20]. When the objective function has a composite form or is weakly convex (nonsmooth and nonconvex), then the stochastic proximal gradient method (prox-SGD) and stochastic model-based algorithms are the most typical approaches [18, 11]. Apart from the mentioned stochastic methods, there are many others like SGD with momentum, Adam, stochastic higher order methods, etc. In this work, our goal is to establish and understand fundamental convergence properties of these stochastic optimization methods via a novel unified convergence framework. Motivations. Suppose we apply SGD to minimize a smooth nonconvex function f . SGD generates a sequence of iterates {xk}k 0, which is a stochastic process due to the randomness of the algorithm and the utilized stochastic oracles. The most commonly seen ‘convergence result’ for SGD is the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). expected iteration complexity, which typically takes the form [17] min k=0,...,T E[krf(xk)k2] O ✓ 1 p T + 1 ◆ or E[krf(xk̄)k2] O ✓ 1 p T + 1 ◆ , (1) where T denotes the total number of iterations and k̄ is an index sampled uniformly at random from {0, . . . , T}. Note that we ignored some higher-order convergence terms and constants to ease the presentation. Complexity results are integral to understand core properties and progress of the algorithm during the first T iterations, while the asymptotic convergence behavior plays an equally important role as it characterizes whether an algorithm can eventually approach an exact stationary point or not. We refer to Appendix H for additional motivational background for studying asymptotic convergence properties of stochastic optimization methods. Here, an expected convergence result, associated with the nonconvex minimization problem minx f(x), has the form lim k!1 E[krf(xk)k] = 0. (2) Intuitively, it should be possible to derive expected convergence from the expected iteration complexity (1) by letting T ! 1. However, this is not the case as the ‘min’ operator and the sampled k̄ are not well defined or become meaningless when T goes to 1. 
The above results are stated in expectation and describe the behavior of the algorithm by averaging infinitely many runs. Though this is an important convergence measure, in practical situations the algorithm is often only run once and the last iterate is returned as a solution. This observation motivates and necessitates almost sure convergence results, which establish convergence with probability 1 for a single run of the stochastic method: lim k!1 krf(xk)k = 0 almost surely. (3) Backgrounds. Expected and almost sure convergence results have been extensively studied for convex optimization; see, e.g., [10, 34, 42, 46, 5, 41]. Almost sure convergence of SGD for minimizing a smooth nonconvex function f was provided in the seminal work [3] using very standard assumptions, i.e., Lipschitz continuous rf and bounded variance. Under the same conditions, the same almost sure convergence of SGD was established in [33] based on a much simpler argument than that of [3]. A weaker ‘lim inf’-type almost sure convergence result for SGD with AdaGrad step sizes was shown in [26]. Recently, the work [28] derives almost sure convergence of SGD under the assumptions that f and rf are Lipschitz continuous, f is coercive, f is not asymptotically flat, and the -th moment of the stochastic error is bounded with 2. This result relies on stronger assumptions than the base results in [3]. Nonetheless, it allows more aggressive diminishing step sizes if > 2. Apart from standard SGD, almost sure convergence of different respective variants for min-max problems was discussed in [22]. In terms of expected convergence, the work [6] showed limk!1 E[krf(xk)k] = 0 under the additional assumptions that f is twice continuously differentiable and the multiplication of the Hessian and gradient r2f(x)rf(x) is Lipschitz continuous. Though the convergence of SGD is well-understood and a classical topic, asymptotic convergence results of the type (2) and (3) often require a careful and separate analysis for other stochastic optimization methods — especially when the objective function is simultaneously nonsmooth and nonconvex. In fact and as outlined, a direct transition from the more common complexity results (1) to the full convergence results (2) and (3) is often not possible without further investigation. Main contributions. We provide a fundamental unified convergence theorem (see Theorem 2.1) for deriving both expected and almost sure convergence of stochastic optimization methods. Our theorem is not tailored to any specific algorithm, instead it incorporates several abstract conditions that suit a vast and general class of problem structures and algorithms. The proof of this theorem is elementary. We then apply our novel theoretical framework to several classical stochastic optimization methods to recover existing and to establish new convergence results. Specifically, we recover expected and almost sure convergence results for SGD and RR. Though these results are largely known in the literature, we derive unified and slightly stronger results under a general ABC condition [24, 23] rather than the standard bounded variance assumption. We also remove the stringent assumption used in [6] to show (2) for SGD. As a core application of our framework, we derive expected and almost sure convergence results for prox-SGD in the nonconvex setting and under the more general ABC condition and for stochastic model-based methods under very standard assumptions. 
In particular, we show that the iterates {xk}k 0 generated by prox-SGD and other stochastic model-based methods will approach the set of stationary points almost surely and in an expectation sense. These results are new to our knowledge (see also Subsection 3.5 for further discussion). The above applications illustrate the general plugin-type purpose of our unified convergence analysis framework. Based on the given recursion and certain properties of the algorithmic update, we can derive broad convergence results by utilizing our theorem, which can significantly simplify the convergence analysis of stochastic optimization methods; see Subsection 2.1 for a summary. 2 A unified convergence theorem Throughout this work, let (⌦,F , {Fk}k 0,P) be a filtered probability space and let us assume that the sequence of iterates {xk}k 0 is adapted to the filtration {Fk}k 0, i.e., each of the random vectors xk : ⌦ ! Rn is Fk-measurable. In this section, we present a unified convergence theorem for the sequence {xk}k 0 based on an abstract convergence measure . To make the abstract convergence theorem more accessible, the readers may momentarily regard and {µk}k 0 as rf and the sequence related to the step sizes, respectively. We then present the main steps for showing the convergence of a stochastic optimization method by following a step-by-step verification of the conditions in our unified convergence theorem. Theorem 2.1. Let the mapping : Rn ! Rm and the sequences {xk}k 0 ✓ Rn and {µk}k 0 ✓ R++ be given. Consider the following conditions: (P.1) The function is L -Lipschitz continuous for some L > 0, i.e., we have k (x) (y)k L kx yk for all x,y 2 Rn. (P.2) There exists a constant a > 0 such that P1 k=0 µk E[k (xk)ka] < 1. The following statements are valid: (i) Let the conditions (P.1)–(P.2) be satisfied and suppose further that (P.3) There exist constants A,B, b 0 and p1, p2, q > 0 such that E[kxk+1 xkkq] Aµp1k + Bµ p2 k E[k (x k)kb]. (P.4) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy {µk}k 0 is bounded, X1 k=0 µk = 1, and a, q 1, a b, p1, p2 q. Then, it holds that limk!1 E[k (xk)k] = 0. (ii) Let the properties (P.1)–(P.2) hold and assume further that (P.30) There exist constants A, b 0, p1, p2, q > 0 and random vectors Ak,Bk : ⌦ ! Rn such that xk+1 = xk + µp1k Ak + µ p2 k Bk and for all k, Ak,Bk are Fk+1-measurable and we have E[Ak | Fk] = 0 almost surely, E[kAkkq] A, and lim supk!1 kBkkq/(1 + k (xk)kb) < 1 almost surely. (P.40) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy µk ! 0, X1 k=0 µk = 1, X1 k=0 µ2p1k < 1, and q 2, qa b, p1 > 1 2 , p2 1. Then, it holds that limk!1 k (xk)k = 0 almost surely. The proof of Theorem 2.1 is elementary. We provide the core ideas here and defer its proof to Appendix A. Item (i) is proved by contradiction. An easy first result is lim infk!1 E[k (xk)ka] = 0. We proceed and assume that {E[k (xk)k]}k 0 does not converge to zero. Then, for some > 0, we can construct two subsequences {`t}t 0 and {ut}t 0 such that `t < ut and E[k (x`t)k] 2 , E[k (xut)ka] a, and E[k (xk)ka] > a for all `t < k < ut. Based on this construction, the conditions in the theorem, and a set of inequalities, we will eventually reach a contradiction. We notice that the Lipschitz continuity of plays a prominent role when establishing this contradiction. Our overall proof strategy is inspired by the analysis of classical trust region-type methods, see, e.g., [9, Theorem 6.4.6]. 
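As a quick numerical illustration of the kind of conclusion the theorem delivers (purely a toy experiment, not part of the paper), one can run SGD with diminishing step sizes on a smooth nonconvex function and track $\|\Phi(x^k)\| = \|\nabla f(x^k)\|$ along a single run, in line with the remark above that $\Phi$ may momentarily be regarded as $\nabla f$. The test function, noise level, step-size schedule, and all names below are arbitrary choices for illustration; the run only visualizes the statement, it does not formally verify the conditions (P.1)–(P.4).

```python
import numpy as np

rng = np.random.default_rng(1)

# Smooth nonconvex test function f(x) = sum_i (x_i^2 + cos(x_i)); Phi = grad f.
def grad_f(x):
    return 2.0 * x - np.sin(x)

x = rng.normal(size=5) * 3.0
for k in range(100001):
    alpha_k = 0.5 / (k + 1) ** 0.7                         # diminishing step sizes
    g = grad_f(x) + rng.normal(scale=0.5, size=x.shape)    # unbiased noisy gradient oracle
    x = x - alpha_k * g                                    # SGD update
    if k % 25000 == 0:
        print(k, np.linalg.norm(grad_f(x)))                # ||Phi(x^k)|| along a single run
```

As the step sizes decay, the printed gradient norm should drift toward zero, matching the almost sure convergence statement of item (ii).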
Let us also mention that a different strategy for the fully deterministic setting and scalar case : Rn ! R was provided in [8]. For item (ii), we first control the stochastic behavior of the error terms Ak by martingale convergence theory. We can then conduct sample-based arguments to derive the final result, which is essentially deterministic and hence, follows similar arguments to that of item (i). The major application areas of our unified convergence framework comprise stochastic optimization methods that have non-vanishing stochastic errors or that utilize diminishing step sizes. In the next subsection, we state the main steps for showing convergence of stochastic optimization methods. This also clarifies the abstract conditions listed in the theorem. 2.1 The steps for showing convergence of stochastic optimization methods In order to apply the unified convergence theorem, we have to verify the conditions stated in the theorem, resulting in three main phases below. Phase I: Verifying (P.1)–(P.2). Conditions (P.1)–(P.2) are used for both the expected and the almost sure convergence results. Condition (P.1) is a problem property and is very standard. We present the final convergence results in terms of the abstract measure . This measure can be regarded as f f⇤ in convex optimization, rf in smooth nonconvex optimization, the gradient of the Moreau envelope in weakly convex optimization, etc. In all the situations, assuming Lipschitz continuity of the convergence measure is standard and is arguably a minimal assumption in order to obtain iteration complexity and/or convergence results. Condition (P.2) is typically a result of the algorithmic property or complexity analysis. To verify this condition, one first establishes the recursion of the stochastic method, which almost always has the form E[yk+1 | Fk] (1 + k)yk µkk (xk)ka + ⇣k. Here, yk is a suitable Lyapunov function measuring the (approximate) descent property of the stochastic method, ⇣k represents the error term satisfying P1 k=0 ⇣k < 1, k is often related to the step sizes and satisfies P1 k=0 k < 1. Then, applying the supermartingale convergence theorem (see Theorem B.1), we obtain P1 k=0 µk E[k (xk)ka] < 1, i.e., condition (P.2). Since condition (P.2) is typically a consequence of the underlying algorithmic recursion, one can also derive the standard finite-time complexity bound (1) in terms of the measure E[k (xk)ka] based on it. Hence, non-asymptotic complexity results are also included implicitly in our framework as a special case. To be more specific, (P.2) implies PT k=0 µkE[k (xk)ka] M for some constant M > 0 and some total number of iterations T . This then yields min0kT E[k (xk)ka] M/ PT k=0 µk. Note that the sequence {µk}k 0 is often related to the step sizes. Thus, choosing the step sizes properly results in the standard finite-time complexity result. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. Condition (P.3) requires an upper bound on the step length of the update in terms of expectation, including upper bounds for the search direction and the stochastic error of the algorithm. It is often related to certain bounded variance-type assumptions for analyzing stochastic methods. For instance, (P.3) is satisfied under the standard bounded variance assumption for SGD, the more general ABC assumption for SGD, the bounded stochastic subgradients assumption, etc. Condition (P.4) is a standard diminishing step sizes condition used in stochastic optimization. 
Then, one can apply item (i) of Theorem 2.1 to obtain E[k (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. Condition (P.30) is parallel to (P.3). It decomposes the update into a martingale term Ak and a bounded error term Bk. We will see later that this condition holds true for many stochastic methods. Though this condition requires the update to have a certain decomposable form, it indeed can be verified by bounding the step length of the update in conditional expectation, which is similar to (P.3). Hence, (P.30) can be interpreted as a conditional version of (P.3). To see this, we can construct xk+1 = xk + µk · 1 µk xk+1 xk E[xk+1 xk | Fk] Ak +µk · 1 µk E[xk+1 xk | Fk] Bk . (4) By Jensen’s inequality, we then have E[Ak | Fk] = 0, E[kAkkq] 2qµ qk · E[kx k+1 xkkq], and kBkkq µ qk · E[kx k+1 xkkq | Fk]. Thus, once it is possible to derive E[kxk+1 xkkq | Fk] = O(µqk) in an almost sure sense, condition (P.30) is verified with p1 = p2 = 1. Condition (P.40) is parallel to (P.4) and is standard in stochastic optimization. Application of item (ii) of Theorem 2.1 then yields k (xk)k ! 0 almost surely. In the next section, we will illustrate how to show convergence for a set of classic stochastic methods by following the above three steps. 3 Applications to stochastic optimization methods 3.1 Convergence results of SGD We consider the standard SGD method for solving the smooth optimization problem minx2Rn f(x), where the iteration of SGD is given by xk+1 = xk ↵kg k. (5) Here, gk denotes a stochastic approximation of the gradient rf(xk). We assume that each stochastic gradient gk is Fk+1-measurable and that the generated stochastic process {xk}k 0 is adapted to the filtration {Fk}k 0. We consider the following standard assumptions: (A.1) The mapping rf : Rn ! Rn is Lipschitz continuous on Rn with modulus L > 0. (A.2) The objective function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn. (A.3) Each oracle gk defines an unbiased estimator of rf(xk), i.e., it holds that E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (A.4) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. We now derive the convergence of SGD below by setting ⌘ rf and µk ⌘ ↵k. Phase I: Verifying (P.1)–(P.2). (A.1) verifies condition (P.1) with L ⌘ L. We now check (P.2). Using (A.2), (A.3), and a standard analysis for SGD gives the following recursion (see Appendix C.1 for the full derivation): E[f(xk+1) f̄ | Fk] ✓ 1 + LC↵2k 2 ◆ [f(xk) f̄ ] ↵k ✓ 1 L↵k 2 ◆ krf(xk)k2 + LD↵2k 2 . (6) Taking total expectation, using (A.4), and applying the supermartingale convergence theorem (Theorem B.1) gives P1 k=0 ↵kE[krf(xk)k2] < 1. Furthermore, the sequence {E[f(xk)]}k 0 converges to some finite value. This verifies (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. For (P.3), we have by (5) and (A.3) that E[kxk+1 xkk2] ↵2kE[krf(xk)k2] + C↵2kE[f(xk) f̄ ] + D↵2k. Due to the convergence of {E[f(xk)]}k 0, there exists F such that E[f(xk) f̄ ] F for all k. Thus, condition (P.3) holds with q = 2, A = CF+ D, p1 = 2, B = 1, p2 = 2, and b = 2. Condition (P.4) is verified by (A.4) and the previous parameters choices. Therefore, we can apply Theorem 2.1 to deduce E[krf(xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. For (P.30), it follows from the update (5) that xk+1 = xk ↵k(g k rf(xk)) ↵krf(x k). 
We have p1 = 1, Ak = gk rf(xk), p2 = 1, and Bk = rf(xk). Using (A.2), (A.3), E[f(xk) f̄ ] F, and choosing any q = b > 0 establishes (P.30). As before, condition (P.40) follows from (A.4) and the previous parameters choices. Applying Theorem 2.1 yields krf(xk)k ! 0 almost surely. Finally, we summarize the above results in the following corollary. Corollary 3.1. Let us consider SGD (5) for smooth nonconvex optimization problems under (A.1)– (A.4). Then, we have limk!1 E[krf(xk)k] = 0 and limk!1 krf(xk)k = 0 almost surely. 3.2 Convergence results of random reshuffling We now consider random reshuffling (RR) applied to problems with a finite sum structure min x2Rn f(x) := 1 N XN i=1 f(x, i), where each component function f(·, i) : Rn ! R is supposed to be smooth. At iteration k, RR first generates a random permutation k+1 of the index set {1, . . . , N}. It then updates xk to xk+1 through N consecutive gradient descent-type steps by accessing and using the component gradients {rf(·, k+11 ), . . . ,rf(·, k+1 N )} sequentially. Specifically, one update-loop (epoch) of RR is given by x̃k0 = x k, x̃ki = x̃ k i 1 ↵krf(x̃ k i 1, k+1 i ), i = 1, . . . , N, x k+1 = x̃kN . (7) After one such loop, the step size ↵k and the permutation k+1 is updated accordingly; cf. [20, 30, 32]. We make the following standard assumptions: (B.1) For all i 2 {1, . . . , N}, f(·, i) is bounded from below by some f̄ and the gradient rf(·, i) is Lipschitz continuous on Rn with modulus L > 0. (B.2) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 3 k < 1. A detailed derivation of the steps shown in Subsection 2.1 for RR is deferred to Appendix D.2. Based on the discussion in Appendix D.2 and on Theorem 2.1, we obtain the following results for RR. Corollary 3.2. We consider RR (7) for smooth nonconvex optimization problems under (B.1)–(B.2). Then it holds that limk!1 E[krf(xk)k] = 0 and limk!1 krf(xk)k = 0 almost surely. 3.3 Convergence of the proximal stochastic gradient method We consider the composite-type optimization problem minx2Rn (x) := f(x) + '(x) (8) where f : Rn ! R is a continuously differentiable function and ' : Rn ! ( 1,1] is ⌧ -weakly convex (see Appendix E.1), proper, and lower semicontinuous. In this section, we want to apply our unified framework to study the convergence behavior of the well-known proximal stochastic gradient method (prox-SGD): xk+1 = prox↵k'(x k ↵kg k), (9) where gk ⇡ rf(xk) is a stochastic approximation of rf(xk), {↵k}k 0 ✓ R+ is a suitable step size sequence, and prox↵k' : R n ! Rn, prox↵k'(x) := argminy2Rn '(y) + 1 2↵k kx yk2 is the well-known proximity operator of '. 3.3.1 Assumptions and preparations We first recall several useful concepts from nonsmooth and variational analysis. For a function h : Rn ! ( 1,1], the Fréchet (or regular) subdifferential of h at the point x is given by @h(x) := {g 2 Rn : h(y) h(x) + hg,y xi+ o(ky xk) as y ! x}, see, e.g., [39, Chapter 8]. If h is convex, then the Fréchet subdifferential coincides with the standard (convex) subdifferential. It is well-known that the associated first-order optimality condition for the composite problem (8) — 0 2 @ (x) = rf(x) + @'(x) — can be represented as a nonsmooth equation, [39, 21], F↵nat(x) := x prox↵'(x ↵rf(x)) = 0, ↵ 2 (0, ⌧ 1), where F↵nat denotes the so-called natural residual. The natural residual F↵nat is a common stationarity measure for the nonsmooth problem (8) and widely used in the analysis of proximal methods. 
We will make the following assumptions on f , ', and the stochastic oracles {gk}k 0: (C.1) The function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn, and the gradient mapping rf is Lipschitz continuous (on Rn) with modulus L > 0. (C.2) The function ' is ⌧ -weakly convex, proper, lower semicontinuous, and bounded from below on dom', i.e., we have '(x) '̄ for all x 2 dom'. (C.3) There exists L' > 0 such that '(x) '(y) L'kx yk for all x,y 2 dom'. (C.4) Each gk defines an unbiased estimator of rf(xk), i.e., we have E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (C.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Here, we again assume that the generated stochastic processes {xk}k 0 is adapted to the filtration {Fk}k 0. The assumptions (C.1), (C.2), (C.4), and (C.5) are fairly standard and broadly applicable. In particular, (C.1), (C.4), and (C.5) coincide with the conditions (A.1)–(A.4) used in the analysis of SGD. We continue with several remarks concerning condition (C.3). Remark 3.3. Assumption (C.3) requires the mapping ' to be Lipschitz continuous on its effective domain dom'. This condition holds in many important applications, e.g., when ' is chosen as a norm or indicator function. Nonconvex examples satisfying (C.2) and (C.3) include, e.g., the minimax concave penalty (MCP) function [45], the smoothly clipped absolute deviation (SCAD) [15], or the student-t loss function. We refer to [4] and Appendix E.2 for further discussion. 3.3.2 Convergence results of prox-SGD We now analyze the convergence of the random process {xk}k 0 generated by the stochastic algorithmic scheme (9). As pioneered in [11], we will use the Moreau envelope env✓ , env✓ : Rn ! R, env✓ (x) := miny2Rn (y) + 1 2✓ kx yk2, (10) as a smooth Lyapunov function to study the descent properties and convergence of prox-SGD. We first note that the conditions (C.1) and (C.2) imply ✓ 1-weak convexity of for every ✓ 2 (0, (L+ ⌧) 1]. In this case, the Moreau envelope env✓ is a well-defined and continuously differentiable function with gradient renv✓ (x) = 1✓ (x prox✓ (x)); see, e.g., [38, Theorem 31.5]. As shown in [13, 11], the norm of the Moreau envelope — krenv✓ (x)k — defines an alternative stationarity measure for problem (8) that is equivalent to the natural residual if ✓ is chosen sufficiently small. A more explicit derivation of this connection is provided in Lemma E.1. Next, we establish convergence of prox-SGD by setting ⌘ renv✓ and µk ⌘ ↵k. Our analysis is based on the following two estimates which are verified in Appendix E.4 and Appendix E.5. Lemma 3.4. Let {xk}k 0 be generated by prox-SGD and let the assumptions (C.1)–(C.4) be satisfied. Then, for ✓ 2 (0, [3L+ ⌧ ] 1) and all k with ↵k min{ 12⌧ , 1 2(✓ 1 [L+⌧ ])}, it holds that E[env✓ (xk+1) ̄ | Fk] (1 + 4C✓ 1↵2k) · [env✓ (xk) ̄] L✓↵kkrenv✓ (x k)k2 + 2↵2k(CL 2 ' + D✓ 1), (11) almost surely, where ̄ := f̄ + '̄. Lemma 3.5. Let {xk}k 0 be generated by prox-SGD and suppose that the assumptions (C.1)–(C.4) hold. Then, for ✓ 2 (0, [ 43L+ ⌧ ] 1) and all k with ↵k 12⌧ , we have almost surely E[kxk+1 xkk2 | Fk] 8(2L+ C)↵2k · [env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k. (12) Phase I: Verifying (P.1)–(P.2). In [21, Corollary 3.4], it is shown that the gradient of the Moreau envelope is Lipschitz continuous with modulus Le := max{✓ 1, (1 [L + ⌧ ]✓) 1[L+ ⌧ ]} for all ✓ 2 (0, [L+ ⌧ ] 1). Thus, condition (P.1) is satisfied. Furthermore, due to ↵k ! 
0 and choosing ✓ 2 (0, [3L + ⌧ ] 1), the estimate (11) in Lemma 3.4 holds for all k sufficiently large. Consequently, due to env✓ (x) (prox✓ (x)) ̄ and (C.5), Theorem B.1 is applicable and upon taking total expectation, {E[env✓ (xk)]}k 0 converges to some E 2 R. In addition, the sequence {env✓ (xk)}k 0 converges almost surely to some random variable e? and we have P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. This verifies condition (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Assumptions (C.1)–(C.5) and Lemma 3.5 allow us to establish the required bound stated in (P.3). Specifically, taking total expectation in (12), we have E[kxk+1 xkk2] 8(2L+ C)↵2k · E[env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k for all k sufficiently large. Due to E[env✓ (xk)] ! E, there exists F such that E[env✓ (xk) ̄] F for all k. Hence, (P.3) holds with q = 2, A = 8(2L+C)F+4(((2L+C)✓+1)L2' +D), p1 = 2, and B = 0. The property (P.4) easily follows from (C.5) and the parameter choices. Consequently, using Theorem 2.1, we can infer E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. We follow the construction in (4) and set Ak = ↵ 1k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk], and p1, p2 = 1. Clearly, we have E[Ak | Fk] = 0 and based on the previous results in Phase II, we can show E[kxk+1 xkk2] = O(↵2k) which establishes boundedness of {E[kAkk2}k 0. Similarly, for Bk and by Lemma 3.5 and Jensen’s inequality, we obtain kBkk 2 ↵ 2k E[kx k+1 xkk2 | Fk] 8(2L+ C) · [env✓ (x k) ̄] +O(1). Due to env✓ (xk) ! e? almost surely, this shows lim supk!1 kBkk2 < 1 almost surely. Hence, all requirements in (P.30) are satisfied with q = 2 and b = 0. Moreover, it is easy to see that property (P.40) also holds in this case. Overall, Theorem 2.1 implies krenv✓ (xk)k ! 0 almost surely. As mentioned, it is possible to express the obtained convergence results in terms of the natural residual Fnat = F 1nat, see, e.g., Lemma E.1. We summarize our observations in the following corollary. Corollary 3.6. Let us consider prox-SGD (9) for the composite problem (8) under (C.1)–(C.5). Then, we have limk!1 E[kFnat(xk)k] = 0 and limk!1 kFnat(xk)k = 0 almost surely. Remark 3.7. As a byproduct, Lemma 3.4 also leads to an expected iteration complexity result of prox-SGD by using the ABC condition (C.4) rather than the standard bounded variance assumption. This is a nontrivial extension of [11, Corollary 3.6]. We provide a full derivation in Appendix E.6. 3.4 Convergence of stochastic model-based methods In this section, we consider the convergence of stochastic model-based methods for nonsmooth weakly convex optimization problems minx2Rn (x) := f(x) + '(x) = E⇠⇠D[f(x, ⇠)] + '(x), (13) where both f and ' are assumed to be (nonsmooth) weakly convex functions and is lower bounded, i.e., (x) ̄ for all x 2 dom'. Classical stochastic optimization methods — including proximal stochastic subgradient, stochastic proximal point, and stochastic prox-linear methods — for solving (13) are unified by the stochastic model-based methods (SMM) [14, 11]: xk+1 = argminx2Rn fxk(x, ⇠ k) + '(x) + 1 2↵k kx xkk2, (14) where fxk(x, ⇠k) is a stochastic approximation of f around xk using the sample ⇠k; see Appendix F.1 for descriptions of three major types of SMM. Setting Fk := (⇠0, . . . , ⇠k 1), it is easy to see that {xk}k 0 is adapted to {Fk}k 0. We analyze convergence of SMM under the following assumptions. 
(D.1) The stochastic approximation function fx satisfies a one-sided accuracy property, i.e., we have E⇠[fx(x, ⇠)] = f(x) for all x 2 U and E⇠[fx(y, ⇠) f(y)] ⌧ 2 kx yk2 8 x,y 2 U, where U is an open convex set containing dom'. (D.2) The function y 7! fx(y, ⇠) + '(y) is ⌘-weakly convex for all x 2 U and almost every ⇠. (D.3) There exists L > 0 such that the stochastic approximation function fx satisfies fx(x, ⇠) fx(y, ⇠) Lkx yk 8 x,y 2 U, and almost every ⇠. (D.4) The function ' is L'-Lipschitz continuous. (D.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Assumptions (D.1), (D.2), (D.3) are standard for analyzing SMM and identical to that of [11]. (D.5) is convention for stochastic methods. Assumption (D.4) mimics (C.3); see Remark 3.3 for discussions. We now derive the convergence of SMM below by setting ⌘ renv✓ and µk ⌘ ↵k. Our derivation is based on the following two estimates, in which the proof of Lemma 3.9 is given in Appendix F.2. Lemma 3.8 (Theorem 4.3 of [11]). Let ✓ 2 (0, (⌧ + ⌘) 1) and ↵k < ✓ be given. Then, we have E[env✓ (xk+1) | Fk] env✓ (xk) (1 [⌧ + ⌘]✓)↵k 2(1 ⌘↵k) krenv✓ (x k)k2 + 2L2↵2k (1 ⌘↵k)(✓ ↵k) . Lemma 3.9. For all k with ↵k 1/(2⌘), it holds that E[kxk+1 xkk2 | Fk] (16(L+ L')2 + 8L2)↵2k. Phase I: Verifying (P.1)–(P.2). As before, [21, Corollary 3.4] implies that the mapping renv✓ is Lipschitz continuous for all ✓ 2 (0, (⌧ + ⌘) 1) Hence, condition (P.1) is satisfied. Using ↵k ! 0, we can apply Theorem B.1 to the recursion obtained in Lemma 3.8 for all k sufficiently large and it follows P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. Thus, condition (P.2) holds with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Taking total expectation in Lemma 3.9 verifies condition (P.3) with q = 2, A = (16(L+ L')2 +8L2), p1 = 2, B = 0. Moreover, condition (P.4) is true by assumption (D.5) and the previous parameters choices. Thus, applying Theorem 2.1 gives E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. As in (4), we can set Ak = ↵ 1 k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk]. Applying Lemma 3.9 and utilizing Jensen’s inequality, we have E[Ak | Fk] = 0, E[kAkk2] (4/↵2k)E[kxk+1 xkk2] 4(16(L + L')2 + 8L2) and kBkk2 16(L + L')2 + 8L2. Thus, condition (P.30) is satisfied with p1 = p2 = 1, q = 2. Assumption (D.5), together with the previous parameter choices verifies condition (P.40) and hence, applying Theorem 2.1 yields krenv✓ (xk)k ! 0 almost surely. Summarizing this discussion, we obtain the following convergence results for SMM. Corollary 3.10. We consider the family of stochastic model-based methods (14) for the optimization problem (13) under assumptions (D.1)–(D.5). Let {xk}k 0 be a generated sequence. Then, we have limk!1 E[krenv✓ (xk)k] = 0 and limk!1 krenv✓ (xk)k = 0 almost surely. Remark 3.11. The results presented in Corollary 3.10 also hold under certain extended settings. In fact, we can replace (D.3) by a slightly more general Lipschitz continuity assumption on f . Moreover, it is possible to establish convergence in the case where f is not Lipschitz continuous but has Lipschitz continuous gradient, which is particularly useful when we apply stochastic proximal point method for smooth f . A more detailed derivation and discussion of such extensions is deferred to Appendix F.3. 3.5 Related work and discussion SGD and RR. The literature for SGD is extremely rich and several connected and recent works have been discussed in Section 1. 
Our result in Corollary 3.1 unifies many of the existing convergence analyses of SGD and is based on the general ABC condition (A.3) (see [23, 24, 19] for comparison) rather than on the standard bounded variance assumption. Our expected convergence result generalizes the one in [6] using much weaker assumptions. Our results for RR are in line with the recent theoretical observations in [30, 32, 25]. In particular, Corollary 3.2 recovers the almost sure convergence result shown in [25], while the expected convergence result appears to be new. Prox-SGD and SMM. The work [11] established one of the first complexity results for prox-SGD using the Moreau envelope. Under a bounded variance assumption (C = 0 in condition (C.4)) and for general nonconvex and smooth f , the authors showed E[krenv✓ (xk̄)k2] = O((T + 1) 1/2), where xk̄ is sampled uniformly from the past T + 1 iterates x0, . . . ,xT . As mentioned, this result cannot be easily extended to the asymptotic convergence results discussed in this paper. Earlier studies of prox-SGD for nonconvex f and C = 0 include [18] where convergence of prox-SGD is established if the variance parameter D = Dk ! 0 vanishes as k ! 1. This can be achieved by progressively increasing the size of the selected mini-batches or via variance reduction techniques as in prox-SVRG and prox-SAGA, see [35]. The question whether prox-SGD can converge and whether the accumulation points of the iterates {xk}k 0 correspond to stationary points was only addressed recently in [27]. The authors use a differential inclusion approach to study convergence of prox-SGD. However, additional compact constraints x 2 X have to be introduced in the model (8) to guarantee sure boundedness of {xk}k 0 and applicability of the differential inclusion techniques. Lipschitz continuity of ' also appears as an essential requirement in [27, Theorem 5.4]. The analyses in [14, 12] establish asymptotic convergence guarantees for SMM. However, both works require a priori (almost) sure boundedness of {xk}k 0 and a density / Sard-type condition in order to show convergence. We refer to [16] for an extension of the results in [27, 12] to prox-SGD in Hilbert spaces. By contrast, our convergence framework allows to complement these differential inclusion-based results and — for the first time — fully removes any stringent boundedness assumption on {xk}k 0. Instead, our analysis relies on more transparent assumptions that are verifiable and common in stochastic optimization and machine learning. In summary, we are now able to claim: prox-SGD and SMM converge under standard stochastic conditions if ' is Lipschitz continuous. In the easier convex case, analogous results have been obtained, e.g., in [18, 1, 40]. We provide an overview of several related and representative results in Table 1 in Appendix G. 4 Conclusion In this work, we provided a novel convergence framework that allows to derive expected and almost sure convergence results for a vast class of stochastic optimization methods under state-of-the-art assumptions and in a unified way. We specified the steps on how to utilize our theorem in order to establish convergence results for a given stochastic algorithm. As concrete examples, we applied our theorem to derive asymptotic convergence guarantees for SGD, RR, prox-SGD, and SMM. To our surprise, some of the obtained results appear to be new and provide new insights into the convergence behavior of some well-known and standard stochastic methodologies. 
These applications revealed that our unified theorem can serve as a plugin-type tool with the potential to facilitate the convergence analysis of a wide class of stochastic optimization methods. Finally, it is important to investigate in which situations our convergence results in terms of the stationarity measure can be strengthened, say, to almost sure convergence guarantees for the iterates $\{x^k\}_{k\ge 0}$. We plan to consider such a possible extension in future work.
Acknowledgments and Disclosure of Funding
The authors would like to thank the Area Chair and anonymous reviewers for their detailed and constructive comments, which have helped greatly to improve the quality and presentation of the manuscript. In addition, we would like to thank Michael Ulbrich for valuable feedback and comments on an earlier version of this work. X. Li was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12201534 and 72150002, by the Shenzhen Science and Technology Program under Grant No. RCBS20210609103708017, and by the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) under Grant No. AC01202101108. A. Milzarek was partly supported by the National Natural Science Foundation of China (NSFC) – Foreign Young Scholar Research Fund Project (RFIS) under Grant No. 12150410304 and by the Fundamental Research Fund – Shenzhen Research Institute of Big Data (SRIBD) Startup Fund JCYJ-AM20190601.
1. What is the focus of the paper regarding stochastic optimization methods? 2. What are the strengths and weaknesses of the proposed approach? 3. Do you have any concerns or questions regarding the provided examples and the nature of the results? 4. How do you assess the clarity and quality of the writing in the paper? 5. Are there any limitations or areas where the paper could be improved?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper derives general convergence results for stochastic optimization methods. The results include the cases where the cost function is nonconvex and nonsmooth.
Strengths And Weaknesses
Strength: The approach is general and can be applied to several settings.
Weakness: The results are only asymptotic and concern the gradient norm (as such, they are not "global" optimization results).
Questions
I have the following questions about the writing as well as the nature of the results:
1. Please give an example, before providing your main theorem in Sec. 2, of what an abstract convergence measure \Phi could be.
2. In convex and nonconvex optimization, asymptotic results are somewhat of secondary importance, as nonasymptotic bounds could provide reasonable heuristics to optimize the parameters of optimization methods. Why do the authors provide only asymptotic results in this paper? In other words, what was the difficulty in getting nonasymptotic results?
3. Related to my second question, if the authors manage to convert some of these results into nonasymptotic ones, is it clear that the rates of convergence would match standard rates? In other words, someone could derive a very slow convergence rate for these optimization methods (which would all converge in the same way asymptotically), but it would not be very useful otherwise.
I would be inclined to reconsider my rating if the authors provide a convincing discussion of whether their technique can lead to nonasymptotic bounds.
Limitations
Not applicable.
NIPS
Title A Unified Convergence Theorem for Stochastic Optimization Methods
Abstract In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires verifying several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods.
1 Introduction
Stochastic optimization methods are widely used to solve stochastic optimization problems and empirical risk minimization, serving as one of the foundations of machine learning. Among the many different stochastic methods, the most classic one is the stochastic gradient method (SGD), which dates back to Robbins and Monro [36]. If the problem at hand has a finite-sum structure, then another popular stochastic method is random reshuffling (RR) [20]. When the objective function has a composite form or is weakly convex (nonsmooth and nonconvex), then the stochastic proximal gradient method (prox-SGD) and stochastic model-based algorithms are the most typical approaches [18, 11]. Apart from the mentioned stochastic methods, there are many others like SGD with momentum, Adam, stochastic higher order methods, etc. In this work, our goal is to establish and understand fundamental convergence properties of these stochastic optimization methods via a novel unified convergence framework.
Motivations. Suppose we apply SGD to minimize a smooth nonconvex function $f$. SGD generates a sequence of iterates $\{x^k\}_{k\ge 0}$, which is a stochastic process due to the randomness of the algorithm and the utilized stochastic oracles. The most commonly seen 'convergence result' for SGD is the expected iteration complexity, which typically takes the form [17]
$\min_{k=0,\ldots,T} \mathbb{E}[\|\nabla f(x^k)\|^2] \le O\!\big(\tfrac{1}{\sqrt{T+1}}\big)$ or $\mathbb{E}[\|\nabla f(x^{\bar k})\|^2] \le O\!\big(\tfrac{1}{\sqrt{T+1}}\big)$, (1)
where $T$ denotes the total number of iterations and $\bar k$ is an index sampled uniformly at random from $\{0, \ldots, T\}$. Note that we ignored some higher-order convergence terms and constants to ease the presentation. Complexity results are integral to understanding core properties and progress of the algorithm during the first $T$ iterations, while the asymptotic convergence behavior plays an equally important role as it characterizes whether an algorithm can eventually approach an exact stationary point or not. We refer to Appendix H for additional motivational background for studying asymptotic convergence properties of stochastic optimization methods. Here, an expected convergence result, associated with the nonconvex minimization problem $\min_x f(x)$, has the form
$\lim_{k\to\infty} \mathbb{E}[\|\nabla f(x^k)\|] = 0.$ (2)
Intuitively, it should be possible to derive expected convergence from the expected iteration complexity (1) by letting $T \to \infty$. However, this is not the case as the 'min' operator and the sampled $\bar k$ are not well defined or become meaningless when $T$ goes to $\infty$.
The above results are stated in expectation and describe the behavior of the algorithm by averaging infinitely many runs. Though this is an important convergence measure, in practical situations the algorithm is often only run once and the last iterate is returned as a solution. This observation motivates and necessitates almost sure convergence results, which establish convergence with probability 1 for a single run of the stochastic method: lim k!1 krf(xk)k = 0 almost surely. (3) Backgrounds. Expected and almost sure convergence results have been extensively studied for convex optimization; see, e.g., [10, 34, 42, 46, 5, 41]. Almost sure convergence of SGD for minimizing a smooth nonconvex function f was provided in the seminal work [3] using very standard assumptions, i.e., Lipschitz continuous rf and bounded variance. Under the same conditions, the same almost sure convergence of SGD was established in [33] based on a much simpler argument than that of [3]. A weaker ‘lim inf’-type almost sure convergence result for SGD with AdaGrad step sizes was shown in [26]. Recently, the work [28] derives almost sure convergence of SGD under the assumptions that f and rf are Lipschitz continuous, f is coercive, f is not asymptotically flat, and the -th moment of the stochastic error is bounded with 2. This result relies on stronger assumptions than the base results in [3]. Nonetheless, it allows more aggressive diminishing step sizes if > 2. Apart from standard SGD, almost sure convergence of different respective variants for min-max problems was discussed in [22]. In terms of expected convergence, the work [6] showed limk!1 E[krf(xk)k] = 0 under the additional assumptions that f is twice continuously differentiable and the multiplication of the Hessian and gradient r2f(x)rf(x) is Lipschitz continuous. Though the convergence of SGD is well-understood and a classical topic, asymptotic convergence results of the type (2) and (3) often require a careful and separate analysis for other stochastic optimization methods — especially when the objective function is simultaneously nonsmooth and nonconvex. In fact and as outlined, a direct transition from the more common complexity results (1) to the full convergence results (2) and (3) is often not possible without further investigation. Main contributions. We provide a fundamental unified convergence theorem (see Theorem 2.1) for deriving both expected and almost sure convergence of stochastic optimization methods. Our theorem is not tailored to any specific algorithm, instead it incorporates several abstract conditions that suit a vast and general class of problem structures and algorithms. The proof of this theorem is elementary. We then apply our novel theoretical framework to several classical stochastic optimization methods to recover existing and to establish new convergence results. Specifically, we recover expected and almost sure convergence results for SGD and RR. Though these results are largely known in the literature, we derive unified and slightly stronger results under a general ABC condition [24, 23] rather than the standard bounded variance assumption. We also remove the stringent assumption used in [6] to show (2) for SGD. As a core application of our framework, we derive expected and almost sure convergence results for prox-SGD in the nonconvex setting and under the more general ABC condition and for stochastic model-based methods under very standard assumptions. 
In particular, we show that the iterates {xk}k 0 generated by prox-SGD and other stochastic model-based methods will approach the set of stationary points almost surely and in an expectation sense. These results are new to our knowledge (see also Subsection 3.5 for further discussion). The above applications illustrate the general plugin-type purpose of our unified convergence analysis framework. Based on the given recursion and certain properties of the algorithmic update, we can derive broad convergence results by utilizing our theorem, which can significantly simplify the convergence analysis of stochastic optimization methods; see Subsection 2.1 for a summary. 2 A unified convergence theorem Throughout this work, let (⌦,F , {Fk}k 0,P) be a filtered probability space and let us assume that the sequence of iterates {xk}k 0 is adapted to the filtration {Fk}k 0, i.e., each of the random vectors xk : ⌦ ! Rn is Fk-measurable. In this section, we present a unified convergence theorem for the sequence {xk}k 0 based on an abstract convergence measure . To make the abstract convergence theorem more accessible, the readers may momentarily regard and {µk}k 0 as rf and the sequence related to the step sizes, respectively. We then present the main steps for showing the convergence of a stochastic optimization method by following a step-by-step verification of the conditions in our unified convergence theorem. Theorem 2.1. Let the mapping : Rn ! Rm and the sequences {xk}k 0 ✓ Rn and {µk}k 0 ✓ R++ be given. Consider the following conditions: (P.1) The function is L -Lipschitz continuous for some L > 0, i.e., we have k (x) (y)k L kx yk for all x,y 2 Rn. (P.2) There exists a constant a > 0 such that P1 k=0 µk E[k (xk)ka] < 1. The following statements are valid: (i) Let the conditions (P.1)–(P.2) be satisfied and suppose further that (P.3) There exist constants A,B, b 0 and p1, p2, q > 0 such that E[kxk+1 xkkq] Aµp1k + Bµ p2 k E[k (x k)kb]. (P.4) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy {µk}k 0 is bounded, X1 k=0 µk = 1, and a, q 1, a b, p1, p2 q. Then, it holds that limk!1 E[k (xk)k] = 0. (ii) Let the properties (P.1)–(P.2) hold and assume further that (P.30) There exist constants A, b 0, p1, p2, q > 0 and random vectors Ak,Bk : ⌦ ! Rn such that xk+1 = xk + µp1k Ak + µ p2 k Bk and for all k, Ak,Bk are Fk+1-measurable and we have E[Ak | Fk] = 0 almost surely, E[kAkkq] A, and lim supk!1 kBkkq/(1 + k (xk)kb) < 1 almost surely. (P.40) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy µk ! 0, X1 k=0 µk = 1, X1 k=0 µ2p1k < 1, and q 2, qa b, p1 > 1 2 , p2 1. Then, it holds that limk!1 k (xk)k = 0 almost surely. The proof of Theorem 2.1 is elementary. We provide the core ideas here and defer its proof to Appendix A. Item (i) is proved by contradiction. An easy first result is lim infk!1 E[k (xk)ka] = 0. We proceed and assume that {E[k (xk)k]}k 0 does not converge to zero. Then, for some > 0, we can construct two subsequences {`t}t 0 and {ut}t 0 such that `t < ut and E[k (x`t)k] 2 , E[k (xut)ka] a, and E[k (xk)ka] > a for all `t < k < ut. Based on this construction, the conditions in the theorem, and a set of inequalities, we will eventually reach a contradiction. We notice that the Lipschitz continuity of plays a prominent role when establishing this contradiction. Our overall proof strategy is inspired by the analysis of classical trust region-type methods, see, e.g., [9, Theorem 6.4.6]. 
Let us also mention that a different strategy for the fully deterministic setting and scalar case : Rn ! R was provided in [8]. For item (ii), we first control the stochastic behavior of the error terms Ak by martingale convergence theory. We can then conduct sample-based arguments to derive the final result, which is essentially deterministic and hence, follows similar arguments to that of item (i). The major application areas of our unified convergence framework comprise stochastic optimization methods that have non-vanishing stochastic errors or that utilize diminishing step sizes. In the next subsection, we state the main steps for showing convergence of stochastic optimization methods. This also clarifies the abstract conditions listed in the theorem. 2.1 The steps for showing convergence of stochastic optimization methods In order to apply the unified convergence theorem, we have to verify the conditions stated in the theorem, resulting in three main phases below. Phase I: Verifying (P.1)–(P.2). Conditions (P.1)–(P.2) are used for both the expected and the almost sure convergence results. Condition (P.1) is a problem property and is very standard. We present the final convergence results in terms of the abstract measure . This measure can be regarded as f f⇤ in convex optimization, rf in smooth nonconvex optimization, the gradient of the Moreau envelope in weakly convex optimization, etc. In all the situations, assuming Lipschitz continuity of the convergence measure is standard and is arguably a minimal assumption in order to obtain iteration complexity and/or convergence results. Condition (P.2) is typically a result of the algorithmic property or complexity analysis. To verify this condition, one first establishes the recursion of the stochastic method, which almost always has the form E[yk+1 | Fk] (1 + k)yk µkk (xk)ka + ⇣k. Here, yk is a suitable Lyapunov function measuring the (approximate) descent property of the stochastic method, ⇣k represents the error term satisfying P1 k=0 ⇣k < 1, k is often related to the step sizes and satisfies P1 k=0 k < 1. Then, applying the supermartingale convergence theorem (see Theorem B.1), we obtain P1 k=0 µk E[k (xk)ka] < 1, i.e., condition (P.2). Since condition (P.2) is typically a consequence of the underlying algorithmic recursion, one can also derive the standard finite-time complexity bound (1) in terms of the measure E[k (xk)ka] based on it. Hence, non-asymptotic complexity results are also included implicitly in our framework as a special case. To be more specific, (P.2) implies PT k=0 µkE[k (xk)ka] M for some constant M > 0 and some total number of iterations T . This then yields min0kT E[k (xk)ka] M/ PT k=0 µk. Note that the sequence {µk}k 0 is often related to the step sizes. Thus, choosing the step sizes properly results in the standard finite-time complexity result. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. Condition (P.3) requires an upper bound on the step length of the update in terms of expectation, including upper bounds for the search direction and the stochastic error of the algorithm. It is often related to certain bounded variance-type assumptions for analyzing stochastic methods. For instance, (P.3) is satisfied under the standard bounded variance assumption for SGD, the more general ABC assumption for SGD, the bounded stochastic subgradients assumption, etc. Condition (P.4) is a standard diminishing step sizes condition used in stochastic optimization. 
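As a small numerical aside (the constants below are arbitrary and not taken from the paper), the following sketch illustrates the diminishing step-size condition just discussed: for $\alpha_k = c/(k+1)^p$ with $p \in (1/2, 1]$, the partial sums of $\alpha_k$ keep growing while the partial sums of $\alpha_k^2$ stabilize, matching $\sum_k \alpha_k = \infty$ and $\sum_k \alpha_k^2 < \infty$.

```python
import numpy as np

# Illustrative check of the diminishing step-size condition in (P.4)/(A.4):
# for alpha_k = c / (k + 1)**p with 1/2 < p <= 1, the partial sums of alpha_k
# keep growing (divergence) while the partial sums of alpha_k**2 level off.
c, p = 0.5, 0.75
K = np.arange(0, 10**6)
alpha = c / (K + 1) ** p
print("sum of alpha_k   up to 1e6:", alpha.sum())        # keeps growing with the horizon
print("sum of alpha_k^2 up to 1e6:", (alpha**2).sum())   # approaches a finite limit
```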
Then, one can apply item (i) of Theorem 2.1 to obtain E[k (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. Condition (P.30) is parallel to (P.3). It decomposes the update into a martingale term Ak and a bounded error term Bk. We will see later that this condition holds true for many stochastic methods. Though this condition requires the update to have a certain decomposable form, it indeed can be verified by bounding the step length of the update in conditional expectation, which is similar to (P.3). Hence, (P.30) can be interpreted as a conditional version of (P.3). To see this, we can construct xk+1 = xk + µk · 1 µk xk+1 xk E[xk+1 xk | Fk] Ak +µk · 1 µk E[xk+1 xk | Fk] Bk . (4) By Jensen’s inequality, we then have E[Ak | Fk] = 0, E[kAkkq] 2qµ qk · E[kx k+1 xkkq], and kBkkq µ qk · E[kx k+1 xkkq | Fk]. Thus, once it is possible to derive E[kxk+1 xkkq | Fk] = O(µqk) in an almost sure sense, condition (P.30) is verified with p1 = p2 = 1. Condition (P.40) is parallel to (P.4) and is standard in stochastic optimization. Application of item (ii) of Theorem 2.1 then yields k (xk)k ! 0 almost surely. In the next section, we will illustrate how to show convergence for a set of classic stochastic methods by following the above three steps. 3 Applications to stochastic optimization methods 3.1 Convergence results of SGD We consider the standard SGD method for solving the smooth optimization problem minx2Rn f(x), where the iteration of SGD is given by xk+1 = xk ↵kg k. (5) Here, gk denotes a stochastic approximation of the gradient rf(xk). We assume that each stochastic gradient gk is Fk+1-measurable and that the generated stochastic process {xk}k 0 is adapted to the filtration {Fk}k 0. We consider the following standard assumptions: (A.1) The mapping rf : Rn ! Rn is Lipschitz continuous on Rn with modulus L > 0. (A.2) The objective function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn. (A.3) Each oracle gk defines an unbiased estimator of rf(xk), i.e., it holds that E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (A.4) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. We now derive the convergence of SGD below by setting ⌘ rf and µk ⌘ ↵k. Phase I: Verifying (P.1)–(P.2). (A.1) verifies condition (P.1) with L ⌘ L. We now check (P.2). Using (A.2), (A.3), and a standard analysis for SGD gives the following recursion (see Appendix C.1 for the full derivation): E[f(xk+1) f̄ | Fk] ✓ 1 + LC↵2k 2 ◆ [f(xk) f̄ ] ↵k ✓ 1 L↵k 2 ◆ krf(xk)k2 + LD↵2k 2 . (6) Taking total expectation, using (A.4), and applying the supermartingale convergence theorem (Theorem B.1) gives P1 k=0 ↵kE[krf(xk)k2] < 1. Furthermore, the sequence {E[f(xk)]}k 0 converges to some finite value. This verifies (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. For (P.3), we have by (5) and (A.3) that E[kxk+1 xkk2] ↵2kE[krf(xk)k2] + C↵2kE[f(xk) f̄ ] + D↵2k. Due to the convergence of {E[f(xk)]}k 0, there exists F such that E[f(xk) f̄ ] F for all k. Thus, condition (P.3) holds with q = 2, A = CF+ D, p1 = 2, B = 1, p2 = 2, and b = 2. Condition (P.4) is verified by (A.4) and the previous parameters choices. Therefore, we can apply Theorem 2.1 to deduce E[krf(xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. For (P.30), it follows from the update (5) that xk+1 = xk ↵k(g k rf(xk)) ↵krf(x k). 
We have p1 = 1, Ak = gk rf(xk), p2 = 1, and Bk = rf(xk). Using (A.2), (A.3), E[f(xk) f̄ ] F, and choosing any q = b > 0 establishes (P.30). As before, condition (P.40) follows from (A.4) and the previous parameters choices. Applying Theorem 2.1 yields krf(xk)k ! 0 almost surely. Finally, we summarize the above results in the following corollary. Corollary 3.1. Let us consider SGD (5) for smooth nonconvex optimization problems under (A.1)– (A.4). Then, we have limk!1 E[krf(xk)k] = 0 and limk!1 krf(xk)k = 0 almost surely. 3.2 Convergence results of random reshuffling We now consider random reshuffling (RR) applied to problems with a finite sum structure min x2Rn f(x) := 1 N XN i=1 f(x, i), where each component function f(·, i) : Rn ! R is supposed to be smooth. At iteration k, RR first generates a random permutation k+1 of the index set {1, . . . , N}. It then updates xk to xk+1 through N consecutive gradient descent-type steps by accessing and using the component gradients {rf(·, k+11 ), . . . ,rf(·, k+1 N )} sequentially. Specifically, one update-loop (epoch) of RR is given by x̃k0 = x k, x̃ki = x̃ k i 1 ↵krf(x̃ k i 1, k+1 i ), i = 1, . . . , N, x k+1 = x̃kN . (7) After one such loop, the step size ↵k and the permutation k+1 is updated accordingly; cf. [20, 30, 32]. We make the following standard assumptions: (B.1) For all i 2 {1, . . . , N}, f(·, i) is bounded from below by some f̄ and the gradient rf(·, i) is Lipschitz continuous on Rn with modulus L > 0. (B.2) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 3 k < 1. A detailed derivation of the steps shown in Subsection 2.1 for RR is deferred to Appendix D.2. Based on the discussion in Appendix D.2 and on Theorem 2.1, we obtain the following results for RR. Corollary 3.2. We consider RR (7) for smooth nonconvex optimization problems under (B.1)–(B.2). Then it holds that limk!1 E[krf(xk)k] = 0 and limk!1 krf(xk)k = 0 almost surely. 3.3 Convergence of the proximal stochastic gradient method We consider the composite-type optimization problem minx2Rn (x) := f(x) + '(x) (8) where f : Rn ! R is a continuously differentiable function and ' : Rn ! ( 1,1] is ⌧ -weakly convex (see Appendix E.1), proper, and lower semicontinuous. In this section, we want to apply our unified framework to study the convergence behavior of the well-known proximal stochastic gradient method (prox-SGD): xk+1 = prox↵k'(x k ↵kg k), (9) where gk ⇡ rf(xk) is a stochastic approximation of rf(xk), {↵k}k 0 ✓ R+ is a suitable step size sequence, and prox↵k' : R n ! Rn, prox↵k'(x) := argminy2Rn '(y) + 1 2↵k kx yk2 is the well-known proximity operator of '. 3.3.1 Assumptions and preparations We first recall several useful concepts from nonsmooth and variational analysis. For a function h : Rn ! ( 1,1], the Fréchet (or regular) subdifferential of h at the point x is given by @h(x) := {g 2 Rn : h(y) h(x) + hg,y xi+ o(ky xk) as y ! x}, see, e.g., [39, Chapter 8]. If h is convex, then the Fréchet subdifferential coincides with the standard (convex) subdifferential. It is well-known that the associated first-order optimality condition for the composite problem (8) — 0 2 @ (x) = rf(x) + @'(x) — can be represented as a nonsmooth equation, [39, 21], F↵nat(x) := x prox↵'(x ↵rf(x)) = 0, ↵ 2 (0, ⌧ 1), where F↵nat denotes the so-called natural residual. The natural residual F↵nat is a common stationarity measure for the nonsmooth problem (8) and widely used in the analysis of proximal methods. 
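As a quick numerical illustration of the natural residual as a stationarity measure, the following sketch evaluates $\|F^{\alpha}_{\mathrm{nat}}(x)\|$ before and after a number of proximal gradient steps. The quadratic choice of $f$, the $\ell_1$ choice of $\varphi$, and all constants are assumptions made for this example only and are not part of the paper.

```python
import numpy as np

# Minimal illustration of the natural residual
#   F_nat^alpha(x) = x - prox_{alpha*varphi}(x - alpha*grad f(x))
# for the toy choices f(x) = 0.5*||Ax - b||^2 and varphi(x) = lam*||x||_1.

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, alpha = 0.1, 0.01

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_varphi(z, t):
    # prox of t*lam*||.||_1, i.e., soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def natural_residual(x):
    return x - prox_varphi(x - alpha * grad_f(x), alpha)

x = np.zeros(10)
print("||F_nat|| at x^0:  ", np.linalg.norm(natural_residual(x)))
# A few hundred proximal gradient steps drive the residual down, i.e., x approaches stationarity.
for _ in range(300):
    x = prox_varphi(x - alpha * grad_f(x), alpha)
print("||F_nat|| at x^300:", np.linalg.norm(natural_residual(x)))
```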
We will make the following assumptions on f , ', and the stochastic oracles {gk}k 0: (C.1) The function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn, and the gradient mapping rf is Lipschitz continuous (on Rn) with modulus L > 0. (C.2) The function ' is ⌧ -weakly convex, proper, lower semicontinuous, and bounded from below on dom', i.e., we have '(x) '̄ for all x 2 dom'. (C.3) There exists L' > 0 such that '(x) '(y) L'kx yk for all x,y 2 dom'. (C.4) Each gk defines an unbiased estimator of rf(xk), i.e., we have E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (C.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Here, we again assume that the generated stochastic processes {xk}k 0 is adapted to the filtration {Fk}k 0. The assumptions (C.1), (C.2), (C.4), and (C.5) are fairly standard and broadly applicable. In particular, (C.1), (C.4), and (C.5) coincide with the conditions (A.1)–(A.4) used in the analysis of SGD. We continue with several remarks concerning condition (C.3). Remark 3.3. Assumption (C.3) requires the mapping ' to be Lipschitz continuous on its effective domain dom'. This condition holds in many important applications, e.g., when ' is chosen as a norm or indicator function. Nonconvex examples satisfying (C.2) and (C.3) include, e.g., the minimax concave penalty (MCP) function [45], the smoothly clipped absolute deviation (SCAD) [15], or the student-t loss function. We refer to [4] and Appendix E.2 for further discussion. 3.3.2 Convergence results of prox-SGD We now analyze the convergence of the random process {xk}k 0 generated by the stochastic algorithmic scheme (9). As pioneered in [11], we will use the Moreau envelope env✓ , env✓ : Rn ! R, env✓ (x) := miny2Rn (y) + 1 2✓ kx yk2, (10) as a smooth Lyapunov function to study the descent properties and convergence of prox-SGD. We first note that the conditions (C.1) and (C.2) imply ✓ 1-weak convexity of for every ✓ 2 (0, (L+ ⌧) 1]. In this case, the Moreau envelope env✓ is a well-defined and continuously differentiable function with gradient renv✓ (x) = 1✓ (x prox✓ (x)); see, e.g., [38, Theorem 31.5]. As shown in [13, 11], the norm of the Moreau envelope — krenv✓ (x)k — defines an alternative stationarity measure for problem (8) that is equivalent to the natural residual if ✓ is chosen sufficiently small. A more explicit derivation of this connection is provided in Lemma E.1. Next, we establish convergence of prox-SGD by setting ⌘ renv✓ and µk ⌘ ↵k. Our analysis is based on the following two estimates which are verified in Appendix E.4 and Appendix E.5. Lemma 3.4. Let {xk}k 0 be generated by prox-SGD and let the assumptions (C.1)–(C.4) be satisfied. Then, for ✓ 2 (0, [3L+ ⌧ ] 1) and all k with ↵k min{ 12⌧ , 1 2(✓ 1 [L+⌧ ])}, it holds that E[env✓ (xk+1) ̄ | Fk] (1 + 4C✓ 1↵2k) · [env✓ (xk) ̄] L✓↵kkrenv✓ (x k)k2 + 2↵2k(CL 2 ' + D✓ 1), (11) almost surely, where ̄ := f̄ + '̄. Lemma 3.5. Let {xk}k 0 be generated by prox-SGD and suppose that the assumptions (C.1)–(C.4) hold. Then, for ✓ 2 (0, [ 43L+ ⌧ ] 1) and all k with ↵k 12⌧ , we have almost surely E[kxk+1 xkk2 | Fk] 8(2L+ C)↵2k · [env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k. (12) Phase I: Verifying (P.1)–(P.2). In [21, Corollary 3.4], it is shown that the gradient of the Moreau envelope is Lipschitz continuous with modulus Le := max{✓ 1, (1 [L + ⌧ ]✓) 1[L+ ⌧ ]} for all ✓ 2 (0, [L+ ⌧ ] 1). Thus, condition (P.1) is satisfied. Furthermore, due to ↵k ! 
0 and choosing ✓ 2 (0, [3L + ⌧ ] 1), the estimate (11) in Lemma 3.4 holds for all k sufficiently large. Consequently, due to env✓ (x) (prox✓ (x)) ̄ and (C.5), Theorem B.1 is applicable and upon taking total expectation, {E[env✓ (xk)]}k 0 converges to some E 2 R. In addition, the sequence {env✓ (xk)}k 0 converges almost surely to some random variable e? and we have P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. This verifies condition (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Assumptions (C.1)–(C.5) and Lemma 3.5 allow us to establish the required bound stated in (P.3). Specifically, taking total expectation in (12), we have E[kxk+1 xkk2] 8(2L+ C)↵2k · E[env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k for all k sufficiently large. Due to E[env✓ (xk)] ! E, there exists F such that E[env✓ (xk) ̄] F for all k. Hence, (P.3) holds with q = 2, A = 8(2L+C)F+4(((2L+C)✓+1)L2' +D), p1 = 2, and B = 0. The property (P.4) easily follows from (C.5) and the parameter choices. Consequently, using Theorem 2.1, we can infer E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. We follow the construction in (4) and set Ak = ↵ 1k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk], and p1, p2 = 1. Clearly, we have E[Ak | Fk] = 0 and based on the previous results in Phase II, we can show E[kxk+1 xkk2] = O(↵2k) which establishes boundedness of {E[kAkk2}k 0. Similarly, for Bk and by Lemma 3.5 and Jensen’s inequality, we obtain kBkk 2 ↵ 2k E[kx k+1 xkk2 | Fk] 8(2L+ C) · [env✓ (x k) ̄] +O(1). Due to env✓ (xk) ! e? almost surely, this shows lim supk!1 kBkk2 < 1 almost surely. Hence, all requirements in (P.30) are satisfied with q = 2 and b = 0. Moreover, it is easy to see that property (P.40) also holds in this case. Overall, Theorem 2.1 implies krenv✓ (xk)k ! 0 almost surely. As mentioned, it is possible to express the obtained convergence results in terms of the natural residual Fnat = F 1nat, see, e.g., Lemma E.1. We summarize our observations in the following corollary. Corollary 3.6. Let us consider prox-SGD (9) for the composite problem (8) under (C.1)–(C.5). Then, we have limk!1 E[kFnat(xk)k] = 0 and limk!1 kFnat(xk)k = 0 almost surely. Remark 3.7. As a byproduct, Lemma 3.4 also leads to an expected iteration complexity result of prox-SGD by using the ABC condition (C.4) rather than the standard bounded variance assumption. This is a nontrivial extension of [11, Corollary 3.6]. We provide a full derivation in Appendix E.6. 3.4 Convergence of stochastic model-based methods In this section, we consider the convergence of stochastic model-based methods for nonsmooth weakly convex optimization problems minx2Rn (x) := f(x) + '(x) = E⇠⇠D[f(x, ⇠)] + '(x), (13) where both f and ' are assumed to be (nonsmooth) weakly convex functions and is lower bounded, i.e., (x) ̄ for all x 2 dom'. Classical stochastic optimization methods — including proximal stochastic subgradient, stochastic proximal point, and stochastic prox-linear methods — for solving (13) are unified by the stochastic model-based methods (SMM) [14, 11]: xk+1 = argminx2Rn fxk(x, ⇠ k) + '(x) + 1 2↵k kx xkk2, (14) where fxk(x, ⇠k) is a stochastic approximation of f around xk using the sample ⇠k; see Appendix F.1 for descriptions of three major types of SMM. Setting Fk := (⇠0, . . . , ⇠k 1), it is easy to see that {xk}k 0 is adapted to {Fk}k 0. We analyze convergence of SMM under the following assumptions. 
(D.1) The stochastic approximation function fx satisfies a one-sided accuracy property, i.e., we have E⇠[fx(x, ⇠)] = f(x) for all x 2 U and E⇠[fx(y, ⇠) f(y)] ⌧ 2 kx yk2 8 x,y 2 U, where U is an open convex set containing dom'. (D.2) The function y 7! fx(y, ⇠) + '(y) is ⌘-weakly convex for all x 2 U and almost every ⇠. (D.3) There exists L > 0 such that the stochastic approximation function fx satisfies fx(x, ⇠) fx(y, ⇠) Lkx yk 8 x,y 2 U, and almost every ⇠. (D.4) The function ' is L'-Lipschitz continuous. (D.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Assumptions (D.1), (D.2), (D.3) are standard for analyzing SMM and identical to that of [11]. (D.5) is convention for stochastic methods. Assumption (D.4) mimics (C.3); see Remark 3.3 for discussions. We now derive the convergence of SMM below by setting ⌘ renv✓ and µk ⌘ ↵k. Our derivation is based on the following two estimates, in which the proof of Lemma 3.9 is given in Appendix F.2. Lemma 3.8 (Theorem 4.3 of [11]). Let ✓ 2 (0, (⌧ + ⌘) 1) and ↵k < ✓ be given. Then, we have E[env✓ (xk+1) | Fk] env✓ (xk) (1 [⌧ + ⌘]✓)↵k 2(1 ⌘↵k) krenv✓ (x k)k2 + 2L2↵2k (1 ⌘↵k)(✓ ↵k) . Lemma 3.9. For all k with ↵k 1/(2⌘), it holds that E[kxk+1 xkk2 | Fk] (16(L+ L')2 + 8L2)↵2k. Phase I: Verifying (P.1)–(P.2). As before, [21, Corollary 3.4] implies that the mapping renv✓ is Lipschitz continuous for all ✓ 2 (0, (⌧ + ⌘) 1) Hence, condition (P.1) is satisfied. Using ↵k ! 0, we can apply Theorem B.1 to the recursion obtained in Lemma 3.8 for all k sufficiently large and it follows P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. Thus, condition (P.2) holds with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Taking total expectation in Lemma 3.9 verifies condition (P.3) with q = 2, A = (16(L+ L')2 +8L2), p1 = 2, B = 0. Moreover, condition (P.4) is true by assumption (D.5) and the previous parameters choices. Thus, applying Theorem 2.1 gives E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. As in (4), we can set Ak = ↵ 1 k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk]. Applying Lemma 3.9 and utilizing Jensen’s inequality, we have E[Ak | Fk] = 0, E[kAkk2] (4/↵2k)E[kxk+1 xkk2] 4(16(L + L')2 + 8L2) and kBkk2 16(L + L')2 + 8L2. Thus, condition (P.30) is satisfied with p1 = p2 = 1, q = 2. Assumption (D.5), together with the previous parameter choices verifies condition (P.40) and hence, applying Theorem 2.1 yields krenv✓ (xk)k ! 0 almost surely. Summarizing this discussion, we obtain the following convergence results for SMM. Corollary 3.10. We consider the family of stochastic model-based methods (14) for the optimization problem (13) under assumptions (D.1)–(D.5). Let {xk}k 0 be a generated sequence. Then, we have limk!1 E[krenv✓ (xk)k] = 0 and limk!1 krenv✓ (xk)k = 0 almost surely. Remark 3.11. The results presented in Corollary 3.10 also hold under certain extended settings. In fact, we can replace (D.3) by a slightly more general Lipschitz continuity assumption on f . Moreover, it is possible to establish convergence in the case where f is not Lipschitz continuous but has Lipschitz continuous gradient, which is particularly useful when we apply stochastic proximal point method for smooth f . A more detailed derivation and discussion of such extensions is deferred to Appendix F.3. 3.5 Related work and discussion SGD and RR. The literature for SGD is extremely rich and several connected and recent works have been discussed in Section 1. 
Our result in Corollary 3.1 unifies many of the existing convergence analyses of SGD and is based on the general ABC condition (A.3) (see [23, 24, 19] for comparison) rather than on the standard bounded variance assumption. Our expected convergence result generalizes the one in [6] using much weaker assumptions. Our results for RR are in line with the recent theoretical observations in [30, 32, 25]. In particular, Corollary 3.2 recovers the almost sure convergence result shown in [25], while the expected convergence result appears to be new. Prox-SGD and SMM. The work [11] established one of the first complexity results for prox-SGD using the Moreau envelope. Under a bounded variance assumption (C = 0 in condition (C.4)) and for general nonconvex and smooth f , the authors showed E[krenv✓ (xk̄)k2] = O((T + 1) 1/2), where xk̄ is sampled uniformly from the past T + 1 iterates x0, . . . ,xT . As mentioned, this result cannot be easily extended to the asymptotic convergence results discussed in this paper. Earlier studies of prox-SGD for nonconvex f and C = 0 include [18] where convergence of prox-SGD is established if the variance parameter D = Dk ! 0 vanishes as k ! 1. This can be achieved by progressively increasing the size of the selected mini-batches or via variance reduction techniques as in prox-SVRG and prox-SAGA, see [35]. The question whether prox-SGD can converge and whether the accumulation points of the iterates {xk}k 0 correspond to stationary points was only addressed recently in [27]. The authors use a differential inclusion approach to study convergence of prox-SGD. However, additional compact constraints x 2 X have to be introduced in the model (8) to guarantee sure boundedness of {xk}k 0 and applicability of the differential inclusion techniques. Lipschitz continuity of ' also appears as an essential requirement in [27, Theorem 5.4]. The analyses in [14, 12] establish asymptotic convergence guarantees for SMM. However, both works require a priori (almost) sure boundedness of {xk}k 0 and a density / Sard-type condition in order to show convergence. We refer to [16] for an extension of the results in [27, 12] to prox-SGD in Hilbert spaces. By contrast, our convergence framework allows to complement these differential inclusion-based results and — for the first time — fully removes any stringent boundedness assumption on {xk}k 0. Instead, our analysis relies on more transparent assumptions that are verifiable and common in stochastic optimization and machine learning. In summary, we are now able to claim: prox-SGD and SMM converge under standard stochastic conditions if ' is Lipschitz continuous. In the easier convex case, analogous results have been obtained, e.g., in [18, 1, 40]. We provide an overview of several related and representative results in Table 1 in Appendix G. 4 Conclusion In this work, we provided a novel convergence framework that allows to derive expected and almost sure convergence results for a vast class of stochastic optimization methods under state-of-the-art assumptions and in a unified way. We specified the steps on how to utilize our theorem in order to establish convergence results for a given stochastic algorithm. As concrete examples, we applied our theorem to derive asymptotic convergence guarantees for SGD, RR, prox-SGD, and SMM. To our surprise, some of the obtained results appear to be new and provide new insights into the convergence behavior of some well-known and standard stochastic methodologies. 
These applications revealed that our unified theorem can serve as a plugin-type tool with the potential to facilitate the convergence analysis of a wide class of stochastic optimization methods. Finally, it is important to investigate in which situations our convergence results in terms of the stationarity measure can be strengthened — say to almost sure convergence guarantees for the iterates {xk}k 0. We plan to consider such a possible extension in future work. Acknowledgments and Disclosure of Funding The authors would like to thank the Area Chair and anonymous reviewers for their detailed and constructive comments, which have helped greatly to improve the quality and presentation of the manuscript. In addition, we would like to thank Michael Ulbrich for valuable feedback and comments on an earlier version of this work. X. Li was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12201534 and 72150002, by the Shenzhen Science and Technology Program under Grant No. RCBS20210609103708017, and by the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) under Grant No. AC01202101108. A. Milzarek was partly supported by the National Natural Science Foundation of China (NSFC) – Foreign Young Scholar Research Fund Project (RFIS) under Grant No. 12150410304 and by the Fundamental Research Fund – Shenzhen Research Institute of Big Data (SRIBD) Startup Fund JCYJ-AM20190601.
1. What is the focus and contribution of the paper on stochastic optimization methods? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works? 4. Do you have any concerns or questions about the paper's content, such as typos, assumptions, or minor issues? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This work provides a unified theorem for analyzing several stochastic optimization methods. Both expected and almost sure convergence are derived. For applications, the authors recover the convergence results for stochastic gradient descent (SGD) and random reshuffling (RR). In addition, this paper also obtains the convergence result for the stochastic proximal gradient method (prox-SGD) by using the proposed framework.
Strengths And Weaknesses
This paper is well written and easy to follow in general. The structure of the paper is good and provides some new insights into the convergence results for the stochastic optimization methods. The core Theorem 2.1 is very important and its proof makes sense to me. Furthermore, prox-SGD is analyzed via this framework to obtain some convergence results.
The main weakness is that the proposed method only provides an asymptotic convergence result, which is less informative compared to finite-time bounds. In addition, function value convergence might also be of interest.
Questions
1. Is there a typo in assumption (B.2)? Why do you require the sum of alpha^3 to be finite?
2. The description of random reshuffling is rather rough; I would suggest the authors provide more details in the appendix for this method.
3. Some minor typos: in line 54, "the hessian and gradient"; in line 350, "analyses".
EDIT: after the authors' response, I increased my score to 7.
Limitations
Yes.
NIPS
Title A Unified Convergence Theorem for Stochastic Optimization Methods Abstract In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires to verify several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods. 1 Introduction Stochastic optimization methods are widely used to solve stochastic optimization problems and empirical risk minimization, serving as one of the foundations of machine learning. Among the many different stochastic methods, the most classic one is the stochastic gradient method (SGD), which dates back to Robbins and Monro [36]. If the problem at hand has a finite-sum structure, then another popular stochastic method is random reshuffling (RR) [20]. When the objective function has a composite form or is weakly convex (nonsmooth and nonconvex), then the stochastic proximal gradient method (prox-SGD) and stochastic model-based algorithms are the most typical approaches [18, 11]. Apart from the mentioned stochastic methods, there are many others like SGD with momentum, Adam, stochastic higher order methods, etc. In this work, our goal is to establish and understand fundamental convergence properties of these stochastic optimization methods via a novel unified convergence framework. Motivations. Suppose we apply SGD to minimize a smooth nonconvex function f . SGD generates a sequence of iterates {xk}k 0, which is a stochastic process due to the randomness of the algorithm and the utilized stochastic oracles. The most commonly seen ‘convergence result’ for SGD is the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). expected iteration complexity, which typically takes the form [17] min k=0,...,T E[krf(xk)k2] O ✓ 1 p T + 1 ◆ or E[krf(xk̄)k2] O ✓ 1 p T + 1 ◆ , (1) where T denotes the total number of iterations and k̄ is an index sampled uniformly at random from {0, . . . , T}. Note that we ignored some higher-order convergence terms and constants to ease the presentation. Complexity results are integral to understand core properties and progress of the algorithm during the first T iterations, while the asymptotic convergence behavior plays an equally important role as it characterizes whether an algorithm can eventually approach an exact stationary point or not. We refer to Appendix H for additional motivational background for studying asymptotic convergence properties of stochastic optimization methods. Here, an expected convergence result, associated with the nonconvex minimization problem minx f(x), has the form lim k!1 E[krf(xk)k] = 0. (2) Intuitively, it should be possible to derive expected convergence from the expected iteration complexity (1) by letting T ! 1. However, this is not the case as the ‘min’ operator and the sampled k̄ are not well defined or become meaningless when T goes to 1. 
The above results are stated in expectation and describe the behavior of the algorithm by averaging infinitely many runs. Though this is an important convergence measure, in practical situations the algorithm is often only run once and the last iterate is returned as a solution. This observation motivates and necessitates almost sure convergence results, which establish convergence with probability 1 for a single run of the stochastic method: lim k!1 krf(xk)k = 0 almost surely. (3) Backgrounds. Expected and almost sure convergence results have been extensively studied for convex optimization; see, e.g., [10, 34, 42, 46, 5, 41]. Almost sure convergence of SGD for minimizing a smooth nonconvex function f was provided in the seminal work [3] using very standard assumptions, i.e., Lipschitz continuous rf and bounded variance. Under the same conditions, the same almost sure convergence of SGD was established in [33] based on a much simpler argument than that of [3]. A weaker ‘lim inf’-type almost sure convergence result for SGD with AdaGrad step sizes was shown in [26]. Recently, the work [28] derives almost sure convergence of SGD under the assumptions that f and rf are Lipschitz continuous, f is coercive, f is not asymptotically flat, and the -th moment of the stochastic error is bounded with 2. This result relies on stronger assumptions than the base results in [3]. Nonetheless, it allows more aggressive diminishing step sizes if > 2. Apart from standard SGD, almost sure convergence of different respective variants for min-max problems was discussed in [22]. In terms of expected convergence, the work [6] showed limk!1 E[krf(xk)k] = 0 under the additional assumptions that f is twice continuously differentiable and the multiplication of the Hessian and gradient r2f(x)rf(x) is Lipschitz continuous. Though the convergence of SGD is well-understood and a classical topic, asymptotic convergence results of the type (2) and (3) often require a careful and separate analysis for other stochastic optimization methods — especially when the objective function is simultaneously nonsmooth and nonconvex. In fact and as outlined, a direct transition from the more common complexity results (1) to the full convergence results (2) and (3) is often not possible without further investigation. Main contributions. We provide a fundamental unified convergence theorem (see Theorem 2.1) for deriving both expected and almost sure convergence of stochastic optimization methods. Our theorem is not tailored to any specific algorithm, instead it incorporates several abstract conditions that suit a vast and general class of problem structures and algorithms. The proof of this theorem is elementary. We then apply our novel theoretical framework to several classical stochastic optimization methods to recover existing and to establish new convergence results. Specifically, we recover expected and almost sure convergence results for SGD and RR. Though these results are largely known in the literature, we derive unified and slightly stronger results under a general ABC condition [24, 23] rather than the standard bounded variance assumption. We also remove the stringent assumption used in [6] to show (2) for SGD. As a core application of our framework, we derive expected and almost sure convergence results for prox-SGD in the nonconvex setting and under the more general ABC condition and for stochastic model-based methods under very standard assumptions. 
In particular, we show that the iterates {xk}k 0 generated by prox-SGD and other stochastic model-based methods will approach the set of stationary points almost surely and in an expectation sense. These results are new to our knowledge (see also Subsection 3.5 for further discussion). The above applications illustrate the general plugin-type purpose of our unified convergence analysis framework. Based on the given recursion and certain properties of the algorithmic update, we can derive broad convergence results by utilizing our theorem, which can significantly simplify the convergence analysis of stochastic optimization methods; see Subsection 2.1 for a summary. 2 A unified convergence theorem Throughout this work, let (⌦,F , {Fk}k 0,P) be a filtered probability space and let us assume that the sequence of iterates {xk}k 0 is adapted to the filtration {Fk}k 0, i.e., each of the random vectors xk : ⌦ ! Rn is Fk-measurable. In this section, we present a unified convergence theorem for the sequence {xk}k 0 based on an abstract convergence measure . To make the abstract convergence theorem more accessible, the readers may momentarily regard and {µk}k 0 as rf and the sequence related to the step sizes, respectively. We then present the main steps for showing the convergence of a stochastic optimization method by following a step-by-step verification of the conditions in our unified convergence theorem. Theorem 2.1. Let the mapping : Rn ! Rm and the sequences {xk}k 0 ✓ Rn and {µk}k 0 ✓ R++ be given. Consider the following conditions: (P.1) The function is L -Lipschitz continuous for some L > 0, i.e., we have k (x) (y)k L kx yk for all x,y 2 Rn. (P.2) There exists a constant a > 0 such that P1 k=0 µk E[k (xk)ka] < 1. The following statements are valid: (i) Let the conditions (P.1)–(P.2) be satisfied and suppose further that (P.3) There exist constants A,B, b 0 and p1, p2, q > 0 such that E[kxk+1 xkkq] Aµp1k + Bµ p2 k E[k (x k)kb]. (P.4) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy {µk}k 0 is bounded, X1 k=0 µk = 1, and a, q 1, a b, p1, p2 q. Then, it holds that limk!1 E[k (xk)k] = 0. (ii) Let the properties (P.1)–(P.2) hold and assume further that (P.30) There exist constants A, b 0, p1, p2, q > 0 and random vectors Ak,Bk : ⌦ ! Rn such that xk+1 = xk + µp1k Ak + µ p2 k Bk and for all k, Ak,Bk are Fk+1-measurable and we have E[Ak | Fk] = 0 almost surely, E[kAkkq] A, and lim supk!1 kBkkq/(1 + k (xk)kb) < 1 almost surely. (P.40) The sequence {µk}k 0 and the parameters a, b, q, p1, p2 satisfy µk ! 0, X1 k=0 µk = 1, X1 k=0 µ2p1k < 1, and q 2, qa b, p1 > 1 2 , p2 1. Then, it holds that limk!1 k (xk)k = 0 almost surely. The proof of Theorem 2.1 is elementary. We provide the core ideas here and defer its proof to Appendix A. Item (i) is proved by contradiction. An easy first result is lim infk!1 E[k (xk)ka] = 0. We proceed and assume that {E[k (xk)k]}k 0 does not converge to zero. Then, for some > 0, we can construct two subsequences {`t}t 0 and {ut}t 0 such that `t < ut and E[k (x`t)k] 2 , E[k (xut)ka] a, and E[k (xk)ka] > a for all `t < k < ut. Based on this construction, the conditions in the theorem, and a set of inequalities, we will eventually reach a contradiction. We notice that the Lipschitz continuity of plays a prominent role when establishing this contradiction. Our overall proof strategy is inspired by the analysis of classical trust region-type methods, see, e.g., [9, Theorem 6.4.6]. 
Let us also mention that a different strategy for the fully deterministic setting and scalar case : Rn ! R was provided in [8]. For item (ii), we first control the stochastic behavior of the error terms Ak by martingale convergence theory. We can then conduct sample-based arguments to derive the final result, which is essentially deterministic and hence, follows similar arguments to that of item (i). The major application areas of our unified convergence framework comprise stochastic optimization methods that have non-vanishing stochastic errors or that utilize diminishing step sizes. In the next subsection, we state the main steps for showing convergence of stochastic optimization methods. This also clarifies the abstract conditions listed in the theorem. 2.1 The steps for showing convergence of stochastic optimization methods In order to apply the unified convergence theorem, we have to verify the conditions stated in the theorem, resulting in three main phases below. Phase I: Verifying (P.1)–(P.2). Conditions (P.1)–(P.2) are used for both the expected and the almost sure convergence results. Condition (P.1) is a problem property and is very standard. We present the final convergence results in terms of the abstract measure . This measure can be regarded as f f⇤ in convex optimization, rf in smooth nonconvex optimization, the gradient of the Moreau envelope in weakly convex optimization, etc. In all the situations, assuming Lipschitz continuity of the convergence measure is standard and is arguably a minimal assumption in order to obtain iteration complexity and/or convergence results. Condition (P.2) is typically a result of the algorithmic property or complexity analysis. To verify this condition, one first establishes the recursion of the stochastic method, which almost always has the form E[yk+1 | Fk] (1 + k)yk µkk (xk)ka + ⇣k. Here, yk is a suitable Lyapunov function measuring the (approximate) descent property of the stochastic method, ⇣k represents the error term satisfying P1 k=0 ⇣k < 1, k is often related to the step sizes and satisfies P1 k=0 k < 1. Then, applying the supermartingale convergence theorem (see Theorem B.1), we obtain P1 k=0 µk E[k (xk)ka] < 1, i.e., condition (P.2). Since condition (P.2) is typically a consequence of the underlying algorithmic recursion, one can also derive the standard finite-time complexity bound (1) in terms of the measure E[k (xk)ka] based on it. Hence, non-asymptotic complexity results are also included implicitly in our framework as a special case. To be more specific, (P.2) implies PT k=0 µkE[k (xk)ka] M for some constant M > 0 and some total number of iterations T . This then yields min0kT E[k (xk)ka] M/ PT k=0 µk. Note that the sequence {µk}k 0 is often related to the step sizes. Thus, choosing the step sizes properly results in the standard finite-time complexity result. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. Condition (P.3) requires an upper bound on the step length of the update in terms of expectation, including upper bounds for the search direction and the stochastic error of the algorithm. It is often related to certain bounded variance-type assumptions for analyzing stochastic methods. For instance, (P.3) is satisfied under the standard bounded variance assumption for SGD, the more general ABC assumption for SGD, the bounded stochastic subgradients assumption, etc. Condition (P.4) is a standard diminishing step sizes condition used in stochastic optimization. 
Then, one can apply item (i) of Theorem 2.1 to obtain E[k (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. Condition (P.30) is parallel to (P.3). It decomposes the update into a martingale term Ak and a bounded error term Bk. We will see later that this condition holds true for many stochastic methods. Though this condition requires the update to have a certain decomposable form, it indeed can be verified by bounding the step length of the update in conditional expectation, which is similar to (P.3). Hence, (P.30) can be interpreted as a conditional version of (P.3). To see this, we can construct xk+1 = xk + µk · 1 µk xk+1 xk E[xk+1 xk | Fk] Ak +µk · 1 µk E[xk+1 xk | Fk] Bk . (4) By Jensen’s inequality, we then have E[Ak | Fk] = 0, E[kAkkq] 2qµ qk · E[kx k+1 xkkq], and kBkkq µ qk · E[kx k+1 xkkq | Fk]. Thus, once it is possible to derive E[kxk+1 xkkq | Fk] = O(µqk) in an almost sure sense, condition (P.30) is verified with p1 = p2 = 1. Condition (P.40) is parallel to (P.4) and is standard in stochastic optimization. Application of item (ii) of Theorem 2.1 then yields k (xk)k ! 0 almost surely. In the next section, we will illustrate how to show convergence for a set of classic stochastic methods by following the above three steps. 3 Applications to stochastic optimization methods 3.1 Convergence results of SGD We consider the standard SGD method for solving the smooth optimization problem minx2Rn f(x), where the iteration of SGD is given by xk+1 = xk ↵kg k. (5) Here, gk denotes a stochastic approximation of the gradient rf(xk). We assume that each stochastic gradient gk is Fk+1-measurable and that the generated stochastic process {xk}k 0 is adapted to the filtration {Fk}k 0. We consider the following standard assumptions: (A.1) The mapping rf : Rn ! Rn is Lipschitz continuous on Rn with modulus L > 0. (A.2) The objective function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn. (A.3) Each oracle gk defines an unbiased estimator of rf(xk), i.e., it holds that E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (A.4) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. We now derive the convergence of SGD below by setting ⌘ rf and µk ⌘ ↵k. Phase I: Verifying (P.1)–(P.2). (A.1) verifies condition (P.1) with L ⌘ L. We now check (P.2). Using (A.2), (A.3), and a standard analysis for SGD gives the following recursion (see Appendix C.1 for the full derivation): E[f(xk+1) f̄ | Fk] ✓ 1 + LC↵2k 2 ◆ [f(xk) f̄ ] ↵k ✓ 1 L↵k 2 ◆ krf(xk)k2 + LD↵2k 2 . (6) Taking total expectation, using (A.4), and applying the supermartingale convergence theorem (Theorem B.1) gives P1 k=0 ↵kE[krf(xk)k2] < 1. Furthermore, the sequence {E[f(xk)]}k 0 converges to some finite value. This verifies (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing expected convergence. For (P.3), we have by (5) and (A.3) that E[kxk+1 xkk2] ↵2kE[krf(xk)k2] + C↵2kE[f(xk) f̄ ] + D↵2k. Due to the convergence of {E[f(xk)]}k 0, there exists F such that E[f(xk) f̄ ] F for all k. Thus, condition (P.3) holds with q = 2, A = CF+ D, p1 = 2, B = 1, p2 = 2, and b = 2. Condition (P.4) is verified by (A.4) and the previous parameters choices. Therefore, we can apply Theorem 2.1 to deduce E[krf(xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. For (P.30), it follows from the update (5) that xk+1 = xk ↵k(g k rf(xk)) ↵krf(x k). 
We have $p_1 = 1$, $A^k = g^k - \nabla f(x^k)$, $p_2 = 1$, and $B^k = -\nabla f(x^k)$. Using (A.2), (A.3), $\mathbb{E}[f(x^k) - \bar f] \le F$, and choosing any $q = b > 0$ establishes (P.3'). As before, condition (P.4') follows from (A.4) and the previous parameters choices. Applying Theorem 2.1 yields $\|\nabla f(x^k)\| \to 0$ almost surely. Finally, we summarize the above results in the following corollary.

Corollary 3.1. Let us consider SGD (5) for smooth nonconvex optimization problems under (A.1)–(A.4). Then, we have $\lim_{k \to \infty} \mathbb{E}[\|\nabla f(x^k)\|] = 0$ and $\lim_{k \to \infty} \|\nabla f(x^k)\| = 0$ almost surely.

3.2 Convergence results of random reshuffling

We now consider random reshuffling (RR) applied to problems with a finite sum structure
$$\min_{x \in \mathbb{R}^n} f(x) := \frac{1}{N} \sum_{i=1}^{N} f(x, i),$$
where each component function $f(\cdot, i) : \mathbb{R}^n \to \mathbb{R}$ is supposed to be smooth. At iteration $k$, RR first generates a random permutation $\sigma^{k+1}$ of the index set $\{1, \dots, N\}$. It then updates $x^k$ to $x^{k+1}$ through $N$ consecutive gradient descent-type steps by accessing and using the component gradients $\{\nabla f(\cdot, \sigma_1^{k+1}), \dots, \nabla f(\cdot, \sigma_N^{k+1})\}$ sequentially. Specifically, one update-loop (epoch) of RR is given by
$$\tilde x_0^k = x^k, \quad \tilde x_i^k = \tilde x_{i-1}^k - \alpha_k \nabla f(\tilde x_{i-1}^k, \sigma_i^{k+1}), \quad i = 1, \dots, N, \quad x^{k+1} = \tilde x_N^k. \quad (7)$$
After one such loop, the step size $\alpha_k$ and the permutation $\sigma^{k+1}$ is updated accordingly; cf. [20, 30, 32]. We make the following standard assumptions:

(B.1) For all $i \in \{1, \dots, N\}$, $f(\cdot, i)$ is bounded from below by some $\bar f$ and the gradient $\nabla f(\cdot, i)$ is Lipschitz continuous on $\mathbb{R}^n$ with modulus $L > 0$.
(B.2) The step sizes $\{\alpha_k\}_{k \ge 0}$ satisfy $\sum_{k=0}^{\infty} \alpha_k = \infty$ and $\sum_{k=0}^{\infty} \alpha_k^3 < \infty$.

A detailed derivation of the steps shown in Subsection 2.1 for RR is deferred to Appendix D.2. Based on the discussion in Appendix D.2 and on Theorem 2.1, we obtain the following results for RR.

Corollary 3.2. We consider RR (7) for smooth nonconvex optimization problems under (B.1)–(B.2). Then it holds that $\lim_{k \to \infty} \mathbb{E}[\|\nabla f(x^k)\|] = 0$ and $\lim_{k \to \infty} \|\nabla f(x^k)\| = 0$ almost surely.

3.3 Convergence of the proximal stochastic gradient method

We consider the composite-type optimization problem
$$\min_{x \in \mathbb{R}^n} \psi(x) := f(x) + \varphi(x) \quad (8)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and $\varphi : \mathbb{R}^n \to (-\infty, \infty]$ is $\tau$-weakly convex (see Appendix E.1), proper, and lower semicontinuous. In this section, we want to apply our unified framework to study the convergence behavior of the well-known proximal stochastic gradient method (prox-SGD):
$$x^{k+1} = \mathrm{prox}_{\alpha_k \varphi}(x^k - \alpha_k g^k), \quad (9)$$
where $g^k \approx \nabla f(x^k)$ is a stochastic approximation of $\nabla f(x^k)$, $\{\alpha_k\}_{k \ge 0} \subseteq \mathbb{R}_+$ is a suitable step size sequence, and $\mathrm{prox}_{\alpha_k \varphi} : \mathbb{R}^n \to \mathbb{R}^n$, $\mathrm{prox}_{\alpha_k \varphi}(x) := \mathrm{argmin}_{y \in \mathbb{R}^n} \varphi(y) + \frac{1}{2\alpha_k}\|x - y\|^2$ is the well-known proximity operator of $\varphi$.

3.3.1 Assumptions and preparations

We first recall several useful concepts from nonsmooth and variational analysis. For a function $h : \mathbb{R}^n \to (-\infty, \infty]$, the Fréchet (or regular) subdifferential of $h$ at the point $x$ is given by $\partial h(x) := \{g \in \mathbb{R}^n : h(y) \ge h(x) + \langle g, y - x\rangle + o(\|y - x\|) \text{ as } y \to x\}$, see, e.g., [39, Chapter 8]. If $h$ is convex, then the Fréchet subdifferential coincides with the standard (convex) subdifferential. It is well-known that the associated first-order optimality condition for the composite problem (8) — $0 \in \partial \psi(x) = \nabla f(x) + \partial \varphi(x)$ — can be represented as a nonsmooth equation, [39, 21],
$$F^{\alpha}_{\mathrm{nat}}(x) := x - \mathrm{prox}_{\alpha \varphi}(x - \alpha \nabla f(x)) = 0, \quad \alpha \in (0, \tau^{-1}),$$
where $F^{\alpha}_{\mathrm{nat}}$ denotes the so-called natural residual. The natural residual $F^{\alpha}_{\mathrm{nat}}$ is a common stationarity measure for the nonsmooth problem (8) and widely used in the analysis of proximal methods.
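As a concrete illustration of these objects, the following minimal sketch (our own assumptions throughout: we pick $\varphi = \lambda \|\cdot\|_1$, whose proximity operator is componentwise soft-thresholding, and a toy least-squares $f$; neither choice is prescribed by the paper) runs prox-SGD (9) with diminishing step sizes and evaluates the natural residual as a stationarity measure.

```python
# Minimal prox-SGD sketch for psi(x) = f(x) + lam * ||x||_1 (illustrative choices only).
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 200, 50, 0.1
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

def grad_f(x):
    # f(x) = 1/(2m) * ||Ax - b||^2
    return A.T @ (A @ x - b) / m

def stochastic_grad(x, batch=10):
    idx = rng.integers(0, m, size=batch)   # mini-batch gradient oracle
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

def prox_l1(x, t):
    # prox_{t * lam * ||.||_1}(x): componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def natural_residual(x, alpha):
    # F_nat^alpha(x) = x - prox_{alpha * phi}(x - alpha * grad f(x)); zero at stationary points
    return x - prox_l1(x - alpha * grad_f(x), alpha)

x = np.zeros(n)
for k in range(5000):
    alpha_k = 1.0 / (k + 1) ** 0.7                              # diminishing step sizes
    x = prox_l1(x - alpha_k * stochastic_grad(x), alpha_k)      # prox-SGD update (9)

print("||F_nat(x)|| after 5000 iterations:", np.linalg.norm(natural_residual(x, 1.0)))
```

Under the assumptions stated next, Corollary 3.6 below guarantees that this residual vanishes along the iterates both in expectation and almost surely.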
We will make the following assumptions on f , ', and the stochastic oracles {gk}k 0: (C.1) The function f is bounded from below on Rn, i.e., there is f̄ such that f(x) f̄ for all x 2 Rn, and the gradient mapping rf is Lipschitz continuous (on Rn) with modulus L > 0. (C.2) The function ' is ⌧ -weakly convex, proper, lower semicontinuous, and bounded from below on dom', i.e., we have '(x) '̄ for all x 2 dom'. (C.3) There exists L' > 0 such that '(x) '(y) L'kx yk for all x,y 2 dom'. (C.4) Each gk defines an unbiased estimator of rf(xk), i.e., we have E[gk | Fk] = rf(xk) almost surely, and there exist C,D 0 such that E[kgk rf(xk)k2 | Fk] C[f(xk) f̄ ] + D almost surely 8 k 2 N. (C.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Here, we again assume that the generated stochastic processes {xk}k 0 is adapted to the filtration {Fk}k 0. The assumptions (C.1), (C.2), (C.4), and (C.5) are fairly standard and broadly applicable. In particular, (C.1), (C.4), and (C.5) coincide with the conditions (A.1)–(A.4) used in the analysis of SGD. We continue with several remarks concerning condition (C.3). Remark 3.3. Assumption (C.3) requires the mapping ' to be Lipschitz continuous on its effective domain dom'. This condition holds in many important applications, e.g., when ' is chosen as a norm or indicator function. Nonconvex examples satisfying (C.2) and (C.3) include, e.g., the minimax concave penalty (MCP) function [45], the smoothly clipped absolute deviation (SCAD) [15], or the student-t loss function. We refer to [4] and Appendix E.2 for further discussion. 3.3.2 Convergence results of prox-SGD We now analyze the convergence of the random process {xk}k 0 generated by the stochastic algorithmic scheme (9). As pioneered in [11], we will use the Moreau envelope env✓ , env✓ : Rn ! R, env✓ (x) := miny2Rn (y) + 1 2✓ kx yk2, (10) as a smooth Lyapunov function to study the descent properties and convergence of prox-SGD. We first note that the conditions (C.1) and (C.2) imply ✓ 1-weak convexity of for every ✓ 2 (0, (L+ ⌧) 1]. In this case, the Moreau envelope env✓ is a well-defined and continuously differentiable function with gradient renv✓ (x) = 1✓ (x prox✓ (x)); see, e.g., [38, Theorem 31.5]. As shown in [13, 11], the norm of the Moreau envelope — krenv✓ (x)k — defines an alternative stationarity measure for problem (8) that is equivalent to the natural residual if ✓ is chosen sufficiently small. A more explicit derivation of this connection is provided in Lemma E.1. Next, we establish convergence of prox-SGD by setting ⌘ renv✓ and µk ⌘ ↵k. Our analysis is based on the following two estimates which are verified in Appendix E.4 and Appendix E.5. Lemma 3.4. Let {xk}k 0 be generated by prox-SGD and let the assumptions (C.1)–(C.4) be satisfied. Then, for ✓ 2 (0, [3L+ ⌧ ] 1) and all k with ↵k min{ 12⌧ , 1 2(✓ 1 [L+⌧ ])}, it holds that E[env✓ (xk+1) ̄ | Fk] (1 + 4C✓ 1↵2k) · [env✓ (xk) ̄] L✓↵kkrenv✓ (x k)k2 + 2↵2k(CL 2 ' + D✓ 1), (11) almost surely, where ̄ := f̄ + '̄. Lemma 3.5. Let {xk}k 0 be generated by prox-SGD and suppose that the assumptions (C.1)–(C.4) hold. Then, for ✓ 2 (0, [ 43L+ ⌧ ] 1) and all k with ↵k 12⌧ , we have almost surely E[kxk+1 xkk2 | Fk] 8(2L+ C)↵2k · [env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k. (12) Phase I: Verifying (P.1)–(P.2). In [21, Corollary 3.4], it is shown that the gradient of the Moreau envelope is Lipschitz continuous with modulus Le := max{✓ 1, (1 [L + ⌧ ]✓) 1[L+ ⌧ ]} for all ✓ 2 (0, [L+ ⌧ ] 1). Thus, condition (P.1) is satisfied. Furthermore, due to ↵k ! 
0 and choosing ✓ 2 (0, [3L + ⌧ ] 1), the estimate (11) in Lemma 3.4 holds for all k sufficiently large. Consequently, due to env✓ (x) (prox✓ (x)) ̄ and (C.5), Theorem B.1 is applicable and upon taking total expectation, {E[env✓ (xk)]}k 0 converges to some E 2 R. In addition, the sequence {env✓ (xk)}k 0 converges almost surely to some random variable e? and we have P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. This verifies condition (P.2) with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Assumptions (C.1)–(C.5) and Lemma 3.5 allow us to establish the required bound stated in (P.3). Specifically, taking total expectation in (12), we have E[kxk+1 xkk2] 8(2L+ C)↵2k · E[env✓ (xk) ̄] + 4(((2L+ C)✓ + 1)L2' + D)↵2k for all k sufficiently large. Due to E[env✓ (xk)] ! E, there exists F such that E[env✓ (xk) ̄] F for all k. Hence, (P.3) holds with q = 2, A = 8(2L+C)F+4(((2L+C)✓+1)L2' +D), p1 = 2, and B = 0. The property (P.4) easily follows from (C.5) and the parameter choices. Consequently, using Theorem 2.1, we can infer E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. We follow the construction in (4) and set Ak = ↵ 1k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk], and p1, p2 = 1. Clearly, we have E[Ak | Fk] = 0 and based on the previous results in Phase II, we can show E[kxk+1 xkk2] = O(↵2k) which establishes boundedness of {E[kAkk2}k 0. Similarly, for Bk and by Lemma 3.5 and Jensen’s inequality, we obtain kBkk 2 ↵ 2k E[kx k+1 xkk2 | Fk] 8(2L+ C) · [env✓ (x k) ̄] +O(1). Due to env✓ (xk) ! e? almost surely, this shows lim supk!1 kBkk2 < 1 almost surely. Hence, all requirements in (P.30) are satisfied with q = 2 and b = 0. Moreover, it is easy to see that property (P.40) also holds in this case. Overall, Theorem 2.1 implies krenv✓ (xk)k ! 0 almost surely. As mentioned, it is possible to express the obtained convergence results in terms of the natural residual Fnat = F 1nat, see, e.g., Lemma E.1. We summarize our observations in the following corollary. Corollary 3.6. Let us consider prox-SGD (9) for the composite problem (8) under (C.1)–(C.5). Then, we have limk!1 E[kFnat(xk)k] = 0 and limk!1 kFnat(xk)k = 0 almost surely. Remark 3.7. As a byproduct, Lemma 3.4 also leads to an expected iteration complexity result of prox-SGD by using the ABC condition (C.4) rather than the standard bounded variance assumption. This is a nontrivial extension of [11, Corollary 3.6]. We provide a full derivation in Appendix E.6. 3.4 Convergence of stochastic model-based methods In this section, we consider the convergence of stochastic model-based methods for nonsmooth weakly convex optimization problems minx2Rn (x) := f(x) + '(x) = E⇠⇠D[f(x, ⇠)] + '(x), (13) where both f and ' are assumed to be (nonsmooth) weakly convex functions and is lower bounded, i.e., (x) ̄ for all x 2 dom'. Classical stochastic optimization methods — including proximal stochastic subgradient, stochastic proximal point, and stochastic prox-linear methods — for solving (13) are unified by the stochastic model-based methods (SMM) [14, 11]: xk+1 = argminx2Rn fxk(x, ⇠ k) + '(x) + 1 2↵k kx xkk2, (14) where fxk(x, ⇠k) is a stochastic approximation of f around xk using the sample ⇠k; see Appendix F.1 for descriptions of three major types of SMM. Setting Fk := (⇠0, . . . , ⇠k 1), it is easy to see that {xk}k 0 is adapted to {Fk}k 0. We analyze convergence of SMM under the following assumptions. 
(D.1) The stochastic approximation function fx satisfies a one-sided accuracy property, i.e., we have E⇠[fx(x, ⇠)] = f(x) for all x 2 U and E⇠[fx(y, ⇠) f(y)] ⌧ 2 kx yk2 8 x,y 2 U, where U is an open convex set containing dom'. (D.2) The function y 7! fx(y, ⇠) + '(y) is ⌘-weakly convex for all x 2 U and almost every ⇠. (D.3) There exists L > 0 such that the stochastic approximation function fx satisfies fx(x, ⇠) fx(y, ⇠) Lkx yk 8 x,y 2 U, and almost every ⇠. (D.4) The function ' is L'-Lipschitz continuous. (D.5) The step sizes {↵k}k 0 satisfy P1 k=0 ↵k = 1 and P1 k=0 ↵ 2 k < 1. Assumptions (D.1), (D.2), (D.3) are standard for analyzing SMM and identical to that of [11]. (D.5) is convention for stochastic methods. Assumption (D.4) mimics (C.3); see Remark 3.3 for discussions. We now derive the convergence of SMM below by setting ⌘ renv✓ and µk ⌘ ↵k. Our derivation is based on the following two estimates, in which the proof of Lemma 3.9 is given in Appendix F.2. Lemma 3.8 (Theorem 4.3 of [11]). Let ✓ 2 (0, (⌧ + ⌘) 1) and ↵k < ✓ be given. Then, we have E[env✓ (xk+1) | Fk] env✓ (xk) (1 [⌧ + ⌘]✓)↵k 2(1 ⌘↵k) krenv✓ (x k)k2 + 2L2↵2k (1 ⌘↵k)(✓ ↵k) . Lemma 3.9. For all k with ↵k 1/(2⌘), it holds that E[kxk+1 xkk2 | Fk] (16(L+ L')2 + 8L2)↵2k. Phase I: Verifying (P.1)–(P.2). As before, [21, Corollary 3.4] implies that the mapping renv✓ is Lipschitz continuous for all ✓ 2 (0, (⌧ + ⌘) 1) Hence, condition (P.1) is satisfied. Using ↵k ! 0, we can apply Theorem B.1 to the recursion obtained in Lemma 3.8 for all k sufficiently large and it follows P1 k=0 ↵kE[krenv✓ (xk)k2] < 1. Thus, condition (P.2) holds with a = 2. Phase II: Verifying (P.3)–(P.4) for showing convergence in expectation. Taking total expectation in Lemma 3.9 verifies condition (P.3) with q = 2, A = (16(L+ L')2 +8L2), p1 = 2, B = 0. Moreover, condition (P.4) is true by assumption (D.5) and the previous parameters choices. Thus, applying Theorem 2.1 gives E[krenv✓ (xk)k] ! 0. Phase III: Verifying (P.30)–(P.40) for showing almost sure convergence. As in (4), we can set Ak = ↵ 1 k (x k+1 xk E[xk+1 xk | Fk]), Bk = ↵ 1k E[xk+1 xk | Fk]. Applying Lemma 3.9 and utilizing Jensen’s inequality, we have E[Ak | Fk] = 0, E[kAkk2] (4/↵2k)E[kxk+1 xkk2] 4(16(L + L')2 + 8L2) and kBkk2 16(L + L')2 + 8L2. Thus, condition (P.30) is satisfied with p1 = p2 = 1, q = 2. Assumption (D.5), together with the previous parameter choices verifies condition (P.40) and hence, applying Theorem 2.1 yields krenv✓ (xk)k ! 0 almost surely. Summarizing this discussion, we obtain the following convergence results for SMM. Corollary 3.10. We consider the family of stochastic model-based methods (14) for the optimization problem (13) under assumptions (D.1)–(D.5). Let {xk}k 0 be a generated sequence. Then, we have limk!1 E[krenv✓ (xk)k] = 0 and limk!1 krenv✓ (xk)k = 0 almost surely. Remark 3.11. The results presented in Corollary 3.10 also hold under certain extended settings. In fact, we can replace (D.3) by a slightly more general Lipschitz continuity assumption on f . Moreover, it is possible to establish convergence in the case where f is not Lipschitz continuous but has Lipschitz continuous gradient, which is particularly useful when we apply stochastic proximal point method for smooth f . A more detailed derivation and discussion of such extensions is deferred to Appendix F.3. 3.5 Related work and discussion SGD and RR. The literature for SGD is extremely rich and several connected and recent works have been discussed in Section 1. 
Our result in Corollary 3.1 unifies many of the existing convergence analyses of SGD and is based on the general ABC condition (A.3) (see [23, 24, 19] for comparison) rather than on the standard bounded variance assumption. Our expected convergence result generalizes the one in [6] using much weaker assumptions. Our results for RR are in line with the recent theoretical observations in [30, 32, 25]. In particular, Corollary 3.2 recovers the almost sure convergence result shown in [25], while the expected convergence result appears to be new.

Prox-SGD and SMM. The work [11] established one of the first complexity results for prox-SGD using the Moreau envelope. Under a bounded variance assumption ($C = 0$ in condition (C.4)) and for general nonconvex and smooth $f$, the authors showed $\mathbb{E}[\|\nabla \mathrm{env}_{\theta}\psi(x^{\bar k})\|^2] = \mathcal{O}((T+1)^{-1/2})$, where $x^{\bar k}$ is sampled uniformly from the past $T+1$ iterates $x^0, \dots, x^T$. As mentioned, this result cannot be easily extended to the asymptotic convergence results discussed in this paper. Earlier studies of prox-SGD for nonconvex $f$ and $C = 0$ include [18] where convergence of prox-SGD is established if the variance parameter $D = D_k \to 0$ vanishes as $k \to \infty$. This can be achieved by progressively increasing the size of the selected mini-batches or via variance reduction techniques as in prox-SVRG and prox-SAGA, see [35]. The question whether prox-SGD can converge and whether the accumulation points of the iterates $\{x^k\}_{k \ge 0}$ correspond to stationary points was only addressed recently in [27]. The authors use a differential inclusion approach to study convergence of prox-SGD. However, additional compact constraints $x \in X$ have to be introduced in the model (8) to guarantee sure boundedness of $\{x^k\}_{k \ge 0}$ and applicability of the differential inclusion techniques. Lipschitz continuity of $\varphi$ also appears as an essential requirement in [27, Theorem 5.4]. The analyses in [14, 12] establish asymptotic convergence guarantees for SMM. However, both works require a priori (almost) sure boundedness of $\{x^k\}_{k \ge 0}$ and a density / Sard-type condition in order to show convergence. We refer to [16] for an extension of the results in [27, 12] to prox-SGD in Hilbert spaces. By contrast, our convergence framework allows to complement these differential inclusion-based results and — for the first time — fully removes any stringent boundedness assumption on $\{x^k\}_{k \ge 0}$. Instead, our analysis relies on more transparent assumptions that are verifiable and common in stochastic optimization and machine learning. In summary, we are now able to claim: prox-SGD and SMM converge under standard stochastic conditions if $\varphi$ is Lipschitz continuous. In the easier convex case, analogous results have been obtained, e.g., in [18, 1, 40]. We provide an overview of several related and representative results in Table 1 in Appendix G.

4 Conclusion

In this work, we provided a novel convergence framework that allows to derive expected and almost sure convergence results for a vast class of stochastic optimization methods under state-of-the-art assumptions and in a unified way. We specified the steps on how to utilize our theorem in order to establish convergence results for a given stochastic algorithm. As concrete examples, we applied our theorem to derive asymptotic convergence guarantees for SGD, RR, prox-SGD, and SMM. To our surprise, some of the obtained results appear to be new and provide new insights into the convergence behavior of some well-known and standard stochastic methodologies.
These applications revealed that our unified theorem can serve as a plugin-type tool with the potential to facilitate the convergence analysis of a wide class of stochastic optimization methods. Finally, it is important to investigate in which situations our convergence results in terms of the stationarity measure can be strengthened — say to almost sure convergence guarantees for the iterates $\{x^k\}_{k \ge 0}$. We plan to consider such a possible extension in future work.

Acknowledgments and Disclosure of Funding

The authors would like to thank the Area Chair and anonymous reviewers for their detailed and constructive comments, which have helped greatly to improve the quality and presentation of the manuscript. In addition, we would like to thank Michael Ulbrich for valuable feedback and comments on an earlier version of this work. X. Li was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12201534 and 72150002, by the Shenzhen Science and Technology Program under Grant No. RCBS20210609103708017, and by the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) under Grant No. AC01202101108. A. Milzarek was partly supported by the National Natural Science Foundation of China (NSFC) – Foreign Young Scholar Research Fund Project (RFIS) under Grant No. 12150410304 and by the Fundamental Research Fund – Shenzhen Research Institute of Big Data (SRIBD) Startup Fund JCYJ-AM20190601.
1. What is the main contribution of the paper regarding the convergence of SGD-like methods?
2. What are the strengths and weaknesses of the proposed framework for analyzing the convergence of SGD-like methods?
3. Do you have any suggestions for improving the proof of Theorem 2.1, particularly in simplifying the differentiation process?
4. Are there any limitations or areas for further research regarding the application of the unified convergence theorem?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors analyze the convergence of SGD-like methods such that the norm of the gradient in the last iterate converges to zero in expectation and almost surely. They provide a general framework to analyze the convergence of SGD-like methods, including SGD, Random Reshuffling, Prox-SGD, and Model-Based methods. Using this framework, they show the convergence of these methods.

Strengths And Weaknesses
i) The authors prove the main Theorem 2.1, which generalizes results from, e.g., https://parameterfree.com/2020/10/05/almost-sure-convergence-of-sgd-on-smooth-non-convex-functions/ or "Gradient convergence in gradient methods with errors", Bertsekas et al.
Strengths: The generalization can be useful and give insights into the community.
Weaknesses: Yes, it can help to analyze a broad family of methods, but I think that the paper doesn't have enough examples to be sure that Theorem 2.1 is the "Unified Convergence Theorem."
ii) As the authors noted, the last-iterate convergence of SGD and RR was analyzed before, so, qualitatively, the paper's contribution is the analysis of Prox-SGD and SMM.
Strengths: Even the fact that it is possible to prove the convergence of Prox-SGD is interesting.
Weaknesses: To prove the convergence, the authors use Assumption (C.3) that φ is L-Lipschitz. For instance, one of the most popular regularizers, φ(x) = (1/2)‖x‖₂², is not L-Lipschitz.
I think the paper is good. It provides proof that Prox-SGD and SMM converge. At the same time, I'm not sure that Theorem 2.1 can be considered the "Unified Convergence Theorem," but it can be helpful for the community.

Questions
Suggestion: In the proof of Theorem 2.1, I think that it is possible not to differentiate the cases q = 1 and q > 1. Can we in (16) use the boundedness of β_t and μ_k, and then use the same arguments as for the case q = 1? This can simplify the proof.

Limitations
NA
NIPS
Title Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models Abstract Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose Pre-trained DPM AutoEncoding (PDAE), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason that pre-trained DPMs fail to reconstruct an image from its latent variables is due to the information loss of forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing a part of network of pre-trained DPMs and redesigning the weighting scheme of diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE. Our implementation is available at https://github.com/ckczzj/PDAE. 1 Introduction Deep generative models such as variational autoencoders (VAEs) [25, 39], generative adversarial networks (GANs) [13], autoregressive models [50, 48], normalizing flows (NFs) [38, 23] and energybased models (EBMs) [9, 45] have shown remarkable capacity to synthesize striking image samples. Recently, another kind of generative models, Diffusion Probabilistic Models (DPMs) [43, 14] are further developed and becoming popular for their stable training process and state-of-the-art sample quality [8]. Although a large number of degrees of freedom in implementation, the DPMs discussed in this paper will refer exclusively to those trained by the denoising method proposed in DDPMs [14]. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs and VAEs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. Likewise, DPMs inherently ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. 
Specifically, they employ an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction by taking the encoded representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Following the paradigm of autoencoders, PDAE aims to adapt existing pre-trained DPMs to the decoders for image reconstruction and benefit from it. Generally, pre-trained DPMs cannot accurately predict the posterior mean of xt−1 from xt in the reverse process due to the information loss of forward process, which results in a gap between their predicted posterior mean and the true one. This is the reason that they fail to reconstruct an image (x0) from its latent variables (xt). From this perspective, the classifier-guided sampling method [8] can be explained as reconstructing the lost class information in samples by shifting the predicted posterior mean with an extra item computed by the gradient of a classifier to fill the gap. Drawing inspiration from this method that uses the prior knowledges (class label) to fill the gap, we aim to inversely extract the knowledges from the gap, i.e., learn representations that can help to fill the gap. In light of this, we employ a novel gradient estimator to predict the mean shift according to encoded representations and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. Furthermore, we find that the posterior mean gap in different time stages contain different levels of information, so we redesign the weighting scheme of diffusion loss to encourage the model to learn rich representations efficiently. We also reuse a part of network of pre-trained DPMs to accelerate the convergence of our model. Based on pre-trained DPMs, PDAE only needs less than half of the training time that Diff-AE costs to complete the representation learning but still outperforms Diff-AE. Moreover, PDAE also enables some other interesting features. 2 Background 2.1 Denoising Diffusion Probabilistic Models DDPMs [14] employ a forward process that starts from the data distribution q(x0) and sequentially corrupts it to N (0, I) with Markov diffusion kernels q(xt|xt−1) defined by a fixed variance schedule {βt}Tt=1. The process can be expressed by: q(xt|xt−1) = N (xt; √ 1− βtxt−1 , βtI) q(x1:T |x0) = T∏ t=1 q(xt|xt−1) , (1) where {xt}Tt=1 are latent variables of DDPMs. According to the rule of the sum of normally distributed random variables, we can directly sample xt from x0 for arbitrary t with q(xt|x0) = N (xt; √ ᾱtx0 , (1− ᾱt)I), where αt = 1− βt and ᾱt = ∏t i=1 αi. The reverse (generative) process is defined as another Markov chain parameterized by θ to describe the same but reverse process, denoising an arbitrary Gaussian noise to a clean data sample: pθ(xt−1|xt) = N (xt−1;µθ(xt, t) , Σθ(xt, t)) pθ(x0:T ) = p(xT ) T∏ t=1 pθ(xt−1|xt) , (2) where p(xT ) = N (xT ;0, I). It employs pθ(xt−1|xt) of Gaussian form because the reversal of the diffusion process has the identical functional form as the forward process when βt is small [11, 43]. The generative distribution can be represented as pθ(x0) = ∫ pθ(x0:T )dx1:T . Training is performed to maximize the model log likelihood ∫ q(x0) log pθ(x0)dx0 by minimizing the variational upper bound of the negative one. 
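As a small illustration of the closed-form marginal q(xt|x0) above, the following sketch samples xt directly from x0. It is a minimal PyTorch-style example under our own assumptions (the linear βt schedule, number of steps and tensor shapes are illustrative and not necessarily those used in the paper).

```python
# Minimal sketch of the DDPM forward-process marginal q(x_t | x_0).
# The linear beta schedule and tensor shapes are illustrative assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # fixed variance schedule {beta_t}
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # abar_t = prod_{i<=t} alpha_i

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I) in closed form."""
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)       # broadcast over (B, C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise, noise

x0 = torch.rand(8, 3, 64, 64) * 2 - 1          # a batch of images scaled to [-1, 1]
t = torch.randint(0, T, (8,))                   # one random timestep per sample
xt, eps = q_sample(x0, t)
print(xt.shape, eps.shape)
```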
The final objective is derived by some parameterization and simplication [14]: Lsimple(θ) = Ex0,t,ϵ [∥∥ϵ− ϵθ(√ᾱtx0 +√1− ᾱtϵ, t)∥∥2] , (3) where ϵθ is a function approximator to predict ϵ from xt. 2.2 Denoising Diffusion Implicit Models DDIMs [44] define a non-Markov forward process that leads to the same training objective with DDPMs, but the corresponding reverse process can be much more flexible and faster to sample from. Specifically, one can sample xt−1 from xt using the ϵθ of some pre-trained DDPMs via: xt−1 = √ ᾱt−1 ( xt − √ 1− ᾱt · ϵθ(xt, t)√ ᾱt ) + √ 1− ᾱt−1 − σ2t · ϵθ(xt, t) + σtϵt , (4) where ϵt ∼ N (0, I) and σt controls the stochasticity of forward process. The strides greater than 1 are allowed for accelerated sampling. When σt = 0, the generative process becomes deterministic, which is named as DDIMs. 2.3 Classifier-guided Sampling Method Classifier-guided sampling method [43, 46, 8] shows that one can train a classifier pϕ(y|xt) on noisy data and use its gradient ∇xt log pϕ(y|xt) to guide some pre-trained unconditional DDPM to sample towards specified class y. The conditional reverse process can be approximated by a Gaussian similar to that of the unconditional one in Eq.(2), but with a shifted mean: pθ,ϕ(xt−1|xt,y) ≈ N (xt−1;µθ(xt, t) +Σθ(xt, t) · ∇xt log pϕ(y|xt) , Σθ(xt, t)) . (5) For deterministic sampling methods like DDIMs, one can use score-based conditioning trick [46, 45] to define a new function approximator for conditional sampling: ϵ̂θ(xt, t) = ϵθ(xt, t)− √ 1− ᾱt · ∇xt log pϕ(y|xt) . (6) More generally, any similarity estimator between noisy data and conditions can be applied for guided sampling, such as noisy-CLIP guidance [33, 31]. 3 Method 3.1 Forward Process Posterior Mean Gap Generally, one will train unconditional and conditional DPMs by respectively learning pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) and pθ(xt−1|xt,y) = N (xt−1;µθ(xt,y, t),Σθ(xt,y, t)) to approximate the same forward process posterior q(xt−1|xt,x0) = N (xt−1; µ̃t(xt,x0), 1−ᾱt−11−ᾱt βtI). Here y is some condition that contains some prior knowledges of corresponding x0, such as class label. Assuming that both Σθ is set to untrained time dependent constants, under the same experimental settings, the conditional DPMs will reach a lower optimized diffusion loss. The experiment in Figure 1 can prove this fact, which means that µθ(xt,y, t) is closer to µ̃t(xt,x0) than µθ(xt, t). This implies that there exists a gap between the posterior mean predicted by the unconditional DPMs ( µθ(xt, t) ) and the true one ( µ̃t(xt,x0) ) . Essentially, the posterior mean gap is caused by the information loss of forward process so that the reverse process cannot recover it in xt−1 only according to xt. If we introduce some knowledges of x0 for DPMs, like y here, the gap will be smaller. The more information of x0 that y contains, the smaller the gap is. Moreover, according to Eq.(5), the Gaussian mean of classifier-guided conditional reverse process contains an extra shift item compared with that of the unconditional one. From the perspective of posterior mean gap, the mean shift item can partially fill the gap and help the reverse process to reconstruct the lost class information in samples. In theory, if y in Eq.(5) contains all information of x0, the mean shift will fully fill the gap and guide the reverse process to reconstruct x0. 
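To see concretely how a guidance signal shifts the predicted posterior mean, here is a minimal sketch of the shifted Gaussian mean in Eq. (5) and the modified noise predictor in Eq. (6). The modules `eps_model` and `classifier` are hypothetical stand-ins we introduce for illustration (a pre-trained DPM and a noisy classifier); they are not components released with the paper.

```python
# Sketch of the classifier-guided mean shift (Eq. 5) and the guided predictor (Eq. 6).
# `eps_model` and `classifier` are placeholder callables for illustration only.
import torch

def guided_posterior_mean(eps_model, classifier, xt, t, y, alphas, alpha_bars, betas):
    # t is an integer timestep, y a LongTensor of class labels for the batch.
    a_t, ab_t = alphas[t], alpha_bars[t]
    ab_prev = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
    eps = eps_model(xt, t)
    # Unconditional posterior mean mu_theta(x_t, t) under the epsilon-parameterization.
    mu = (xt - betas[t] / (1 - ab_t).sqrt() * eps) / a_t.sqrt()
    sigma_sq = (1 - ab_prev) / (1 - ab_t) * betas[t]           # fixed Sigma_theta(x_t, t)
    # Gradient of log p_phi(y | x_t) with respect to x_t.
    xt_ = xt.detach().requires_grad_(True)
    log_prob = classifier(xt_, t).log_softmax(dim=-1)[torch.arange(y.shape[0]), y].sum()
    grad = torch.autograd.grad(log_prob, xt_)[0]
    return mu + sigma_sq * grad, sigma_sq                      # shifted mean of Eq. (5)

def guided_eps(eps_model, grad_log_p, xt, t, alpha_bars):
    # Score-based conditioning trick of Eq. (6) for deterministic DDIM sampling.
    return eps_model(xt, t) - (1 - alpha_bars[t]).sqrt() * grad_log_p
```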
On the other hand, if we employ a model to predict mean shift according to our encoded representations z and train it to fill as much gap as possible, the encoder will be forced to learn as much information as possible from x0 to help the filling. The more the gap is filled, the more accurate the mean shift is, the more perfect the reconstruction is, and the more information of x0 that z contains. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. 3.2 Unsupervised Representation Learning by Filling the Gap Following the paradigm of autoencoders, we employ an encoder z = Eφ(x0) for learning compact and meaningful representations from input images and adapt a pre-trained unconditional DPM pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) to the decoder for image reconstruction. Specifically, we employ a gradient estimator Gψ(xt, z, t) to simulate ∇xt log p(z|xt), where p(z|xt) is some implicit classifier that we will not use explicitly, and use it to assemble a conditional DPM pθ,ψ(xt−1|xt, z) = N (xt−1;µθ(xt, t)+Σθ(xt, t) ·Gψ(xt, z, t) , Σθ(xt, t)) as the decoder. Then we train it like a regular conditional DPM by optimizing following derived objective (assuming the ϵ-prediction parameterization is adopted): L(ψ,φ) = Ex0,t,ϵ [ λt ∥∥ϵ− ϵθ(xt, t) + √αt√1− ᾱt βt ·Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) ∥∥2] , (7) where xt = √ ᾱtx0+ √ 1− ᾱtϵ and λt is a new weighting scheme that we will discuss in Section 3.4. Note that we use pre-trained DPMs so that θ are frozen during the optimization. Usually we set Σθ = 1−ᾱt−1 1−ᾱt βtI to untrained time-dependent constants. The optimization is equivalent to minimizing ∥∥Σθ(xt, t)·Gψ(xt,Eφ(x0), t)−(µ̃t(xt,x0)−µθ(xt, t))∥∥2, which forces the predicted mean shift Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) to fill the posterior mean gap µ̃t(xt,x0)− µθ(xt, t). With trained Gψ(xt, z, t), we can treat it as the score of an optimal classifier p(z|xt) and use the classifier-guided sampling method in Eq.(5) for DDPM sampling or use the modified function approximator ϵ̂θ in Eq.(6) for DDIM sampling, based on pre-trained ϵθ(xt, t). We put detailed algorithm procedures in Appendix ??. Except the semantic latent code z, we can infer a stochastic latent code xT [36] by running the deterministic generative process of DDIMs in reverse: xt+1 = √ ᾱt+1 ( xt − √ 1− ᾱt · ϵ̂θ(xt, t)√ ᾱt ) + √ 1− ᾱt+1 · ϵ̂θ(xt, t) . (8) This procedure is optional, but helpful to near-exact reconstruction and real-image manipulation for reconstructing minor details of input images when using DDIM sampling. We also train a latent DPM pω(zt−1|zt) to model the learned semantic latent space, same with that in Diff-AE [36]. With a trained latent DPM, we can sample z from it to help pre-trained DPMs to achieve faster and better unconditional sampling under the guidance of Gψ(xt, z, t). 3.3 Network Design Figure 2 shows the network and data flow of PDAE. For encoder Eφ, unlike Diff-AE that uses the encoder part of U-Net [40], we find that simply stacked convolution layers and a linear layer is enough to learn meaningful z from x0. For gradient estimator Gψ, we use U-Net similar to the function approximator ϵθ of pre-trained DPM. Considering that ϵθ also takes xt and t as input, we can further leverage the knowledges of pre-trained DPM by reusing its trained encoder part and time embedding layer, so that we only need to employ new middle blocks, decoder part and output blocks of U-Net for Gψ . 
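Putting Section 3.2 and the network design above together, the following sketch shows how the training target of Eq. (7) can be assembled from a frozen pre-trained noise predictor, the encoder and the gradient estimator. All three modules appear here as hypothetical callables of ours (`eps_pretrained`, `encoder`, `grad_estimator`), and the weighting λt is left as an input; it is discussed in Section 3.4.

```python
# Sketch of the PDAE objective in Eq. (7): the predicted mean shift
# Sigma_theta(x_t, t) * G_psi(x_t, z, t) is trained to fill the posterior mean gap.
# `eps_pretrained`, `encoder`, `grad_estimator` are placeholder modules for illustration.
import torch

def pdae_loss(eps_pretrained, encoder, grad_estimator, x0, t,
              betas, alphas, alpha_bars, lambda_t):
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise               # q(x_t | x_0)

    with torch.no_grad():                                       # pre-trained DPM is frozen
        eps = eps_pretrained(xt, t)

    z = encoder(x0)                                             # semantic latent code z
    shift = grad_estimator(xt, z, t)                            # G_psi(x_t, z, t)

    a = alphas[t].view(-1, 1, 1, 1)
    b = betas[t].view(-1, 1, 1, 1)
    ab_prev = alpha_bars[torch.clamp(t - 1, min=0)].view(-1, 1, 1, 1)
    sigma = (1 - ab_prev) / (1 - ab) * b                        # fixed Sigma_theta(x_t, t)

    # Epsilon-space form of Eq. (7): the shift term corrects the frozen noise prediction.
    pred = eps - a.sqrt() * (1 - ab).sqrt() / b * sigma * shift
    w = lambda_t[t].view(-1, 1, 1, 1)
    return (w * (noise - pred) ** 2).mean()
```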
To incorporate z into them, we follow [8] to extend Group Normalization [53] by applying scaling & shifting twice on normalized feature maps: AdaGN(h, t,z) = zs(tsGroupNorm(h) + tb) + zb , (9) where [ts, tb] and [zs, zb] are obtained from a linear projection of t and z, respectively. Note that we still use skip connections from reused encoder to new decoder. In this way, Gψ is totally determined by pre-trained DPM and can be universally applied to different U-Net architectures. 3.4 Weighting Scheme Redesign We originally worked with simplified training objective like that in DDPMs [14], i.e. setting λt = 1 in Eq.(7), but found the training extremely unstable, resulting in slow/non- convergence and poor performance. Inspired by P2-weighting [7], which has shown that the weighting scheme of diffusion loss can greatly affect the performance of DPMs, we attribute this phenomenon to the weighting scheme and investigate it in Figure 3. Specifically, we train an unconditional DPM and a noisy classifier on MNIST [28], and divide the diffusion forward process into three stages: early-stage between 0 and t1, critical-stage between t1 and t2 and late-stage between t2 and T , as shown in the top row. Then we design a mixed sampling procedure that employs unconditional sampling but switches to classifier-guided sampling only during the specified stage. The bottom three rows show the samples generated by three different mixed sampling procedures, where each row only employs classifier-guided sampling during the specified stage on the right. As we can see, only the samples guided by the classifier during critical-stage match the input class labels. We can conclude that the mean shift during critical-stage contains more crucial information to reconstruct the input class label in samples than the other two stages. From the view of diffusion trajectories, the sampling trajectories are separated from each other during critical-stage and they need the mean shift to guide them towards specified direction, otherwise it will be determined by the stochasticity of Langevin dynamics. Therefore, we opt to down-weight the objective function for the t in early- and late-stage to encourage the model to learn rich representations from critical-stage. Inspired by P2-weighting [7], we redesign a weighting scheme of diffusion loss (λt in Eq.(7)) in terms of signal-to-noise ratio [24] (SNR(t) = ᾱt1−ᾱt ): λt = ( 1 1 + SNR(t) )1−γ · ( SNR(t) 1 + SNR(t) )γ , (10) where the first item is for early-stage and the second one is for late-stage. γ is a hyperparameter that balances the strength of down-weighting between two items. Empirically we set γ = 0.1. Figure 4 shows the normalized weighting schemes of diffusion loss for different DPMs relative to the true variational lower bound loss. Compared with other DPMs, our weighting scheme down-weights the diffusion loss for both low and high SNR. 4 Experiments To compare PDAE with Diff-AE [36], we follow their experiments with the same settings. Moreover, we also show that PDAE enables some added features. For fair comparison, we use the baseline DPMs provided by official Diff-AE implementation as our pre-trained models (also as our baselines), which have the same network architectures (hyperparameters) with their Diff-AE models. 
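As a small illustration of Section 3.4 and the AdaGN layer above, the sketch below computes the redesigned weight of Eq. (10) from the signal-to-noise ratio and implements the double scale-and-shift of Eq. (9). The linear βt schedule and the layer sizes are our own illustrative assumptions.

```python
# Sketch of the redesigned weighting scheme (Eq. 10) and the AdaGN modulation (Eq. 9).
# The linear beta schedule and layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)
snr = alpha_bars / (1.0 - alpha_bars)          # SNR(t) = abar_t / (1 - abar_t)

gamma = 0.1                                    # balance used in the paper (empirically)
lambda_t = (1.0 / (1.0 + snr)) ** (1.0 - gamma) * (snr / (1.0 + snr)) ** gamma

class AdaGN(nn.Module):
    """Scale-and-shift Group Normalization twice: once by t, once by z (Eq. 9)."""
    def __init__(self, channels, t_dim, z_dim, groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels)   # channels must be divisible by groups
        self.t_proj = nn.Linear(t_dim, 2 * channels)  # -> (t_s, t_b)
        self.z_proj = nn.Linear(z_dim, 2 * channels)  # -> (z_s, z_b)

    def forward(self, h, t_emb, z):
        t_s, t_b = self.t_proj(t_emb).chunk(2, dim=1)
        z_s, z_b = self.z_proj(z).chunk(2, dim=1)
        t_s, t_b, z_s, z_b = (v[..., None, None] for v in (t_s, t_b, z_s, z_b))
        return z_s * (t_s * self.norm(h) + t_b) + z_b
```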
For brevity, we use notation such as "FFHQ128-130M-z512-64M" to name our model, which means that we use a baseline DPM pre-trained with 130M images and leverage it for PDAE training with 64M images, on the 128×128 FFHQ dataset [21], with the semantic latent code z of 512-d. We put all implementation details in Appendix ?? and additional samples of the following experiments in Appendix ??.

4.1 Training Efficiency

We demonstrate the better training efficiency of PDAE compared with Diff-AE from two aspects: training throughput and the number of training iterations. For training throughput, we train both models with the same network architectures (hyperparameters) on a 128×128 image dataset using 4 Nvidia A100-SXM4 GPUs for distributed training and set the batch size to 128 (32 for each GPU) to calculate their training throughput (imgs/sec./A100). PDAE achieves a throughput of 81.57 and Diff-AE achieves that of 75.41. Owing to the reuse of the U-Net encoder part of the pre-trained DPM, PDAE has fewer trainable parameters and achieves a higher training throughput than Diff-AE. For the number of training iterations, we find that PDAE needs about 1/3 ∼ 1/2 of the number of training batches (images) that Diff-AE needs for loss convergence. We think this is because modeling the posterior mean gap based on pre-trained DPMs is easier than modeling a conditional DPM from scratch. The network reuse and the weighting scheme redesign also help. As a result, based on pre-trained DPMs, PDAE needs less than half of the training time that Diff-AE costs to complete the representation learning.

4.2 Learned Mean Shift Fills Posterior Mean Gap

We train a model of "FFHQ128-130M-z512-64M" and show that our learned mean shift can fill the posterior mean gap with qualitative and quantitative results in Figure 5. Specifically, we select some images x0 from FFHQ, sample xt = √ᾱt x0 + √(1−ᾱt) ϵ for different t and predict x̂0 from xt by denoising them for only one step (i.e., x̂0 = (xt − √(1−ᾱt) ϵ̂)/√ᾱt), using the pre-trained DPM and PDAE respectively. As we can see in the figure (left), even for large t, PDAE can predict accurate noise from xt and reconstruct plausible images, which shows that the predicted mean shift fills the posterior mean gap and the learned representation helps to recover the lost information of the forward process. Furthermore, we randomly select 1000 images from FFHQ, sample xt = √ᾱt x0 + √(1−ᾱt) ϵ and calculate their average posterior mean gap for each step using the pre-trained DPM: ∥µ̃t(xt, x0) − µθ(xt, t)∥² and PDAE: ∥µ̃t(xt, x0) − (µθ(xt, t) + Σθ(xt, t) · Gψ(xt, Eφ(x0), t))∥² respectively, shown in the figure (right). As we can see, PDAE predicts the mean shift that significantly fills the posterior mean gap.

4.3 Autoencoding Reconstruction

We use "FFHQ128-130M-z512-64M" to run some autoencoding reconstruction examples using the PDAE generative process of DDIM and DDPM respectively. As we can see in Figure 6, both methods generate samples with similar contents to the input. Some stochastic variations [36] occur in minor details of hair, eye and skin when introducing stochasticity. Due to the similar performance between DDPM and DDIM with random xT, we will always use the DDIM sampling method in later experiments. We can get a near-exact reconstruction if we use the stochastic latent code inferred from the aforementioned ODE, which further proves that the stochastic latent code controls the local details. To further evaluate the autoencoding reconstruction quality of PDAE, we conduct the same quantitative experiments as Diff-AE.
Specifically, we use "FFHQ128-130M-z512-64M" to encode-andreconstruct all 30k images of CelebA-HQ [20] and evaluate the reconstruction quality with their average SSIM [52], LPIPS [56] and MSE. We use the same baselines described in [36], and the results are shown in Table 1. We can see that PDAE is competitive with the state-of-the-art NVAE even with much less latent dimensionality and also outperforms Diff-AE in all metrics except the LPIPS for random xT . Moreover, PDAE only needs about half of the training times that Diff-AE needs for representation learning, which shows that PDAE can learn richer representations from images more efficiently based on pre-trained DPM. 4.4 Interpolation of Semantic Latent Codes and Trajectories Given two images x10 and x 2 0 from FFHQ, we use "FFHQ128-130M-z512-64M" to encode them into (z1,x1T ) and (z 2,x2T ) and run PDAE generative process of DDIM starting from Slerp(x 1 T ,x 2 T ;λ) under the guidance of Gψ ( xt, Lerp(z 1, z2;λ), t ) with 100 steps, expecting smooth transitions along λ. Moreover, from the view of the diffusion trajectories, PDAE generates desired samples by shifting the unconditional sampling trajectories towards the spatial direction predicted by Gψ(xt, z, t). This enables PDAE to directly interpolate between two different sampling trajectories. Intuitively, the spatial direction predicted by the linear interpolation of two semantic latent codes, Gψ ( xt, Lerp(z 1, z2;λ), t ) , should be equivalent to the linear interpolation of two spatial directions predicted by respective semantic latent code, Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) . We present some examples of these two kinds of interpolation methods in Figure 7. As we can see, both methods generate similar samples that smoothly transition from one endpoint to the other, which means that Gψ ( xt, Lerp(z 1, z2;λ), t ) ≈ Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) , so that Gψ(xt, z, t) can be seen as a function of z analogous to a linear map. The linearity guarantees a meaningful semantic latent space that represents the semantic spatial change of image by a linear change of latent code. 4.5 Attribute Manipulation We can further explore the learned semantic latent space in a supervised way. To illustrate this, we train a model of "CelebA-HQ128-52M-z512-25M" and conduct attribute manipulation experiments by utilizing the attribute annotations of CelebA-HQ dataset. Specifically, we first encode an image to its semantic latent code, then move it along the learned direction and finally decode it to the manipulated image. Similar to Diff-AE, we train a linear classifier to separate the semantic latent codes of the images with different attribute labels and use the normal vector of separating hyperplane (i.e. the weight of linear classifier) as the direction vector. We present some attribute manipulation examples in Figure 8. As we can see, PDAE succeeds in manipulating images by moving their semantic latent codes along the direction of desired attribute with different scales. Like Diff-AE, PDAE can change attribute-relevant features while keeping other irrelevant details almost stationary if using the inferred xT of input image. 4.6 Truncation-like Effect According to [8, 15], we can obtain a truncation-like effect in DPMs by scaling the strength of classifier guidance. We have assumed that Gψ(xt, z, t) trained by filling the posterior mean gap simulates the gradient of some implicit classifier, and it can actually work as desired. 
In theory, it can also be applied in truncation-like effect. To illustrate this, we directly incorporate the class label into Gψ(xt, y, t) and train it to fill the gap. Specifically, we train a model of "ImageNet64-77M-y-38M" and use DDIM sampling method with 100 steps to generate 50k samples, guided by the predicted mean shift with different scales for a truncation-like effect. Figure 9 shows the sample quality effects of sweeping over the scale. (Figure 9: The truncation-like effect for "ImageNet64-77M-y-38M" by scaling Gψ(xt, y, t) with 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 respectively.) As we can see, it achieves the truncation-like effect similar to that of classifier-guided sampling method, which helps us to build connections between filling the posterior mean gap and classifier-guided sampling method. The gradient estimator trained by filling the posterior mean gap is an alternative to the noisy classifier.

Table 2: FID scores for unconditional sampling.

Dataset        Model     T=10    T=20    T=50    T=100
FFHQ           DDIM      31.87   20.53   15.82   11.95
               Diff-AE   21.95   18.10   13.14   10.55
               PDAE      20.16   17.18   12.81   10.31
Horse [55]     DDIM      25.24   14.41    7.98    5.93
               Diff-AE   12.66    9.21    7.12    5.27
               PDAE      11.94    8.51    6.83    5.09
Bedroom [55]   DDIM      14.07    9.29    7.31    5.88
               Diff-AE   10.79    8.42    6.49    5.32
               PDAE      10.05    7.89    6.33    5.47
CelebA         DDIM      18.89   13.82    8.48    5.94
               Diff-AE   12.92   10.18    7.05    5.30
               PDAE      11.84    9.65    7.23    5.19

4.7 Few-shot Conditional Generation

Following D2C [42], we train a model of "CelebA64-72M-z512-38M" on CelebA [20] and aim to achieve conditional sampling given a small number of labeled images. To achieve this, we train a latent DPM pω(zt−1|zt) on semantic latent space and a latent classifier pη(y|z) using given labeled images. For binary scenario, the images are labeled by a binary class (100 samples, 50 for each class). For PU scenario, the images are either labeled positive or unlabeled (100 positively labeled and 10k unlabeled samples). Then we sample z from pω(zt−1|zt) and accept it with the probability of pη(y|z). We use the accepted z to generate 5k samples for every class and compute the FID score between these samples and all images belonging to corresponding class in dataset. We compare PDAE with Diff-AE and D2C. We also use the naive approach that computes the FID score between the training images and the corresponding subset of images in dataset. Table 3 shows that PDAE achieves better FID scores than Diff-AE and D2C.

4.8 Improved Unconditional Sampling

As shown in Section 4.2, under the help of z, PDAE can generate plausible images in only one step. If we can get z in advance, PDAE can achieve better sample quality than pre-trained DPMs in the same number of sampling steps. Similar to Diff-AE, we train a latent DPM on semantic latent space and sample z from it to improve the unconditional sampling of pre-trained DPMs. Unlike Diff-AE that must take z as input for sampling, PDAE uses an independent gradient estimator as a corrector of the pre-trained DPM for sampling. We find that only using pre-trained DPMs in the last few sampling steps can achieve better sample quality, which may be because that the gradient estimator is sensitive to z in the last few sampling steps and the stochasticity of sampled z will lead to out-of-domain samples. Asyrp [27] also finds similar phenomenon. Empirically, we carry out this strategy in the last 30% sampling steps.
We evaluate unconditional sampling result on "FFHQ128-130M-z512-64M", "Horse128-130M-z512-64M", "Bedroom128-120M-z512-70M" and "CelebA64-72M-z512-38M" using DDIM sampling method with different steps. For each dataset, we calculate the FID scores between 50k generated samples and 50k real images randomly selected from dataset. Table 2 shows that PDAE significantly improves the sample quality of pre-trained DPMs and outperforms Diff-AE. Note that PDAE can be applied for any pre-trained DPMs as an auxiliary booster to improve their sample quality. 5 Related Work Our work is based on an emerging latent variable generative model known as Diffusion Probabilistic Models (DPMs) [43, 14], which are now popular for their stable training process and competitive sample quality. Numerous studies [34, 24, 8, 15, 44, 19, 46, 30] and applications [5, 26, 18, 32, 57, 6, 29, 41, 3, 16, 17] have further significantly improved and expanded DPMs. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs [13], VAEs [25, 39], and DPMs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. For GANs, due to its lack of inference functionality, one have to extract the representations for any given real samples by an extra technique called GAN Inversion [54], which invert samples back into the latent space of trained GANs. Existing inversion methods [58, 35, 4, 1, 2, 51] either have limited reconstruction quality or need significantly higher computational cost. VAEs explicitly learn representations for samples, but still face representation-generation trade-off challenges [49, 42]. VQ-VAE [49, 37] and D2C [42] overcome these problems by modeling latent variables post-hoc in different ways. DPMs also yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. Specifically, they jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction by treating the representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Compared with Diff-AE, PDAE leverages existing pre-trained DPMs for representation learning also via autoencoding, but with better training efficiency and performance. A concurrent work with the similar idea is the textual inversion of pre-trained text-to-image DPMs [12]. Specifically, given only 3-5 images of a user-provided concept, like an object or a style, they learn to represent it through new "words" in the embedding space of the frozen text-to-image DPMs. These learned "words" can be further composed into natural language sentences, guiding personalized creation in an intuitive way. From the perspective of posterior mean gap, for the given new concept, textual inversion optimizes its corresponding new "words" embedding vector to find a best textual condition ( c ) , so that which can be fed into pre-trained text-to-image DPMs ( ϵθ(xt, c, t) ) to fill as much gap ( ϵ− ϵθ(xt, ∅, t) ) as possible. 
6 Conclusion In conclusion, we present a general method called PDAE that leverages pre-trained DPMs for representation learning via autoencoding and achieves better training efficiency and performance than Diff-AE. Our key idea is based on the concept of posterior mean gap and its connections with classifier-guided sampling method. A concurrent work, textual inversion of pre-trained text-to-image DPMs, can also be explained from this perspective. We think the idea can be further explored to extract knowledges from pre-trained DPMs, such as interpretable direction discovery [51], and we leave it as future work. Acknowledgments and Disclosure of Funding This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072397 and No.61836002), Zhejiang Natural Science Foundation (LR19F020006) and Yiwise.
1. What is the focus and contribution of the paper on semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of neural representation?
3. What are the weaknesses of the paper, especially for the experiment section?
4. Do you have any concerns on the semantic correspondence representation?
5. What are the limitations regarding the NeMF approach?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
7. What is the main contribution of the paper on dictionary learning?
8. What are the strengths of the paper, especially in the theoretical analysis?
9. Do you have any questions regarding the paper?
10. How does the closeness between the pretrained DPM's domain and the target domain affect the efficacy of this method?
11. If the pretrained DPM were from a completely unrelated domain, would this just mean that the posterior gap is larger and that the training time of this method would take much longer?
12. What are the limitations of the proposed approach that the reviewer did not discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Whereas prior work (DiffAE) uses a conditional DPM as the decoder in an auto-encoder setup, this work attempts to leverage pretrained unconditional DPMs instead. The proposed formulation extends the classifier-guidance solution to latent autoencoder semantic codes z. However, instead of explicitly training classifiers p(z | x_t) and taking the corresponding score, they propose to directly fit the posterior mean gap of the pretrained, unsupervised, and frozen DPM. Results demonstrate higher quality reconstruction than DiffAE, with an overall simpler training setup.

Strengths And Weaknesses
Strengths
- The paper was clearly written; the formulation was easy to follow.
- Prior works using classifier guidance had some issues regarding the scale of the guidance vector being used, but I believe the proposed formulation gets around that issue by fitting to the posterior mean gap instead.
Weaknesses
- In Section 3.4, it's not clear how the start and end of the critical stage are determined.
- L124 "Except the latent code z" (awkward phrasing).
- One thing that would've been nice to see is whether it's truly necessary to use another DPM for modeling the generative distribution of z. DiffAE justifies their choice of a DPM by stating that a VAE's objective would have been difficult to tune, but perhaps a simple auto-encoder with a post-fit GMM over the latent vectors might suffice?

Questions
- See weaknesses.
- How does the closeness between the pretrained DPM's domain and the target domain affect the efficacy of this method?
- If the pretrained DPM were from a completely unrelated domain, would this just mean that the posterior gap is larger and that the training time of this method would take much longer?

Limitations
Limitations were not discussed. Example topics would include deep fakes etc.
NIPS
Title Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models Abstract Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose Pre-trained DPM AutoEncoding (PDAE), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason that pre-trained DPMs fail to reconstruct an image from its latent variables is due to the information loss of forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing a part of network of pre-trained DPMs and redesigning the weighting scheme of diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE. Our implementation is available at https://github.com/ckczzj/PDAE. 1 Introduction Deep generative models such as variational autoencoders (VAEs) [25, 39], generative adversarial networks (GANs) [13], autoregressive models [50, 48], normalizing flows (NFs) [38, 23] and energybased models (EBMs) [9, 45] have shown remarkable capacity to synthesize striking image samples. Recently, another kind of generative models, Diffusion Probabilistic Models (DPMs) [43, 14] are further developed and becoming popular for their stable training process and state-of-the-art sample quality [8]. Although a large number of degrees of freedom in implementation, the DPMs discussed in this paper will refer exclusively to those trained by the denoising method proposed in DDPMs [14]. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs and VAEs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. Likewise, DPMs inherently ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. 
Specifically, they employ an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction, taking the encoded representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Following the paradigm of autoencoders, PDAE aims to adapt existing pre-trained DPMs into decoders for image reconstruction and to benefit from them. Generally, pre-trained DPMs cannot accurately predict the posterior mean of $x_{t-1}$ from $x_t$ in the reverse process due to the information loss of the forward process, which results in a gap between their predicted posterior mean and the true one. This is the reason that they fail to reconstruct an image ($x_0$) from its latent variables ($x_t$). From this perspective, the classifier-guided sampling method [8] can be explained as reconstructing the lost class information in samples by shifting the predicted posterior mean with an extra term, computed from the gradient of a classifier, that fills the gap. Drawing inspiration from this method, which uses prior knowledge (the class label) to fill the gap, we aim to inversely extract knowledge from the gap, i.e., learn representations that can help to fill it. In light of this, we employ a novel gradient estimator to predict the mean shift according to encoded representations and train it to fill as much of the gap as possible; in this way, the encoder is forced to learn as much information as possible from images to help the filling. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. Furthermore, we find that the posterior mean gap at different time stages contains different levels of information, so we redesign the weighting scheme of the diffusion loss to encourage the model to learn rich representations efficiently. We also reuse part of the network of pre-trained DPMs to accelerate the convergence of our model. Based on pre-trained DPMs, PDAE only needs less than half of the training time that Diff-AE costs to complete the representation learning, but still outperforms Diff-AE. Moreover, PDAE also enables some other interesting features. 2 Background 2.1 Denoising Diffusion Probabilistic Models DDPMs [14] employ a forward process that starts from the data distribution $q(x_0)$ and sequentially corrupts it to $\mathcal{N}(\mathbf{0}, \mathbf{I})$ with Markov diffusion kernels $q(x_t|x_{t-1})$ defined by a fixed variance schedule $\{\beta_t\}_{t=1}^{T}$. The process can be expressed by: $q(x_t|x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\,x_{t-1}, \beta_t\mathbf{I})$ and $q(x_{1:T}|x_0) = \prod_{t=1}^{T} q(x_t|x_{t-1})$, (1) where $\{x_t\}_{t=1}^{T}$ are the latent variables of DDPMs. According to the rule for the sum of normally distributed random variables, we can directly sample $x_t$ from $x_0$ for arbitrary $t$ with $q(x_t|x_0) = \mathcal{N}(x_t; \sqrt{\bar\alpha_t}\,x_0, (1-\bar\alpha_t)\mathbf{I})$, where $\alpha_t = 1-\beta_t$ and $\bar\alpha_t = \prod_{i=1}^{t}\alpha_i$. The reverse (generative) process is defined as another Markov chain, parameterized by $\theta$, that describes the same process in reverse, denoising an arbitrary Gaussian noise into a clean data sample: $p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t,t), \Sigma_\theta(x_t,t))$ and $p_\theta(x_{0:T}) = p(x_T)\prod_{t=1}^{T} p_\theta(x_{t-1}|x_t)$, (2) where $p(x_T) = \mathcal{N}(x_T; \mathbf{0}, \mathbf{I})$. It employs a $p_\theta(x_{t-1}|x_t)$ of Gaussian form because the reversal of the diffusion process has the identical functional form as the forward process when $\beta_t$ is small [11, 43]. The generative distribution can be represented as $p_\theta(x_0) = \int p_\theta(x_{0:T})\,dx_{1:T}$. Training is performed to maximize the model log-likelihood $\int q(x_0)\log p_\theta(x_0)\,dx_0$ by minimizing the variational upper bound of its negative.
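The closed-form sampling $q(x_t|x_0)$ above is what makes the diffusion loss cheap to evaluate. The following minimal NumPy sketch illustrates it; the linear beta schedule and the names (`q_sample`, `alphas_bar`) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Illustrative linear beta schedule; the schedule itself is a free design choice.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)  # \bar{alpha}_t = prod_{i<=t} alpha_i

def q_sample(x0, t, eps=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    if eps is None:
        eps = np.random.randn(*x0.shape)
    abar_t = alphas_bar[t]
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
```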
The final objective is derived through some parameterization and simplification [14]: $L_{\mathrm{simple}}(\theta) = \mathbb{E}_{x_0,t,\epsilon}\big[\|\epsilon - \epsilon_\theta(\sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon, t)\|^2\big]$, (3) where $\epsilon_\theta$ is a function approximator that predicts $\epsilon$ from $x_t$. 2.2 Denoising Diffusion Implicit Models DDIMs [44] define a non-Markovian forward process that leads to the same training objective as DDPMs, but the corresponding reverse process can be much more flexible and faster to sample from. Specifically, one can sample $x_{t-1}$ from $x_t$ using the $\epsilon_\theta$ of a pre-trained DDPM via: $x_{t-1} = \sqrt{\bar\alpha_{t-1}}\left(\frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t,t)}{\sqrt{\bar\alpha_t}}\right) + \sqrt{1-\bar\alpha_{t-1}-\sigma_t^2}\cdot\epsilon_\theta(x_t,t) + \sigma_t\epsilon_t$, (4) where $\epsilon_t \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and $\sigma_t$ controls the stochasticity of the forward process. Strides greater than 1 are allowed for accelerated sampling. When $\sigma_t = 0$, the generative process becomes deterministic, which is named DDIM. 2.3 Classifier-guided Sampling Method The classifier-guided sampling method [43, 46, 8] shows that one can train a classifier $p_\phi(y|x_t)$ on noisy data and use its gradient $\nabla_{x_t}\log p_\phi(y|x_t)$ to guide a pre-trained unconditional DDPM to sample towards a specified class $y$. The conditional reverse process can be approximated by a Gaussian similar to that of the unconditional one in Eq.(2), but with a shifted mean: $p_{\theta,\phi}(x_{t-1}|x_t,y) \approx \mathcal{N}(x_{t-1}; \mu_\theta(x_t,t) + \Sigma_\theta(x_t,t)\cdot\nabla_{x_t}\log p_\phi(y|x_t),\ \Sigma_\theta(x_t,t))$. (5) For deterministic sampling methods like DDIMs, one can use the score-based conditioning trick [46, 45] to define a new function approximator for conditional sampling: $\hat\epsilon_\theta(x_t,t) = \epsilon_\theta(x_t,t) - \sqrt{1-\bar\alpha_t}\cdot\nabla_{x_t}\log p_\phi(y|x_t)$. (6) More generally, any similarity estimator between noisy data and conditions can be applied for guided sampling, such as noisy-CLIP guidance [33, 31]. 3 Method 3.1 Forward Process Posterior Mean Gap Generally, one trains unconditional and conditional DPMs by learning, respectively, $p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t,t), \Sigma_\theta(x_t,t))$ and $p_\theta(x_{t-1}|x_t,y) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t,y,t), \Sigma_\theta(x_t,y,t))$ to approximate the same forward process posterior $q(x_{t-1}|x_t,x_0) = \mathcal{N}\big(x_{t-1}; \tilde\mu_t(x_t,x_0), \frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\beta_t\mathbf{I}\big)$. Here $y$ is some condition that contains prior knowledge about the corresponding $x_0$, such as a class label. Assuming that both $\Sigma_\theta$ are set to untrained time-dependent constants, under the same experimental settings the conditional DPM will reach a lower optimized diffusion loss. The experiment in Figure 1 confirms this fact, which means that $\mu_\theta(x_t,y,t)$ is closer to $\tilde\mu_t(x_t,x_0)$ than $\mu_\theta(x_t,t)$ is. This implies that there exists a gap between the posterior mean predicted by the unconditional DPM ($\mu_\theta(x_t,t)$) and the true one ($\tilde\mu_t(x_t,x_0)$). Essentially, the posterior mean gap is caused by the information loss of the forward process, so that the reverse process cannot recover this information in $x_{t-1}$ from $x_t$ alone. If we introduce some knowledge of $x_0$ to the DPM, like $y$ here, the gap becomes smaller. The more information about $x_0$ that $y$ contains, the smaller the gap is. Moreover, according to Eq.(5), the Gaussian mean of the classifier-guided conditional reverse process contains an extra shift term compared with that of the unconditional one. From the perspective of the posterior mean gap, this mean shift term can partially fill the gap and help the reverse process to reconstruct the lost class information in samples. In theory, if $y$ in Eq.(5) contained all the information of $x_0$, the mean shift would fully fill the gap and guide the reverse process to reconstruct $x_0$.
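To make the guidance mechanism concrete, here is a hedged sketch of one deterministic DDIM step (Eq.(4) with $\sigma_t = 0$) using the guided noise predictor of Eq.(6). `eps_model` and `guidance_grad` are placeholder callables standing in for the pre-trained $\epsilon_\theta$ and for $\nabla_{x_t}\log p_\phi(y|x_t)$ (or $G_\psi$ in PDAE); this is not the released code:

```python
import torch

def guided_ddim_step(x_t, t, t_prev, eps_model, guidance_grad, alphas_bar):
    """One deterministic DDIM step (sigma_t = 0) with the guided epsilon of Eq.(6).

    eps_model(x_t, t)     -> predicted noise of the pre-trained DPM (placeholder).
    guidance_grad(x_t, t) -> classifier gradient, or G_psi(x_t, z, t) in PDAE (placeholder).
    alphas_bar            -> 1-D tensor of cumulative products of alpha, indexed by timestep.
    """
    abar_t, abar_prev = alphas_bar[t], alphas_bar[t_prev]
    # Eq.(6): shift the predicted noise by the guidance gradient.
    eps_hat = eps_model(x_t, t) - torch.sqrt(1.0 - abar_t) * guidance_grad(x_t, t)
    # Eq.(4) with sigma_t = 0: predict x0, then move to the previous timestep.
    x0_pred = (x_t - torch.sqrt(1.0 - abar_t) * eps_hat) / torch.sqrt(abar_t)
    return torch.sqrt(abar_prev) * x0_pred + torch.sqrt(1.0 - abar_prev) * eps_hat
```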
On the other hand, if we employ a model to predict the mean shift according to our encoded representations $z$ and train it to fill as much of the gap as possible, the encoder will be forced to learn as much information as possible from $x_0$ to help the filling. The more the gap is filled, the more accurate the mean shift is, the more faithful the reconstruction is, and the more information about $x_0$ the code $z$ contains. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. 3.2 Unsupervised Representation Learning by Filling the Gap Following the paradigm of autoencoders, we employ an encoder $z = E_\varphi(x_0)$ for learning compact and meaningful representations from input images and adapt a pre-trained unconditional DPM $p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t,t), \Sigma_\theta(x_t,t))$ into the decoder for image reconstruction. Specifically, we employ a gradient estimator $G_\psi(x_t,z,t)$ to simulate $\nabla_{x_t}\log p(z|x_t)$, where $p(z|x_t)$ is some implicit classifier that we never use explicitly, and use it to assemble a conditional DPM $p_{\theta,\psi}(x_{t-1}|x_t,z) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t,t) + \Sigma_\theta(x_t,t)\cdot G_\psi(x_t,z,t),\ \Sigma_\theta(x_t,t))$ as the decoder. Then we train it like a regular conditional DPM by optimizing the following derived objective (assuming the $\epsilon$-prediction parameterization is adopted): $\mathcal{L}(\psi,\varphi) = \mathbb{E}_{x_0,t,\epsilon}\Big[\lambda_t\big\|\epsilon - \epsilon_\theta(x_t,t) + \frac{\sqrt{\alpha_t}\sqrt{1-\bar\alpha_t}}{\beta_t}\cdot\Sigma_\theta(x_t,t)\cdot G_\psi(x_t,E_\varphi(x_0),t)\big\|^2\Big]$, (7) where $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ and $\lambda_t$ is a new weighting scheme that we discuss in Section 3.4. Note that we use pre-trained DPMs, so $\theta$ is frozen during the optimization. Usually we set $\Sigma_\theta = \frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\beta_t\mathbf{I}$ to untrained time-dependent constants. The optimization is equivalent to minimizing $\big\|\Sigma_\theta(x_t,t)\cdot G_\psi(x_t,E_\varphi(x_0),t) - \big(\tilde\mu_t(x_t,x_0) - \mu_\theta(x_t,t)\big)\big\|^2$, which forces the predicted mean shift $\Sigma_\theta(x_t,t)\cdot G_\psi(x_t,E_\varphi(x_0),t)$ to fill the posterior mean gap $\tilde\mu_t(x_t,x_0) - \mu_\theta(x_t,t)$. With a trained $G_\psi(x_t,z,t)$, we can treat it as the score of an optimal classifier $p(z|x_t)$ and use the classifier-guided sampling method in Eq.(5) for DDPM sampling, or use the modified function approximator $\hat\epsilon_\theta$ in Eq.(6) for DDIM sampling, based on the pre-trained $\epsilon_\theta(x_t,t)$. We put detailed algorithm procedures in the Appendix. Besides the semantic latent code $z$, we can also infer a stochastic latent code $x_T$ [36] by running the deterministic generative process of DDIMs in reverse: $x_{t+1} = \sqrt{\bar\alpha_{t+1}}\left(\frac{x_t - \sqrt{1-\bar\alpha_t}\,\hat\epsilon_\theta(x_t,t)}{\sqrt{\bar\alpha_t}}\right) + \sqrt{1-\bar\alpha_{t+1}}\cdot\hat\epsilon_\theta(x_t,t)$. (8) This procedure is optional, but it is helpful for near-exact reconstruction and real-image manipulation, since it recovers minor details of input images when using DDIM sampling. We also train a latent DPM $p_\omega(z_{t-1}|z_t)$ to model the learned semantic latent space, the same as that in Diff-AE [36]. With a trained latent DPM, we can sample $z$ from it to help pre-trained DPMs achieve faster and better unconditional sampling under the guidance of $G_\psi(x_t,z,t)$. 3.3 Network Design Figure 2 shows the network and data flow of PDAE. For the encoder $E_\varphi$, unlike Diff-AE, which uses the encoder part of a U-Net [40], we find that simply stacked convolution layers followed by a linear layer are enough to learn a meaningful $z$ from $x_0$. For the gradient estimator $G_\psi$, we use a U-Net similar to the function approximator $\epsilon_\theta$ of the pre-trained DPM. Considering that $\epsilon_\theta$ also takes $x_t$ and $t$ as input, we can further leverage the knowledge of the pre-trained DPM by reusing its trained encoder part and time embedding layer, so that we only need to employ new middle blocks, a new decoder part and new output blocks of the U-Net for $G_\psi$.
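Putting Sections 3.2 and 3.3 together, the following is a minimal PyTorch-style sketch of one evaluation of the PDAE loss in Eq.(7). `encoder`, `grad_estimator` and `eps_model` are placeholder modules for $E_\varphi$, $G_\psi$ and the frozen pre-trained $\epsilon_\theta$; the timestep t is assumed to be a scalar with t >= 1; this is a simplified reading of the objective, not the authors' implementation:

```python
import torch

def pdae_loss(x0, t, eps_model, encoder, grad_estimator,
              alphas, alphas_bar, betas, lambda_t):
    """One PDAE training loss evaluation following Eq.(7); eps_model stays frozen.

    Schedule arrays are 1-D tensors indexed by t; encoder / grad_estimator are
    the trainable E_phi and G_psi (placeholder modules); t is a scalar, t >= 1.
    """
    eps = torch.randn_like(x0)
    abar_t = alphas_bar[t]
    x_t = torch.sqrt(abar_t) * x0 + torch.sqrt(1.0 - abar_t) * eps
    z = encoder(x0)
    with torch.no_grad():                              # pre-trained DPM stays frozen
        eps_pred = eps_model(x_t, t)
    # Sigma_theta fixed to the posterior variance \tilde{beta}_t (scalar per timestep).
    abar_prev = alphas_bar[t - 1]
    sigma = (1.0 - abar_prev) / (1.0 - abar_t) * betas[t]
    shift = sigma * grad_estimator(x_t, z, t)
    coeff = torch.sqrt(alphas[t]) * torch.sqrt(1.0 - abar_t) / betas[t]
    # Mean over batch and pixels; a constant factor away from the summed squared norm.
    return lambda_t[t] * ((eps - eps_pred + coeff * shift) ** 2).mean()
```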
To incorporate $z$ into them, we follow [8] and extend Group Normalization [53] by applying scaling & shifting twice on the normalized feature maps: $\mathrm{AdaGN}(h,t,z) = z_s\,(t_s\,\mathrm{GroupNorm}(h) + t_b) + z_b$, (9) where $[t_s, t_b]$ and $[z_s, z_b]$ are obtained from linear projections of $t$ and $z$, respectively. Note that we still use skip connections from the reused encoder to the new decoder. In this way, $G_\psi$ is fully determined by the pre-trained DPM and can be universally applied to different U-Net architectures. 3.4 Weighting Scheme Redesign We originally worked with a simplified training objective like that in DDPMs [14], i.e., setting $\lambda_t = 1$ in Eq.(7), but found the training extremely unstable, resulting in slow or non-convergence and poor performance. Inspired by P2-weighting [7], which has shown that the weighting scheme of the diffusion loss can greatly affect the performance of DPMs, we attribute this phenomenon to the weighting scheme and investigate it in Figure 3. Specifically, we train an unconditional DPM and a noisy classifier on MNIST [28], and divide the diffusion forward process into three stages: the early stage between $0$ and $t_1$, the critical stage between $t_1$ and $t_2$, and the late stage between $t_2$ and $T$, as shown in the top row. Then we design a mixed sampling procedure that employs unconditional sampling but switches to classifier-guided sampling only during a specified stage. The bottom three rows show the samples generated by three different mixed sampling procedures, where each row employs classifier-guided sampling only during the stage specified on the right. As we can see, only the samples guided by the classifier during the critical stage match the input class labels. We conclude that the mean shift during the critical stage contains more crucial information for reconstructing the input class label in samples than that of the other two stages. From the view of diffusion trajectories, the sampling trajectories separate from each other during the critical stage and need the mean shift to guide them towards a specified direction; otherwise the direction is determined by the stochasticity of the Langevin dynamics. Therefore, we opt to down-weight the objective function for $t$ in the early and late stages to encourage the model to learn rich representations from the critical stage. Inspired by P2-weighting [7], we redesign the weighting scheme of the diffusion loss ($\lambda_t$ in Eq.(7)) in terms of the signal-to-noise ratio [24] ($\mathrm{SNR}(t) = \frac{\bar\alpha_t}{1-\bar\alpha_t}$): $\lambda_t = \big(\frac{1}{1+\mathrm{SNR}(t)}\big)^{1-\gamma}\cdot\big(\frac{\mathrm{SNR}(t)}{1+\mathrm{SNR}(t)}\big)^{\gamma}$, (10) where the first term handles the early stage and the second one the late stage. $\gamma$ is a hyperparameter that balances the strength of down-weighting between the two terms; empirically we set $\gamma = 0.1$. Figure 4 shows the normalized weighting schemes of the diffusion loss for different DPMs relative to the true variational lower bound loss. Compared with other DPMs, our weighting scheme down-weights the diffusion loss for both low and high SNR. 4 Experiments To compare PDAE with Diff-AE [36], we follow their experiments with the same settings. Moreover, we also show that PDAE enables some additional features. For a fair comparison, we use the baseline DPMs provided by the official Diff-AE implementation as our pre-trained models (and also as our baselines); they have the same network architectures (hyperparameters) as their Diff-AE models.
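For reference, the weighting of Eq.(10) used throughout the experiments below can be computed directly from the noise schedule. A minimal sketch follows (the normalization shown in Figure 4 is omitted, and the function name is ours):

```python
import numpy as np

def pdae_weighting(alphas_bar, gamma=0.1):
    """Weighting scheme of Eq.(10): down-weight both very high SNR (early stage)
    and very low SNR (late stage), keeping the critical stage dominant."""
    snr = alphas_bar / (1.0 - alphas_bar)  # SNR(t) = abar_t / (1 - abar_t)
    return (1.0 / (1.0 + snr)) ** (1.0 - gamma) * (snr / (1.0 + snr)) ** gamma
```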
For brevity, we use a notation such as "FFHQ128-130M-z512-64M" to name our models, which means that we use a baseline DPM pre-trained with 130M images and leverage it for PDAE training with 64M images, on the 128×128 FFHQ dataset [21], with a semantic latent code $z$ of 512 dimensions. We put all implementation details and additional samples for the following experiments in the Appendix. 4.1 Training Efficiency We demonstrate the better training efficiency of PDAE compared with Diff-AE from two aspects: training throughput and the number of training iterations. For training throughput, we train both models with the same network architectures (hyperparameters) on a 128×128 image dataset using 4 Nvidia A100-SXM4 GPUs for distributed training and set the batch size to 128 (32 for each GPU) to measure their training throughput (imgs/sec./A100). PDAE achieves a throughput of 81.57 and Diff-AE achieves 75.41. Owing to the reuse of the U-Net encoder part of the pre-trained DPM, PDAE has fewer trainable parameters and achieves a higher training throughput than Diff-AE. For the number of training iterations, we find that PDAE needs about $\frac{1}{3}\sim\frac{1}{2}$ of the number of training batches (images) that Diff-AE needs for loss convergence. We attribute this to the fact that modeling the posterior mean gap on top of a pre-trained DPM is easier than modeling a conditional DPM from scratch; the network reuse and the weighting scheme redesign also help. As a result, based on pre-trained DPMs, PDAE needs less than half of the training time that Diff-AE costs to complete the representation learning. 4.2 Learned Mean Shift Fills Posterior Mean Gap We train a model of "FFHQ128-130M-z512-64M" and show that our learned mean shift can fill the posterior mean gap, with qualitative and quantitative results in Figure 5. Specifically, we select some images $x_0$ from FFHQ, sample $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ for different $t$, and predict $\hat{x}_0$ from $x_t$ by denoising for only one step (i.e., $\hat{x}_0 = \frac{x_t - \sqrt{1-\bar\alpha_t}\,\hat\epsilon}{\sqrt{\bar\alpha_t}}$), using the pre-trained DPM and PDAE respectively. As we can see in the figure (left), even for large $t$, PDAE can predict accurate noise from $x_t$ and reconstruct plausible images, which shows that the predicted mean shift fills the posterior mean gap and that the learned representation helps to recover the information lost in the forward process. Furthermore, we randomly select 1000 images from FFHQ, sample $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$, and calculate their average posterior mean gap for each step using the pre-trained DPM, $\|\tilde\mu_t(x_t,x_0) - \mu_\theta(x_t,t)\|^2$, and PDAE, $\|\tilde\mu_t(x_t,x_0) - (\mu_\theta(x_t,t) + \Sigma_\theta(x_t,t)\cdot G_\psi(x_t,E_\varphi(x_0),t))\|^2$, respectively, shown in the figure (right). As we can see, PDAE predicts a mean shift that significantly fills the posterior mean gap. 4.3 Autoencoding Reconstruction We use "FFHQ128-130M-z512-64M" to run some autoencoding reconstruction examples using the PDAE generative process of DDIM and DDPM respectively. As we can see in Figure 6, both methods generate samples with contents similar to the input. Some stochastic variations [36] occur in minor details of hair, eyes and skin when introducing stochasticity. Due to the similar performance of DDPM and DDIM with a random $x_T$, we always use the DDIM sampling method in later experiments. We can get a near-exact reconstruction if we use the stochastic latent code inferred from the aforementioned ODE, which further confirms that the stochastic latent code controls the local details. To further evaluate the autoencoding reconstruction quality of PDAE, we conduct the same quantitative experiments as Diff-AE; a short sketch of the Section 4.2 computation is given below before we turn to these results.
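The one-step prediction and gap measurement of Section 4.2 amount to the following; this is a hedged sketch only (tensor names and the exact reduction are assumptions, not the paper's evaluation code):

```python
import torch

def one_step_x0(x_t, t, eps_hat, alphas_bar):
    """Single-step reconstruction: x0_hat = (x_t - sqrt(1 - abar_t) * eps_hat) / sqrt(abar_t)."""
    abar_t = alphas_bar[t]
    return (x_t - torch.sqrt(1.0 - abar_t) * eps_hat) / torch.sqrt(abar_t)

def gap_norm(mu_true, mu_theta, shift=None):
    """Average squared posterior mean gap, with or without the predicted mean shift.

    mu_true  : true posterior mean \tilde{mu}_t(x_t, x_0)
    mu_theta : mean predicted by the pre-trained DPM
    shift    : optional predicted mean shift Sigma_theta * G_psi(x_t, z, t)
    """
    pred = mu_theta if shift is None else mu_theta + shift
    # Sum the squared error over non-batch dimensions, then average over the batch.
    return ((mu_true - pred) ** 2).sum(dim=tuple(range(1, mu_true.dim()))).mean()
```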
Specifically, we use "FFHQ128-130M-z512-64M" to encode and reconstruct all 30k images of CelebA-HQ [20] and evaluate the reconstruction quality with the average SSIM [52], LPIPS [56] and MSE. We use the same baselines described in [36], and the results are shown in Table 1. We can see that PDAE is competitive with the state-of-the-art NVAE even with much lower latent dimensionality, and it also outperforms Diff-AE in all metrics except LPIPS for random $x_T$. Moreover, PDAE only needs about half of the training iterations that Diff-AE needs for representation learning, which shows that PDAE can learn richer representations from images more efficiently based on a pre-trained DPM. 4.4 Interpolation of Semantic Latent Codes and Trajectories Given two images $x_0^1$ and $x_0^2$ from FFHQ, we use "FFHQ128-130M-z512-64M" to encode them into $(z^1, x_T^1)$ and $(z^2, x_T^2)$ and run the PDAE generative process of DDIM starting from $\mathrm{Slerp}(x_T^1, x_T^2; \lambda)$ under the guidance of $G_\psi(x_t, \mathrm{Lerp}(z^1, z^2; \lambda), t)$ with 100 steps, expecting smooth transitions along $\lambda$. Moreover, from the view of diffusion trajectories, PDAE generates the desired samples by shifting the unconditional sampling trajectories towards the spatial direction predicted by $G_\psi(x_t, z, t)$. This enables PDAE to directly interpolate between two different sampling trajectories. Intuitively, the spatial direction predicted from the linear interpolation of two semantic latent codes, $G_\psi(x_t, \mathrm{Lerp}(z^1, z^2; \lambda), t)$, should be equivalent to the linear interpolation of the two spatial directions predicted from the respective semantic latent codes, $\mathrm{Lerp}(G_\psi(x_t, z^1, t), G_\psi(x_t, z^2, t); \lambda)$. We present some examples of these two kinds of interpolation in Figure 7. As we can see, both methods generate similar samples that smoothly transition from one endpoint to the other, which means that $G_\psi(x_t, \mathrm{Lerp}(z^1, z^2; \lambda), t) \approx \mathrm{Lerp}(G_\psi(x_t, z^1, t), G_\psi(x_t, z^2, t); \lambda)$, so that $G_\psi(x_t, z, t)$ can be seen as a function of $z$ analogous to a linear map. This linearity guarantees a meaningful semantic latent space that represents a semantic spatial change of the image by a linear change of the latent code. 4.5 Attribute Manipulation We can further explore the learned semantic latent space in a supervised way. To illustrate this, we train a model of "CelebA-HQ128-52M-z512-25M" and conduct attribute manipulation experiments by utilizing the attribute annotations of the CelebA-HQ dataset. Specifically, we first encode an image into its semantic latent code, then move the code along a learned direction, and finally decode it into the manipulated image. Similar to Diff-AE, we train a linear classifier to separate the semantic latent codes of images with different attribute labels and use the normal vector of the separating hyperplane (i.e., the weight vector of the linear classifier) as the direction vector. We present some attribute manipulation examples in Figure 8. As we can see, PDAE succeeds in manipulating images by moving their semantic latent codes along the direction of the desired attribute with different scales. Like Diff-AE, PDAE can change attribute-relevant features while keeping other irrelevant details almost stationary when using the inferred $x_T$ of the input image. 4.6 Truncation-like Effect According to [8, 15], we can obtain a truncation-like effect in DPMs by scaling the strength of the classifier guidance. We have assumed that $G_\psi(x_t, z, t)$ trained by filling the posterior mean gap simulates the gradient of some implicit classifier, and it can indeed work as desired.
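Stepping back to Sections 4.4 and 4.5 for a moment, the latent-space operations used there reduce to a few small helpers. The sketch below treats the codes as flat vectors (an assumption for illustration) and is not the released code:

```python
import torch

def lerp(a, b, lam):
    """Linear interpolation used for the semantic code z."""
    return (1.0 - lam) * a + lam * b

def slerp(a, b, lam):
    """Spherical interpolation used for the stochastic code x_T (treated as a flat vector)."""
    a_n, b_n = a / a.norm(), b / b.norm()
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the codes
    return (torch.sin((1.0 - lam) * omega) * a + torch.sin(lam * omega) * b) / torch.sin(omega)

def manipulate(z, direction, scale):
    """Move a semantic code along the (normalized) attribute direction, e.g. the
    weight vector of a linear attribute classifier."""
    return z + scale * direction / direction.norm()
```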
In theory, it can also be applied to achieve a truncation-like effect. To illustrate this, we directly incorporate the class label into $G_\psi(x_t, y, t)$ and train it to fill the gap. Specifically, we train a model of "ImageNet64-77M-y-38M" and use the DDIM sampling method with 100 steps to generate 50k samples, guided by the predicted mean shift with different scales for a truncation-like effect. Figure 9 shows the sample quality effects of sweeping over the scale. As we can see, it achieves a truncation-like effect similar to that of the classifier-guided sampling method, which helps us to build a connection between filling the posterior mean gap and the classifier-guided sampling method. The gradient estimator trained by filling the posterior mean gap is an alternative to the noisy classifier.
Figure 9: The truncation-like effect for "ImageNet64-77M-y-38M" obtained by scaling $G_\psi(x_t, y, t)$ with 0.0, 0.5, 1.0, 1.5, 2.0, 2.5 and 3.0 respectively.
Table 2: FID scores for unconditional sampling (FID at T = 10 / 20 / 50 / 100 DDIM steps).
FFHQ: DDIM 31.87 / 20.53 / 15.82 / 11.95; Diff-AE 21.95 / 18.10 / 13.14 / 10.55; PDAE 20.16 / 17.18 / 12.81 / 10.31.
Horse [55]: DDIM 25.24 / 14.41 / 7.98 / 5.93; Diff-AE 12.66 / 9.21 / 7.12 / 5.27; PDAE 11.94 / 8.51 / 6.83 / 5.09.
Bedroom [55]: DDIM 14.07 / 9.29 / 7.31 / 5.88; Diff-AE 10.79 / 8.42 / 6.49 / 5.32; PDAE 10.05 / 7.89 / 6.33 / 5.47.
CelebA: DDIM 18.89 / 13.82 / 8.48 / 5.94; Diff-AE 12.92 / 10.18 / 7.05 / 5.30; PDAE 11.84 / 9.65 / 7.23 / 5.19.
4.7 Few-shot Conditional Generation Following D2C [42], we train a model of "CelebA64-72M-z512-38M" on CelebA [20] and aim to achieve conditional sampling given a small number of labeled images. To achieve this, we train a latent DPM $p_\omega(z_{t-1}|z_t)$ on the semantic latent space and a latent classifier $p_\eta(y|z)$ using the given labeled images. For the binary scenario, the images are labeled with a binary class (100 samples, 50 for each class). For the PU scenario, the images are either labeled positive or unlabeled (100 positively labeled and 10k unlabeled samples). Then we sample $z$ from $p_\omega(z_{t-1}|z_t)$ and accept it with probability $p_\eta(y|z)$. We use the accepted $z$ to generate 5k samples for every class and compute the FID score between these samples and all images belonging to the corresponding class in the dataset. We compare PDAE with Diff-AE and D2C. We also include the naive approach that computes the FID score between the training images and the corresponding subset of images in the dataset. Table 3 shows that PDAE achieves better FID scores than Diff-AE and D2C. 4.8 Improved Unconditional Sampling As shown in Section 4.2, with the help of $z$, PDAE can generate plausible images in only one step. If we can get $z$ in advance, PDAE can achieve better sample quality than pre-trained DPMs with the same number of sampling steps. Similar to Diff-AE, we train a latent DPM on the semantic latent space and sample $z$ from it to improve the unconditional sampling of pre-trained DPMs. Unlike Diff-AE, which must take $z$ as input for sampling, PDAE uses an independent gradient estimator as a corrector of the pre-trained DPM during sampling. We find that using only the pre-trained DPM in the last few sampling steps achieves better sample quality, which may be because the gradient estimator is sensitive to $z$ in the last few sampling steps, so the stochasticity of the sampled $z$ can lead to out-of-domain samples. Asyrp [27] also reports a similar phenomenon. Empirically, we carry out this strategy in the last 30% of the sampling steps.
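A minimal sketch of this hybrid strategy (guidance from $G_\psi$ for the first ~70% of the steps, plain pre-trained DPM for the last ~30%) follows. `ddim_step` and `grad_estimator` are placeholder callables (for instance, the guided DDIM step sketched earlier with the schedule bound in), so this illustrates the control flow rather than the authors' exact sampler:

```python
import torch

def hybrid_unconditional_sample(x_T, timestep_pairs, ddim_step, grad_estimator, z,
                                guided_fraction=0.7):
    """Unconditional sampling with a pre-sampled semantic code z (Section 4.8).

    ddim_step(x, t, t_prev, guidance) -> x_prev is a placeholder for one (possibly
    guided) DDIM update; timestep_pairs is a descending list of (t, t_prev).
    """
    x = x_T
    n_guided = int(guided_fraction * len(timestep_pairs))
    for i, (t, t_prev) in enumerate(timestep_pairs):
        if i < n_guided:
            guidance = lambda x_t, t_: grad_estimator(x_t, z, t_)  # guided phase
        else:
            guidance = lambda x_t, t_: torch.zeros_like(x_t)       # plain pre-trained DPM
        x = ddim_step(x, t, t_prev, guidance)
    return x
```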
We evaluate unconditional sampling results on "FFHQ128-130M-z512-64M", "Horse128-130M-z512-64M", "Bedroom128-120M-z512-70M" and "CelebA64-72M-z512-38M" using the DDIM sampling method with different numbers of steps. For each dataset, we calculate the FID score between 50k generated samples and 50k real images randomly selected from the dataset. Table 2 shows that PDAE significantly improves the sample quality of pre-trained DPMs and outperforms Diff-AE. Note that PDAE can be applied to any pre-trained DPM as an auxiliary booster to improve its sample quality. 5 Related Work Our work is based on an emerging class of latent variable generative models known as Diffusion Probabilistic Models (DPMs) [43, 14], which are now popular for their stable training process and competitive sample quality. Numerous studies [34, 24, 8, 15, 44, 19, 46, 30] and applications [5, 26, 18, 32, 57, 6, 29, 41, 3, 16, 17] have further significantly improved and expanded DPMs. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs [13], VAEs [25, 39] and DPMs, are natural candidates for this, since they inherently involve a latent representation of the data they generate. For GANs, due to their lack of inference functionality, one has to extract the representations of given real samples with an extra technique called GAN inversion [54], which inverts samples back into the latent space of a trained GAN. Existing inversion methods [58, 35, 4, 1, 2, 51] either have limited reconstruction quality or require significantly higher computational cost. VAEs explicitly learn representations for samples, but still face representation-generation trade-off challenges [49, 42]. VQ-VAE [49, 37] and D2C [42] overcome these problems by modeling the latent variables post-hoc in different ways. DPMs also yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. Specifically, they jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction, treating the representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Compared with Diff-AE, PDAE leverages existing pre-trained DPMs for representation learning, also via autoencoding, but with better training efficiency and performance. A concurrent work with a similar idea is the textual inversion of pre-trained text-to-image DPMs [12]. Specifically, given only 3-5 images of a user-provided concept, like an object or a style, they learn to represent it through new "words" in the embedding space of the frozen text-to-image DPM. These learned "words" can be further composed into natural language sentences, guiding personalized creation in an intuitive way. From the perspective of the posterior mean gap, for a given new concept, textual inversion optimizes the embedding vector of its new "words" to find the best textual condition $c$, which can then be fed into the pre-trained text-to-image DPM ($\epsilon_\theta(x_t, c, t)$) to fill as much of the gap ($\epsilon - \epsilon_\theta(x_t, \emptyset, t)$) as possible.
6 Conclusion In conclusion, we present a general method called PDAE that leverages pre-trained DPMs for representation learning via autoencoding and achieves better training efficiency and performance than Diff-AE. Our key idea is based on the concept of the posterior mean gap and its connection with the classifier-guided sampling method. A concurrent work, the textual inversion of pre-trained text-to-image DPMs, can also be explained from this perspective. We think the idea can be further explored to extract knowledge from pre-trained DPMs, such as for interpretable direction discovery [51], and we leave this as future work. Acknowledgments and Disclosure of Funding This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072397 and No. 61836002), Zhejiang Natural Science Foundation (LR19F020006) and Yiwise.
1. What is the focus of the paper regarding learning representations? 2. What are the strengths and weaknesses of the proposed method compared to previous works, particularly DiffusionAE? 3. Are there any questions regarding the novelty and improvement of the method over other baseline methods? 4. How does the reviewer assess the technical soundness and formulation of the proposed method? 5. What are some suggestions for future improvements or comparisons with related works? 6. Have any limitations or potential negative social impacts been considered in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents a method for learning representations from pretrained unconditional diffusion models. Different from DiffusionAE, which learns the encoder and conditional DPM together, this paper proposes to keep a pretrained unconditional diffusion model unchanged and to learn the classifier-guidance gradient, which takes a latent variable as an additional input; the latent variable is encoded by a learnable encoder. Empirical results show that this method leads to better reconstruction and few-shot conditional generation results compared to DiffusionAE with less training expense. Qualitatively the method learns meaningful representations, and it can also improve the sample quality of unconditional diffusion models with a latent DPM learned over the latent variables. Strengths And Weaknesses Strengths: This paper is well written and easy to follow. It clearly states its connection with the closest related work, DiffusionAE, and points out its advantages over that work. The proposed method is technically sound and naturally combines representation learning and classifier guidance. Various experiments are carried out and results are compared with baseline methods, making the method more convincing. Weaknesses: IMO the novelty of this work is somewhat limited. Compared to DiffusionAE, this work includes classifier guidance and the latent codes are embedded into the learnable guidance term instead of the diffusion model, apart from which all other components remain the same as in DiffusionAE, such as reversing the DDIM sampler for the latent code and learning a latent DPM for z. Technically speaking, I don't think there is a significantly novel method proposed here. Compared to DiffusionAE, this method includes a guidance term, so it is not quite surprising to me that it can outperform DiffusionAE. It's not clear to me whether the improvement is due to the guidance mechanism or due to the novel formulation $G_\psi$. Questions One baseline worth comparing against is DiffusionAE + classifier-free guidance. If the proposed method can beat this baseline, it will be more convincing that the improvement comes from the novel formulation $G_\psi$, instead of from the guidance formulation, which is not a contribution of this paper. For classifier(-free) guidance, it is known that results are better if the diffusion model is also conditional, i.e., the modified score is given by $\nabla_{x_t}[\log p(x_t \mid c) + \omega \log p(c \mid x_t)]$ instead of $\nabla_{x_t}[\log p(x_t) + \omega \log p(c \mid x_t)]$. I'm wondering if a similar conclusion applies here, i.e., if you condition both the diffusion model and the classifier gradient $G_\psi$ on the latent variables z, whether you can get better performance than with the current formulation. Limitations No limitations or potential negative societal impacts are discussed.
NIPS
Title Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models Abstract Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose Pre-trained DPM AutoEncoding (PDAE), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason that pre-trained DPMs fail to reconstruct an image from its latent variables is due to the information loss of forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing a part of network of pre-trained DPMs and redesigning the weighting scheme of diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE. Our implementation is available at https://github.com/ckczzj/PDAE. 1 Introduction Deep generative models such as variational autoencoders (VAEs) [25, 39], generative adversarial networks (GANs) [13], autoregressive models [50, 48], normalizing flows (NFs) [38, 23] and energybased models (EBMs) [9, 45] have shown remarkable capacity to synthesize striking image samples. Recently, another kind of generative models, Diffusion Probabilistic Models (DPMs) [43, 14] are further developed and becoming popular for their stable training process and state-of-the-art sample quality [8]. Although a large number of degrees of freedom in implementation, the DPMs discussed in this paper will refer exclusively to those trained by the denoising method proposed in DDPMs [14]. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs and VAEs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. Likewise, DPMs inherently ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. 
Specifically, they employ an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction by taking the encoded representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Following the paradigm of autoencoders, PDAE aims to adapt existing pre-trained DPMs to the decoders for image reconstruction and benefit from it. Generally, pre-trained DPMs cannot accurately predict the posterior mean of xt−1 from xt in the reverse process due to the information loss of forward process, which results in a gap between their predicted posterior mean and the true one. This is the reason that they fail to reconstruct an image (x0) from its latent variables (xt). From this perspective, the classifier-guided sampling method [8] can be explained as reconstructing the lost class information in samples by shifting the predicted posterior mean with an extra item computed by the gradient of a classifier to fill the gap. Drawing inspiration from this method that uses the prior knowledges (class label) to fill the gap, we aim to inversely extract the knowledges from the gap, i.e., learn representations that can help to fill the gap. In light of this, we employ a novel gradient estimator to predict the mean shift according to encoded representations and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. Furthermore, we find that the posterior mean gap in different time stages contain different levels of information, so we redesign the weighting scheme of diffusion loss to encourage the model to learn rich representations efficiently. We also reuse a part of network of pre-trained DPMs to accelerate the convergence of our model. Based on pre-trained DPMs, PDAE only needs less than half of the training time that Diff-AE costs to complete the representation learning but still outperforms Diff-AE. Moreover, PDAE also enables some other interesting features. 2 Background 2.1 Denoising Diffusion Probabilistic Models DDPMs [14] employ a forward process that starts from the data distribution q(x0) and sequentially corrupts it to N (0, I) with Markov diffusion kernels q(xt|xt−1) defined by a fixed variance schedule {βt}Tt=1. The process can be expressed by: q(xt|xt−1) = N (xt; √ 1− βtxt−1 , βtI) q(x1:T |x0) = T∏ t=1 q(xt|xt−1) , (1) where {xt}Tt=1 are latent variables of DDPMs. According to the rule of the sum of normally distributed random variables, we can directly sample xt from x0 for arbitrary t with q(xt|x0) = N (xt; √ ᾱtx0 , (1− ᾱt)I), where αt = 1− βt and ᾱt = ∏t i=1 αi. The reverse (generative) process is defined as another Markov chain parameterized by θ to describe the same but reverse process, denoising an arbitrary Gaussian noise to a clean data sample: pθ(xt−1|xt) = N (xt−1;µθ(xt, t) , Σθ(xt, t)) pθ(x0:T ) = p(xT ) T∏ t=1 pθ(xt−1|xt) , (2) where p(xT ) = N (xT ;0, I). It employs pθ(xt−1|xt) of Gaussian form because the reversal of the diffusion process has the identical functional form as the forward process when βt is small [11, 43]. The generative distribution can be represented as pθ(x0) = ∫ pθ(x0:T )dx1:T . Training is performed to maximize the model log likelihood ∫ q(x0) log pθ(x0)dx0 by minimizing the variational upper bound of the negative one. 
The final objective is derived by some parameterization and simplication [14]: Lsimple(θ) = Ex0,t,ϵ [∥∥ϵ− ϵθ(√ᾱtx0 +√1− ᾱtϵ, t)∥∥2] , (3) where ϵθ is a function approximator to predict ϵ from xt. 2.2 Denoising Diffusion Implicit Models DDIMs [44] define a non-Markov forward process that leads to the same training objective with DDPMs, but the corresponding reverse process can be much more flexible and faster to sample from. Specifically, one can sample xt−1 from xt using the ϵθ of some pre-trained DDPMs via: xt−1 = √ ᾱt−1 ( xt − √ 1− ᾱt · ϵθ(xt, t)√ ᾱt ) + √ 1− ᾱt−1 − σ2t · ϵθ(xt, t) + σtϵt , (4) where ϵt ∼ N (0, I) and σt controls the stochasticity of forward process. The strides greater than 1 are allowed for accelerated sampling. When σt = 0, the generative process becomes deterministic, which is named as DDIMs. 2.3 Classifier-guided Sampling Method Classifier-guided sampling method [43, 46, 8] shows that one can train a classifier pϕ(y|xt) on noisy data and use its gradient ∇xt log pϕ(y|xt) to guide some pre-trained unconditional DDPM to sample towards specified class y. The conditional reverse process can be approximated by a Gaussian similar to that of the unconditional one in Eq.(2), but with a shifted mean: pθ,ϕ(xt−1|xt,y) ≈ N (xt−1;µθ(xt, t) +Σθ(xt, t) · ∇xt log pϕ(y|xt) , Σθ(xt, t)) . (5) For deterministic sampling methods like DDIMs, one can use score-based conditioning trick [46, 45] to define a new function approximator for conditional sampling: ϵ̂θ(xt, t) = ϵθ(xt, t)− √ 1− ᾱt · ∇xt log pϕ(y|xt) . (6) More generally, any similarity estimator between noisy data and conditions can be applied for guided sampling, such as noisy-CLIP guidance [33, 31]. 3 Method 3.1 Forward Process Posterior Mean Gap Generally, one will train unconditional and conditional DPMs by respectively learning pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) and pθ(xt−1|xt,y) = N (xt−1;µθ(xt,y, t),Σθ(xt,y, t)) to approximate the same forward process posterior q(xt−1|xt,x0) = N (xt−1; µ̃t(xt,x0), 1−ᾱt−11−ᾱt βtI). Here y is some condition that contains some prior knowledges of corresponding x0, such as class label. Assuming that both Σθ is set to untrained time dependent constants, under the same experimental settings, the conditional DPMs will reach a lower optimized diffusion loss. The experiment in Figure 1 can prove this fact, which means that µθ(xt,y, t) is closer to µ̃t(xt,x0) than µθ(xt, t). This implies that there exists a gap between the posterior mean predicted by the unconditional DPMs ( µθ(xt, t) ) and the true one ( µ̃t(xt,x0) ) . Essentially, the posterior mean gap is caused by the information loss of forward process so that the reverse process cannot recover it in xt−1 only according to xt. If we introduce some knowledges of x0 for DPMs, like y here, the gap will be smaller. The more information of x0 that y contains, the smaller the gap is. Moreover, according to Eq.(5), the Gaussian mean of classifier-guided conditional reverse process contains an extra shift item compared with that of the unconditional one. From the perspective of posterior mean gap, the mean shift item can partially fill the gap and help the reverse process to reconstruct the lost class information in samples. In theory, if y in Eq.(5) contains all information of x0, the mean shift will fully fill the gap and guide the reverse process to reconstruct x0. 
On the other hand, if we employ a model to predict mean shift according to our encoded representations z and train it to fill as much gap as possible, the encoder will be forced to learn as much information as possible from x0 to help the filling. The more the gap is filled, the more accurate the mean shift is, the more perfect the reconstruction is, and the more information of x0 that z contains. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. 3.2 Unsupervised Representation Learning by Filling the Gap Following the paradigm of autoencoders, we employ an encoder z = Eφ(x0) for learning compact and meaningful representations from input images and adapt a pre-trained unconditional DPM pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) to the decoder for image reconstruction. Specifically, we employ a gradient estimator Gψ(xt, z, t) to simulate ∇xt log p(z|xt), where p(z|xt) is some implicit classifier that we will not use explicitly, and use it to assemble a conditional DPM pθ,ψ(xt−1|xt, z) = N (xt−1;µθ(xt, t)+Σθ(xt, t) ·Gψ(xt, z, t) , Σθ(xt, t)) as the decoder. Then we train it like a regular conditional DPM by optimizing following derived objective (assuming the ϵ-prediction parameterization is adopted): L(ψ,φ) = Ex0,t,ϵ [ λt ∥∥ϵ− ϵθ(xt, t) + √αt√1− ᾱt βt ·Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) ∥∥2] , (7) where xt = √ ᾱtx0+ √ 1− ᾱtϵ and λt is a new weighting scheme that we will discuss in Section 3.4. Note that we use pre-trained DPMs so that θ are frozen during the optimization. Usually we set Σθ = 1−ᾱt−1 1−ᾱt βtI to untrained time-dependent constants. The optimization is equivalent to minimizing ∥∥Σθ(xt, t)·Gψ(xt,Eφ(x0), t)−(µ̃t(xt,x0)−µθ(xt, t))∥∥2, which forces the predicted mean shift Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) to fill the posterior mean gap µ̃t(xt,x0)− µθ(xt, t). With trained Gψ(xt, z, t), we can treat it as the score of an optimal classifier p(z|xt) and use the classifier-guided sampling method in Eq.(5) for DDPM sampling or use the modified function approximator ϵ̂θ in Eq.(6) for DDIM sampling, based on pre-trained ϵθ(xt, t). We put detailed algorithm procedures in Appendix ??. Except the semantic latent code z, we can infer a stochastic latent code xT [36] by running the deterministic generative process of DDIMs in reverse: xt+1 = √ ᾱt+1 ( xt − √ 1− ᾱt · ϵ̂θ(xt, t)√ ᾱt ) + √ 1− ᾱt+1 · ϵ̂θ(xt, t) . (8) This procedure is optional, but helpful to near-exact reconstruction and real-image manipulation for reconstructing minor details of input images when using DDIM sampling. We also train a latent DPM pω(zt−1|zt) to model the learned semantic latent space, same with that in Diff-AE [36]. With a trained latent DPM, we can sample z from it to help pre-trained DPMs to achieve faster and better unconditional sampling under the guidance of Gψ(xt, z, t). 3.3 Network Design Figure 2 shows the network and data flow of PDAE. For encoder Eφ, unlike Diff-AE that uses the encoder part of U-Net [40], we find that simply stacked convolution layers and a linear layer is enough to learn meaningful z from x0. For gradient estimator Gψ, we use U-Net similar to the function approximator ϵθ of pre-trained DPM. Considering that ϵθ also takes xt and t as input, we can further leverage the knowledges of pre-trained DPM by reusing its trained encoder part and time embedding layer, so that we only need to employ new middle blocks, decoder part and output blocks of U-Net for Gψ . 
To incorporate z into them, we follow [8] to extend Group Normalization [53] by applying scaling & shifting twice on normalized feature maps: AdaGN(h, t,z) = zs(tsGroupNorm(h) + tb) + zb , (9) where [ts, tb] and [zs, zb] are obtained from a linear projection of t and z, respectively. Note that we still use skip connections from reused encoder to new decoder. In this way, Gψ is totally determined by pre-trained DPM and can be universally applied to different U-Net architectures. 3.4 Weighting Scheme Redesign We originally worked with simplified training objective like that in DDPMs [14], i.e. setting λt = 1 in Eq.(7), but found the training extremely unstable, resulting in slow/non- convergence and poor performance. Inspired by P2-weighting [7], which has shown that the weighting scheme of diffusion loss can greatly affect the performance of DPMs, we attribute this phenomenon to the weighting scheme and investigate it in Figure 3. Specifically, we train an unconditional DPM and a noisy classifier on MNIST [28], and divide the diffusion forward process into three stages: early-stage between 0 and t1, critical-stage between t1 and t2 and late-stage between t2 and T , as shown in the top row. Then we design a mixed sampling procedure that employs unconditional sampling but switches to classifier-guided sampling only during the specified stage. The bottom three rows show the samples generated by three different mixed sampling procedures, where each row only employs classifier-guided sampling during the specified stage on the right. As we can see, only the samples guided by the classifier during critical-stage match the input class labels. We can conclude that the mean shift during critical-stage contains more crucial information to reconstruct the input class label in samples than the other two stages. From the view of diffusion trajectories, the sampling trajectories are separated from each other during critical-stage and they need the mean shift to guide them towards specified direction, otherwise it will be determined by the stochasticity of Langevin dynamics. Therefore, we opt to down-weight the objective function for the t in early- and late-stage to encourage the model to learn rich representations from critical-stage. Inspired by P2-weighting [7], we redesign a weighting scheme of diffusion loss (λt in Eq.(7)) in terms of signal-to-noise ratio [24] (SNR(t) = ᾱt1−ᾱt ): λt = ( 1 1 + SNR(t) )1−γ · ( SNR(t) 1 + SNR(t) )γ , (10) where the first item is for early-stage and the second one is for late-stage. γ is a hyperparameter that balances the strength of down-weighting between two items. Empirically we set γ = 0.1. Figure 4 shows the normalized weighting schemes of diffusion loss for different DPMs relative to the true variational lower bound loss. Compared with other DPMs, our weighting scheme down-weights the diffusion loss for both low and high SNR. 4 Experiments To compare PDAE with Diff-AE [36], we follow their experiments with the same settings. Moreover, we also show that PDAE enables some added features. For fair comparison, we use the baseline DPMs provided by official Diff-AE implementation as our pre-trained models (also as our baselines), which have the same network architectures (hyperparameters) with their Diff-AE models. 
For brevity, we use the notation such as "FFHQ128-130M-z512-64M" to name our model, which means that we use a baseline DPM pre-trained with 130M images and leverage it for PDAE training with 64M images, on 128× 128 FFHQ dataset [21], with the semantic latent code z of 512-d. We put all implementation details in Appendix ?? and additional samples of following experiments in Appendix ??. 4.1 Training Efficiency We demonstrate the better training efficiency of PDAE compared with Diff-AE from two aspects: training time and times. For training time, we train both models with the same network architectures (hyperparameters) on 128×128 image dataset using 4 Nvidia A100-SXM4 GPUs for distributed training and set batch size to 128 (32 for each GPU) to calculate their training throughput (imgs/sec./A100). PDAE achieves a throughput of 81.57 and Diff-AE achieves that of 75.41. Owing to the reuse of the U-Net encoder part of pre-trained DPM, PDAE has less trainable parameters and achieves a higher training throughput than Diff-AE. For training times, we find that PDAE needs about 13 ∼ 1 2 of the number of training batches (images) that Diff-AE needs for loss convergence. We think this is because that modeling the posterior mean gap based on pre-trained DPMs is easier than modeling a conditional DPM from scratch. The network reuse and the weighting scheme redesign also help. As a result, based on pre-trained DPMs, PDAE needs less than half of the training time that Diff-AE costs to complete the representation learning. 4.2 Learned Mean Shift Fills Posterior Mean Gap We train a model of "FFHQ128-130M-z512-64M" and show that our learned mean shift can fill the posterior mean gap with qualitative and quantitative results in Figure 5. Specifically, we select some images x0 from FFHQ, sample xt = √ ᾱtx0 + √ 1− ᾱtϵ for different t and predict x̂0 from xt by denoising them for only one step (i.e., x̂0 = xt− √ 1−ᾱtϵ̂√ ᾱt ), using pre-trained DPM and PDAE respectively. As we can see in the figure (left), even for large t, PDAE can predict accurate noise from xt and reconstruct plausible images, which shows that the predicted mean shift fills the posterior mean gap and the learned representation helps to recover the lost information of forward process. Furthermore, we randomly select 1000 images from FFHQ, sample xt = √ ᾱtx0 + √ 1− ᾱtϵ and calculate their average posterior mean gap for each step using pre-trained DPM: ∥µ̃t(xt,x0) − µθ(xt, t)∥2 and PDAE: ∥µ̃t(xt,x0)− (µθ(xt, t) +Σθ(xt, t) ·Gψ(xt,Eφ(x0), t))∥2 respectively, shown in the figure (right). As we can see, PDAE predicts the mean shift that significantly fills the posterior mean gap. 4.3 Autoencoding Reconstruction We use "FFHQ128-130M-z512-64M" to run some autoencoding reconstruction examples using PDAE generative process of DDIM and DDPM respectively. As we can see in Figure 6, both methods generate samples with similar contents to the input. Some stochastic variations [36] occur in minor details of hair, eye and skin when introducing stochasticity. Due to the similar performance between DDPM and DDIM with random xT , we will always use DDIM sampling method in later experiments. We can get a near-exact reconstruction if we use the stochastic latent code inferred from aforementioned ODE, which further proves that the stochastic latent code controls the local details. To further evaluate the autoencoding reconstruction quality of PDAE, we conduct the same quantitative experiments with Diff-AE. 
Specifically, we use "FFHQ128-130M-z512-64M" to encode-andreconstruct all 30k images of CelebA-HQ [20] and evaluate the reconstruction quality with their average SSIM [52], LPIPS [56] and MSE. We use the same baselines described in [36], and the results are shown in Table 1. We can see that PDAE is competitive with the state-of-the-art NVAE even with much less latent dimensionality and also outperforms Diff-AE in all metrics except the LPIPS for random xT . Moreover, PDAE only needs about half of the training times that Diff-AE needs for representation learning, which shows that PDAE can learn richer representations from images more efficiently based on pre-trained DPM. 4.4 Interpolation of Semantic Latent Codes and Trajectories Given two images x10 and x 2 0 from FFHQ, we use "FFHQ128-130M-z512-64M" to encode them into (z1,x1T ) and (z 2,x2T ) and run PDAE generative process of DDIM starting from Slerp(x 1 T ,x 2 T ;λ) under the guidance of Gψ ( xt, Lerp(z 1, z2;λ), t ) with 100 steps, expecting smooth transitions along λ. Moreover, from the view of the diffusion trajectories, PDAE generates desired samples by shifting the unconditional sampling trajectories towards the spatial direction predicted by Gψ(xt, z, t). This enables PDAE to directly interpolate between two different sampling trajectories. Intuitively, the spatial direction predicted by the linear interpolation of two semantic latent codes, Gψ ( xt, Lerp(z 1, z2;λ), t ) , should be equivalent to the linear interpolation of two spatial directions predicted by respective semantic latent code, Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) . We present some examples of these two kinds of interpolation methods in Figure 7. As we can see, both methods generate similar samples that smoothly transition from one endpoint to the other, which means that Gψ ( xt, Lerp(z 1, z2;λ), t ) ≈ Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) , so that Gψ(xt, z, t) can be seen as a function of z analogous to a linear map. The linearity guarantees a meaningful semantic latent space that represents the semantic spatial change of image by a linear change of latent code. 4.5 Attribute Manipulation We can further explore the learned semantic latent space in a supervised way. To illustrate this, we train a model of "CelebA-HQ128-52M-z512-25M" and conduct attribute manipulation experiments by utilizing the attribute annotations of CelebA-HQ dataset. Specifically, we first encode an image to its semantic latent code, then move it along the learned direction and finally decode it to the manipulated image. Similar to Diff-AE, we train a linear classifier to separate the semantic latent codes of the images with different attribute labels and use the normal vector of separating hyperplane (i.e. the weight of linear classifier) as the direction vector. We present some attribute manipulation examples in Figure 8. As we can see, PDAE succeeds in manipulating images by moving their semantic latent codes along the direction of desired attribute with different scales. Like Diff-AE, PDAE can change attribute-relevant features while keeping other irrelevant details almost stationary if using the inferred xT of input image. 4.6 Truncation-like Effect According to [8, 15], we can obtain a truncation-like effect in DPMs by scaling the strength of classifier guidance. We have assumed that Gψ(xt, z, t) trained by filling the posterior mean gap simulates the gradient of some implicit classifier, and it can actually work as desired. 
In theory, it can Figure 9: The truncation-like effect for "ImageNet64-77M-y-38M" by scaling Gψ(xt,y, t) with 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 respectively. Dataset Model FIDT=10 T=20 T=50 T=100 FFHQ DDIM 31.87 20.53 15.82 11.95 Diff-AE 21.95 18.10 13.14 10.55 PDAE 20.16 17.18 12.81 10.31 Horse [55] DDIM 25.24 14.41 7.98 5.93 Diff-AE 12.66 9.21 7.12 5.27 PDAE 11.94 8.51 6.83 5.09 Bedroom [55] DDIM 14.07 9.29 7.31 5.88 Diff-AE 10.79 8.42 6.49 5.32 PDAE 10.05 7.89 6.33 5.47 CelebA DDIM 18.89 13.82 8.48 5.94 Diff-AE 12.92 10.18 7.05 5.30 PDAE 11.84 9.65 7.23 5.19 Table 2: FID scores for unconditional sampling. also be applied in truncation-like effect. To illustrate this, we directly incorporate the class label into Gψ(xt,y, t) and train it to fill the gap. Specifically, we train a model of "ImageNet64-77M-y-38M" and use DDIM sampling method with 100 steps to generate 50k samples, guided by the predicted mean shift with different scales for a truncation-like effect. Figure 9 shows the sample quality effects of sweeping over the scale. As we can see, it achieves the truncation-like effect similar to that of classifier-guided sampling method, which helps us to build connections between filling the posterior mean gap and classifier-guided sampling method. The gradient estimator trained by filling the posterior mean gap is an alternative to the noisy classifier. 4.7 Few-shot Conditional Generation Following D2C [42], we train a model of "CelebA64-72M-z512-38M" on CelebA [20] and aim to achieve conditional sampling given a small number of labeled images. To achieve this, we train a latent DPM pω(zt−1|zt) on semantic latent space and a latent classifier pη(y|z) using given labeled images. For binary scenario, the images are labeled by a binary class (100 samples, 50 for each class). For PU scenario, the images are either labeled positive or unlabeled (100 positively labeled and 10k unlabeled samples). Then we sample z from pω(zt−1|zt) and accept it with the probability of pη(y|z). We use the accepted z to generate 5k samples for every class and compute the FID score between these samples and all images belonging to corresponding class in dataset. We compare PDAE with Diff-AE and D2C. We also use the naive approach that computes the FID score between the training images and the corresponding subset of images in dataset. Table 3 shows that PDAE achieves better FID scores than Diff-AE and D2C. 4.8 Improved Unconditional Sampling As shown in Section 4.2, under the help of z, PDAE can generate plausible images in only one step. If we can get z in advance, PDAE can achieve better sample quality than pre-trained DPMs in the same number of sampling steps. Similar to Diff-AE, we train a latent DPM on semantic latent space and sample z from it to improve the unconditional sampling of pre-trained DPMs. Unlike Diff-AE that must take z as input for sampling, PDAE uses an independent gradient estimator as a corrector of the pre-trained DPM for sampling. We find that only using pre-trained DPMs in the last few sampling steps can achieve better sample quality, which may be because that the gradient estimator is sensitive to z in the last few sampling steps and the stochasticity of sampled z will lead to out-of-domain samples. Asyrp [27] also finds similar phenomenon. Empirically, we carry out this strategy in the last 30% sampling steps. 
We evaluate unconditional sampling results on "FFHQ128-130M-z512-64M", "Horse128-130M-z512-64M", "Bedroom128-120M-z512-70M" and "CelebA64-72M-z512-38M" using the DDIM sampling method with different numbers of steps. For each dataset, we calculate the FID score between 50k generated samples and 50k real images randomly selected from the dataset. Table 2 shows that PDAE significantly improves the sample quality of pre-trained DPMs and outperforms Diff-AE. Note that PDAE can be applied to any pre-trained DPM as an auxiliary booster to improve its sample quality.

5 Related Work

Our work is based on an emerging class of latent variable generative models known as Diffusion Probabilistic Models (DPMs) [43, 14], which are now popular for their stable training process and competitive sample quality. Numerous studies [34, 24, 8, 15, 44, 19, 46, 30] and applications [5, 26, 18, 32, 57, 6, 29, 41, 3, 16, 17] have further significantly improved and expanded DPMs.

Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs [13], VAEs [25, 39], and DPMs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. For GANs, due to their lack of an inference mechanism, one has to extract the representations of given real samples with an extra technique called GAN Inversion [54], which inverts samples back into the latent space of a trained GAN. Existing inversion methods [58, 35, 4, 1, 2, 51] either have limited reconstruction quality or require significantly higher computational cost. VAEs explicitly learn representations for samples, but still face representation-generation trade-off challenges [49, 42]. VQ-VAE [49, 37] and D2C [42] overcome these problems by modeling latent variables post-hoc in different ways. DPMs also yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. Specifically, they jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction, treating the representations as input conditions. Diff-AE is competitive with state-of-the-art models on image reconstruction and capable of various downstream tasks. Compared with Diff-AE, PDAE leverages existing pre-trained DPMs for representation learning, also via autoencoding, but with better training efficiency and performance.

A concurrent work with a similar idea is the textual inversion of pre-trained text-to-image DPMs [12]. Specifically, given only 3-5 images of a user-provided concept, like an object or a style, they learn to represent it through new "words" in the embedding space of the frozen text-to-image DPMs. These learned "words" can be further composed into natural language sentences, guiding personalized creation in an intuitive way. From the perspective of the posterior mean gap, for a given new concept, textual inversion optimizes the corresponding new "words" embedding vector to find the best textual condition c, which can be fed into the pre-trained text-to-image DPM ϵθ(xt, c, t) to fill as much of the gap ϵ − ϵθ(xt, ∅, t) as possible.
6 Conclusion

In conclusion, we present a general method called PDAE that leverages pre-trained DPMs for representation learning via autoencoding and achieves better training efficiency and performance than Diff-AE. Our key idea is based on the concept of the posterior mean gap and its connection with the classifier-guided sampling method. A concurrent work, textual inversion of pre-trained text-to-image DPMs, can also be explained from this perspective. We think the idea can be further explored to extract knowledge from pre-trained DPMs, such as interpretable direction discovery [51], and we leave this as future work.

Acknowledgments and Disclosure of Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072397 and No. 61836002), Zhejiang Natural Science Foundation (LR19F020006) and Yiwise.
1. What is the focus and contribution of the paper on diffusion probabilistic models? 2. What are the strengths of the proposed framework, particularly in its novel approach to representation learning? 3. What are the weaknesses of the paper, especially regarding its writing quality, technical novelty, and experimental results? 4. How does the reviewer question the similarity between the proposed framework and the latest state-of-the-art diffusion autoencoders (Diff-AE)? 5. What are the limitations of the paper that the reviewer identifies?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper presents a new learning framework for diffusion probabilistic models. Unlike most existing diffusion probabilistic models, which mainly embed the data into a predefined distribution through a Markov chain, the proposed framework aims to learn robust representation features through a pre-trained diffusion model. The authors also introduce a redesigned weighting scheme to further improve representation learning in diffusion models. Extensive experiments are conducted on multiple scenarios (e.g., FFHQ, CelebA, Bedroom).
Strengths And Weaknesses
Strengths
The task of representation learning via diffusion probabilistic models is novel and interesting. The overall idea and the approach towards addressing it seem reasonable.
Weaknesses
The paper is not well written and is hard to read. The technical novelty is limited: neither the designed network nor the theory is new. The qualitative results are poor, and the quantitative improvement is limited.
Questions
What is the key difference between the proposed framework and the latest state-of-the-art diffusion autoencoders (Diff-AE) [21]? While the authors consider Diff-AE as a baseline, the two works seem to be similar. The authors should clearly explain the difference between them. Besides, the qualitative comparison to the Diff-AE baseline is missing, even after considering the results in the Appendix. What's more, the quantitative improvement in Tables 1, 2, and 3 is limited compared to the Diff-AE baseline.
I have a hard time accepting the x_0 → x_t in Figure 1. If I understand correctly, this should be a diffusion process in general. Then, does it contain any pre-trained parameters?
Could the authors please clarify the difference between the results shown in Figure 5? While the authors refer to "some stochastic variations", it is very hard to see such a conclusion from the figure.
Limitations
Limitations are not discussed.
NIPS
Title Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models Abstract Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose Pre-trained DPM AutoEncoding (PDAE), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason that pre-trained DPMs fail to reconstruct an image from its latent variables is due to the information loss of forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing a part of network of pre-trained DPMs and redesigning the weighting scheme of diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE. Our implementation is available at https://github.com/ckczzj/PDAE. 1 Introduction Deep generative models such as variational autoencoders (VAEs) [25, 39], generative adversarial networks (GANs) [13], autoregressive models [50, 48], normalizing flows (NFs) [38, 23] and energybased models (EBMs) [9, 45] have shown remarkable capacity to synthesize striking image samples. Recently, another kind of generative models, Diffusion Probabilistic Models (DPMs) [43, 14] are further developed and becoming popular for their stable training process and state-of-the-art sample quality [8]. Although a large number of degrees of freedom in implementation, the DPMs discussed in this paper will refer exclusively to those trained by the denoising method proposed in DDPMs [14]. Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs and VAEs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. Likewise, DPMs inherently ∗Corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. 
Specifically, they employ an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction by taking the encoded representations as input conditions. Diff-AE is competitive with the state-of-the-art model on image reconstruction and capable of various downstream tasks. Following the paradigm of autoencoders, PDAE aims to adapt existing pre-trained DPMs to the decoders for image reconstruction and benefit from it. Generally, pre-trained DPMs cannot accurately predict the posterior mean of xt−1 from xt in the reverse process due to the information loss of forward process, which results in a gap between their predicted posterior mean and the true one. This is the reason that they fail to reconstruct an image (x0) from its latent variables (xt). From this perspective, the classifier-guided sampling method [8] can be explained as reconstructing the lost class information in samples by shifting the predicted posterior mean with an extra item computed by the gradient of a classifier to fill the gap. Drawing inspiration from this method that uses the prior knowledges (class label) to fill the gap, we aim to inversely extract the knowledges from the gap, i.e., learn representations that can help to fill the gap. In light of this, we employ a novel gradient estimator to predict the mean shift according to encoded representations and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. Furthermore, we find that the posterior mean gap in different time stages contain different levels of information, so we redesign the weighting scheme of diffusion loss to encourage the model to learn rich representations efficiently. We also reuse a part of network of pre-trained DPMs to accelerate the convergence of our model. Based on pre-trained DPMs, PDAE only needs less than half of the training time that Diff-AE costs to complete the representation learning but still outperforms Diff-AE. Moreover, PDAE also enables some other interesting features. 2 Background 2.1 Denoising Diffusion Probabilistic Models DDPMs [14] employ a forward process that starts from the data distribution q(x0) and sequentially corrupts it to N (0, I) with Markov diffusion kernels q(xt|xt−1) defined by a fixed variance schedule {βt}Tt=1. The process can be expressed by: q(xt|xt−1) = N (xt; √ 1− βtxt−1 , βtI) q(x1:T |x0) = T∏ t=1 q(xt|xt−1) , (1) where {xt}Tt=1 are latent variables of DDPMs. According to the rule of the sum of normally distributed random variables, we can directly sample xt from x0 for arbitrary t with q(xt|x0) = N (xt; √ ᾱtx0 , (1− ᾱt)I), where αt = 1− βt and ᾱt = ∏t i=1 αi. The reverse (generative) process is defined as another Markov chain parameterized by θ to describe the same but reverse process, denoising an arbitrary Gaussian noise to a clean data sample: pθ(xt−1|xt) = N (xt−1;µθ(xt, t) , Σθ(xt, t)) pθ(x0:T ) = p(xT ) T∏ t=1 pθ(xt−1|xt) , (2) where p(xT ) = N (xT ;0, I). It employs pθ(xt−1|xt) of Gaussian form because the reversal of the diffusion process has the identical functional form as the forward process when βt is small [11, 43]. The generative distribution can be represented as pθ(x0) = ∫ pθ(x0:T )dx1:T . Training is performed to maximize the model log likelihood ∫ q(x0) log pθ(x0)dx0 by minimizing the variational upper bound of the negative one. 
The final objective is derived by some parameterization and simplication [14]: Lsimple(θ) = Ex0,t,ϵ [∥∥ϵ− ϵθ(√ᾱtx0 +√1− ᾱtϵ, t)∥∥2] , (3) where ϵθ is a function approximator to predict ϵ from xt. 2.2 Denoising Diffusion Implicit Models DDIMs [44] define a non-Markov forward process that leads to the same training objective with DDPMs, but the corresponding reverse process can be much more flexible and faster to sample from. Specifically, one can sample xt−1 from xt using the ϵθ of some pre-trained DDPMs via: xt−1 = √ ᾱt−1 ( xt − √ 1− ᾱt · ϵθ(xt, t)√ ᾱt ) + √ 1− ᾱt−1 − σ2t · ϵθ(xt, t) + σtϵt , (4) where ϵt ∼ N (0, I) and σt controls the stochasticity of forward process. The strides greater than 1 are allowed for accelerated sampling. When σt = 0, the generative process becomes deterministic, which is named as DDIMs. 2.3 Classifier-guided Sampling Method Classifier-guided sampling method [43, 46, 8] shows that one can train a classifier pϕ(y|xt) on noisy data and use its gradient ∇xt log pϕ(y|xt) to guide some pre-trained unconditional DDPM to sample towards specified class y. The conditional reverse process can be approximated by a Gaussian similar to that of the unconditional one in Eq.(2), but with a shifted mean: pθ,ϕ(xt−1|xt,y) ≈ N (xt−1;µθ(xt, t) +Σθ(xt, t) · ∇xt log pϕ(y|xt) , Σθ(xt, t)) . (5) For deterministic sampling methods like DDIMs, one can use score-based conditioning trick [46, 45] to define a new function approximator for conditional sampling: ϵ̂θ(xt, t) = ϵθ(xt, t)− √ 1− ᾱt · ∇xt log pϕ(y|xt) . (6) More generally, any similarity estimator between noisy data and conditions can be applied for guided sampling, such as noisy-CLIP guidance [33, 31]. 3 Method 3.1 Forward Process Posterior Mean Gap Generally, one will train unconditional and conditional DPMs by respectively learning pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) and pθ(xt−1|xt,y) = N (xt−1;µθ(xt,y, t),Σθ(xt,y, t)) to approximate the same forward process posterior q(xt−1|xt,x0) = N (xt−1; µ̃t(xt,x0), 1−ᾱt−11−ᾱt βtI). Here y is some condition that contains some prior knowledges of corresponding x0, such as class label. Assuming that both Σθ is set to untrained time dependent constants, under the same experimental settings, the conditional DPMs will reach a lower optimized diffusion loss. The experiment in Figure 1 can prove this fact, which means that µθ(xt,y, t) is closer to µ̃t(xt,x0) than µθ(xt, t). This implies that there exists a gap between the posterior mean predicted by the unconditional DPMs ( µθ(xt, t) ) and the true one ( µ̃t(xt,x0) ) . Essentially, the posterior mean gap is caused by the information loss of forward process so that the reverse process cannot recover it in xt−1 only according to xt. If we introduce some knowledges of x0 for DPMs, like y here, the gap will be smaller. The more information of x0 that y contains, the smaller the gap is. Moreover, according to Eq.(5), the Gaussian mean of classifier-guided conditional reverse process contains an extra shift item compared with that of the unconditional one. From the perspective of posterior mean gap, the mean shift item can partially fill the gap and help the reverse process to reconstruct the lost class information in samples. In theory, if y in Eq.(5) contains all information of x0, the mean shift will fully fill the gap and guide the reverse process to reconstruct x0. 
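As a concrete illustration of Eqs. (4)-(6), the sketch below performs one deterministic DDIM step with classifier guidance via the score-conditioning trick. The function names, the `classifier_grad` callable and the schedule layout are illustrative assumptions, not code from any particular implementation.

```python
import torch

def guided_ddim_step(x_t, t, t_prev, eps_model, classifier_grad, alpha_bar):
    """One DDIM step (sigma_t = 0) guided by a noisy classifier.

    eps_hat follows Eq. (6): eps_theta(x_t, t) - sqrt(1 - a_bar_t) * grad_x log p(y|x_t),
    and the update follows Eq. (4) with sigma_t = 0.
    alpha_bar: 1-D tensor of cumulative products of (1 - beta_t).
    """
    a_t = alpha_bar[t]
    a_prev = alpha_bar[t_prev]
    eps = eps_model(x_t, t)                                              # pre-trained unconditional prediction
    eps_hat = eps - torch.sqrt(1.0 - a_t) * classifier_grad(x_t, t)      # score-conditioning trick (Eq. 6)
    x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps_hat) / torch.sqrt(a_t)  # predicted clean sample
    return torch.sqrt(a_prev) * x0_pred + torch.sqrt(1.0 - a_prev) * eps_hat  # Eq. (4)
```

PDAE keeps exactly this sampling machinery and only replaces the classifier gradient with a learned estimator, which is the construction developed next.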
On the other hand, if we employ a model to predict mean shift according to our encoded representations z and train it to fill as much gap as possible, the encoder will be forced to learn as much information as possible from x0 to help the filling. The more the gap is filled, the more accurate the mean shift is, the more perfect the reconstruction is, and the more information of x0 that z contains. PDAE follows this principle to build an autoencoder based on pre-trained DPMs. 3.2 Unsupervised Representation Learning by Filling the Gap Following the paradigm of autoencoders, we employ an encoder z = Eφ(x0) for learning compact and meaningful representations from input images and adapt a pre-trained unconditional DPM pθ(xt−1|xt) = N (xt−1;µθ(xt, t),Σθ(xt, t)) to the decoder for image reconstruction. Specifically, we employ a gradient estimator Gψ(xt, z, t) to simulate ∇xt log p(z|xt), where p(z|xt) is some implicit classifier that we will not use explicitly, and use it to assemble a conditional DPM pθ,ψ(xt−1|xt, z) = N (xt−1;µθ(xt, t)+Σθ(xt, t) ·Gψ(xt, z, t) , Σθ(xt, t)) as the decoder. Then we train it like a regular conditional DPM by optimizing following derived objective (assuming the ϵ-prediction parameterization is adopted): L(ψ,φ) = Ex0,t,ϵ [ λt ∥∥ϵ− ϵθ(xt, t) + √αt√1− ᾱt βt ·Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) ∥∥2] , (7) where xt = √ ᾱtx0+ √ 1− ᾱtϵ and λt is a new weighting scheme that we will discuss in Section 3.4. Note that we use pre-trained DPMs so that θ are frozen during the optimization. Usually we set Σθ = 1−ᾱt−1 1−ᾱt βtI to untrained time-dependent constants. The optimization is equivalent to minimizing ∥∥Σθ(xt, t)·Gψ(xt,Eφ(x0), t)−(µ̃t(xt,x0)−µθ(xt, t))∥∥2, which forces the predicted mean shift Σθ(xt, t) ·Gψ(xt,Eφ(x0), t) to fill the posterior mean gap µ̃t(xt,x0)− µθ(xt, t). With trained Gψ(xt, z, t), we can treat it as the score of an optimal classifier p(z|xt) and use the classifier-guided sampling method in Eq.(5) for DDPM sampling or use the modified function approximator ϵ̂θ in Eq.(6) for DDIM sampling, based on pre-trained ϵθ(xt, t). We put detailed algorithm procedures in Appendix ??. Except the semantic latent code z, we can infer a stochastic latent code xT [36] by running the deterministic generative process of DDIMs in reverse: xt+1 = √ ᾱt+1 ( xt − √ 1− ᾱt · ϵ̂θ(xt, t)√ ᾱt ) + √ 1− ᾱt+1 · ϵ̂θ(xt, t) . (8) This procedure is optional, but helpful to near-exact reconstruction and real-image manipulation for reconstructing minor details of input images when using DDIM sampling. We also train a latent DPM pω(zt−1|zt) to model the learned semantic latent space, same with that in Diff-AE [36]. With a trained latent DPM, we can sample z from it to help pre-trained DPMs to achieve faster and better unconditional sampling under the guidance of Gψ(xt, z, t). 3.3 Network Design Figure 2 shows the network and data flow of PDAE. For encoder Eφ, unlike Diff-AE that uses the encoder part of U-Net [40], we find that simply stacked convolution layers and a linear layer is enough to learn meaningful z from x0. For gradient estimator Gψ, we use U-Net similar to the function approximator ϵθ of pre-trained DPM. Considering that ϵθ also takes xt and t as input, we can further leverage the knowledges of pre-trained DPM by reusing its trained encoder part and time embedding layer, so that we only need to employ new middle blocks, decoder part and output blocks of U-Net for Gψ . 
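Before describing how z is incorporated into these new blocks, here is a minimal PyTorch-style sketch of the training step implied by Eq. (7). The pre-trained noise predictor is frozen; only the encoder Eφ and the gradient estimator Gψ are updated. The module interfaces and the `weight_fn` callable (the weighting scheme λt of Section 3.4) are assumptions made for illustration.

```python
import torch

def pdae_training_loss(x0, t, encoder, grad_estimator, eps_pretrained,
                       alpha, alpha_bar, beta, weight_fn):
    """Sketch of the PDAE objective in Eq. (7) for one minibatch.

    encoder (E_phi) and grad_estimator (G_psi) are trainable; eps_pretrained
    (eps_theta) is frozen. alpha, alpha_bar, beta are 1-D schedule tensors; t is a LongTensor.
    """
    a_bar = alpha_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise       # forward-process sample

    z = encoder(x0)                                                       # semantic latent code
    with torch.no_grad():
        eps = eps_pretrained(x_t, t)                                      # frozen prediction of the pre-trained DPM

    # Sigma_theta is fixed to the untrained posterior variance beta_tilde_t * I.
    a_bar_prev = alpha_bar[(t - 1).clamp(min=0)].view(-1, 1, 1, 1)
    beta_t = beta[t].view(-1, 1, 1, 1)
    sigma = (1.0 - a_bar_prev) / (1.0 - a_bar) * beta_t
    coeff = torch.sqrt(alpha[t].view(-1, 1, 1, 1)) * torch.sqrt(1.0 - a_bar) / beta_t

    shift_term = coeff * sigma * grad_estimator(x_t, z, t)                # scaled predicted mean shift
    residual = noise - eps                                                # epsilon - eps_theta(x_t, t)
    per_pixel = (residual + shift_term) ** 2                              # squared error inside Eq. (7)
    return (weight_fn(t).view(-1, 1, 1, 1) * per_pixel).mean()
```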
To incorporate z into them, we follow [8] to extend Group Normalization [53] by applying scaling & shifting twice on normalized feature maps: AdaGN(h, t,z) = zs(tsGroupNorm(h) + tb) + zb , (9) where [ts, tb] and [zs, zb] are obtained from a linear projection of t and z, respectively. Note that we still use skip connections from reused encoder to new decoder. In this way, Gψ is totally determined by pre-trained DPM and can be universally applied to different U-Net architectures. 3.4 Weighting Scheme Redesign We originally worked with simplified training objective like that in DDPMs [14], i.e. setting λt = 1 in Eq.(7), but found the training extremely unstable, resulting in slow/non- convergence and poor performance. Inspired by P2-weighting [7], which has shown that the weighting scheme of diffusion loss can greatly affect the performance of DPMs, we attribute this phenomenon to the weighting scheme and investigate it in Figure 3. Specifically, we train an unconditional DPM and a noisy classifier on MNIST [28], and divide the diffusion forward process into three stages: early-stage between 0 and t1, critical-stage between t1 and t2 and late-stage between t2 and T , as shown in the top row. Then we design a mixed sampling procedure that employs unconditional sampling but switches to classifier-guided sampling only during the specified stage. The bottom three rows show the samples generated by three different mixed sampling procedures, where each row only employs classifier-guided sampling during the specified stage on the right. As we can see, only the samples guided by the classifier during critical-stage match the input class labels. We can conclude that the mean shift during critical-stage contains more crucial information to reconstruct the input class label in samples than the other two stages. From the view of diffusion trajectories, the sampling trajectories are separated from each other during critical-stage and they need the mean shift to guide them towards specified direction, otherwise it will be determined by the stochasticity of Langevin dynamics. Therefore, we opt to down-weight the objective function for the t in early- and late-stage to encourage the model to learn rich representations from critical-stage. Inspired by P2-weighting [7], we redesign a weighting scheme of diffusion loss (λt in Eq.(7)) in terms of signal-to-noise ratio [24] (SNR(t) = ᾱt1−ᾱt ): λt = ( 1 1 + SNR(t) )1−γ · ( SNR(t) 1 + SNR(t) )γ , (10) where the first item is for early-stage and the second one is for late-stage. γ is a hyperparameter that balances the strength of down-weighting between two items. Empirically we set γ = 0.1. Figure 4 shows the normalized weighting schemes of diffusion loss for different DPMs relative to the true variational lower bound loss. Compared with other DPMs, our weighting scheme down-weights the diffusion loss for both low and high SNR. 4 Experiments To compare PDAE with Diff-AE [36], we follow their experiments with the same settings. Moreover, we also show that PDAE enables some added features. For fair comparison, we use the baseline DPMs provided by official Diff-AE implementation as our pre-trained models (also as our baselines), which have the same network architectures (hyperparameters) with their Diff-AE models. 
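The redesigned weighting scheme of Eq. (10) only depends on the noise schedule and is cheap to precompute; a small sketch follows. The linear beta schedule used in the example is an illustrative choice, not necessarily the schedule used in the paper's experiments.

```python
import numpy as np

def pdae_weights(alpha_bar, gamma=0.1):
    """Weighting scheme of Eq. (10), down-weighting both early- and late-stage timesteps.

    alpha_bar: array of cumulative products alpha_bar_t for t = 1..T.
    """
    snr = alpha_bar / (1.0 - alpha_bar)                                   # signal-to-noise ratio per timestep
    return (1.0 / (1.0 + snr)) ** (1.0 - gamma) * (snr / (1.0 + snr)) ** gamma

# Example: precompute weights for a linear beta schedule (illustrative only).
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)
lambda_t = pdae_weights(alpha_bar)
```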
For brevity, we use notation such as "FFHQ128-130M-z512-64M" to name our models, which means that we use a baseline DPM pre-trained with 130M images and leverage it for PDAE training with 64M images, on the 128×128 FFHQ dataset [21], with a 512-d semantic latent code z. We put all implementation details in Appendix ?? and additional samples of the following experiments in Appendix ??.

4.1 Training Efficiency

We demonstrate the better training efficiency of PDAE compared with Diff-AE from two aspects: training throughput and training iterations. For training throughput, we train both models with the same network architectures (hyperparameters) on a 128×128 image dataset using 4 Nvidia A100-SXM4 GPUs for distributed training and set the batch size to 128 (32 for each GPU) to calculate their training throughput (imgs/sec./A100). PDAE achieves a throughput of 81.57 and Diff-AE achieves 75.41. Owing to the reuse of the U-Net encoder part of the pre-trained DPM, PDAE has fewer trainable parameters and achieves a higher training throughput than Diff-AE. For training iterations, we find that PDAE needs about 1/3 to 1/2 of the number of training batches (images) that Diff-AE needs for loss convergence. We think this is because modeling the posterior mean gap based on pre-trained DPMs is easier than modeling a conditional DPM from scratch. The network reuse and the weighting scheme redesign also help. As a result, based on pre-trained DPMs, PDAE needs less than half of the training time that Diff-AE costs to complete the representation learning.

4.2 Learned Mean Shift Fills Posterior Mean Gap

We train a model of "FFHQ128-130M-z512-64M" and show that our learned mean shift can fill the posterior mean gap with qualitative and quantitative results in Figure 5. Specifically, we select some images x0 from FFHQ, sample xt = √ᾱt x0 + √(1−ᾱt) ϵ for different t and predict x̂0 from xt by denoising for only one step (i.e., x̂0 = (xt − √(1−ᾱt) ϵ̂)/√ᾱt), using the pre-trained DPM and PDAE respectively. As we can see in the figure (left), even for large t, PDAE can predict accurate noise from xt and reconstruct plausible images, which shows that the predicted mean shift fills the posterior mean gap and the learned representation helps to recover the information lost in the forward process. Furthermore, we randomly select 1000 images from FFHQ, sample xt = √ᾱt x0 + √(1−ᾱt) ϵ and calculate their average posterior mean gap for each step using the pre-trained DPM, ‖µ̃t(xt, x0) − µθ(xt, t)‖², and PDAE, ‖µ̃t(xt, x0) − (µθ(xt, t) + Σθ(xt, t) · Gψ(xt, Eφ(x0), t))‖², respectively, shown in the figure (right). As we can see, PDAE predicts a mean shift that significantly fills the posterior mean gap.

4.3 Autoencoding Reconstruction

We use "FFHQ128-130M-z512-64M" to run some autoencoding reconstruction examples using the PDAE generative process of DDIM and DDPM respectively. As we can see in Figure 6, both methods generate samples with contents similar to the input. Some stochastic variations [36] occur in minor details of hair, eyes and skin when introducing stochasticity. Due to the similar performance between DDPM and DDIM with random xT, we will always use the DDIM sampling method in later experiments. We can get a near-exact reconstruction if we use the stochastic latent code inferred from the aforementioned ODE, which further shows that the stochastic latent code controls the local details. To further evaluate the autoencoding reconstruction quality of PDAE, we conduct the same quantitative experiments as Diff-AE.
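As an aside on Section 4.2, both the one-step reconstruction and the gap measurement are short computations; a sketch is given below, where `eps_hat` can come either from the pre-trained DPM alone or from the PDAE-corrected prediction (the helper names are ours, for illustration).

```python
import torch

def one_step_x0(x_t, t, eps_hat, alpha_bar):
    """Single-step prediction: x0_hat = (x_t - sqrt(1 - a_bar_t) * eps_hat) / sqrt(a_bar_t)."""
    a_bar = alpha_bar[t].view(-1, 1, 1, 1)
    return (x_t - torch.sqrt(1.0 - a_bar) * eps_hat) / torch.sqrt(a_bar)

def mean_gap(mu_true, mu_pred):
    """Average squared posterior mean gap ||mu_tilde_t(x_t, x0) - mu_pred||^2 over a batch (Figure 5, right)."""
    return ((mu_true - mu_pred) ** 2).flatten(1).sum(dim=1).mean()
```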
Specifically, we use "FFHQ128-130M-z512-64M" to encode-andreconstruct all 30k images of CelebA-HQ [20] and evaluate the reconstruction quality with their average SSIM [52], LPIPS [56] and MSE. We use the same baselines described in [36], and the results are shown in Table 1. We can see that PDAE is competitive with the state-of-the-art NVAE even with much less latent dimensionality and also outperforms Diff-AE in all metrics except the LPIPS for random xT . Moreover, PDAE only needs about half of the training times that Diff-AE needs for representation learning, which shows that PDAE can learn richer representations from images more efficiently based on pre-trained DPM. 4.4 Interpolation of Semantic Latent Codes and Trajectories Given two images x10 and x 2 0 from FFHQ, we use "FFHQ128-130M-z512-64M" to encode them into (z1,x1T ) and (z 2,x2T ) and run PDAE generative process of DDIM starting from Slerp(x 1 T ,x 2 T ;λ) under the guidance of Gψ ( xt, Lerp(z 1, z2;λ), t ) with 100 steps, expecting smooth transitions along λ. Moreover, from the view of the diffusion trajectories, PDAE generates desired samples by shifting the unconditional sampling trajectories towards the spatial direction predicted by Gψ(xt, z, t). This enables PDAE to directly interpolate between two different sampling trajectories. Intuitively, the spatial direction predicted by the linear interpolation of two semantic latent codes, Gψ ( xt, Lerp(z 1, z2;λ), t ) , should be equivalent to the linear interpolation of two spatial directions predicted by respective semantic latent code, Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) . We present some examples of these two kinds of interpolation methods in Figure 7. As we can see, both methods generate similar samples that smoothly transition from one endpoint to the other, which means that Gψ ( xt, Lerp(z 1, z2;λ), t ) ≈ Lerp ( Gψ(xt, z 1, t),Gψ(xt, z 2, t);λ ) , so that Gψ(xt, z, t) can be seen as a function of z analogous to a linear map. The linearity guarantees a meaningful semantic latent space that represents the semantic spatial change of image by a linear change of latent code. 4.5 Attribute Manipulation We can further explore the learned semantic latent space in a supervised way. To illustrate this, we train a model of "CelebA-HQ128-52M-z512-25M" and conduct attribute manipulation experiments by utilizing the attribute annotations of CelebA-HQ dataset. Specifically, we first encode an image to its semantic latent code, then move it along the learned direction and finally decode it to the manipulated image. Similar to Diff-AE, we train a linear classifier to separate the semantic latent codes of the images with different attribute labels and use the normal vector of separating hyperplane (i.e. the weight of linear classifier) as the direction vector. We present some attribute manipulation examples in Figure 8. As we can see, PDAE succeeds in manipulating images by moving their semantic latent codes along the direction of desired attribute with different scales. Like Diff-AE, PDAE can change attribute-relevant features while keeping other irrelevant details almost stationary if using the inferred xT of input image. 4.6 Truncation-like Effect According to [8, 15], we can obtain a truncation-like effect in DPMs by scaling the strength of classifier guidance. We have assumed that Gψ(xt, z, t) trained by filling the posterior mean gap simulates the gradient of some implicit classifier, and it can actually work as desired. 
In theory, it can also be applied to obtain a truncation-like effect. To illustrate this, we directly incorporate the class label into Gψ(xt, y, t) and train it to fill the gap. Specifically, we train a model of "ImageNet64-77M-y-38M" and use the DDIM sampling method with 100 steps to generate 50k samples, guided by the predicted mean shift with different scales for a truncation-like effect. Figure 9 shows the sample quality effects of sweeping over the scale.

Figure 9: The truncation-like effect for "ImageNet64-77M-y-38M" by scaling Gψ(xt, y, t) with 0.0, 0.5, 1.0, 1.5, 2.0, 2.5 and 3.0 respectively.

Table 2: FID scores for unconditional sampling.
Dataset        Model     T=10    T=20    T=50    T=100
FFHQ           DDIM      31.87   20.53   15.82   11.95
               Diff-AE   21.95   18.10   13.14   10.55
               PDAE      20.16   17.18   12.81   10.31
Horse [55]     DDIM      25.24   14.41    7.98    5.93
               Diff-AE   12.66    9.21    7.12    5.27
               PDAE      11.94    8.51    6.83    5.09
Bedroom [55]   DDIM      14.07    9.29    7.31    5.88
               Diff-AE   10.79    8.42    6.49    5.32
               PDAE      10.05    7.89    6.33    5.47
CelebA         DDIM      18.89   13.82    8.48    5.94
               Diff-AE   12.92   10.18    7.05    5.30
               PDAE      11.84    9.65    7.23    5.19

As we can see, it achieves a truncation-like effect similar to that of the classifier-guided sampling method, which helps to build a connection between filling the posterior mean gap and the classifier-guided sampling method. The gradient estimator trained by filling the posterior mean gap is an alternative to the noisy classifier.

4.7 Few-shot Conditional Generation

Following D2C [42], we train a model of "CelebA64-72M-z512-38M" on CelebA [20] and aim to achieve conditional sampling given a small number of labeled images. To achieve this, we train a latent DPM pω(zt−1|zt) on the semantic latent space and a latent classifier pη(y|z) using the given labeled images. For the binary scenario, the images are labeled with a binary class (100 samples, 50 for each class). For the PU scenario, the images are either labeled positive or unlabeled (100 positively labeled and 10k unlabeled samples). Then we sample z from pω(zt−1|zt) and accept it with probability pη(y|z). We use the accepted z to generate 5k samples for every class and compute the FID score between these samples and all images belonging to the corresponding class in the dataset. We compare PDAE with Diff-AE and D2C. We also use the naive approach that computes the FID score between the training images and the corresponding subset of images in the dataset. Table 3 shows that PDAE achieves better FID scores than Diff-AE and D2C.

4.8 Improved Unconditional Sampling

As shown in Section 4.2, with the help of z, PDAE can generate plausible images in only one step. If we can get z in advance, PDAE can achieve better sample quality than pre-trained DPMs with the same number of sampling steps. Similar to Diff-AE, we train a latent DPM on the semantic latent space and sample z from it to improve the unconditional sampling of pre-trained DPMs. Unlike Diff-AE, which must take z as input for sampling, PDAE uses an independent gradient estimator as a corrector of the pre-trained DPM for sampling. We find that using only the pre-trained DPM in the last few sampling steps achieves better sample quality, which may be because the gradient estimator is sensitive to z in the last few sampling steps and the stochasticity of the sampled z will lead to out-of-domain samples. Asyrp [27] also observes a similar phenomenon. Empirically, we carry out this strategy in the last 30% of sampling steps.
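Returning to the few-shot conditional generation of Section 4.7, the rejection-sampling step can be sketched as follows. The `latent_dpm.sample` and `latent_classifier.prob` interfaces are hypothetical stand-ins for the trained latent DPM pω and the latent classifier pη; they are not part of the released code.

```python
import torch

def sample_accepted_codes(latent_dpm, latent_classifier, target_class,
                          num_needed, batch_size=256):
    """Rejection sampling of semantic latent codes z for a target class.

    Draw z from the latent DPM and accept it with probability p_eta(y|z)
    from the latent classifier, as described in Section 4.7.
    """
    accepted = []
    total = 0
    while total < num_needed:
        z = latent_dpm.sample(batch_size)                  # (B, 512) semantic codes
        p_y = latent_classifier.prob(z, target_class)      # (B,) acceptance probabilities in [0, 1]
        keep = torch.rand_like(p_y) < p_y                  # accept each z with probability p_eta(y|z)
        accepted.append(z[keep])
        total += int(keep.sum())
    return torch.cat(accepted)[:num_needed]
```

The accepted codes can then be fed to the PDAE generative process to produce class-conditional samples, exactly as in the unconditional case but with z fixed.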
We evaluate unconditional sampling results on "FFHQ128-130M-z512-64M", "Horse128-130M-z512-64M", "Bedroom128-120M-z512-70M" and "CelebA64-72M-z512-38M" using the DDIM sampling method with different numbers of steps. For each dataset, we calculate the FID score between 50k generated samples and 50k real images randomly selected from the dataset. Table 2 shows that PDAE significantly improves the sample quality of pre-trained DPMs and outperforms Diff-AE. Note that PDAE can be applied to any pre-trained DPM as an auxiliary booster to improve its sample quality.

5 Related Work

Our work is based on an emerging class of latent variable generative models known as Diffusion Probabilistic Models (DPMs) [43, 14], which are now popular for their stable training process and competitive sample quality. Numerous studies [34, 24, 8, 15, 44, 19, 46, 30] and applications [5, 26, 18, 32, 57, 6, 29, 41, 3, 16, 17] have further significantly improved and expanded DPMs.

Unsupervised representation learning via generative modeling is a popular topic in computer vision. Latent variable generative models, such as GANs [13], VAEs [25, 39], and DPMs, are a natural candidate for this, since they inherently involve a latent representation of the data they generate. For GANs, due to their lack of an inference mechanism, one has to extract the representations of given real samples with an extra technique called GAN Inversion [54], which inverts samples back into the latent space of a trained GAN. Existing inversion methods [58, 35, 4, 1, 2, 51] either have limited reconstruction quality or require significantly higher computational cost. VAEs explicitly learn representations for samples, but still face representation-generation trade-off challenges [49, 42]. VQ-VAE [49, 37] and D2C [42] overcome these problems by modeling latent variables post-hoc in different ways. DPMs also yield latent variables through the forward process. However, these latent variables lack high-level semantic information because they are just a sequence of spatially corrupted images. In light of this, diffusion autoencoders (Diff-AE) [36] explore DPMs for representation learning via autoencoding. Specifically, they jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for image reconstruction, treating the representations as input conditions. Diff-AE is competitive with state-of-the-art models on image reconstruction and capable of various downstream tasks. Compared with Diff-AE, PDAE leverages existing pre-trained DPMs for representation learning, also via autoencoding, but with better training efficiency and performance.

A concurrent work with a similar idea is the textual inversion of pre-trained text-to-image DPMs [12]. Specifically, given only 3-5 images of a user-provided concept, like an object or a style, they learn to represent it through new "words" in the embedding space of the frozen text-to-image DPMs. These learned "words" can be further composed into natural language sentences, guiding personalized creation in an intuitive way. From the perspective of the posterior mean gap, for a given new concept, textual inversion optimizes the corresponding new "words" embedding vector to find the best textual condition c, which can be fed into the pre-trained text-to-image DPM ϵθ(xt, c, t) to fill as much of the gap ϵ − ϵθ(xt, ∅, t) as possible.
6 Conclusion

In conclusion, we present a general method called PDAE that leverages pre-trained DPMs for representation learning via autoencoding and achieves better training efficiency and performance than Diff-AE. Our key idea is based on the concept of the posterior mean gap and its connection with the classifier-guided sampling method. A concurrent work, textual inversion of pre-trained text-to-image DPMs, can also be explained from this perspective. We think the idea can be further explored to extract knowledge from pre-trained DPMs, such as interpretable direction discovery [51], and we leave this as future work.

Acknowledgments and Disclosure of Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62072397 and No. 61836002), Zhejiang Natural Science Foundation (LR19F020006) and Yiwise.
1. What is the focus and contribution of the paper regarding data representation and reconstruction? 2. What are the strengths of the proposed approach, particularly in terms of novelty and explanation? 3. What are the weaknesses of the paper, especially regarding the sampling step and quantitative results? 4. Do you have any questions or suggestions regarding the paper's content, such as figure placement, proofreading, and additional experiments? 5. Are there any concerns or limitations regarding the method's training time comparison with other approaches?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors propose an unsupervised method for data representation and reconstruction, utilizing a pre-trained unconditional diffusion probabilistic model (DPM) in order to avoid the long training time of such models. The method incorporates a gradient estimator that learns the posterior mean shift, which is then used as a condition to guide the sampling process towards the image reconstruction. The authors also propose a new weighting scheme for the training objective function. Learning the gradient estimator encourages the encoder to learn meaningful data representations, and the sampling process to produce better data reconstructions. The authors provide quantitative and qualitative evaluations showing improvements in representation learning, image reconstruction, and image sampling.
Strengths And Weaknesses
Strengths:
The problem is known to be valid and challenging, and the level of novelty is reasonable. The method, idea, and motivation are well explained in the paper. The authors provide good intuition about their proposed network architecture. The authors explain their weighting method well by describing insights from an experiment that shows differences in the results with and without classifier-guided sampling.
Weaknesses:
There is no algorithm that explains how the sampling step works in the authors' method (how the guided sampling method is employed when using the model described in the paper). I suggest showing Figures 1 and 2 in separate locations (Figure 2 should be closer to the text that describes it, along with Figure 3). According to the quantitative results, the improvement in data representation is not significant. However, there is an improvement in the rest of the experiments, such as in unconditional sampling. Overall proofreading is required.
Questions
Line 16: "and train it like a normal DPM": this causes some confusion, as the DPM part of the model is fixed. Also, the objective function aims to learn image reconstruction rather than image generation.
It would be nice to see the results of the experiment shown in Section 3.4, but using the gradient estimator for classifier-guided sampling instead of a separate classifier, as done on MNIST.
In the experiment section, the prediction of x_0 is done from x_t in only one step. What does this "one step" mean? Isn't the whole reverse process required in order to retrieve x_0?
It would be great to have an experiment that supports the claim that this method requires less training time than its competitors.
Limitations
The authors did not address their method's limitations. The described work has no potential negative societal impact.
NIPS
Title Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators Abstract We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees. 1 Introduction Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in RD to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25]. This paper considers the more general problem of estimating functionals of the form F (P ) := E X∼P [f(p(X))] , (1) using n IID samples from P , where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k →∞ as n→∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence. As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20]. 1 The estimators in Table 1 estimators are known to be weakly consistent, 2 but, except for Shannon entropy, 1MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48]. 2Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. no finite sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. 
Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D dimensional space, the bias of fixed-k estimators decays as O ( n−β/D ) and the variance decays as O ( n−1 ) , giving a mean squared error of O ( n−2β/D + n−1 ) . Hence, the estimators converge at the parametric O(n−1) rate when β ≥ D/2, and at the slower rateO(n−2β/D) otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest. We present our results for distributions P supported on the unit cube in RD because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here. 2 Problem statement and notation LetX := [0, 1]D denote the unit cube in RD, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on X , and let p : X → [0,∞) denote the density of P . Consider a (known) differentiable function f : (0,∞) → R. Given n samples X1, ..., Xn drawn IID from P , we are interested in estimating the functional F (P ) := E X∼P [f(p(X))] . Somewhat more generally (as in divergence estimation), we may have a function f : (0,∞)2 → R of two variables and a second unknown probability measure Q, with density q and n IID samples Y1, ..., Yn. Then, we are interested in estimating F (P,Q) := E X∼P [f(p(X), q(X))] . Fix r ∈ [1,∞] and a positive integer k. We will work with distances induced by the r-norm ‖x‖r := ( D∑ i=1 xri )1/r and define cD,r := (2Γ(1 + 1/r)) D Γ(1 +D/r) = µ(B(0, 1)), where B(x, ε) := {y ∈ RD : ‖x − y‖r < ε} denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances: Definition 1. (k-NN distance): Given n IID samples X1, ..., Xn from P , for x ∈ RD, we define the k-NN distance εk(x) by εk(x) = ‖x−Xi‖r, where Xi is the kth-nearest element (in ‖ · ‖r) of the set {X1, ..., Xn} to x. For divergence estimation, given n samples Y1, ..., Yn from Q, then we similarly define δk(x) by δk(x) = ‖x− Yi‖r, where Yi is the kth-nearest element of {Y1, ..., Yn} to x. µ-absolute continuity of P precludes the existence of atoms (i.e., ∀x ∈ RD, P ({x}) = µ({x}) = 0). Hence, each εk(x) > 0 a.s. We will require this to study quantities such as log εk(x) and 1/εk(x). 3 Estimator 3.1 k-NN density estimation and plug-in functional estimators The k-NN density estimator p̂k(x) = k/n µ(B(x, εk(x)) = k/n cDεDk (x) is well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0, p(x) ≈ P (B(x, ε)) µ(B(x, ε)) , and that, P (B(x, εk(x))) ≈ k/n. One can show that, for x ∈ RD at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then p̂k(x) → p(x) in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F (P ) is the plug-in estimator F̂PI := 1 n n∑ i=1 f (p̂k(Xi)) . 
(2) Since p̂k → p in probability pointwise as k, n→∞ and f is smooth, one can show F̂PI is consistent, and in fact derive finite sample convergence rates (depending on how k →∞). For example, [44] show a convergence rate of O ( n−min{ 2β β+D ,1} ) for β-Hölder continuous densities (after sample splitting and boundary correction) by setting k n β β+d . Unfortunately, while necessary to ensure V [p̂k(x)]→ 0, the requirement k →∞ is computationally burdensome. Furthermore, increasing k can increase the bias of p̂k due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F (P ). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing. 3.2 Fixed-k functional estimators An alternative approach is to fix k as n→∞. Since F̂PI is itself an empirical mean, unlike V [p̂k(x)], V [ F̂PI ] → 0 as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of p̂k translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that E X1,...,Xn [ B ( f ( k/n µ(B(x, εk(x)) ))] = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . (3) For continuous p, the quantity pεk(x)(x) := P (B(x, εk(x))) µ(B(x, εk(x)) (4) is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator F̂B(P ) := 1 n n∑ i=1 B (f (p̂k(Xi))) = 1 n n∑ i=1 B ( f ( k/n µ(B(Xi, εk(Xi)) )) . that uses k/n in place of P (B(x, εk(x))). This estimate extends naturally to divergences: F̂B(P,Q) := 1 n n∑ i=1 B (f (p̂k(Xi), q̂k(Xi))) . As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p, E [logP (B(x, εk(x)))] = ψ(k)− ψ(n). Hence, for Bn,k := ψ(k)− ψ(n) + log(n)− log(k), E X1,...,Xn [ f ( k/n µ(B(x, εk(x)) )] +Bn,k = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of F̂B(P ), assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future. 4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. 
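To make the fixed-k construction of Section 3.2 concrete, the following sketch implements the bias-corrected Kozachenko-Leonenko Shannon entropy estimator (the f = log case), using the correction B_{n,k} = ψ(k) − ψ(n) + log n − log k derived above. The SciPy-based implementation details are our own and assume no duplicate sample points.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k=1, r=2.0):
    """Bias-corrected fixed-k (Kozachenko-Leonenko) estimate of H(p) = -E[log p(X)] in nats.

    X: (n, D) array of IID samples; k: fixed neighbor index; r: order of the norm.
    """
    n, D = X.shape
    tree = cKDTree(X)
    # Query k+1 neighbors because each point is its own nearest neighbor at distance 0.
    eps_k = tree.query(X, k=k + 1, p=r)[0][:, k]
    # log volume of the unit r-norm ball: c_{D,r} = (2 Gamma(1 + 1/r))^D / Gamma(1 + D/r).
    log_c = D * (np.log(2.0) + gammaln(1.0 + 1.0 / r)) - gammaln(1.0 + D / r)
    # H_hat = psi(n) - psi(k) + log c_{D,r} + (D/n) * sum_i log eps_k(X_i).
    return digamma(n) - digamma(k) + log_c + (D / n) * np.sum(np.log(eps_k))
```

As a quick sanity check, samples from a D-dimensional standard Gaussian should give estimates close to (D/2)·log(2πe).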
A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k →∞ as n→∞). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a Hölder smoothness condition parametrized by β ∈ (0,∞), the minimax mean squared error rate for estimating functionals of the form∫ f(p(x)) dx has been known since [6] to be O ( n−min{ 8β 4β+D ,1} ) . [22] recently derived iden- tical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate O ( n−min{ 2β β+D ,1} ) . Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand (O(2Dn3)), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate O ( n−min{ 2β D ,1} ) . For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F̂B(P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n−1). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β), under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n−β) under these conditions; we require additional conditions at the boundary of X . 3Fixed-k estimators can be computed in O ( Dn2 ) time, or O ( 2Dn logn ) using k-d trees for small D. 4Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. [49] studied a closely related entropy estimator for which they prove √ n-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at √ n, replacing εk(x) with min{εk(x), √ n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). 
As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f(z) = zα, α < 0) have singularities at 0. 2. The k-NN estimate p̂k(x) of p(x) is highly biased when p(x) is small. For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2) Bias(p̂k(x)) ( k np(x) )β/D . (5) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p∗ := infx∈X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ∂X of X ). 5 The boundary bias of the density estimate p̂k(x) does vanish at x in the interior X ◦ of X as n→∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ◦ but also on ∂X (i.e., p(x)→ 0 as dist(x, ∂X )→ 0). (A3) p is supported on all of RD. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating εk(x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ρ > 0 and a function p∗ : X → (0,∞) such that, for all x ∈ X , r ∈ (0, ρ], p∗(x) ≤ P (B(x,r))µ(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on X , such a p∗ exists for any desired ρ > 0. For simplicity, we will use ρ = √ D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form E X∼P [ 1 (p∗(X)) β/D ] = ∫ X p(x) (p∗(x)) β/D dµ(x) (with an additional f ′(p∗(x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ∂X . For many functionals, Lemma 6 gives a simple sufficient condition. 5This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation. 6 Preliminary lemmas Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section. Lemma 2. (Existence of Local Bounds) If p is continuous on X and strictly positive on the interior X ◦ of X , then, for ρ := √ D = diam(X ), there exists a continuous function p∗ : X ◦ → (0,∞) and a constant p∗ ∈ (0,∞) such that 0 < p∗(x) ≤ P (B(x, r)) µ(B(x, r)) ≤ p∗ <∞, ∀x ∈ X , r ∈ (0, ρ]. 
We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order $(k/(np(x)))^{1/D}$. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves:

Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on $\mathcal X$ and strictly positive on $\mathcal X^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2. Then, for any $x \in \mathcal X^\circ$,

1. if $r > \left(\frac{k}{p_*(x)\,n}\right)^{1/D}$, then $P[\varepsilon_k(x) > r] \le e^{-p_*(x) r^D n}\left(\frac{e\,p_*(x)\, r^D n}{k}\right)^{k}$.
2. if $r \in \left[0, \left(\frac{k}{p^* n}\right)^{1/D}\right)$, then $P[\varepsilon_k(x) < r] \le e^{-p_*(x) r^D n}\left(\frac{e\,p^* r^D n}{k}\right)^{k p_*(x)/p^*}$.

It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on $p^*$. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)).

The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio $\hat p_k/p_*$. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to the consistency proofs of [25] and [39] for $\alpha$-entropy and divergence estimators.

Lemma 4. Let p be continuous on $\mathcal X$ and strictly positive on $\mathcal X^\circ$. Define $p_*$ and $p^*$ as in Lemma 2. Suppose $f : (0,\infty) \to \mathbb{R}$ is continuously differentiable and $f' > 0$. Then, we have the upper bound 6
$$\sup_{x \in \mathcal X^\circ} \mathbb{E}\left[f_+\!\left(\frac{p_*(x)}{\hat p_k(x)}\right)\right] \le f_+(1) + e\sqrt{k} \int_k^\infty \frac{e^{-y} y^{k}}{\Gamma(k+1)}\, f_+\!\left(\frac{y}{k}\right) dy, \qquad (6)$$
and, for all $x \in \mathcal X^\circ$, for $\kappa(x) := k\,p_*(x)/p^*$, the lower bound
$$\mathbb{E}\left[f_-\!\left(\frac{p_*(x)}{\hat p_k(x)}\right)\right] \le f_-(1) + e\sqrt{\kappa(x)} \int_0^{\kappa(x)} \frac{e^{-y} y^{\kappa(x)}}{\Gamma(\kappa(x)+1)}\, f_-\!\left(\frac{y}{k}\right) dy. \qquad (7)$$

Note that plugging the function $z \mapsto f\!\left(\left(\frac{kz}{c_{D,r}\,n\,p_*(x)}\right)^{1/D}\right)$ into Lemma 4 gives bounds on $\mathbb{E}[f(\varepsilon_k(x))]$. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order $\left(\frac{k}{np(x)}\right)^{1/D}$. For example, for any $\alpha > 0$, a simple calculation from (6) gives
$$\mathbb{E}[\varepsilon_k^\alpha(x)] \le \left(1 + \frac{\alpha}{D}\right)\left(\frac{k}{c_{D,r}\,n\,p_*(x)}\right)^{\alpha/D}. \qquad (8)$$
(8) is used for our bias bound, and more direct applications of Lemma 4 are used in the variance bound.

6 $f_+(x) = \max\{0, f(x)\}$ and $f_-(x) = -\min\{0, f(x)\}$ denote the positive and negative parts of f. Recall that $\mathbb{E}[f(X)] = \mathbb{E}[f_+(X)] - \mathbb{E}[f_-(X)]$.

7 Main results

Here, we present our main results on the bias and variance of $\hat F_{\mathcal B}(P)$. Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias:

Theorem 5. (Bias Bound) Suppose that, for some $\beta \in (0, 2]$, p is $\beta$-Hölder continuous with constant $L > 0$ on $\mathcal X$, and p is strictly positive on $\mathcal X^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2. Let $f : (0,\infty) \to \mathbb{R}$ be differentiable, and define $M_{f,p} : \mathcal X \to [0,\infty)$ by
$$M_{f,p}(x) := \sup_{z \in [p_*(x),\,p^*]} \left|\frac{d}{dz} f(z)\right|.$$
Assume $C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\beta/D}}\right] < \infty$. Then,
$$\left|\mathbb{E}\,\hat F_{\mathcal B}(P) - F(P)\right| \le C_f L \left(\frac{k}{n}\right)^{\beta/D}.$$
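Before turning to the divergence case, here is a small simulation (our own sketch, not part of the paper) of the Erlang behaviour underlying Lemma 4: at an interior point x of the uniform density on $[0,1]^2$, the normalized mass $n\,P(B(x,\varepsilon_k(x))) = n\pi\varepsilon_k^2(x)$ should have mean and variance close to k, as an Erlang(k) variable does. The sample sizes and function name are illustrative choices.

```python
import numpy as np

def knn_mass_moments(n=2000, k=3, trials=2000, seed=0):
    """At the interior point x = (0.5, 0.5) of Uniform([0,1]^2), simulate the normalized
    mass n * P(B(x, eps_k(x))) = n * pi * eps_k(x)^2 and return its empirical mean and
    variance; both should be close to k, the mean and variance of an Erlang(k) variable."""
    rng = np.random.default_rng(seed)
    x = np.array([0.5, 0.5])
    mass = np.empty(trials)
    for t in range(trials):
        X = rng.random((n, 2))
        eps_k = np.sort(np.linalg.norm(X - x, axis=1))[k - 1]  # k-th NN distance of the query point
        mass[t] = n * np.pi * eps_k ** 2
    return mass.mean(), mass.var()

print(knn_mass_moments())  # both numbers should be close to k = 3
```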
The statement for divergences is similar, assuming that q is also $\beta$-Hölder continuous with constant L and strictly positive on $\mathcal X^\circ$. Specifically, we get the same bound if we replace $M_{f,p}$ with
$$M_{f,p}(x) := \sup_{(w,z) \in [p_*(x),\,p^*] \times [q_*(x),\,q^*]} \left|\frac{\partial}{\partial w} f(w, z)\right|$$
and define $M_{f,q}$ similarly (i.e., with $\frac{\partial}{\partial z}$), and we assume that
$$C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\beta/D}}\right] + \mathbb{E}_{X \sim p}\left[\frac{M_{f,q}(X)}{(q_*(X))^{\beta/D}}\right] < \infty.$$

As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then, $f(z) = \log z$, and so we need $C_f = \int_{\mathcal X} (p_*(x))^{-\beta/D}\, d\mu(x) < \infty$.

The assumption $C_f < \infty$ is not immediately transparent. For the functionals in Table 1, $C_f$ has the form $\int_{\mathcal X} (p(x))^{-c}\, dx$, for some $c > 0$, and hence $C_f < \infty$ intuitively means p(x) cannot approach zero too quickly as $\mathrm{dist}(x, \partial\mathcal X) \to 0$. The following lemma gives a formal sufficient condition:

Lemma 6. (Boundary Condition) Let $c > 0$. Suppose there exist $b_\partial \in (0, \frac{1}{c})$ and $c_\partial, \rho_\partial > 0$ such that, for all $x \in \mathcal X$ with $\varepsilon(x) := \mathrm{dist}(x, \partial\mathcal X) < \rho_\partial$, $p(x) \ge c_\partial\, \varepsilon^{b_\partial}(x)$. Then, $\int_{\mathcal X} (p_*(x))^{-c}\, d\mu(x) < \infty$.

In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to $x^{b_\partial}$ near $\partial\mathcal X$ (i.e., those with at least $b_\partial$ nonzero one-sided derivatives on the boundary).

We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in $(\mathbb{R}^D, \|\cdot\|_r)$, there exists a constant $N_{k,D}$ (independent of n) such that any sample $X_i$ can be amongst the k nearest neighbors of at most $N_{k,D}$ other samples. Hence, at most $N_{k,D} + 1$ of the terms in (2) can change when a single $X_i$ is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbor statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study.

Theorem 7. (Variance Bound) Suppose $B \circ f$ is continuously differentiable and strictly monotone. Assume $C_{f,p} := \mathbb{E}_{X \sim P}\left[B^2(f(p_*(X)))\right] < \infty$ and $C_f := \int_0^\infty e^{-y} y^k f(y)\, dy < \infty$. Then, for
$$C_V := 2\,(1 + N_{k,D})\,(3 + 4k)\,(C_{f,p} + C_f),$$
we have $\mathbb{V}\left[\hat F_{\mathcal B}(P)\right] \le \frac{C_V}{n}$.

As an example, if $f = \log$ (as in Shannon entropy), then, since B is an additive constant, we simply require $\int_{\mathcal X} p(x) \log^2(p_*(x))\, d\mu(x) < \infty$. In general, $N_{k,D}$ is of the order $k\,2^{cD}$, for some $c > 0$. Our bound is likely quite loose in k; in practice, $\mathbb{V}[\hat F_{\mathcal B}(P)]$ typically decreases somewhat with k.

8 Conclusions and discussion

In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance:

Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7,
$$\mathbb{E}\left[\left(\hat H_k(X) - H(X)\right)^2\right] \le C_f^2 L^2 \left(\frac{k}{n}\right)^{2\beta/D} + \frac{C_V}{n}. \qquad (9)$$

Choosing k: Contrary to the name, fixing k is not required for "fixed-k" estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F(P). However, there has been no theoretical justification for fixing k.
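As an empirical companion to Corollary 8 and the discussion of choosing k, the following Monte Carlo sketch (ours, and it reuses the kl_entropy function from the earlier code block) estimates the MSE of the fixed-k estimator on a distribution with known entropy. The standard Gaussian used below has unbounded support, so it falls outside the compact-support setting analyzed here; this is only a sanity check of the estimator's behaviour as n and k vary, not a verification of the theorems' conditions.

```python
import numpy as np

def mse_gauss_entropy(n, D, k, trials=200, seed=0):
    """Monte Carlo MSE of the fixed-k KL entropy estimator on N(0, I_D), whose true
    entropy is (D/2) * log(2*pi*e). Assumes kl_entropy() from the earlier sketch is in scope."""
    rng = np.random.default_rng(seed)
    h_true = 0.5 * D * np.log(2.0 * np.pi * np.e)
    errs = [kl_entropy(rng.standard_normal((n, D)), k=k) - h_true for _ in range(trials)]
    return float(np.mean(np.square(errs)))

# MSE shrinks with n for every fixed k; larger k mainly trades variance for bias
for k in (1, 3, 10):
    print(k, [round(mse_gauss_entropy(n, D=2, k=k), 4) for n in (100, 400, 1600)])
```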
Assuming tightness of our bias bound in k, we provide such a justification in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improve the asymptotic variance of the estimator, with the rate $k \asymp \log^5 n$ leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found that increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test).

Acknowledgments

This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.
1. What is the focus of the paper regarding density functional estimators? 2. What are the strengths of the proposed approach, particularly in terms of computational efficiency? 3. What are the weaknesses of the paper, especially regarding the emphasis on smoothness assumptions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any relevant works that the reviewer suggests comparing the paper to?
Review
Review The paper analyzes the performance of k-NN density functional estimators for a fixed value of k. The paper considers Hölder continuous densities with $\beta \in (0, 2]$. The minimax rates for this problem were derived by Birgé and Massart; however, most estimators are sub-optimal, with the exception of Krishnamurthy et al., whose complexity is exponential in D and cubic in n. In comparison, for fixed k, the k-NN estimators can be computed in time $O(Dn^2)$. The main result of the paper is an upper bound on the convergence rate of fixed-k k-NN estimators of density functionals for Hölder continuous densities. This is a nice result. I am not convinced about two things. One is the emphasis on the smoothness assumptions on the boundary, and whether it is indeed a hindrance or a mere technicality. Also, the authors could provide a few more details on the results of Biau and Devroye. I found the following paper very related to the current submission: Shashank Singh and Barnabas Poczos, "Analysis of k-Nearest Neighbor Distances with Application to Entropy Estimation", ICML 2016. I would encourage the authors to make a comparison with this paper. I am willing to upgrade my score on originality/novelty if the authors can point out the differences and improvements.
NIPS
1. What is the main contribution of the paper regarding density functional estimation? 2. What are the strengths of the proposed approach, particularly in addressing the bias correction? 3. Do you have any questions regarding the notation used in the paper, such as $c_D \epsilon^D_k(x)$? 4. How does the reviewer assess the clarity and presentation of the paper's content? 5. Are there any concerns regarding the assumption made in the paper, such as the consistency of the estimator $pˆk(x)$?
Review
Review This paper provides finite sample bounds for the performance of k-nearest neighbor estimators of functionals of probability densities. That is, the goal is to estimate a quantity of the form $F(P) = E[f(p(X))]$ when the probability density $p$ is unknown. The bounds assume the use of the $k$-nearest neighbor density estimator $\hat{p}_k(x)$, which allows for non-parametric estimates of continuous densities based on a finite number of samples. The estimator $\hat{p}_k(x)$ is consistent if $k$ is allowed to go to infinity, and biased for a fixed $k$. One approach is to use a consistent sequence of estimators $\hat{p}_k(x)$ as a plug-in estimator, but it has the disadvantage that, for large $k$, this is difficult to compute. The approach taken in this work is to fix $k$ and correct for the bias. The work does not actually compute the bias-correcting terms (these are well known for many functions of interest). Instead, it assumes that this bias correction is known and analyzes the performance of bias-corrected estimators of $F(P)$ (which are asymptotically unbiased). This is a nice contribution to the literature on density functional estimation. The authors present a unified analysis of bias-corrected $k$-NN estimators of functionals of probability densities and obtain finite-sample bounds on their bias and variance. The presentation is very clear, and all the assumptions are carefully stated. Minor comments: 1. Line 76: What is $c_D \epsilon^D_k(x)$? Did the authors mean $c_{D,r} \epsilon^D_k(x)$? 2. Line 174: Any function $p_*$ or a continuous function $p_*$? 3. Lines 230-231: The quantity $\hat{F}_{{\cal B}}(P)$ is a random quantity that depends on $X_1,\ldots,X_n$. Is there an expectation missing around $| \hat{F}_{{\cal B}}(P) - F(P)|$?
NIPS
Title Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators Abstract We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees. 1 Introduction Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in RD to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25]. This paper considers the more general problem of estimating functionals of the form F (P ) := E X∼P [f(p(X))] , (1) using n IID samples from P , where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k →∞ as n→∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence. As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20]. 1 The estimators in Table 1 estimators are known to be weakly consistent, 2 but, except for Shannon entropy, 1MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48]. 2Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. no finite sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. 
Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D dimensional space, the bias of fixed-k estimators decays as O ( n−β/D ) and the variance decays as O ( n−1 ) , giving a mean squared error of O ( n−2β/D + n−1 ) . Hence, the estimators converge at the parametric O(n−1) rate when β ≥ D/2, and at the slower rateO(n−2β/D) otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest. We present our results for distributions P supported on the unit cube in RD because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here. 2 Problem statement and notation LetX := [0, 1]D denote the unit cube in RD, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on X , and let p : X → [0,∞) denote the density of P . Consider a (known) differentiable function f : (0,∞) → R. Given n samples X1, ..., Xn drawn IID from P , we are interested in estimating the functional F (P ) := E X∼P [f(p(X))] . Somewhat more generally (as in divergence estimation), we may have a function f : (0,∞)2 → R of two variables and a second unknown probability measure Q, with density q and n IID samples Y1, ..., Yn. Then, we are interested in estimating F (P,Q) := E X∼P [f(p(X), q(X))] . Fix r ∈ [1,∞] and a positive integer k. We will work with distances induced by the r-norm ‖x‖r := ( D∑ i=1 xri )1/r and define cD,r := (2Γ(1 + 1/r)) D Γ(1 +D/r) = µ(B(0, 1)), where B(x, ε) := {y ∈ RD : ‖x − y‖r < ε} denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances: Definition 1. (k-NN distance): Given n IID samples X1, ..., Xn from P , for x ∈ RD, we define the k-NN distance εk(x) by εk(x) = ‖x−Xi‖r, where Xi is the kth-nearest element (in ‖ · ‖r) of the set {X1, ..., Xn} to x. For divergence estimation, given n samples Y1, ..., Yn from Q, then we similarly define δk(x) by δk(x) = ‖x− Yi‖r, where Yi is the kth-nearest element of {Y1, ..., Yn} to x. µ-absolute continuity of P precludes the existence of atoms (i.e., ∀x ∈ RD, P ({x}) = µ({x}) = 0). Hence, each εk(x) > 0 a.s. We will require this to study quantities such as log εk(x) and 1/εk(x). 3 Estimator 3.1 k-NN density estimation and plug-in functional estimators The k-NN density estimator p̂k(x) = k/n µ(B(x, εk(x)) = k/n cDεDk (x) is well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0, p(x) ≈ P (B(x, ε)) µ(B(x, ε)) , and that, P (B(x, εk(x))) ≈ k/n. One can show that, for x ∈ RD at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then p̂k(x) → p(x) in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F (P ) is the plug-in estimator F̂PI := 1 n n∑ i=1 f (p̂k(Xi)) . 
(2) Since p̂k → p in probability pointwise as k, n→∞ and f is smooth, one can show F̂PI is consistent, and in fact derive finite sample convergence rates (depending on how k →∞). For example, [44] show a convergence rate of O ( n−min{ 2β β+D ,1} ) for β-Hölder continuous densities (after sample splitting and boundary correction) by setting k n β β+d . Unfortunately, while necessary to ensure V [p̂k(x)]→ 0, the requirement k →∞ is computationally burdensome. Furthermore, increasing k can increase the bias of p̂k due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F (P ). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing. 3.2 Fixed-k functional estimators An alternative approach is to fix k as n→∞. Since F̂PI is itself an empirical mean, unlike V [p̂k(x)], V [ F̂PI ] → 0 as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of p̂k translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that E X1,...,Xn [ B ( f ( k/n µ(B(x, εk(x)) ))] = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . (3) For continuous p, the quantity pεk(x)(x) := P (B(x, εk(x))) µ(B(x, εk(x)) (4) is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator F̂B(P ) := 1 n n∑ i=1 B (f (p̂k(Xi))) = 1 n n∑ i=1 B ( f ( k/n µ(B(Xi, εk(Xi)) )) . that uses k/n in place of P (B(x, εk(x))). This estimate extends naturally to divergences: F̂B(P,Q) := 1 n n∑ i=1 B (f (p̂k(Xi), q̂k(Xi))) . As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p, E [logP (B(x, εk(x)))] = ψ(k)− ψ(n). Hence, for Bn,k := ψ(k)− ψ(n) + log(n)− log(k), E X1,...,Xn [ f ( k/n µ(B(x, εk(x)) )] +Bn,k = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of F̂B(P ), assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future. 4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. 
A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k →∞ as n→∞). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a Hölder smoothness condition parametrized by β ∈ (0,∞), the minimax mean squared error rate for estimating functionals of the form∫ f(p(x)) dx has been known since [6] to be O ( n−min{ 8β 4β+D ,1} ) . [22] recently derived iden- tical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate O ( n−min{ 2β β+D ,1} ) . Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand (O(2Dn3)), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate O ( n−min{ 2β D ,1} ) . For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F̂B(P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n−1). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β), under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n−β) under these conditions; we require additional conditions at the boundary of X . 3Fixed-k estimators can be computed in O ( Dn2 ) time, or O ( 2Dn logn ) using k-d trees for small D. 4Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. [49] studied a closely related entropy estimator for which they prove √ n-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at √ n, replacing εk(x) with min{εk(x), √ n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). 
As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f(z) = zα, α < 0) have singularities at 0. 2. The k-NN estimate p̂k(x) of p(x) is highly biased when p(x) is small. For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2) Bias(p̂k(x)) ( k np(x) )β/D . (5) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p∗ := infx∈X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ∂X of X ). 5 The boundary bias of the density estimate p̂k(x) does vanish at x in the interior X ◦ of X as n→∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ◦ but also on ∂X (i.e., p(x)→ 0 as dist(x, ∂X )→ 0). (A3) p is supported on all of RD. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating εk(x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ρ > 0 and a function p∗ : X → (0,∞) such that, for all x ∈ X , r ∈ (0, ρ], p∗(x) ≤ P (B(x,r))µ(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on X , such a p∗ exists for any desired ρ > 0. For simplicity, we will use ρ = √ D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form E X∼P [ 1 (p∗(X)) β/D ] = ∫ X p(x) (p∗(x)) β/D dµ(x) (with an additional f ′(p∗(x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ∂X . For many functionals, Lemma 6 gives a simple sufficient condition. 5This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation. 6 Preliminary lemmas Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section. Lemma 2. (Existence of Local Bounds) If p is continuous on X and strictly positive on the interior X ◦ of X , then, for ρ := √ D = diam(X ), there exists a continuous function p∗ : X ◦ → (0,∞) and a constant p∗ ∈ (0,∞) such that 0 < p∗(x) ≤ P (B(x, r)) µ(B(x, r)) ≤ p∗ <∞, ∀x ∈ X , r ∈ (0, ρ]. 
We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order (k/(np(x)))1/D. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves: Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on X and strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Then, for any x ∈ X ◦, 1. if r > ( k p∗(x)n )1/D , then P [εk(x) > r] ≤ e−p∗(x)r Dn ( e p∗(x)r Dn k )k . 2. if r ∈ [ 0, ( k p∗n )1/D) , then P [εk(x) < r] ≤ e−p∗(x)r Dn ( ep∗rDn k )kp∗(x)/p∗ . It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on p∗. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio p̂k/p∗. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to consistency proofs of [25] and [39] for α-entropy and divergence estimators. Lemma 4. Let p be continuous on X and strictly positive on X ◦. Define p∗ and p∗ as in Lemma 2. Suppose f : (0,∞)→ R is continuously differentiable and f ′ > 0. Then, we have the upper bound 6 sup x∈X◦ E [ f+ ( p∗(x) p̂k(x) )] ≤ f+(1) + e √ k ∫ ∞ k e−yyk Γ(k + 1) f+ (y k ) dy, (6) and, for all x ∈ X ◦, for κ(x) := kp∗(x)/p∗, the lower bound E [ f− ( p∗(x) p̂k(x) )] ≤ f−(1) + e √ k κ(x) ∫ κ(x) 0 e−yyκ(x) Γ(κ(x) + 1) f− (y k ) dy (7) Note that plugging the function z 7→ f (( kz cD,rnp∗(x) ) 1 D ) into Lemma 4 gives bounds on E [f(εk(x))]. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order ( k np(x) ) 1 D . For example, for any α > 0, a simple calculation from (6) gives E [εαk (x)] ≤ ( 1 + α D )( k cD,rnp∗(x) ) α D . (8) (8) is used for our bias bound, and more direct applications of Lemma 4 are used in variance bound. 6f+(x) = max{0, f(x)} and f−(x) = −min{0, f(x)} denote the positive and negative parts of f . Recall that E [f(X)] = E [f+(X)]− E [f−(X)]. 7 Main results Here, we present our main results on the bias and variance of F̂B(P ). Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias: Theorem 5. (Bias Bound) Suppose that, for some β ∈ (0, 2], p is β-Hölder continuous with constant L > 0 on X , and p is strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Let f : (0,∞)→ R be differentiable, and define Mf,p : X → [0,∞) by Mf,p(x) := sup z∈[p∗(x),p∗] ∣∣∣∣ ddz f(z) ∣∣∣∣ Assume Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] <∞. Then, ∣∣∣E F̂B(P )− F (P )∣∣∣ ≤ CfL(k n ) β D . The statement for divergences is similar, assuming that q is also β-Hölder continuous with constant L and strictly positive on X ◦. 
Specifically, we get the same bound if we replace Mf,o with Mf,p(x) := sup (w,z)∈[p∗(x),p∗]×[q∗(x),q∗] ∣∣∣∣ ∂∂wf(w, z) ∣∣∣∣ and define Mf,q similarly (i.e., with ∂∂z ) and we assume that Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] + E X∼p [ Mf,q(X) (q∗(X)) β D ] <∞. As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then, f(z) = log(x), and so we need Cf = ∫ X (p∗(x)) −β/D dµ(x) <∞. The assumption Cf <∞ is not immediately transparent. For the functionals in Table 1, Cf has the form ∫ X (p(x)) −c dx, for some c > 0, and hence Cf <∞ intuitively means p(x) cannot approach zero too quickly as dist(x, ∂X )→ 0. The following lemma gives a formal sufficient condition: Lemma 6. (Boundary Condition) Let c > 0. Suppose there exist b∂ ∈ (0, 1c ), c∂ , ρ∂ > 0 such that, for all x ∈ X with ε(x) := dist(x, ∂X ) < ρ∂ , p(x) ≥ c∂εb∂ (x). Then, ∫ X (p∗(x)) −c dµ(x) <∞. In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to xb∂ near ∂X (i.e., those with at least b∂ nonzero one-sided derivatives on the boundary). We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in (RD, ‖ · ‖p), there exists a constant Nk,D (independent of n) such that any sample Xi can be amongst the k-nearest neighbors of at most Nk,D other samples. Hence, at most Nk,D + 1 of the terms in (2) can change when a single Xi is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbors statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study. Theorem 7. (Variance Bound) Suppose B ◦ f is continuously differentiable and strictly monotone. Assume Cf,p := EX∼P [ B2(f(p∗(X))) ] <∞, and Cf := ∫∞ 0 e−yykf(y) <∞. Then, for CV := 2 (1 +Nk,D) (3 + 4k) (Cf,p + Cf ) , we have V [ F̂B(P ) ] ≤ CV n . As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require ∫ X p(x) log 2(p∗(x)) < ∞. In general, Nk,D is of the order k2cD, for some c > 0. Our bound is likely quite loose in k; in practice, V [ F̂B(P ) ] typically decreases somewhat with k. 8 Conclusions and discussion In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance: Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7, E [( Ĥk(X)−H(X) )2] ≤ C2fL2 ( k n )2β/D + CV n . (9) Choosing k: Contrary to the name, fixing k is not required for “fixed-k” estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F (P ). However, there has been no theoretical justification for fixing k. 
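The empirical behaviour behind this finding of [36] is easy to reproduce in miniature. The sketch below (our own illustration, assuming numpy and scipy; the dimension, sample size, trial count, and values of k are arbitrary choices, and the estimator is written in the standard Kozachenko-Leonenko form for f = log) reports bias, standard deviation, and RMSE of the bias-corrected fixed-k Shannon entropy estimate on Gaussian data, for which the entropy is known in closed form. Such a run lets one see directly how the bias and variance terms of Corollary 8 trade off as k varies.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k):
    # Bias-corrected fixed-k (Kozachenko-Leonenko) Shannon entropy estimate, in nats.
    n, d = X.shape
    eps = cKDTree(X).query(X, k=k + 1)[0][:, -1]                  # k-NN distance, self-match excluded
    log_cd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)   # log mu(unit Euclidean ball)
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
d, n, trials = 2, 1000, 200
H_true = 0.5 * d * (1.0 + np.log(2.0 * np.pi))                    # entropy of N(0, I_d)

for k in (1, 2, 4, 8, 16, 64):
    err = np.array([kl_entropy(rng.standard_normal((n, d)), k) - H_true for _ in range(trials)])
    print(f"k = {k:3d}   bias = {err.mean():+.4f}   std = {err.std():.4f}   "
          f"rmse = {np.sqrt((err ** 2).mean()):.4f}")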
Assuming tightness of our bias bound in k, we provide this in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improve the asymptotic variance of the estimator, with the rate k ∝ log^5 n leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test). Acknowledgments This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.
1. What are the issues with the conditions imposed on distributions in the paper? 2. How does the local lower bounds hypothesis impact the results, and what should be included in the rebuttal? 3. Why are the assumptions on Shannon entropy problematic, especially regarding how p may go to zero at the boundary of the support? 4. How do the results compare to those in [Berrett, Samworth, Yuan (16)], and what are the differences in assumptions and bias bounds? 5. Is the variance bound crude, and how does it impact the suggestion that fixed k is best? 6. Are there any errors in the claim about the relationship between the variance bound and the minimax MSE rate? 7. Should k be fixed or allowed to diverge, and how might this choice impact the constant factor and practical applications?
Review
Review This paper addresses an important problem: estimating functionals of a probability distribution based on k-nearest neighbor statistics of independent samples. The results are all linked to a fixed-k setting. There are some issues with the conditions imposed on distributions in this paper. In particular, regarding the local bound hypothesis: in the proof of Lemma 2, epsilon may depend on x, so p^* will be smaller than stated. As an example, if p(x) goes to 0 as x^a when x goes to 0, then p^*(x) goes to 0 as x^(a+1). This is problematic for the assumption that \int p_*(x)^(-\beta/d) dx < \infty. +++++++++++++++++ AFTER REBUTTAL: 1) The rebuttal here is helpful, but there seem to still be some issues with the local lower bounds hypothesis. They claim that it is very mild in light of Lemma 2, but Lemma 2 doesn't apply very well to the situation of p = x^a. I think that they should at least include what they've written in the rebuttal. The assumption for Shannon entropy that \int p_*(x)^(-\beta/d) dx < \infty is still a very strong assumption on how p may go to zero at the boundary of the support (which must have finite Lebesgue measure), and they do insist that p does go to zero at the boundary of the support. +++++++++++++++++ The results are globally worse than in [Berrett, Samworth, Yuan (16)], which is not cited. They have much less restrictive assumptions, and Prop 3 in this paper provides a better bias bound. The assumption here requires, as an example, that the support of the distribution has finite Lebesgue measure. The variance bound is also very crude. Indeed, the bound is used to suggest that fixed k is best, and is of order O(k^2 / n), whereas the correct order of the variance is O(1/n) in many cases. It is claimed that the variance bound is no larger than the minimax MSE rate, which is incorrect. +++++++++++++++++ AFTER REBUTTAL: 2,3) It is true that increasing k won't improve the rate of convergence, but it can improve the constant factor significantly, and the claim that fixed k is best isn't necessarily true. Indeed, looking at Theorem 1 in the following paper: http://arxiv.org/pdf/1602.07440v1.pdf, they find the asymptotic variance when k = 1, and it is larger than the asymptotic variance when k diverges. It seems that from a statistical point of view, fixing k isn't the right thing to do, and that in practice larger k will be chosen for larger sample sizes. If k is chosen to diverge, their variance bound gives the wrong order, though it is the right order for fixed k.
NIPS
Title Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators Abstract We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees. 1 Introduction Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in RD to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25]. This paper considers the more general problem of estimating functionals of the form F (P ) := E X∼P [f(p(X))] , (1) using n IID samples from P , where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k →∞ as n→∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence. As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20]. 1 The estimators in Table 1 estimators are known to be weakly consistent, 2 but, except for Shannon entropy, 1MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48]. 2Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. no finite sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. 
Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D dimensional space, the bias of fixed-k estimators decays as O ( n−β/D ) and the variance decays as O ( n−1 ) , giving a mean squared error of O ( n−2β/D + n−1 ) . Hence, the estimators converge at the parametric O(n−1) rate when β ≥ D/2, and at the slower rateO(n−2β/D) otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest. We present our results for distributions P supported on the unit cube in RD because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here. 2 Problem statement and notation LetX := [0, 1]D denote the unit cube in RD, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on X , and let p : X → [0,∞) denote the density of P . Consider a (known) differentiable function f : (0,∞) → R. Given n samples X1, ..., Xn drawn IID from P , we are interested in estimating the functional F (P ) := E X∼P [f(p(X))] . Somewhat more generally (as in divergence estimation), we may have a function f : (0,∞)2 → R of two variables and a second unknown probability measure Q, with density q and n IID samples Y1, ..., Yn. Then, we are interested in estimating F (P,Q) := E X∼P [f(p(X), q(X))] . Fix r ∈ [1,∞] and a positive integer k. We will work with distances induced by the r-norm ‖x‖r := ( D∑ i=1 xri )1/r and define cD,r := (2Γ(1 + 1/r)) D Γ(1 +D/r) = µ(B(0, 1)), where B(x, ε) := {y ∈ RD : ‖x − y‖r < ε} denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances: Definition 1. (k-NN distance): Given n IID samples X1, ..., Xn from P , for x ∈ RD, we define the k-NN distance εk(x) by εk(x) = ‖x−Xi‖r, where Xi is the kth-nearest element (in ‖ · ‖r) of the set {X1, ..., Xn} to x. For divergence estimation, given n samples Y1, ..., Yn from Q, then we similarly define δk(x) by δk(x) = ‖x− Yi‖r, where Yi is the kth-nearest element of {Y1, ..., Yn} to x. µ-absolute continuity of P precludes the existence of atoms (i.e., ∀x ∈ RD, P ({x}) = µ({x}) = 0). Hence, each εk(x) > 0 a.s. We will require this to study quantities such as log εk(x) and 1/εk(x). 3 Estimator 3.1 k-NN density estimation and plug-in functional estimators The k-NN density estimator p̂k(x) = k/n µ(B(x, εk(x)) = k/n cDεDk (x) is well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0, p(x) ≈ P (B(x, ε)) µ(B(x, ε)) , and that, P (B(x, εk(x))) ≈ k/n. One can show that, for x ∈ RD at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then p̂k(x) → p(x) in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F (P ) is the plug-in estimator F̂PI := 1 n n∑ i=1 f (p̂k(Xi)) . 
(2) Since p̂k → p in probability pointwise as k, n→∞ and f is smooth, one can show F̂PI is consistent, and in fact derive finite sample convergence rates (depending on how k →∞). For example, [44] show a convergence rate of O ( n−min{ 2β β+D ,1} ) for β-Hölder continuous densities (after sample splitting and boundary correction) by setting k n β β+d . Unfortunately, while necessary to ensure V [p̂k(x)]→ 0, the requirement k →∞ is computationally burdensome. Furthermore, increasing k can increase the bias of p̂k due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F (P ). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing. 3.2 Fixed-k functional estimators An alternative approach is to fix k as n→∞. Since F̂PI is itself an empirical mean, unlike V [p̂k(x)], V [ F̂PI ] → 0 as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of p̂k translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that E X1,...,Xn [ B ( f ( k/n µ(B(x, εk(x)) ))] = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . (3) For continuous p, the quantity pεk(x)(x) := P (B(x, εk(x))) µ(B(x, εk(x)) (4) is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator F̂B(P ) := 1 n n∑ i=1 B (f (p̂k(Xi))) = 1 n n∑ i=1 B ( f ( k/n µ(B(Xi, εk(Xi)) )) . that uses k/n in place of P (B(x, εk(x))). This estimate extends naturally to divergences: F̂B(P,Q) := 1 n n∑ i=1 B (f (p̂k(Xi), q̂k(Xi))) . As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p, E [logP (B(x, εk(x)))] = ψ(k)− ψ(n). Hence, for Bn,k := ψ(k)− ψ(n) + log(n)− log(k), E X1,...,Xn [ f ( k/n µ(B(x, εk(x)) )] +Bn,k = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of F̂B(P ), assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future. 4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. 
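A minimal Python sketch of the Section 3.2 construction for the Shannon case may help fix ideas here (our own illustration, assuming numpy and scipy, not the authors' code): it computes the uncorrected plug-in quantity (1/n) Σ_i f(p̂_k(X_i)) with f = log, and the same quantity after adding the correction B_{n,k} = ψ(k) − ψ(n) + log(n) − log(k) from Section 3.2, so the asymptotic bias of the fixed-k plug-in and its removal can be seen side by side.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def log_unit_ball_volume(d):
    # log c_{D,2} = log mu(B(0, 1)) for the Euclidean norm.
    return (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)

def knn_log_density(X, k):
    # log p_hat_k(X_i) = log[(k/n) / mu(B(X_i, eps_k(X_i)))] at each sample point.
    n, d = X.shape
    eps = cKDTree(X).query(X, k=k + 1)[0][:, -1]     # k-NN distance, self-match excluded
    return np.log(k / n) - log_unit_ball_volume(d) - d * np.log(eps)

def fixed_k_log_functional(X, k, corrected=True):
    # Estimates F(P) = E[log p(X)] = -H(P); with corrected=True this is F_hat_B(P) for f = log.
    n, d = X.shape
    B_nk = digamma(k) - digamma(n) + np.log(n) - np.log(k) if corrected else 0.0
    return np.mean(knn_log_density(X, k)) + B_nk

rng = np.random.default_rng(0)
X = rng.standard_normal((20000, 1))
H_true = 0.5 * (1.0 + np.log(2.0 * np.pi))           # entropy of N(0, 1)
print("plug-in entropy estimate (k = 1):", -fixed_k_log_functional(X, 1, corrected=False))
print("bias-corrected estimate  (k = 1):", -fixed_k_log_functional(X, 1, corrected=True), "  true:", H_true)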
A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k →∞ as n→∞). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a Hölder smoothness condition parametrized by β ∈ (0,∞), the minimax mean squared error rate for estimating functionals of the form∫ f(p(x)) dx has been known since [6] to be O ( n−min{ 8β 4β+D ,1} ) . [22] recently derived iden- tical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate O ( n−min{ 2β β+D ,1} ) . Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand (O(2Dn3)), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate O ( n−min{ 2β D ,1} ) . For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F̂B(P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n−1). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β), under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n−β) under these conditions; we require additional conditions at the boundary of X . 3Fixed-k estimators can be computed in O ( Dn2 ) time, or O ( 2Dn logn ) using k-d trees for small D. 4Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. [49] studied a closely related entropy estimator for which they prove √ n-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at √ n, replacing εk(x) with min{εk(x), √ n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). 
As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f(z) = zα, α < 0) have singularities at 0. 2. The k-NN estimate p̂k(x) of p(x) is highly biased when p(x) is small. For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2) Bias(p̂k(x)) ( k np(x) )β/D . (5) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p∗ := infx∈X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ∂X of X ). 5 The boundary bias of the density estimate p̂k(x) does vanish at x in the interior X ◦ of X as n→∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ◦ but also on ∂X (i.e., p(x)→ 0 as dist(x, ∂X )→ 0). (A3) p is supported on all of RD. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating εk(x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ρ > 0 and a function p∗ : X → (0,∞) such that, for all x ∈ X , r ∈ (0, ρ], p∗(x) ≤ P (B(x,r))µ(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on X , such a p∗ exists for any desired ρ > 0. For simplicity, we will use ρ = √ D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form E X∼P [ 1 (p∗(X)) β/D ] = ∫ X p(x) (p∗(x)) β/D dµ(x) (with an additional f ′(p∗(x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ∂X . For many functionals, Lemma 6 gives a simple sufficient condition. 5This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation. 6 Preliminary lemmas Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section. Lemma 2. (Existence of Local Bounds) If p is continuous on X and strictly positive on the interior X ◦ of X , then, for ρ := √ D = diam(X ), there exists a continuous function p∗ : X ◦ → (0,∞) and a constant p∗ ∈ (0,∞) such that 0 < p∗(x) ≤ P (B(x, r)) µ(B(x, r)) ≤ p∗ <∞, ∀x ∈ X , r ∈ (0, ρ]. 
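The sensitivity to small p(x) noted in item 2 above is easy to see numerically. The following sketch (our own illustration, not from the paper; the Beta(2, 1) density, k, n, and query points are arbitrary choices) Monte Carlo estimates E[p̂_k(x)] at an interior point and at points approaching the boundary, where the density vanishes; the relative error typically grows as p(x) shrinks, in line with the bias behaviour in (5).

import numpy as np

rng = np.random.default_rng(0)

def knn_density_1d(x, sample, k):
    # 1-D k-NN density estimate p_hat_k(x) = (k/n) / mu(B(x, eps_k(x))), with mu(B(x, eps)) = 2 * eps.
    eps_k = np.sort(np.abs(sample - x))[k - 1]
    return (k / sample.size) / (2.0 * eps_k)

n, k, trials = 2000, 3, 500
p = lambda t: 2.0 * t                                  # Beta(2, 1) density on [0, 1]

for x0 in (0.5, 0.05, 0.01):
    est = np.array([knn_density_1d(x0, rng.beta(2.0, 1.0, n), k) for _ in range(trials)])
    print(f"x = {x0:4.2f}   p(x) = {p(x0):5.3f}   mean p_hat_k = {est.mean():5.3f}   "
          f"relative error = {(est.mean() - p(x0)) / p(x0):+.2f}")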
We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order (k/(np(x)))1/D. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves: Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on X and strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Then, for any x ∈ X ◦, 1. if r > ( k p∗(x)n )1/D , then P [εk(x) > r] ≤ e−p∗(x)r Dn ( e p∗(x)r Dn k )k . 2. if r ∈ [ 0, ( k p∗n )1/D) , then P [εk(x) < r] ≤ e−p∗(x)r Dn ( ep∗rDn k )kp∗(x)/p∗ . It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on p∗. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio p̂k/p∗. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to consistency proofs of [25] and [39] for α-entropy and divergence estimators. Lemma 4. Let p be continuous on X and strictly positive on X ◦. Define p∗ and p∗ as in Lemma 2. Suppose f : (0,∞)→ R is continuously differentiable and f ′ > 0. Then, we have the upper bound 6 sup x∈X◦ E [ f+ ( p∗(x) p̂k(x) )] ≤ f+(1) + e √ k ∫ ∞ k e−yyk Γ(k + 1) f+ (y k ) dy, (6) and, for all x ∈ X ◦, for κ(x) := kp∗(x)/p∗, the lower bound E [ f− ( p∗(x) p̂k(x) )] ≤ f−(1) + e √ k κ(x) ∫ κ(x) 0 e−yyκ(x) Γ(κ(x) + 1) f− (y k ) dy (7) Note that plugging the function z 7→ f (( kz cD,rnp∗(x) ) 1 D ) into Lemma 4 gives bounds on E [f(εk(x))]. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order ( k np(x) ) 1 D . For example, for any α > 0, a simple calculation from (6) gives E [εαk (x)] ≤ ( 1 + α D )( k cD,rnp∗(x) ) α D . (8) (8) is used for our bias bound, and more direct applications of Lemma 4 are used in variance bound. 6f+(x) = max{0, f(x)} and f−(x) = −min{0, f(x)} denote the positive and negative parts of f . Recall that E [f(X)] = E [f+(X)]− E [f−(X)]. 7 Main results Here, we present our main results on the bias and variance of F̂B(P ). Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias: Theorem 5. (Bias Bound) Suppose that, for some β ∈ (0, 2], p is β-Hölder continuous with constant L > 0 on X , and p is strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Let f : (0,∞)→ R be differentiable, and define Mf,p : X → [0,∞) by Mf,p(x) := sup z∈[p∗(x),p∗] ∣∣∣∣ ddz f(z) ∣∣∣∣ Assume Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] <∞. Then, ∣∣∣E F̂B(P )− F (P )∣∣∣ ≤ CfL(k n ) β D . The statement for divergences is similar, assuming that q is also β-Hölder continuous with constant L and strictly positive on X ◦. 
Specifically, we get the same bound if we replace Mf,o with Mf,p(x) := sup (w,z)∈[p∗(x),p∗]×[q∗(x),q∗] ∣∣∣∣ ∂∂wf(w, z) ∣∣∣∣ and define Mf,q similarly (i.e., with ∂∂z ) and we assume that Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] + E X∼p [ Mf,q(X) (q∗(X)) β D ] <∞. As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then, f(z) = log(x), and so we need Cf = ∫ X (p∗(x)) −β/D dµ(x) <∞. The assumption Cf <∞ is not immediately transparent. For the functionals in Table 1, Cf has the form ∫ X (p(x)) −c dx, for some c > 0, and hence Cf <∞ intuitively means p(x) cannot approach zero too quickly as dist(x, ∂X )→ 0. The following lemma gives a formal sufficient condition: Lemma 6. (Boundary Condition) Let c > 0. Suppose there exist b∂ ∈ (0, 1c ), c∂ , ρ∂ > 0 such that, for all x ∈ X with ε(x) := dist(x, ∂X ) < ρ∂ , p(x) ≥ c∂εb∂ (x). Then, ∫ X (p∗(x)) −c dµ(x) <∞. In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to xb∂ near ∂X (i.e., those with at least b∂ nonzero one-sided derivatives on the boundary). We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in (RD, ‖ · ‖p), there exists a constant Nk,D (independent of n) such that any sample Xi can be amongst the k-nearest neighbors of at most Nk,D other samples. Hence, at most Nk,D + 1 of the terms in (2) can change when a single Xi is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbors statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study. Theorem 7. (Variance Bound) Suppose B ◦ f is continuously differentiable and strictly monotone. Assume Cf,p := EX∼P [ B2(f(p∗(X))) ] <∞, and Cf := ∫∞ 0 e−yykf(y) <∞. Then, for CV := 2 (1 +Nk,D) (3 + 4k) (Cf,p + Cf ) , we have V [ F̂B(P ) ] ≤ CV n . As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require ∫ X p(x) log 2(p∗(x)) < ∞. In general, Nk,D is of the order k2cD, for some c > 0. Our bound is likely quite loose in k; in practice, V [ F̂B(P ) ] typically decreases somewhat with k. 8 Conclusions and discussion In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance: Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7, E [( Ĥk(X)−H(X) )2] ≤ C2fL2 ( k n )2β/D + CV n . (9) Choosing k: Contrary to the name, fixing k is not required for “fixed-k” estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F (P ). However, there has been no theoretical justification for fixing k. 
Assuming tightness of our bias bound in k, we provide this in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improve the asymptotic variance of the estimator, with the rate k ∝ log^5 n leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test). Acknowledgments This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.
1. What is the main contribution of the paper regarding nonparametric methods for estimating functionals of densities? 2. What are the strengths of the paper, particularly in terms of theory and mathematical sophistication? 3. Do you have any concerns or questions regarding the paper, such as formatting issues, typos, notation reuse, or the application of Lemma 4? 4. How does the paper compare to previous works in terms of convergence and computational advantage? 5. What is the significance of the error bounds proved in the paper for machine learning and statistics problems?
Review
Review The paper gives a finite sample analysis (bias and variance bounds) for a framework of nonparametric methods, estimating functionals of densities (such as divergences and entropies) by plugging in a k-nearest neighbor density estimate and taking a sample average. The interesting property of the methods being analyzed is that instead of increasing k with the sample size-- which leads to consistent density estimates-- k is fixed. The implication of considering fixed k is that the variance of the density estimate does not go to zero asymptotically. The non vanishing variance in the density estimate manifests as bias in the estimate of the functional. The methods being considered do some correction to obtain an asymptotically unbiased estimator. The variance of the functional estimate is asymptotically 0, in spite of the non vanishing variance of the density estimate, because of a sample averaging. The paper derives finite sample bounds for the bias and the variance of such methods under certain weak assumptions on the underlying density. The paper argues that the bounds are superior to most of the state of art approaches and those approaches that might have better convergence are computationally too demanding, concluding that the fixed-k approach with bias correction gives competitive convergence with significant computational advantage. Congratulations to the authors for their excellent work. Quality: This is a high quality paper with significant theory and sophisticated math. The claims of the paper are well supported by the derived error bounds, though, I have a some concerns about the math at a couple of places (see under Concerns). The authors have clearly stated the assumptions under which the bounds apply and have given an objective comparison of their work with previously known results. Clarity: Overall the paper is very well written and reads easily. The authors have provided intuition which makes it easy to follow the math. The assumptions are very well motivated and discussed. Though there are some formatting issues, typos and notation reuse. Originality: The error bounds proved in the paper are novel. The proofs are non trivial and interesting. The authors have done an exceptional job of discussing the related work. Significance: The problem of estimating functionals of densities is very important in several machine learning and statistics problems. The competitive error bounds derived in the paper and the computational advantage of fixed-k nearest neighbor based methods justify using them for estimation. Main concern: As I understand, the argument used to apply lemma 4 to derive bounds on $E[f(\epsilon_k(x))]$, is that $f(\epsilon_k(x))$ can be expressed as $f(h(z))$ where $z=\frac{p_*(x)}{\hat{p}_k(x)}$ and h(z)=(\frac{kz}{cnp_*(x)})^{1/D}, and $f o h$ is an increasing function. However, the definition of $h$ is not independent of z; its definition changes with its argument $z$ because it contains $p_*(x)$ as well in the denominator. In this light, it is not straightforward how lemma 4 can be applied. The appendix version of lemma 4 seems to derive the bounds on $E[f(\epsilon_k(x))]$ directly. The proof starts with defining $g$, but never uses it. Also, it might be better to have the same result in the appendix and the main paper or lemma with a detailed proof should be added to derive one from the other. (Line 236) Expression for C_f when f(x)=log(x) is given. It seems like the M_{f,p}(x) is evaluated to be 1. However, I do not see how it can be a constant. 
Also, the expectation should be with respect to p, but the integral is with respect to the Borel measure. In Lemma 4, line 218 says that the inequality 7 is a lower bound; however, it is actually an upper bound. Also, an upper bound for E(f) = E(f_+) - E(f_-) cannot be derived with <= in 7. From the expression for the variance bound (line 254), it seems that the upper bound increases with k, instead of decreasing as one would expect. Minor comments: There are some typos below Theorem 5: line 233 M_{f,o}; in the equation below line 233, the second term should be E w.r.t. q, not p. Line 232 gives the impression that q and p should have the same L; it might be better to have L_1, L_2. Before line 235, it would be good to have an expression for the bias bounds for functionals of p and q, and perhaps a lemma in the appendix. Notation reuse: r is used as the exponent in the norm as well as the radius of the ball. It would be good to include proofs that all the functionals in Table 1 satisfy the assumptions of the theorems. The intuition provided in line 208 about asymmetry and bias is not obvious. Line 274 about testing problems is difficult to parse. Overall, I recommend accepting the paper if they address my concerns. After author's response: The authors have responded to my concerns satisfactorily. Though, I would suggest that they add a few technical comments to Lemma 4 and Theorem 5 for more clarity.
NIPS
Title Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators Abstract We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees. 1 Introduction Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in RD to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25]. This paper considers the more general problem of estimating functionals of the form F (P ) := E X∼P [f(p(X))] , (1) using n IID samples from P , where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k →∞ as n→∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence. As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20]. 1 The estimators in Table 1 estimators are known to be weakly consistent, 2 but, except for Shannon entropy, 1MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48]. 2Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. no finite sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. 
Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D dimensional space, the bias of fixed-k estimators decays as O ( n−β/D ) and the variance decays as O ( n−1 ) , giving a mean squared error of O ( n−2β/D + n−1 ) . Hence, the estimators converge at the parametric O(n−1) rate when β ≥ D/2, and at the slower rateO(n−2β/D) otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest. We present our results for distributions P supported on the unit cube in RD because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here. 2 Problem statement and notation LetX := [0, 1]D denote the unit cube in RD, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on X , and let p : X → [0,∞) denote the density of P . Consider a (known) differentiable function f : (0,∞) → R. Given n samples X1, ..., Xn drawn IID from P , we are interested in estimating the functional F (P ) := E X∼P [f(p(X))] . Somewhat more generally (as in divergence estimation), we may have a function f : (0,∞)2 → R of two variables and a second unknown probability measure Q, with density q and n IID samples Y1, ..., Yn. Then, we are interested in estimating F (P,Q) := E X∼P [f(p(X), q(X))] . Fix r ∈ [1,∞] and a positive integer k. We will work with distances induced by the r-norm ‖x‖r := ( D∑ i=1 xri )1/r and define cD,r := (2Γ(1 + 1/r)) D Γ(1 +D/r) = µ(B(0, 1)), where B(x, ε) := {y ∈ RD : ‖x − y‖r < ε} denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances: Definition 1. (k-NN distance): Given n IID samples X1, ..., Xn from P , for x ∈ RD, we define the k-NN distance εk(x) by εk(x) = ‖x−Xi‖r, where Xi is the kth-nearest element (in ‖ · ‖r) of the set {X1, ..., Xn} to x. For divergence estimation, given n samples Y1, ..., Yn from Q, then we similarly define δk(x) by δk(x) = ‖x− Yi‖r, where Yi is the kth-nearest element of {Y1, ..., Yn} to x. µ-absolute continuity of P precludes the existence of atoms (i.e., ∀x ∈ RD, P ({x}) = µ({x}) = 0). Hence, each εk(x) > 0 a.s. We will require this to study quantities such as log εk(x) and 1/εk(x). 3 Estimator 3.1 k-NN density estimation and plug-in functional estimators The k-NN density estimator p̂k(x) = k/n µ(B(x, εk(x)) = k/n cDεDk (x) is well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0, p(x) ≈ P (B(x, ε)) µ(B(x, ε)) , and that, P (B(x, εk(x))) ≈ k/n. One can show that, for x ∈ RD at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then p̂k(x) → p(x) in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F (P ) is the plug-in estimator F̂PI := 1 n n∑ i=1 f (p̂k(Xi)) . 
(2) Since p̂k → p in probability pointwise as k, n→∞ and f is smooth, one can show F̂PI is consistent, and in fact derive finite sample convergence rates (depending on how k →∞). For example, [44] show a convergence rate of O ( n−min{ 2β β+D ,1} ) for β-Hölder continuous densities (after sample splitting and boundary correction) by setting k n β β+d . Unfortunately, while necessary to ensure V [p̂k(x)]→ 0, the requirement k →∞ is computationally burdensome. Furthermore, increasing k can increase the bias of p̂k due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F (P ). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing. 3.2 Fixed-k functional estimators An alternative approach is to fix k as n→∞. Since F̂PI is itself an empirical mean, unlike V [p̂k(x)], V [ F̂PI ] → 0 as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of p̂k translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that E X1,...,Xn [ B ( f ( k/n µ(B(x, εk(x)) ))] = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . (3) For continuous p, the quantity pεk(x)(x) := P (B(x, εk(x))) µ(B(x, εk(x)) (4) is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator F̂B(P ) := 1 n n∑ i=1 B (f (p̂k(Xi))) = 1 n n∑ i=1 B ( f ( k/n µ(B(Xi, εk(Xi)) )) . that uses k/n in place of P (B(x, εk(x))). This estimate extends naturally to divergences: F̂B(P,Q) := 1 n n∑ i=1 B (f (p̂k(Xi), q̂k(Xi))) . As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p, E [logP (B(x, εk(x)))] = ψ(k)− ψ(n). Hence, for Bn,k := ψ(k)− ψ(n) + log(n)− log(k), E X1,...,Xn [ f ( k/n µ(B(x, εk(x)) )] +Bn,k = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of F̂B(P ), assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future. 4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. 
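For the two-sample case F̂_B(P, Q) sketched in Section 3.2 above, a widely used concrete instance is the fixed-k Kullback–Leibler divergence estimator built from the within-sample distances ε_k and cross-sample distances δ_k of Section 2, in the standard form due to Wang, Kulkarni and Verdú. The Python sketch below is our own illustration, assuming scipy; the Gaussian sanity check and all parameter values are arbitrary, and no truncation or boundary handling is attempted.

import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y, k=1):
    # Fixed-k estimate of KL(P || Q) from X ~ P (n x d array) and Y ~ Q (m x d array), in nats.
    # Standard k-NN form: (d/n) * sum_i log(delta_k(X_i) / eps_k(X_i)) + log(m / (n - 1)),
    # where eps_k is the k-NN distance within X (self-match excluded) and delta_k is the
    # k-NN distance from X_i into Y.
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n, d = X.shape
    m = Y.shape[0]
    eps = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    delta = cKDTree(Y).query(X, k=k)[0]
    delta = delta[:, -1] if delta.ndim > 1 else delta
    return d * np.mean(np.log(delta / eps)) + np.log(m / (n - 1))

# Sanity check: KL(N(0,1) || N(1,1)) = 1/2 in closed form.
rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (5000, 1))
Y = rng.normal(1.0, 1.0, (5000, 1))
print(knn_kl_divergence(X, Y, k=3))    # typically close to 0.5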
A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k →∞ as n→∞). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a Hölder smoothness condition parametrized by β ∈ (0,∞), the minimax mean squared error rate for estimating functionals of the form∫ f(p(x)) dx has been known since [6] to be O ( n−min{ 8β 4β+D ,1} ) . [22] recently derived iden- tical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate O ( n−min{ 2β β+D ,1} ) . Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand (O(2Dn3)), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate O ( n−min{ 2β D ,1} ) . For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F̂B(P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n−1). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β), under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n−β) under these conditions; we require additional conditions at the boundary of X . 3Fixed-k estimators can be computed in O ( Dn2 ) time, or O ( 2Dn logn ) using k-d trees for small D. 4Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. [49] studied a closely related entropy estimator for which they prove √ n-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at √ n, replacing εk(x) with min{εk(x), √ n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). 
As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f(z) = zα, α < 0) have singularities at 0. 2. The k-NN estimate p̂k(x) of p(x) is highly biased when p(x) is small. For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2) Bias(p̂k(x)) ( k np(x) )β/D . (5) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p∗ := infx∈X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ∂X of X ). 5 The boundary bias of the density estimate p̂k(x) does vanish at x in the interior X ◦ of X as n→∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ◦ but also on ∂X (i.e., p(x)→ 0 as dist(x, ∂X )→ 0). (A3) p is supported on all of RD. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating εk(x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ρ > 0 and a function p∗ : X → (0,∞) such that, for all x ∈ X , r ∈ (0, ρ], p∗(x) ≤ P (B(x,r))µ(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on X , such a p∗ exists for any desired ρ > 0. For simplicity, we will use ρ = √ D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form E X∼P [ 1 (p∗(X)) β/D ] = ∫ X p(x) (p∗(x)) β/D dµ(x) (with an additional f ′(p∗(x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ∂X . For many functionals, Lemma 6 gives a simple sufficient condition. 5This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation. 6 Preliminary lemmas Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section. Lemma 2. (Existence of Local Bounds) If p is continuous on X and strictly positive on the interior X ◦ of X , then, for ρ := √ D = diam(X ), there exists a continuous function p∗ : X ◦ → (0,∞) and a constant p∗ ∈ (0,∞) such that 0 < p∗(x) ≤ P (B(x, r)) µ(B(x, r)) ≤ p∗ <∞, ∀x ∈ X , r ∈ (0, ρ]. 
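The local bounds of Lemma 2 are what make k-NN distances analytically tractable; the classical asymptotic fact that the concentration and integral bounds developed next (Lemmas 3 and 4 below) quantify non-asymptotically is that n · P(B(x, ε_k(x))) is approximately Erlang, i.e. Gamma(k, 1), at interior points. A quick numerical check, our own illustration assuming numpy and scipy (the Gaussian example, the query point, and all sample sizes are arbitrary choices):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k, trials, x0 = 5000, 3, 2000, 0.0

T = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal(n)                          # P = N(0, 1), query point x0 = 0
    eps_k = np.sort(np.abs(X - x0))[k - 1]              # k-NN distance eps_k(x0)
    T[t] = n * (2.0 * stats.norm.cdf(eps_k) - 1.0)      # n * P(B(x0, eps_k(x0)))

print("empirical mean vs Erlang mean k :", T.mean(), "vs", k)
print("empirical var  vs Erlang var  k :", T.var(), "vs", k)
print("KS distance to Gamma(k, 1)      :", stats.kstest(T, "gamma", args=(k,)).statistic)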
We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order (k/(np(x)))1/D. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves: Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on X and strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Then, for any x ∈ X ◦, 1. if r > ( k p∗(x)n )1/D , then P [εk(x) > r] ≤ e−p∗(x)r Dn ( e p∗(x)r Dn k )k . 2. if r ∈ [ 0, ( k p∗n )1/D) , then P [εk(x) < r] ≤ e−p∗(x)r Dn ( ep∗rDn k )kp∗(x)/p∗ . It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on p∗. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio p̂k/p∗. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to consistency proofs of [25] and [39] for α-entropy and divergence estimators. Lemma 4. Let p be continuous on X and strictly positive on X ◦. Define p∗ and p∗ as in Lemma 2. Suppose f : (0,∞)→ R is continuously differentiable and f ′ > 0. Then, we have the upper bound 6 sup x∈X◦ E [ f+ ( p∗(x) p̂k(x) )] ≤ f+(1) + e √ k ∫ ∞ k e−yyk Γ(k + 1) f+ (y k ) dy, (6) and, for all x ∈ X ◦, for κ(x) := kp∗(x)/p∗, the lower bound E [ f− ( p∗(x) p̂k(x) )] ≤ f−(1) + e √ k κ(x) ∫ κ(x) 0 e−yyκ(x) Γ(κ(x) + 1) f− (y k ) dy (7) Note that plugging the function z 7→ f (( kz cD,rnp∗(x) ) 1 D ) into Lemma 4 gives bounds on E [f(εk(x))]. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order ( k np(x) ) 1 D . For example, for any α > 0, a simple calculation from (6) gives E [εαk (x)] ≤ ( 1 + α D )( k cD,rnp∗(x) ) α D . (8) (8) is used for our bias bound, and more direct applications of Lemma 4 are used in variance bound. 6f+(x) = max{0, f(x)} and f−(x) = −min{0, f(x)} denote the positive and negative parts of f . Recall that E [f(X)] = E [f+(X)]− E [f−(X)]. 7 Main results Here, we present our main results on the bias and variance of F̂B(P ). Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias: Theorem 5. (Bias Bound) Suppose that, for some β ∈ (0, 2], p is β-Hölder continuous with constant L > 0 on X , and p is strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Let f : (0,∞)→ R be differentiable, and define Mf,p : X → [0,∞) by Mf,p(x) := sup z∈[p∗(x),p∗] ∣∣∣∣ ddz f(z) ∣∣∣∣ Assume Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] <∞. Then, ∣∣∣E F̂B(P )− F (P )∣∣∣ ≤ CfL(k n ) β D . The statement for divergences is similar, assuming that q is also β-Hölder continuous with constant L and strictly positive on X ◦. 
Specifically, we get the same bound if we replace Mf,o with Mf,p(x) := sup (w,z)∈[p∗(x),p∗]×[q∗(x),q∗] ∣∣∣∣ ∂∂wf(w, z) ∣∣∣∣ and define Mf,q similarly (i.e., with ∂∂z ) and we assume that Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] + E X∼p [ Mf,q(X) (q∗(X)) β D ] <∞. As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then, f(z) = log(x), and so we need Cf = ∫ X (p∗(x)) −β/D dµ(x) <∞. The assumption Cf <∞ is not immediately transparent. For the functionals in Table 1, Cf has the form ∫ X (p(x)) −c dx, for some c > 0, and hence Cf <∞ intuitively means p(x) cannot approach zero too quickly as dist(x, ∂X )→ 0. The following lemma gives a formal sufficient condition: Lemma 6. (Boundary Condition) Let c > 0. Suppose there exist b∂ ∈ (0, 1c ), c∂ , ρ∂ > 0 such that, for all x ∈ X with ε(x) := dist(x, ∂X ) < ρ∂ , p(x) ≥ c∂εb∂ (x). Then, ∫ X (p∗(x)) −c dµ(x) <∞. In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to xb∂ near ∂X (i.e., those with at least b∂ nonzero one-sided derivatives on the boundary). We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in (RD, ‖ · ‖p), there exists a constant Nk,D (independent of n) such that any sample Xi can be amongst the k-nearest neighbors of at most Nk,D other samples. Hence, at most Nk,D + 1 of the terms in (2) can change when a single Xi is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbors statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study. Theorem 7. (Variance Bound) Suppose B ◦ f is continuously differentiable and strictly monotone. Assume Cf,p := EX∼P [ B2(f(p∗(X))) ] <∞, and Cf := ∫∞ 0 e−yykf(y) <∞. Then, for CV := 2 (1 +Nk,D) (3 + 4k) (Cf,p + Cf ) , we have V [ F̂B(P ) ] ≤ CV n . As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require ∫ X p(x) log 2(p∗(x)) < ∞. In general, Nk,D is of the order k2cD, for some c > 0. Our bound is likely quite loose in k; in practice, V [ F̂B(P ) ] typically decreases somewhat with k. 8 Conclusions and discussion In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance: Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7, E [( Ĥk(X)−H(X) )2] ≤ C2fL2 ( k n )2β/D + CV n . (9) Choosing k: Contrary to the name, fixing k is not required for “fixed-k” estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F (P ). However, there has been no theoretical justification for fixing k. 
Assuming tightness of our bias bound in k, we provide this in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improves the asymptotic variance of the estimator, with the rate k log5 n leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test). Acknowledgments This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.
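To make the role of k in Corollary 8 and the "Choosing k" discussion above tangible, here is a minimal sketch of the bias-corrected fixed-k (Kozachenko–Leonenko) Shannon entropy estimator from Table 1, assuming the Euclidean norm (r = 2), scikit-learn and SciPy for the neighbor search and digamma function, and a uniform toy distribution whose differential entropy is zero; this is an illustration rather than the authors' reference implementation (the paper's footnote points to the ITE toolbox for that), and the sample size and values of k are arbitrary choices.

```python
import numpy as np
from scipy.special import digamma, gammaln
from sklearn.neighbors import NearestNeighbors

def kl_entropy(X, k=1):
    """Bias-corrected fixed-k (Kozachenko-Leonenko) estimate of the
    differential entropy H(X) from an (n, D) array of samples."""
    n, D = X.shape
    # k-th nearest-neighbour distance of each sample among the *other* samples:
    # ask for k + 1 neighbours because each point is its own nearest neighbour.
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    eps_k = dist[:, -1]
    log_unit_ball = (D / 2.0) * np.log(np.pi) - gammaln(D / 2.0 + 1.0)  # log c_D, Euclidean norm
    return digamma(n) - digamma(k) + log_unit_ball + D * np.mean(np.log(eps_k))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(2000, 2))     # Uniform([0,1]^2) has differential entropy 0
    for k in (1, 2, 5, 20):
        print(f"k = {k:2d}   H_hat = {kl_entropy(X, k): .4f}")
```

Sweeping k in the driver loop gives a quick empirical feel for the bias–variance trade-off discussed above, without asserting anything beyond what Corollary 8 already states.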
1. What are the strengths and contributions of the paper on density functional estimation? 2. What are the concerns regarding the superiority of the proposed method over plug-in estimators? 3. How does the reviewer assess the relationships between the different assumptions made in the paper? 4. Are there any suggestions for additional discussions or simulations to support the theoretical analysis? 5. Are there any minor errors or typos in the review that should be addressed?
Review
Review This paper establishes finite-sample bounds for fixed-k nearest-neighbor density functional estimators. The analysis applies to a family of estimators with known bias correction terms. Overall I think the paper is well written and addresses an important question in density functional estimation. The math seems rigorous and the assumptions are motivated. Below are some concerns. 1. The author claims in line 87 the superiority of their method over plug-in estimators because \hat{p} induces bias. My question is: what if the bias correction of \hat{p} is known for fixed k? Do the two methods end up equivalent? If not, is the bound still better? 2. I would like to see more discussion about the relationships between the assumptions. For example, regarding A1 and A4, the author claims in line 172 that A4 is "much milder", but is it true that Lemma 2 shows A4 implies A1? Also, between A2 and A3, it is not obvious to me which is stronger. Does A2 work for atoms, i.e., points that have non-zero probability mass? 3. Even for a theory paper, it is still of interest to show some simulations to illustrate the ideas, for example, the tightness of the upper bounds given in the paper, or the computational savings discussed here from using small k. Minor corrections: line 11: a number of; line 50: to; line 87: section 5
NIPS
Title Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators Abstract We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees. 1 Introduction Estimating entropies and divergences of probability distributions in a consistent manner is of importance in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in RD to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15]. Further applications can be found in [25]. This paper considers the more general problem of estimating functionals of the form F (P ) := E X∼P [f(p(X))] , (1) using n IID samples from P , where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k →∞ as n→∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence. As shown in Table 1, several authors have derived bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20]. 1 The estimators in Table 1 estimators are known to be weakly consistent, 2 but, except for Shannon entropy, 1MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48]. 2Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. no finite sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via unified analysis of the estimator after bias correction. 
Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D dimensional space, the bias of fixed-k estimators decays as O ( n−β/D ) and the variance decays as O ( n−1 ) , giving a mean squared error of O ( n−2β/D + n−1 ) . Hence, the estimators converge at the parametric O(n−1) rate when β ≥ D/2, and at the slower rateO(n−2β/D) otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest. We present our results for distributions P supported on the unit cube in RD because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here. 2 Problem statement and notation LetX := [0, 1]D denote the unit cube in RD, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on X , and let p : X → [0,∞) denote the density of P . Consider a (known) differentiable function f : (0,∞) → R. Given n samples X1, ..., Xn drawn IID from P , we are interested in estimating the functional F (P ) := E X∼P [f(p(X))] . Somewhat more generally (as in divergence estimation), we may have a function f : (0,∞)2 → R of two variables and a second unknown probability measure Q, with density q and n IID samples Y1, ..., Yn. Then, we are interested in estimating F (P,Q) := E X∼P [f(p(X), q(X))] . Fix r ∈ [1,∞] and a positive integer k. We will work with distances induced by the r-norm ‖x‖r := ( D∑ i=1 xri )1/r and define cD,r := (2Γ(1 + 1/r)) D Γ(1 +D/r) = µ(B(0, 1)), where B(x, ε) := {y ∈ RD : ‖x − y‖r < ε} denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances: Definition 1. (k-NN distance): Given n IID samples X1, ..., Xn from P , for x ∈ RD, we define the k-NN distance εk(x) by εk(x) = ‖x−Xi‖r, where Xi is the kth-nearest element (in ‖ · ‖r) of the set {X1, ..., Xn} to x. For divergence estimation, given n samples Y1, ..., Yn from Q, then we similarly define δk(x) by δk(x) = ‖x− Yi‖r, where Yi is the kth-nearest element of {Y1, ..., Yn} to x. µ-absolute continuity of P precludes the existence of atoms (i.e., ∀x ∈ RD, P ({x}) = µ({x}) = 0). Hence, each εk(x) > 0 a.s. We will require this to study quantities such as log εk(x) and 1/εk(x). 3 Estimator 3.1 k-NN density estimation and plug-in functional estimators The k-NN density estimator p̂k(x) = k/n µ(B(x, εk(x)) = k/n cDεDk (x) is well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0, p(x) ≈ P (B(x, ε)) µ(B(x, ε)) , and that, P (B(x, εk(x))) ≈ k/n. One can show that, for x ∈ RD at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then p̂k(x) → p(x) in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F (P ) is the plug-in estimator F̂PI := 1 n n∑ i=1 f (p̂k(Xi)) . 
(2) Since p̂k → p in probability pointwise as k, n→∞ and f is smooth, one can show F̂PI is consistent, and in fact derive finite sample convergence rates (depending on how k →∞). For example, [44] show a convergence rate of O ( n−min{ 2β β+D ,1} ) for β-Hölder continuous densities (after sample splitting and boundary correction) by setting k n β β+d . Unfortunately, while necessary to ensure V [p̂k(x)]→ 0, the requirement k →∞ is computationally burdensome. Furthermore, increasing k can increase the bias of p̂k due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F (P ). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing. 3.2 Fixed-k functional estimators An alternative approach is to fix k as n→∞. Since F̂PI is itself an empirical mean, unlike V [p̂k(x)], V [ F̂PI ] → 0 as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of p̂k translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that E X1,...,Xn [ B ( f ( k/n µ(B(x, εk(x)) ))] = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . (3) For continuous p, the quantity pεk(x)(x) := P (B(x, εk(x))) µ(B(x, εk(x)) (4) is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator F̂B(P ) := 1 n n∑ i=1 B (f (p̂k(Xi))) = 1 n n∑ i=1 B ( f ( k/n µ(B(Xi, εk(Xi)) )) . that uses k/n in place of P (B(x, εk(x))). This estimate extends naturally to divergences: F̂B(P,Q) := 1 n n∑ i=1 B (f (p̂k(Xi), q̂k(Xi))) . As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p, E [logP (B(x, εk(x)))] = ψ(k)− ψ(n). Hence, for Bn,k := ψ(k)− ψ(n) + log(n)− log(k), E X1,...,Xn [ f ( k/n µ(B(x, εk(x)) )] +Bn,k = E X1,...,Xn [ f ( P (B(x, εk(x))) µ(B(x, εk(x)) )] . giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of F̂B(P ), assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future. 4 Related work 4.1 Estimating information theoretic functionals Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundarycorrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. 
A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary corrected k-NN density estimates (i.e., with k →∞ as n→∞). [32] study a divergence estimator based on convex risk minimization, but this relies of the context of an RKHS, making results are difficult to compare. Rates of Convergence: For densities over RD satisfying a Hölder smoothness condition parametrized by β ∈ (0,∞), the minimax mean squared error rate for estimating functionals of the form∫ f(p(x)) dx has been known since [6] to be O ( n−min{ 8β 4β+D ,1} ) . [22] recently derived iden- tical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate O ( n−min{ 2β β+D ,1} ) . Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand (O(2Dn3)), 3 the authors suggest using other statistically less efficient estimators for moderate sample size. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate O ( n−min{ 2β D ,1} ) . For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators. 4.2 Prior analysis of fixed-k estimators As of writing this paper, the only finite-sample results for F̂B(P ) were those of [5] for the KozachenkoLeonenko (KL) 4 Shannon entropy estimator. [20] Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as O(n−1). They also claim (Theorem 7.2) to bound the bias of the KL estimator by O(n−β), under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous. In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as O(n−β) under these conditions; we require additional conditions at the boundary of X . 3Fixed-k estimators can be computed in O ( Dn2 ) time, or O ( 2Dn logn ) using k-d trees for small D. 4Not to be confused with Kullback-Leibler (KL) divergence, for which we also analyze an estimator. [49] studied a closely related entropy estimator for which they prove √ n-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at √ n, replacing εk(x) with min{εk(x), √ n}. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator. In this case, [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate k log5 n, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). 
As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors. 5 Discussion of assumptions The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons: 1. Many functions f of interest (e.g., f = log or f(z) = zα, α < 0) have singularities at 0. 2. The k-NN estimate p̂k(x) of p(x) is highly biased when p(x) is small. For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2) Bias(p̂k(x)) ( k np(x) )β/D . (5) For these reasons, it is common in analysis of k-NN estimators to assume the following [5, 39]: (A1) p is bounded away from zero on its support. That is, p∗ := infx∈X p(x) > 0. Second, unlike many functional estimators (see e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary ∂X of X ). 5 The boundary bias of the density estimate p̂k(x) does vanish at x in the interior X ◦ of X as n→∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice: (A2) p is continuous not only on X ◦ but also on ∂X (i.e., p(x)→ 0 as dist(x, ∂X )→ 0). (A3) p is supported on all of RD. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating εk(x). Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense: (A4) There exist ρ > 0 and a function p∗ : X → (0,∞) such that, for all x ∈ X , r ∈ (0, ρ], p∗(x) ≤ P (B(x,r))µ(B(x,r)) . We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on X , such a p∗ exists for any desired ρ > 0. For simplicity, we will use ρ = √ D = diam(X ). As hinted by (5) and the fact that F (P ) is an expectation, our bounds will contain terms of the form E X∼P [ 1 (p∗(X)) β/D ] = ∫ X p(x) (p∗(x)) β/D dµ(x) (with an additional f ′(p∗(x)) factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near ∂X . For many functionals, Lemma 6 gives a simple sufficient condition. 5This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation. 6 Preliminary lemmas Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section. Lemma 2. (Existence of Local Bounds) If p is continuous on X and strictly positive on the interior X ◦ of X , then, for ρ := √ D = diam(X ), there exists a continuous function p∗ : X ◦ → (0,∞) and a constant p∗ ∈ (0,∞) such that 0 < p∗(x) ≤ P (B(x, r)) µ(B(x, r)) ≤ p∗ <∞, ∀x ∈ X , r ∈ (0, ρ]. 
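To see numerically why small values of p(x) are problematic for p̂_k (point 2 above, and the boundary-bias discussion), the following Monte Carlo sketch estimates a one-dimensional density at a near-boundary point and at a bulk point; the Beta(2,2) example, the sample size, the number of repetitions, and the scikit-learn usage are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density(x_query, X, k):
    """1-D k-NN density estimate p_hat_k(x) = (k / n) / mu(B(x, eps_k(x)))."""
    n = len(X)
    nn = NearestNeighbors(n_neighbors=k).fit(X.reshape(-1, 1))
    dist, _ = nn.kneighbors(np.asarray(x_query).reshape(-1, 1))
    eps_k = dist[:, -1]
    return (k / n) / (2.0 * eps_k)      # mu(B(x, eps)) = 2 * eps in one dimension

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, n_trials = 2000, 5, 200
    xs = np.array([0.02, 0.10, 0.50])   # near-boundary / low-density vs. bulk query points
    p_true = 6.0 * xs * (1.0 - xs)      # true Beta(2, 2) density
    est = np.stack([knn_density(xs, rng.beta(2.0, 2.0, size=n), k)
                    for _ in range(n_trials)])
    for x, p, m in zip(xs, p_true, est.mean(axis=0)):
        print(f"x = {x:4.2f}   p(x) = {p:.3f}   Monte-Carlo mean of p_hat_k(x) = {m:.3f}")
```

Comparing the Monte Carlo mean of p̂_k(x) with p(x) at the two kinds of points shows how the estimate degrades where p is small, which is the behavior that (5) and assumptions (A2)/(A4) are meant to control.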
We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order (k/(np(x)))1/D. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves: Lemma 3. (Concentration of k-NN Distances) Suppose p is continuous on X and strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Then, for any x ∈ X ◦, 1. if r > ( k p∗(x)n )1/D , then P [εk(x) > r] ≤ e−p∗(x)r Dn ( e p∗(x)r Dn k )k . 2. if r ∈ [ 0, ( k p∗n )1/D) , then P [εk(x) < r] ≤ e−p∗(x)r Dn ( ep∗rDn k )kp∗(x)/p∗ . It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on p∗. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio p̂k/p∗. As suggested by the form of integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to consistency proofs of [25] and [39] for α-entropy and divergence estimators. Lemma 4. Let p be continuous on X and strictly positive on X ◦. Define p∗ and p∗ as in Lemma 2. Suppose f : (0,∞)→ R is continuously differentiable and f ′ > 0. Then, we have the upper bound 6 sup x∈X◦ E [ f+ ( p∗(x) p̂k(x) )] ≤ f+(1) + e √ k ∫ ∞ k e−yyk Γ(k + 1) f+ (y k ) dy, (6) and, for all x ∈ X ◦, for κ(x) := kp∗(x)/p∗, the lower bound E [ f− ( p∗(x) p̂k(x) )] ≤ f−(1) + e √ k κ(x) ∫ κ(x) 0 e−yyκ(x) Γ(κ(x) + 1) f− (y k ) dy (7) Note that plugging the function z 7→ f (( kz cD,rnp∗(x) ) 1 D ) into Lemma 4 gives bounds on E [f(εk(x))]. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order ( k np(x) ) 1 D . For example, for any α > 0, a simple calculation from (6) gives E [εαk (x)] ≤ ( 1 + α D )( k cD,rnp∗(x) ) α D . (8) (8) is used for our bias bound, and more direct applications of Lemma 4 are used in variance bound. 6f+(x) = max{0, f(x)} and f−(x) = −min{0, f(x)} denote the positive and negative parts of f . Recall that E [f(X)] = E [f+(X)]− E [f−(X)]. 7 Main results Here, we present our main results on the bias and variance of F̂B(P ). Again, due to space constraints, all proofs are given in the appendix. We begin with bounding the bias: Theorem 5. (Bias Bound) Suppose that, for some β ∈ (0, 2], p is β-Hölder continuous with constant L > 0 on X , and p is strictly positive on X ◦. Let p∗ and p∗ be as in Lemma 2. Let f : (0,∞)→ R be differentiable, and define Mf,p : X → [0,∞) by Mf,p(x) := sup z∈[p∗(x),p∗] ∣∣∣∣ ddz f(z) ∣∣∣∣ Assume Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] <∞. Then, ∣∣∣E F̂B(P )− F (P )∣∣∣ ≤ CfL(k n ) β D . The statement for divergences is similar, assuming that q is also β-Hölder continuous with constant L and strictly positive on X ◦. 
Specifically, we get the same bound if we replace Mf,o with Mf,p(x) := sup (w,z)∈[p∗(x),p∗]×[q∗(x),q∗] ∣∣∣∣ ∂∂wf(w, z) ∣∣∣∣ and define Mf,q similarly (i.e., with ∂∂z ) and we assume that Cf := E X∼p [ Mf,p(X) (p∗(X)) β D ] + E X∼p [ Mf,q(X) (q∗(X)) β D ] <∞. As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then, f(z) = log(x), and so we need Cf = ∫ X (p∗(x)) −β/D dµ(x) <∞. The assumption Cf <∞ is not immediately transparent. For the functionals in Table 1, Cf has the form ∫ X (p(x)) −c dx, for some c > 0, and hence Cf <∞ intuitively means p(x) cannot approach zero too quickly as dist(x, ∂X )→ 0. The following lemma gives a formal sufficient condition: Lemma 6. (Boundary Condition) Let c > 0. Suppose there exist b∂ ∈ (0, 1c ), c∂ , ρ∂ > 0 such that, for all x ∈ X with ε(x) := dist(x, ∂X ) < ρ∂ , p(x) ≥ c∂εb∂ (x). Then, ∫ X (p∗(x)) −c dµ(x) <∞. In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to xb∂ near ∂X (i.e., those with at least b∂ nonzero one-sided derivatives on the boundary). We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in (RD, ‖ · ‖p), there exists a constant Nk,D (independent of n) such that any sample Xi can be amongst the k-nearest neighbors of at most Nk,D other samples. Hence, at most Nk,D + 1 of the terms in (2) can change when a single Xi is added, suggesting a variance bound via the Efron-Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbors statistics. Unfortunately, this LLN relies on bounded kurtosis assumptions that are difficult to justify for the log or negative power statistics we study. Theorem 7. (Variance Bound) Suppose B ◦ f is continuously differentiable and strictly monotone. Assume Cf,p := EX∼P [ B2(f(p∗(X))) ] <∞, and Cf := ∫∞ 0 e−yykf(y) <∞. Then, for CV := 2 (1 +Nk,D) (3 + 4k) (Cf,p + Cf ) , we have V [ F̂B(P ) ] ≤ CV n . As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require ∫ X p(x) log 2(p∗(x)) < ∞. In general, Nk,D is of the order k2cD, for some c > 0. Our bound is likely quite loose in k; in practice, V [ F̂B(P ) ] typically decreases somewhat with k. 8 Conclusions and discussion In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator via the usual decomposition into squared bias and variance: Corollary 8. (MSE Bound) Under the conditions of Theorems 5 and 7, E [( Ĥk(X)−H(X) )2] ≤ C2fL2 ( k n )2β/D + CV n . (9) Choosing k: Contrary to the name, fixing k is not required for “fixed-k” estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave best results for estimating F (P ). However, there has been no theoretical justification for fixing k. 
Assuming tightness of our bias bound in k, we provide this in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improves the asymptotic variance of the estimator, with the rate k log5 n leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test). Acknowledgments This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.
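As a companion illustration for the divergence case, here is a sketch of a fixed-k Kullback–Leibler divergence estimator in the standard Wang–Kulkarni–Verdú form, built from the within-sample distances εk and cross-sample distances δk defined in Section 2; whether this matches the exact entry of Table 1 should be checked against the paper. The Euclidean norm and scikit-learn are additional assumptions, and the Gaussian example (chosen only because its divergence has the closed form 1/2) lies outside the paper's bounded-support setting, so it is purely illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kl_divergence_knn(X, Y, k=1):
    """Fixed-k estimate of KL(P || Q) from X ~ P (shape (n, D)) and Y ~ Q (shape (m, D)),
    built from within-sample k-NN distances eps_k and cross-sample distances delta_k."""
    n, D = X.shape
    m = Y.shape[0]
    # eps_k: k-th NN distance of X_i among the other X's (exclude the point itself).
    eps_k = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[0][:, -1]
    # delta_k: k-th NN distance of X_i among the Y's.
    delta_k = NearestNeighbors(n_neighbors=k).fit(Y).kneighbors(X)[0][:, -1]
    return (D / n) * np.sum(np.log(delta_k / eps_k)) + np.log(m / (n - 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # KL( N(mu1, I) || N(mu2, I) ) = ||mu1 - mu2||^2 / 2 = 0.5 for the means below.
    X = rng.normal(loc=[0.0, 0.0], size=(4000, 2))
    Y = rng.normal(loc=[1.0, 0.0], size=(4000, 2))
    for k in (1, 3, 10):
        print(f"k = {k:2d}   KL_hat = {kl_divergence_knn(X, Y, k): .4f}")
```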
1. What is the focus of the paper regarding the k-Nearest-Neighbor based estimator? 2. What are the strengths of the proposed approach, particularly in deriving bounds on the moments of KNN distances? 3. Do you have any concerns or questions about the assumptions made in the paper? 4. How does the proposed approach compare to other methods in dealing with the bias at the boundary? 5. What are the limitations of the paper regarding the lack of numerical simulations and examples of distributions that fulfill the assumptions? 6. Could you provide more explanations or examples to help understand the conditions for the bias bound and variance bounds?
Review
Review This paper analyzes the k-nearest-neighbor-based estimators proposed for general functionals of densities. The authors derive upper bounds on the bias and variance of the estimator. The approach is based on deriving bounds on the moments of k-NN distances. The benefit of this estimator compared to previous k-NN-based estimators is that it is asymptotically unbiased and consistent for fixed k. This provides some of the first results on the bias and variance of several popular estimators when the bias at the boundary is taken into consideration. The essential assumption for proposing such estimators is knowledge of the bias correction function, which creates some complexities for practical use. This paper provides a new way of analyzing k-NN-based entropy and divergence estimators. In order to analyze the convergence rates, most papers assume that the densities are bounded away from zero. The authors of this paper provide an alternative approach where the densities are locally lower bounded. This approach may be useful in the analysis of other estimators of information measures. However, it is not entirely clear whether the assumptions are practical. See below for more details. 1- The paper could benefit from a little more proofreading. For example, the final sentence on line 273 does not make sense grammatically. 2- The paper does not provide any numerical simulations to validate the derived convergence rates. While this is not required for publication, it would strengthen the paper to show simulations that compare the convergence of the estimator to the theoretical bounds. It would also be nice if the authors could compare the convergence rates for different values of k via numerical experiments. 3- The bias correction functions given as examples in Table 1 are generally for asymptotic settings. For example, the multiplicative constants for Renyi alpha entropy and divergence derived in “Leonenko and Pronzato [2010]” and “Poczos and Schneider [2011]” are such that the estimator is asymptotically unbiased and consistent. But in the finite-sample case, it is not clear that, with these choices of k, relation (3) holds for every n. The authors mention that finding the bias correction function is not addressed in this paper and assume it is given; however, they should at least show that such a function exists for the functionals in Table 1. Basically, the question is under what conditions we can be sure that some bias correction function exists that makes relation (3) true for any choice of n. 4- The authors mention that they replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support, as discussed in the paper. In conjunction with assumption (A2), this assumption appears to be more restrictive than (A1). It would be good for the authors to provide some examples of distributions that fulfill (A4) and (A2) to verify that this approach is practical. 5- A common method for dealing with the bias at the boundary is to assume continuity at the boundary by extending the density beyond the boundary (e.g., Sricharan et al. (2013)). Can the authors comment briefly on a comparison between their method and the extension approach? Is the authors' approach easier? 6- In the bias and variance bounds, it is assumed that C_f < \infty and C_{f,p} < \infty, but what do these conditions mean in terms of the distribution? The authors attempt to explain this by providing a sufficient condition in Lemma 6.
However, the conditions in the lemma are not intuitive. Please provide more details. An example would be helpful.
NIPS
Title An Asymptotically Optimal Batched Algorithm for the Dueling Bandit Problem Abstract We study the K-armed dueling bandit problem, a variation of the traditional multiarmed bandit problem in which feedback is obtained in the form of pairwise comparisons. Previous learning algorithms have focused on the fully adaptive setting, where the algorithm can make updates after every comparison. The “batched” dueling bandit problem is motivated by large-scale applications like web search ranking and recommendation systems, where performing sequential updates may be infeasible. In this work, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the K-armed dueling bandit problem. We obtain asymptotic regret of O(K2 log(K)) +O(K log(T )) in O(log(T )) rounds, where T is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using O(log(T )) rounds achieves almost the same performance as fully sequential algorithms (that use T rounds). N/A for K-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the K-armed dueling bandit problem. We obtain asymptotic regret of O(K2 log2(K)) +O(K log(T )) in O(log(T )) rounds, where T is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using O(log(T )) rounds achieves almost the same performance as fully sequential algorithms (that use T rounds). 1 Introduction The K-armed dueling bandit problem is a variation of the traditional multi-armed bandit problem in which feedback is obtained in the form of pairwise preferences. This problem has applications in a wide-variety of domains like search ranking, recommendation systems and sports ranking where eliciting qualitative feedback is easy while real-valued feedback is not easily interpretable; thus, it has been a popular topic of research in the machine learning community (see, for example, [51, 49, 47, 5, 54, 52, 53, 21, 32, 35, 36, 42, 18]). Previous learning algorithms have focused on a fully adaptive setting; that is, the learning algorithm can make updates in a sequential fashion. Such updates might be impractical in large systems; for example, consider web-search ranking where the goal is to provide a list (usually ranked) of candidate documents to the user of the system in response to a query [41, 33, 50, 31]. Modern day search engines use hundred of parameters to compute a ranked list in response to a query, and online learning frameworks (based on user feedback) have been invaluable in automatically tuning these parameters [38]. However, given the scale of the system, it may be infeasible to adapt after each interaction: users may make multiple queries in a short time or multiple users may simultaneously query the system. Hence, we prefer solutions with limited rounds of adaptivity. Concretely, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits? 
This “batched” dueling bandit problem was introduced recently in [2]. Here, the learning algorithm’s actions are partitioned into a limited number of rounds. In each round/batch, the algorithm commits to a fixed set of pairwise comparisons, and the feedback for all these comparisons is received 36th Conference on Neural Information Processing Systems (NeurIPS 2022). simultaneously. Then, the algorithm uses the feedback from the current batch of comparisons to choose comparisons for the next batch. [2] studied this problem under two different conditions: (i) the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition, which enforces a certain linear ordering over the arms; (ii) the Condorcet condition, which requires one arm to be superior to all others. Under SST+STI, their work provided almost tight upper and lower bounds on the trade-off between number of rounds and regret; in particular, they showed that one can achieve worst-case regret of O(K log2 T ) using ⇥(log T ) rounds (T is the time-horizon).1 Under the Condorcet condition, which is more general than SST+STI, they achieved a regret upper bound of O(K2 log T ) in O(log T ) rounds. Previous work [54, 35] on fully sequential algorithms has shown that it is possible to achieve an asymptotic upper bound of O(K2 +K log T ) under the Condorcet condition. Very recently, [43] improved the sequential regret bound even further by obtaining regret O(K log T ), which is the best possible even in the special case of SST+STI [49]. In the batched setting, the upper bound of [2] does not achieve this asymptotic optimality, irrespective of the number of batches, due to the presence of a K2 multiplicative factor in the regret bound. Their work left open the possibility of obtaining a batched algorithm achieving asymptotic optimality under the Condorcet condition. In this paper, we nearly resolve this question, by providing an algorithm with O(K2 log2 K +K log T ) regret in ⇥(log T ) rounds, under the Condorcet condition. 1.1 Contributions • We design an algorithm, denoted C2B, for the batched dueling bandit problem, and analyze its regret under the Condorcet condition. This algorithm achieves a smooth trade-off between the expected regret and the number of batches, B. • Crucially, when B = log(T ), our regret bounds nearly match the best regret bounds [35, 54] known in the fully sequential setting. Hence, our results show that O(log T ) rounds are sufficient to achieve asymptotically optimal regret as a function of T . • Our results rely on new ideas for showing that the Condorcet winner arm can be ‘trapped’ using few adaptive rounds with high (constant) probability while incurring a reasonable amount of regret. We can then integrate over this space of probabilities to obtain a bound on the expected regret (in the same vein as [54]). Once the Condorcet arm is ‘trapped’, we can quickly eliminate all other ‘sub-optimal’ arms and minimize regret in the process. • Finally, we run computational experiments to validate our theoretical results. We show that C2B, using O(log T ) batches, achieves almost the same performance as fully sequential algorithms (which effectively use T batches) over a variety of real datasets. 1.2 Preliminaries The K-armed dueling bandit problem [49] is an online optimization problem, where the goal is to find the best among K bandits B = {1, . . . ,K} using noisy pairwise comparisons with low regret. 
In each time-step, a noisy comparison between two arms (possibly the same), say (i, j), is performed. The outcome of the comparison is an independent random variable, and the probability of picking i over j is denoted pi,j = 12 + i,j where i,j 2 ( 1 2 , 1 2 ). Here, i,j can be thought of as a measure of distinguishability between the two arms, and we use i j when i,j > 0. We also refer to i,j as the gap between i and j. This problem has been studied under various conditions on the pairwise probabilities pi,j’s. One such condition is the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition where there exists an ordering over arms, denoted by ⌫, such that for every triple i ⌫ j ⌫ k, we have i,k max{ i,j , j,k}, and i,k i,j + j,k [49, 51]. In this paper, we work under the well-studied Condorcet winner condition, which is much more general than the SST+STI condition [47, 54, 35]. We say that arm i is a Condorcet winner if, and only if, pi,j > 12 for all j 2 B \ {i}. The Condorcet condition means that there exists a Condorcet winner. Throughout the paper, we let a⇤ refer to the Condorcet arm. To further simplify notation, we define j = a⇤,j ; that is, the gap between a⇤ and j. We define the regret per time-step as follows: suppose arms it and jt are chosen in time-step t, then the regret r(t) = it+ jt 2 . The cumulative regret up to time T is R(T ) = P T t=1 r(t), where T is the time horizon, and it’s assumed that K T . The 1They also gave a more complicated algorithm with regret O(K log2 K log T ) in O(log T + logK log logK) rounds, under the SST+STI condition. cumulative regret can be equivalently stated as R(T ) = 12 P K j=1 Tj j , where Tj denotes the number comparisons involving arm j. The goal of an algorithm is to minimize the cumulative R(T ). We define min = minj: j>0 j to be the smallest non-zero gap of any arm with a⇤. 1.3 Batch Policies In traditional bandit settings, actions are performed sequentially, utilizing the results of all prior actions in determining the next action. In the batched setting, the algorithm must commit to a round (or batch) of actions to be performed in parallel, and can only observe the results after all actions in the batch have been performed. More formally, given a number B of batches, the algorithm proceeds as follows. In each batch r = 1, 2, . . . B, the algorithm first decides on the comparisons to be performed; then, all outcomes of the batch-r comparisons are received simultaneously. The algorithm can then, adaptively, select the next batch of comparisons. Note that even the size of the next batch can be adaptively decided based on the observations in previous batches. Finally, the total number of comparisons (across all batches) must sum to T . We assume that the values of T and B are known. Observe that when T = B, we recover the fully sequential setting. 1.4 Results and Techniques We provide a overview of our results and prior results in Table 1. Given any integer B 1, we obtain a B-round algorithm for the dueling bandit problem. We provide both high-probability and expected regret bounds, stated in the following theorems. Theorem 1.1. For any integer B 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds with the following guarantee. For any > 0, with probability at least 1 1 T , its regret under the Condorcet condition is at most R(T ) O ✓ T 1/B · K 2 log(K) 2min · log ✓ logK min ◆◆ + O T 2/B ·K2 · r 1 ! + X j 6=a⇤ O ✓ T 1/B · log(KT ) j ◆ . Theorem 1.2. 
For any integer B 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds, with expected regret under the Condorcet condition at most E[R(T )] = O ✓ T 1/B · K 2 log(K) 2min · log ✓ logK min ◆◆ + O ⇣ T 2/B ·K2 ⌘ + X j 6=a⇤ O ✓ T 1/B · log(KT ) j ◆ . When the number of rounds B = log(T ), we obtain a batched algorithm that achieves the asymptotic optimality (in terms of T ), even for sequential algorithms. We formalize this observation in the following corollary. Corollary 1.3. There is an algorithm for the K-armed dueling bandit problem that uses at most log(T ) rounds, with expected regret under the Condorcet condition at most E[R(T )] = O ✓ K 2 log(K) 2min · log ✓ logK min ◆◆ + X j 6=a⇤ O ✓ log(KT ) j ◆ . By a lower-bound result from [2], it follows that no algorithm can achieve O( K min · poly(log T )) regret using o( log Tlog log T ) rounds, even under the SST+STI condition. So, the O(log T ) rounds required to achieve asymptotic optimality in Corollary 1.3 is nearly the best possible. Technical Challenges. The only prior approach for batched dueling bandits (under the Condorcet condition) is the algorithm PCOMP from [2], which performs all-pairs comparisons among arms in an active set. Such an approach cannot achieve regret better than O(K2 log T ) because the active set may remain large throughout. In order to achieve better regret bounds, [2] focus on the stronger SST+STI condition. In this setting, their main idea is to first sample a seed set, and use this seed set to eliminate sub-optimal arms. Their algorithm proceeds by performing all pairwise comparisons between the seed set and the set of active arms. However, the analysis of these ‘seeded comparison’ algorithms crucially rely on the total-ordering imposed by the SST and STI assumptions. Unfortunately, there is no such structure to exploit in the Condorcet setting: if the seed set does not contain the Condorcet winner, we immediately incur high regret. The existing fully sequential algorithms such as RUCB [54] and RMED [35] are highly adaptive in nature. For instance, RUCB plays each candidate arm against an optimistic competitor arm using upper confidence bounds (UCB) on pairwise probabilities. This allows RUCB to quickly filter out candidates and uncover the Condorcet arm. Similarly, RMED plays each arm against a carefully selected competitor arm that is likely to beat this arm. However, such competitors can change frequently over trials in both RUCB and RMED. Since the batched setting requires comparisons to be predetermined, we do not have the flexibility to adapt to such changes in competitors. Hence, these existing fully sequential algorithms cannot be easily implemented in our setting. Furthermore, we might also be tempted to consider an explore-then-exploit strategy where we first explore to find the Condorcet arm and exploit by playing this arm for remaining trials. However, this strategy is likely to fail because identifying the Condorcet arm with high probability might involve performing many comparisons, directly leading to high (⌦(K2 log T )) regret; on the other hand, if the Condorcet winner is not identified with high probability, the exploit phase becomes expensive. This motivated us to consider algorithms that allow some form of recourse; that is, unless an arm is found to be sub-optimal, it must be given the opportunity to participate in the comparisons (as it could be the Condorcet winner). 
The idea behind our algorithm is to identify the Condorcet winner a⇤ in a small expected number of rounds, after which it uses this arm as an “anchor” to eliminate sub-optimal arms while incurring low regret. To identify the best arm, in each round we define a candidate arm and compare it against arms that it “defeats”. Arms that are not defeated by the candidate arm are compared to all active arms: this step ensures that the Condorcet winner is eventually discovered. We show that a⇤ becomes the candidate, and defeats all other arms within a small number of rounds (though the algorithm may not know if this has occurred). Additionally, once this condition is established, it remains invariant in future rounds. This allows us to eliminate sub-optimal arms and achieve low regret. Comparison to RUCB. Initially, RUCB puts all arms in a pool of potential champions, and “optimistically” (using a upper confidence bound) performs all pairwise comparisons. Using these, it constructs a set of candidates C. If |C|= 1, then that arm is the hypothesised Condorcet winner and placed in a set B. Then, a randomized strategy is employed to choose a champion arm ac (from sets C and B) which is compared to arm ad which is most likely to beat it. The pair (ac, ad) is compared, the probabilities are updated and the algorithm continues. Although our algorithm also seeks to identify the best arm, we do not employ the UCB approach nor do we use any randomness. In their analysis, [54] show that the best arm eventually enters the set B, and remains in B: we also show a similar property for our algorithm in the analysis. Finally, similar to the analysis of [54], we first give a high-probability regret bound for our algorithm which we then convert to a bound on the expected regret. 2 Related Work The K-armed dueling bandit problem has been widely studied in recent years (we refer the reader to [46] for a comprehensive survey). Here, we survey the works that are most closely related to our setting. This problem was first studied in [49] under the SST and STI setting. The authors obtained a worst-case regret upper bound of eO(K log T/ min) and provided a matching lower bound. [51] considered a slightly more general version of the SST and STI setting and achieved an instance-wise optimal regret upper bound of P j: j>0 O (log(T )/ j). Since, the SST+STI condition imposes a total order over the arms and might not hold for real-world datasets, [47] initiated the study of dueling bandits under the Condorcet winner condition. [47] proved a O(K2 log T/ min) regret upper bound under the Condorcet condition, which was improved by [54] to O(K2/ 2min) +P j: j>0 O(log T/ 2 j ). [35] achieved a similar but tighter KL divergence-based bound, which is shown to be asymptotically instance-wise optimal (even in terms constant factors). There are also other works that improve the dependence on K in the upper bound, but suffer a worse dependence on j’s [53]. This problem has also been studied under other noise models such as utility based models [5] and other notions of regret [18]. Alternate notions of winners such as Borda winner [32], Copeland winner [52, 36, 48], and von Neumann winner [21] have also been considered. There are also several works on extensions of dueling bandits that allow multiple arms to be compared at once [45, 3, 44]. All of the aforementioned works on the dueling bandits problem are limited to the sequential setting. Recently, [2] initiated the study of the batched version of the K-armed dueling bandits. 
Their main results are under the SST and STI setting. They give two algorithms, called SCOMP and SCOMP2, for the batched K-armed dueling bandit problem. For any integer B, SCOMP uses at most B + 1 batches and has expected regret bounded by P j: j>0 O( p KT 1/B log(T )/ j). When B = log(T ), this nearly matches (up to a factor of p K) the best known instance-dependent regret bound ofP j: j>0 O(log(T )/ j) obtained by [49]. SCOMP2 aims to achieve better worst-case regret: it uses at most 2B + 1 batches, and has regret O KBT 1/B log(T )/ min . Thus, when B = log(T ), the expected worst-case regret is O K log2(T )/ min , matching the best known result in the sequential setting up to an additional logarithmic factor. Under the Condorcet condition, [2] give a straightforward pairwise comparison algorithm (PCOMP), that achieves expected regret bounded by O(K2 log(T )/ min) in log(T ) batches. They also provide a nearly matching lower bound of ⌦( KT 1/B B2 min ) for any B-batched algorithm. This implies that our bound (for B-round algorithms) in Theorem 1.2 cannot be significantly improved. Recently, [43] designed a fully adaptive algorithm achieving an optimal regret of P j: j>0 O(log T ) j for dueling bandits under the Condorcet setting. This algorithm is based on the idea of dueling two classical bandit (MAB) algorithms against each other in a repeated zero-sum game with carefully designed rewards. The reward for one algorithm depends on the actions of the other; hence, these algorithms need to achieve best-of-both-worlds guarantee for both stochastic and adversarial settings. However, the approach of [43] is not directly applicable to the batched setting that we consider. This is because, as shown by [23], any B-round algorithm for batched MAB in the adversarial setting has regret ⌦(T/B). There has also been substantial work on best-arm or top-k identification using pairwise comparisons with limited adaptivity. [15, 14, 19] considered this problem under the noisy pairwise comparison setting, which is a special case of SST+STI. They showed that constant number of rounds of adaptivity are sufficient to solve these problem with the optimal sample complexity. [4] showed that one can also solve this problem under SST in constant number of rounds with the optimal sample complexity. However, these existing results focus on the SST setting, whereas we focus on the more general Condorcet winner setting. Moreover, these existing works focus on sample complexity for best-arm identification whereas our goal is regret minimization. 3 The Batched Algorithm In this section, we describe a B-round algorithm for the K-armed dueling bandit problem under the Condorcet condition. Recall that given a set of K arms, B = {1, . . . ,K}, and a positive integer B log(T ), we wish to find a sequence of B batches of noisy comparisons with low regret. Given arms i and j, recall that pi,j = 12 + i,j denotes the probability of i winning over j where i,j 2 ( 1/2, 1/2). We use a⇤ to denote the Condorcet winner; recall that a⇤ is a Condorcet winner if pa⇤,j 1/2 for all j 2 B. To simplify notation, we use j = a⇤,j . Before describing our algorithm, we first define some notation. We use A to denote the current set of active arms; i.e., the arms that have not been eliminated. We will use index r for rounds or batches. If pair (i, j) is compared in round r, it is compared qr = bqrc times where q = T 1/B . 
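To fix the notation introduced so far, the following sketch sets up a toy simulation environment: a preference matrix admitting a Condorcet winner, a Bernoulli comparison oracle, regret accounting in which each comparison of arms (i, j) contributes half the sum of their gaps to the Condorcet winner, and the geometric per-round budgets q_r = ⌊q^r⌋ with q = T^(1/B) just described. The specific preference matrix, horizon, and number of rounds are illustrative assumptions; this is not the experimental setup used in the paper.

```python
import numpy as np

class DuelingBanditEnv:
    """Toy pairwise-comparison oracle with regret accounting.

    P[i, j] is the probability that arm i wins a duel against arm j
    (so P[i, j] + P[j, i] = 1 and P[i, i] = 1/2). Arm 0 is assumed to be
    the Condorcet winner, i.e. P[0, j] > 1/2 for every j != 0.
    """

    def __init__(self, P, seed=0):
        self.P = np.asarray(P, dtype=float)
        self.K = self.P.shape[0]
        self.gaps = self.P[0] - 0.5           # gap of arm j to the Condorcet winner
        self.cum_regret = 0.0
        self.rng = np.random.default_rng(seed)

    def duel(self, i, j):
        """Play one comparison of (i, j); per-step regret is half the sum of the two gaps."""
        self.cum_regret += (self.gaps[i] + self.gaps[j]) / 2.0
        return self.rng.random() < self.P[i, j]    # True iff arm i wins

def batch_sizes(T, B):
    """Per-round comparison budgets q_r = floor(q**r), r = 1..B, with q = T**(1/B)."""
    q = T ** (1.0 / B)
    return [int(np.floor(q ** r)) for r in range(1, B + 1)]

if __name__ == "__main__":
    # Small Condorcet instance (not from the paper): arm 0 beats every other arm,
    # while arms 1, 2, 3 form a cycle, so no total order holds below arm 0.
    P = np.array([[0.5, 0.6, 0.7, 0.8],
                  [0.4, 0.5, 0.6, 0.3],
                  [0.3, 0.4, 0.5, 0.6],
                  [0.2, 0.7, 0.4, 0.5]])
    env = DuelingBanditEnv(P)
    print("batch budgets:", batch_sizes(T=10_000, B=8))
    print("one duel of (0, 3), arm 0 wins:", env.duel(0, 3), " regret so far:", env.cum_regret)
```

The instance deliberately contains a cycle among the suboptimal arms, so a Condorcet winner exists even though no total order of the kind imposed by SST does, matching the generality of the setting studied here.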
We define the following quantities at the end of each round r: • Ni,j(r) is the total number of times the pair (i, j) has been compared. • bpi,j(r) is the frequentist estimate of pi,j , i.e., bpi,j(r) = # i wins against j until end of round r Ni,j(r) . (1) • Two confidence-interval radii for each (i, j) pair: ci,j(r) = s 2 log(2K2qr) Ni,j(r) and i,j(r) = s log(K2BT ) 2Ni,j(r) (2) We now describe our B-round algorithm, called CATCHING THE CONDORCET WINNER IN BATCHES (or, C2B). At a high-level, the algorithm identifies the best arm a⇤ in a small expected number of rounds, after which it uses this arm as an “anchor” to eliminate sub-optimal arms while incurring low regret. In every round r, we do the following: 1. We define a defeated set Dr(i) for every active arm i; this set comprises arms that are defeated with confidence by i. Specifically, j 2 Dr(i) if bpi,j(r 1) > 1/2 + ci,j(r 1). 2. Then, we define a candidate ir as the arm that defeats the most number of arms; that is, ir = argmaxi2A|Dr(i)|. 3. For every arm i 6= ir: • If i 2 Dr(ir), then we compare i to ir for qr times. The idea here is to use ir as an anchor against i. We will show that a⇤ becomes the candidate ir in a small number of rounds. Then, this step ensures that we eliminate arms efficiently using a⇤ as an anchor. • If i /2 Dr(ir), then i is compared to all arms in A for qr times. This step crucially protects the algorithm against cases where a sub-optimal arm becomes the candidate (and continues to become the candidate). For example, suppose K = [5] and the arms are linearly ordered as 1 2 · · · 5. Furthermore suppose that in some round r, we have that (a) 2 defeats 3, 4, 5 and (b) 1 (best arm) defeats 2 but not the others. So, 2 is the candidate in round r; if 1 is not compared to 3, 4, 5, then 2 would continue to be the candidate (leading to high regret). 4. If, for any arm j, there is arm i such that bpi,j(r) > 12 + i,j(r), then j is eliminated from A. This continues until T total comparisons are performed. See Algorithm 1 for a formal description. The main result of this section is to show that C2B achieves the guarantees stated in Theorems 1.1 and 1.2. Overview of the Analysis. We provide a brief outline of the proofs of our main results. Towards proving Theorem 1.1, we first define two events: • The first event, denoted G, ensures that a⇤ is not eliminated during the execution of C2B. We show that P(G) 1 1/T . • The second event, denoted E( ), says that there exists a round C( ) (defined later) such that for all r > C( ), the estimate bpi,j(r 1) satisfies the confidence interval of ci,j(r 1). Moreover, P(E( )) 1 . By union bound, P(G \ E( )) 1 1/T . Together, we use G and E( ) to argue that: Algorithm 1 C2B (CATCHING THE CONDORCET WINNER IN BATCHES) 1: Input: Arms B, time-horizon T , integer B 1 2: active arms A B, r 1, emprical probabilities bpi,j(0) = 12 for all i, j 2 B 2 3: while number of comparisons T do 4: if A = {i} for some i then play (i, i) for remaining trials 5: Dr(i) {j 2 A : bpi,j(r 1) > 12 + ci,j(r 1)} 6: ir argmaxi2A|Dr(i)| 7: for i 2 A \ {ir} do 8: if i 2 Dr(ir) then 9: compare (ir, i) for qr times 10: else 11: for each j 2 A, compare (i, j) for qr times 12: compute bpi,j(r) values 13: if 9i, j : bpi,j(r) > 12 + i,j(r) then 14: A A \ {j} 15: r r + 1 • the best arm, a⇤, is not defeated by any arm i in any round r > C( ) (formalized in Lemma 3.5), • and that there exists a round r( ) C( ) such that for every round after r( ), arm a⇤ defeats every other arm (formalized in Lemma 3.7). 
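These two observations drive the rest of the analysis; before returning to them, the round structure just described (steps 1-4 and Algorithm 1) can be summarised in code. The sketch below is our own illustrative rendering, not the authors' implementation: the names (`c2b_round`, `duel`, `N`, `wins`) are ours, the duel outcomes are assumed to come from some environment, and the bookkeeping against the total budget T as well as the single-arm corner case are omitted.

```python
import math

def c2b_round(r, active, N, wins, duel, T, B, K):
    """One round of C2B (illustrative sketch, not the authors' code).

    active      : set of arms still in play
    N[(i, j)]   : comparisons of the pair so far (kept symmetric), e.g. defaultdict(int)
    wins[(i,j)] : wins of i over j, e.g. defaultdict(int)
    duel(i, j)  : environment call returning True iff i beats j
    """
    q = T ** (1.0 / B)
    q_r = math.floor(q ** r)

    def p_hat(i, j):
        return wins[(i, j)] / N[(i, j)] if N[(i, j)] > 0 else 0.5

    def c(i, j):  # radius c_{i,j}(r-1) = sqrt(2 log(2 K^2 q_{r-1}) / N_{i,j})
        n = N[(i, j)]
        q_prev = max(1, math.floor(q ** (r - 1)))
        return math.inf if n == 0 else math.sqrt(2 * math.log(2 * K * K * q_prev) / n)

    def gamma(i, j):  # radius gamma_{i,j}(r) = sqrt(log(K^2 B T) / (2 N_{i,j}))
        n = N[(i, j)]
        return math.inf if n == 0 else math.sqrt(math.log(K * K * B * T) / (2 * n))

    # Steps 1-2: defeated sets and the candidate arm.
    D = {i: {j for j in active if j != i and p_hat(i, j) > 0.5 + c(i, j)} for i in active}
    candidate = max(active, key=lambda i: len(D[i]))

    # Step 3: schedule this round's comparisons.
    for i in (a for a in active if a != candidate):
        opponents = [candidate] if i in D[candidate] else [j for j in active if j != i]
        for j in opponents:
            for _ in range(q_r):
                i_wins = duel(i, j)
                N[(i, j)] += 1; N[(j, i)] += 1
                wins[(i, j)] += int(i_wins); wins[(j, i)] += int(not i_wins)

    # Step 4: eliminate any arm beaten with confidence gamma.
    beaten = {j for j in active for i in active
              if i != j and p_hat(i, j) > 0.5 + gamma(i, j)}
    return active - beaten
```

A driver would call this function for r = 1, 2, ..., with `N` and `wins` initialised as `collections.defaultdict(int)` counters, stopping once the total of T comparisons is exhausted.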
Intuitively, these observations imply that our algorithm identifies the best arm after r( ) rounds. Thus, beyond round r( ), we only perform pairwise comparisons of the form (a⇤, i) for i 6= a⇤: thus, a⇤ is used as an anchor to eliminate sub-optimal arms. We then analyze the regret in two parts: (i) regret incurred up to round r( ), which is upper bounded by K2 P rr( ) q r and (ii) regret after r( ), which is the regret incurred in eliminating sub-optimal arms using a⇤ as an anchor. Finally, we can use the high-probability bound to also obtain a bound on the expected regret, proving Theorem 1.2. We provide some details of the proof of Theorem 1.1. We defer the proof of Theorem 1.2 to Appendix D. 3.1 The Analysis In this section, we give high-probability and expected regret bounds for C2B. Recall that q = T 1/B , and that q 2. The following lemma is used to prove that a⇤ is never eliminated. We defer the proofs of the Lemmas 3.1, 3.2 to Appendix B. Lemma 3.1. For any batch r 2 [B], and for any pair (i, j), we have P (|bpi,j(r) pi,j |> i,j(r)) 2⌘, where ⌘ = 1/K2BT . We first define the good event G as follows. Definition 3.1 (Event G). An estimate bpi,j(r) at the end of batch r is strongly-correct if |bpi,j(r) pi,j | i,j(r). We say that event G occurs if every estimate in every batch r 2 [B] is strongly-correct. The following two lemmas show that G occurs with high probability and that a⇤ is not eliminated under G. Lemma 3.2. The probability that every estimate in every batch of C2B is strongly-correct is at least 1 1/T . Lemma 3.3. Conditioned on G, a⇤ is never eliminated from A in the elimination step of C2B. Proof. In C2B, an arm j is deleted in batch r iff there is an arm i 2 A with bpi,j(r) > 12+ i,j(r). If a ⇤ is eliminated due to some arm j, then by definition of event G, we get pj,a⇤ bpi,j(r) i,j(r) > 12 , a contradiction. 3.1.1 High-probability Regret Bound In this section, we give details required to prove Theorem 1.1. Fix any > 0. We first define another good event as follows. Definition 3.2 (Event E( )). An estimate bpi,j(r) in batch r is weakly-correct if |bpi,j(r) pi,j | ci,j(r). Let C( ) := d 12 logq(1/ )e. We say that event E( ) occurs if for each batch r C( ), every estimate is weakly-correct. The next lemma shows that E( ) occurs with probability at least 1 . Lemma 3.4. For all > 0, we have P(¬E( )) = P (9r C( ), i, j : |bpi,j(r) pi,j |> ci,j(r)) . We will analyze our algorithm under both events G and E( ). Note that event G is required to ensure that a⇤ is not eliminated in rounds before C( ) (where the Lemma 3.4 does not apply). Lemma 3.5. Conditioned on G and E( ), for any round r > C( ), arm a⇤ is not defeated by any other arm, i.e., a ⇤ /2 [i 6=a⇤Dr(i). To proceed, we need the following definitions. Definition 3.3. The candidate ir of round r is called the champion if |Dr(ir)|= |A| 1; that is, if ir defeats every other active arm. Definition 3.4. Let r( ) C( ) + 1 be the smallest integer such that q r( ) 2A logA, where A := 32 2min · log(2K2). We use the following inequality based on this choice of r( ). Lemma 3.6. The above choice of r( ) satisfies q r > 8 2min · log 2K2qr , 8r r( ). Proof of Lemma 3.6. Using the fact that qr qr, it suffices to show qr 8 2min · log(2K2) + log qr . Moreover, log(2K2) + log qr 1 + log(2K2) · (1 + log qr) 4 · log(2K2) · log qr, where the last inequality uses K 2, r r( ) 1 and q 2. So, it suffices to show: q r > A · log(qr), 8r r( ), where A = 32 2min · log(2K2) (3) Below, let x = qr, R := 2A logA and function f(x) := x A log x. 
We will show that f(x) > 0 for all x R, which would imply (3) because qr( ) R. As R A, and f is increasing for x A, it suffices to show that f(R) 0. Indeed, f(R) A = 2 logA log(2A logA) = logA log(2 logA) > 0, where the inequality uses A 8. Then, we have the following. Lemma 3.7. Conditioned on G and E( ), the best arm a⇤ is the champion in every round r > r( ). We now have all components required to prove Theorem 1.1; its proof, and the proofs of the aforementioned lemmas can be found in Appendix C. 4 Computational Results In this section, we provide details of our computational experiments. The goal of our experiments is to answer the following questions: (i) How does the regret of C2B using B = blog(T )c batches compare to that of existing fully sequential as well as batched algorithms? and (ii) Can the regret of C2B match the regret of the best known sequential algorithms; if yes, then how many rounds suffice to achieve this? Towards answering (i), we compare C2B to a representative set of sequential algorithms for dueling bandits using the library due to [35]. We compare C2B to the sequential algorithms RUCB [54], RMED [35], and BEAT-THE-MEAN (BTM) [51]. We allow these algorithms to work as prescribed; that is, they work in B = T batches. The reason that we chose these sequential algorithms is that our batched algorithm (C2B) is based on a similar paradigm, and such a comparison demonstrates the power of adaptivity in this context. We also compare C2B to the batched algorithm SCOMP2 [2]. We plot the cumulative regret R(t) incurred by the algorithms against time t. We set B = blog(T )c for C2B and SCOMP2 in this experiment. For (ii), we increased B by a small amount; we found that the performance of C2B improves noticeably when given a constant number of additional rounds (we use B = blog(T )c+ 6 in this experiment). We perform these experiments using the following real-world datasets. Six rankers. This dataset is based on the 6 retrieval functions used in the engine of ArXiv.org. Sushi. The Sushi dataset is based on the Sushi preference dataset [34] that contains the preference data regarding 100 types of Sushi. A preference dataset using the top-16 most popular types of sushi is obtained. Irish election data. The Irish election data for Dublin and Meath is available at preflib.org. It contains partial preference orders over candidates. As in [3], these are transformed into preference matrices by selecting a subset of candiates to ensure that a Condorcet winner exists. There are 12 candidates in the Irish-Meath dataset, and 8 in the Irish-Dublin dataset. MSLR and Yahoo! data. We also run experiments on two web search ranking datasets: the Microsoft Learning to Rank (MSLR) dataset [40] and the Yahoo! Learning to Rank Challenge Set 1 [16]. These datasets have been used in prior work on online ranker evaluation [53, 37]. We use preference matrices generated using the “navigational” configuration (see [37] for details). The MSLR dataset has 136 rankers and the Yahoo! dataset has 700 rankers. We sample 30 rankers from each dataset while ensuring the existence of a Condorcet winner. In this way, we obtain two datasets, denoted MSLR30 and Yahoo30. Note that there exists a Condorcet winner in all datasets. We repeat each experiment 20 times and report the average regret. In our algorithm, we use the KL-divergence based confidence bound due to [35] for elimination as it performs much better empirically, and our theoretical bounds continue to hold (see §E). 
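For concreteness, here is a rough sketch of how such a KL-divergence-based elimination test can be computed. This is our own illustrative code, not the paper's: the containers `p_hat` and `N` are hypothetical, and the default f(K) = 0.3·K^1.01 matches the value used in the experiments reported below; the exact criterion used by the paper is stated in the next paragraph.

```python
import math

def d_kl(p, q):
    """Binary KL divergence D_KL(p || q), with p clamped away from 0 and 1."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_eliminate(active, p_hat, N, T, K, f=lambda k: 0.3 * k ** 1.01):
    """Arms to drop under a KL-divergence-based elimination test.

    p_hat[(i, j)] : empirical probability that i beats j
    N[(i, j)]     : number of comparisons of the pair (i, j)
    Arm i is dropped when its empirical divergence I_i exceeds the minimum
    divergence over arms by more than log(T) + f(K).
    """
    def I(i):
        return sum(N[(i, j)] * d_kl(p_hat[(i, j)], 0.5)
                   for j in active if j != i and p_hat[(i, j)] < 0.5)

    scores = {i: I(i) for i in active}
    threshold = min(scores.values()) + math.log(T) + f(K)
    return {i for i in active if scores[i] > threshold}
```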
Formally, this KL-divergence-based elimination criterion eliminates an arm $i$ in round $r$ if $I_i(r) - I^*(r) > \log(T) + f(K)$, where $I_i(r) = \sum_{j:\, \hat p_{i,j}(r) < 1/2} N_{i,j}(r) \cdot D_{\mathrm{KL}}(\hat p_{i,j}(r), 1/2)$ and $I^*(r) = \min_{j \in [K]} I_j(r)$.

Computational Results. As mentioned earlier, we compare our algorithms against a representative set of sequential dueling bandit algorithms (RUCB, RMED, and BTM). We set $\alpha = 0.51$ for RUCB, $f(K) = 0.3K^{1.01}$ for RMED and C2B, and set BTM's parameter to $1.3$: these parameters are known to perform well both theoretically and empirically [35]. We set $T = 10^6$ for the MSLR30 and Yahoo30 datasets (as they have a larger number of arms), and $T = 10^5$ for the remaining four. For the first set of experiments, we set $B = \lfloor \log(T) \rfloor$. We observe that C2B always outperforms BTM and beats SCOMP2 on most of the datasets. We observe that even when SCOMP2 beats C2B, it has a slightly linear regret curve (implying that its regret would keep increasing as $T$ increases), while the regret curve of C2B is mostly flat. Furthermore, C2B performs comparably to RUCB on all datasets except Yahoo30. We plot the results in Figure 1. In the second set of experiments, we set $B = \lfloor \log(T) \rfloor + 6$. We observe that C2B always outperforms RUCB and, in fact, performs comparably to RMED on all datasets except Yahoo30. We plot the results in Figure 2. Finally, we note that SCOMP2 exhibits varying performance across runs (even on the same dataset); we think this is due to the randomness involved in selecting the “seed set”.

5 Conclusion

In this paper, we proposed a batched algorithm, named C2B, for the K-armed dueling bandit problem. Assuming the existence of a Condorcet winner, we show both high-probability and expected regret bounds for C2B that trade off smoothly with the number of batches. Furthermore, we obtain asymptotic regret of $O(K^2 \log^2(K)) + O(K \log(T))$ in $O(\log(T))$ batches, nearly matching the best regret bounds known in the fully sequential setting under the Condorcet condition. Our computational results show that C2B, using $O(\log(T))$ batches, achieves almost the same performance as fully sequential algorithms over a variety of real-world datasets. A direction for future research is to design batched algorithms for the K-armed dueling bandit problem when a Condorcet winner does not exist; for example, designing an algorithm for a more general concept of winner, such as the Copeland winner [48] or the von Neumann winner [22].
1. What is the focus and contribution of the paper regarding the batched dueling bandit problem?
2. What are the strengths of the proposed algorithm, particularly its ability to achieve good performance with a small number of batches?
3. What are the weaknesses of the paper, such as the gap between the derived regret bound and the lower bound, and the lack of adaptation in the comparison process?
4. Do you have any concerns or suggestions regarding the paper's methodology or potential improvements?
5. Are there any limitations or potential negative societal impacts associated with the paper's contributions?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors study the batched dueling bandit problem under the Condorcet condition. They design an algorithm named C2B with a few adaptive rounds, and derive an upper bound on the asymptotic regret. Computational experiments are carried out on a number of real-world datasets.

Strengths And Weaknesses
Strengths: The authors show that the proposed algorithm can achieve good performance with a number of batches as small as $O(\log T)$, which seems an interesting and new result. The algorithm's use of few adaptive rounds might accommodate modern large-scale applications. Overall, the paper is well-organized and easy to follow. The paper also provides detailed background.
Weaknesses: Please find my questions/concerns in the next part.

Questions
The authors claim that their method can nearly achieve the best rate for regret (lines 12 and 115). Meanwhile, they mention a lower bound of $O\big(\frac{K}{\Delta_{\min}} \cdot \mathrm{poly}(\log T)\big)$ (line 113), while the regret bound they derive is $\tilde O\big(\frac{K^2}{\Delta_{\min}^2}\big)$. It seems a gap exists here, especially when $K$ is large or $\Delta_{\min}$ is small. Is it possible to diminish the gap by trading off the number of batches against performance?
In the proposed algorithm, whenever a pair $(i, j)$ is compared in round $r$, it is always compared $q_r = \lfloor q^r \rfloor$ times. Would it be useful to adaptively assign the number of comparisons in each round based on previous comparison results, in order to improve performance? Intuitively, we might want to compare a pair more times if it is harder to tell which arm is better.

Limitations
The authors discuss the limitations of their work to some extent. There seems to be no negative societal impact.
NIPS
Title An Asymptotically Optimal Batched Algorithm for the Dueling Bandit Problem

Abstract We study the K-armed dueling bandit problem, a variation of the traditional multi-armed bandit problem in which feedback is obtained in the form of pairwise comparisons. Previous learning algorithms have focused on the fully adaptive setting, where the algorithm can make updates after every comparison. The “batched” dueling bandit problem is motivated by large-scale applications like web search ranking and recommendation systems, where performing sequential updates may be infeasible. In this work, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the K-armed dueling bandit problem. We obtain asymptotic regret of $O(K^2 \log^2(K)) + O(K \log(T))$ in $O(\log(T))$ rounds, where $T$ is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using $O(\log(T))$ rounds achieves almost the same performance as fully sequential algorithms (that use T rounds).

1 Introduction

The K-armed dueling bandit problem is a variation of the traditional multi-armed bandit problem in which feedback is obtained in the form of pairwise preferences. This problem has applications in a wide variety of domains like search ranking, recommendation systems and sports ranking, where eliciting qualitative feedback is easy while real-valued feedback is not easily interpretable; thus, it has been a popular topic of research in the machine learning community (see, for example, [51, 49, 47, 5, 54, 52, 53, 21, 32, 35, 36, 42, 18]). Previous learning algorithms have focused on a fully adaptive setting; that is, the learning algorithm can make updates in a sequential fashion. Such updates might be impractical in large systems; for example, consider web-search ranking, where the goal is to provide a list (usually ranked) of candidate documents to the user of the system in response to a query [41, 33, 50, 31]. Modern-day search engines use hundreds of parameters to compute a ranked list in response to a query, and online learning frameworks (based on user feedback) have been invaluable in automatically tuning these parameters [38]. However, given the scale of the system, it may be infeasible to adapt after each interaction: users may make multiple queries in a short time, or multiple users may simultaneously query the system. Hence, we prefer solutions with limited rounds of adaptivity. Concretely, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits?
This “batched” dueling bandit problem was introduced recently in [2]. Here, the learning algorithm’s actions are partitioned into a limited number of rounds. In each round/batch, the algorithm commits to a fixed set of pairwise comparisons, and the feedback for all these comparisons is received 36th Conference on Neural Information Processing Systems (NeurIPS 2022). simultaneously. Then, the algorithm uses the feedback from the current batch of comparisons to choose comparisons for the next batch. [2] studied this problem under two different conditions: (i) the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition, which enforces a certain linear ordering over the arms; (ii) the Condorcet condition, which requires one arm to be superior to all others. Under SST+STI, their work provided almost tight upper and lower bounds on the trade-off between number of rounds and regret; in particular, they showed that one can achieve worst-case regret of O(K log2 T ) using ⇥(log T ) rounds (T is the time-horizon).1 Under the Condorcet condition, which is more general than SST+STI, they achieved a regret upper bound of O(K2 log T ) in O(log T ) rounds. Previous work [54, 35] on fully sequential algorithms has shown that it is possible to achieve an asymptotic upper bound of O(K2 +K log T ) under the Condorcet condition. Very recently, [43] improved the sequential regret bound even further by obtaining regret O(K log T ), which is the best possible even in the special case of SST+STI [49]. In the batched setting, the upper bound of [2] does not achieve this asymptotic optimality, irrespective of the number of batches, due to the presence of a K2 multiplicative factor in the regret bound. Their work left open the possibility of obtaining a batched algorithm achieving asymptotic optimality under the Condorcet condition. In this paper, we nearly resolve this question, by providing an algorithm with O(K2 log2 K +K log T ) regret in ⇥(log T ) rounds, under the Condorcet condition. 1.1 Contributions • We design an algorithm, denoted C2B, for the batched dueling bandit problem, and analyze its regret under the Condorcet condition. This algorithm achieves a smooth trade-off between the expected regret and the number of batches, B. • Crucially, when B = log(T ), our regret bounds nearly match the best regret bounds [35, 54] known in the fully sequential setting. Hence, our results show that O(log T ) rounds are sufficient to achieve asymptotically optimal regret as a function of T . • Our results rely on new ideas for showing that the Condorcet winner arm can be ‘trapped’ using few adaptive rounds with high (constant) probability while incurring a reasonable amount of regret. We can then integrate over this space of probabilities to obtain a bound on the expected regret (in the same vein as [54]). Once the Condorcet arm is ‘trapped’, we can quickly eliminate all other ‘sub-optimal’ arms and minimize regret in the process. • Finally, we run computational experiments to validate our theoretical results. We show that C2B, using O(log T ) batches, achieves almost the same performance as fully sequential algorithms (which effectively use T batches) over a variety of real datasets. 1.2 Preliminaries The K-armed dueling bandit problem [49] is an online optimization problem, where the goal is to find the best among K bandits B = {1, . . . ,K} using noisy pairwise comparisons with low regret. 
In each time-step, a noisy comparison between two arms (possibly the same), say (i, j), is performed. The outcome of the comparison is an independent random variable, and the probability of picking i over j is denoted pi,j = 12 + i,j where i,j 2 ( 1 2 , 1 2 ). Here, i,j can be thought of as a measure of distinguishability between the two arms, and we use i j when i,j > 0. We also refer to i,j as the gap between i and j. This problem has been studied under various conditions on the pairwise probabilities pi,j’s. One such condition is the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition where there exists an ordering over arms, denoted by ⌫, such that for every triple i ⌫ j ⌫ k, we have i,k max{ i,j , j,k}, and i,k i,j + j,k [49, 51]. In this paper, we work under the well-studied Condorcet winner condition, which is much more general than the SST+STI condition [47, 54, 35]. We say that arm i is a Condorcet winner if, and only if, pi,j > 12 for all j 2 B \ {i}. The Condorcet condition means that there exists a Condorcet winner. Throughout the paper, we let a⇤ refer to the Condorcet arm. To further simplify notation, we define j = a⇤,j ; that is, the gap between a⇤ and j. We define the regret per time-step as follows: suppose arms it and jt are chosen in time-step t, then the regret r(t) = it+ jt 2 . The cumulative regret up to time T is R(T ) = P T t=1 r(t), where T is the time horizon, and it’s assumed that K T . The 1They also gave a more complicated algorithm with regret O(K log2 K log T ) in O(log T + logK log logK) rounds, under the SST+STI condition. cumulative regret can be equivalently stated as R(T ) = 12 P K j=1 Tj j , where Tj denotes the number comparisons involving arm j. The goal of an algorithm is to minimize the cumulative R(T ). We define min = minj: j>0 j to be the smallest non-zero gap of any arm with a⇤. 1.3 Batch Policies In traditional bandit settings, actions are performed sequentially, utilizing the results of all prior actions in determining the next action. In the batched setting, the algorithm must commit to a round (or batch) of actions to be performed in parallel, and can only observe the results after all actions in the batch have been performed. More formally, given a number B of batches, the algorithm proceeds as follows. In each batch r = 1, 2, . . . B, the algorithm first decides on the comparisons to be performed; then, all outcomes of the batch-r comparisons are received simultaneously. The algorithm can then, adaptively, select the next batch of comparisons. Note that even the size of the next batch can be adaptively decided based on the observations in previous batches. Finally, the total number of comparisons (across all batches) must sum to T . We assume that the values of T and B are known. Observe that when T = B, we recover the fully sequential setting. 1.4 Results and Techniques We provide a overview of our results and prior results in Table 1. Given any integer B 1, we obtain a B-round algorithm for the dueling bandit problem. We provide both high-probability and expected regret bounds, stated in the following theorems. Theorem 1.1. For any integer B 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds with the following guarantee. For any > 0, with probability at least 1 1 T , its regret under the Condorcet condition is at most R(T ) O ✓ T 1/B · K 2 log(K) 2min · log ✓ logK min ◆◆ + O T 2/B ·K2 · r 1 ! + X j 6=a⇤ O ✓ T 1/B · log(KT ) j ◆ . Theorem 1.2. 
For any integer B 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds, with expected regret under the Condorcet condition at most E[R(T )] = O ✓ T 1/B · K 2 log(K) 2min · log ✓ logK min ◆◆ + O ⇣ T 2/B ·K2 ⌘ + X j 6=a⇤ O ✓ T 1/B · log(KT ) j ◆ . When the number of rounds B = log(T ), we obtain a batched algorithm that achieves the asymptotic optimality (in terms of T ), even for sequential algorithms. We formalize this observation in the following corollary. Corollary 1.3. There is an algorithm for the K-armed dueling bandit problem that uses at most log(T ) rounds, with expected regret under the Condorcet condition at most E[R(T )] = O ✓ K 2 log(K) 2min · log ✓ logK min ◆◆ + X j 6=a⇤ O ✓ log(KT ) j ◆ . By a lower-bound result from [2], it follows that no algorithm can achieve O( K min · poly(log T )) regret using o( log Tlog log T ) rounds, even under the SST+STI condition. So, the O(log T ) rounds required to achieve asymptotic optimality in Corollary 1.3 is nearly the best possible. Technical Challenges. The only prior approach for batched dueling bandits (under the Condorcet condition) is the algorithm PCOMP from [2], which performs all-pairs comparisons among arms in an active set. Such an approach cannot achieve regret better than O(K2 log T ) because the active set may remain large throughout. In order to achieve better regret bounds, [2] focus on the stronger SST+STI condition. In this setting, their main idea is to first sample a seed set, and use this seed set to eliminate sub-optimal arms. Their algorithm proceeds by performing all pairwise comparisons between the seed set and the set of active arms. However, the analysis of these ‘seeded comparison’ algorithms crucially rely on the total-ordering imposed by the SST and STI assumptions. Unfortunately, there is no such structure to exploit in the Condorcet setting: if the seed set does not contain the Condorcet winner, we immediately incur high regret. The existing fully sequential algorithms such as RUCB [54] and RMED [35] are highly adaptive in nature. For instance, RUCB plays each candidate arm against an optimistic competitor arm using upper confidence bounds (UCB) on pairwise probabilities. This allows RUCB to quickly filter out candidates and uncover the Condorcet arm. Similarly, RMED plays each arm against a carefully selected competitor arm that is likely to beat this arm. However, such competitors can change frequently over trials in both RUCB and RMED. Since the batched setting requires comparisons to be predetermined, we do not have the flexibility to adapt to such changes in competitors. Hence, these existing fully sequential algorithms cannot be easily implemented in our setting. Furthermore, we might also be tempted to consider an explore-then-exploit strategy where we first explore to find the Condorcet arm and exploit by playing this arm for remaining trials. However, this strategy is likely to fail because identifying the Condorcet arm with high probability might involve performing many comparisons, directly leading to high (⌦(K2 log T )) regret; on the other hand, if the Condorcet winner is not identified with high probability, the exploit phase becomes expensive. This motivated us to consider algorithms that allow some form of recourse; that is, unless an arm is found to be sub-optimal, it must be given the opportunity to participate in the comparisons (as it could be the Condorcet winner). 
1. What is the focus and contribution of the paper on batched dueling bandits?
2. What are the strengths of the proposed algorithm, particularly in terms of its design and evaluation?
3. What are the weaknesses of the paper regarding its experimental setup and comparison with other works?
4. Do you have any concerns or suggestions regarding the theoretical analysis, such as providing a regret lower bound?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This work considers the batched dueling bandit problem under the Condorcet condition. It proposes a sequential elimination-based algorithm and derives regret bounds both in expectation and with high probability. It also presents numerical experiments to show the effectiveness of the proposed algorithm.

Strengths And Weaknesses
Strengths: The paper is well organized and easy to follow. It clearly discusses the motivation of the problem setting, the algorithm design, and the technical challenges in detail, and evaluates the algorithm on real-life datasets.
Weaknesses: A lower bound may help strengthen the theoretical results, and more explanation of the experimental setup would be appreciated. (See the "Questions" part for more details.)

Questions
Are the authors willing to provide a regret lower bound? If it matched the derived upper bound, it would be more convincing that the upper bound is tight and the algorithm is optimal.
There should be some discussion in Section 1.4. As the algorithms are proposed for different settings, how are they compared?
It is nice that the algorithms are compared using several datasets, but more explanation of the experimental setup would be appreciated.
In Section 2, in addition to the detailed discussion, readers may get an even clearer view if a table comparing the bounds and conditions of algorithms in the existing literature were presented.

Limitations
There seems to be no societal issue from my view.
NIPS
Title An Asymptotically Optimal Batched Algorithm for the Dueling Bandit Problem Abstract We study the K-armed dueling bandit problem, a variation of the traditional multiarmed bandit problem in which feedback is obtained in the form of pairwise comparisons. Previous learning algorithms have focused on the fully adaptive setting, where the algorithm can make updates after every comparison. The “batched” dueling bandit problem is motivated by large-scale applications like web search ranking and recommendation systems, where performing sequential updates may be infeasible. In this work, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the K-armed dueling bandit problem. We obtain asymptotic regret of O(K2 log(K)) +O(K log(T )) in O(log(T )) rounds, where T is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using O(log(T )) rounds achieves almost the same performance as fully sequential algorithms (that use T rounds). N/A for K-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the K-armed dueling bandit problem. We obtain asymptotic regret of O(K2 log2(K)) +O(K log(T )) in O(log(T )) rounds, where T is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using O(log(T )) rounds achieves almost the same performance as fully sequential algorithms (that use T rounds). 1 Introduction The K-armed dueling bandit problem is a variation of the traditional multi-armed bandit problem in which feedback is obtained in the form of pairwise preferences. This problem has applications in a wide-variety of domains like search ranking, recommendation systems and sports ranking where eliciting qualitative feedback is easy while real-valued feedback is not easily interpretable; thus, it has been a popular topic of research in the machine learning community (see, for example, [51, 49, 47, 5, 54, 52, 53, 21, 32, 35, 36, 42, 18]). Previous learning algorithms have focused on a fully adaptive setting; that is, the learning algorithm can make updates in a sequential fashion. Such updates might be impractical in large systems; for example, consider web-search ranking where the goal is to provide a list (usually ranked) of candidate documents to the user of the system in response to a query [41, 33, 50, 31]. Modern day search engines use hundred of parameters to compute a ranked list in response to a query, and online learning frameworks (based on user feedback) have been invaluable in automatically tuning these parameters [38]. However, given the scale of the system, it may be infeasible to adapt after each interaction: users may make multiple queries in a short time or multiple users may simultaneously query the system. Hence, we prefer solutions with limited rounds of adaptivity. Concretely, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for K-armed dueling bandits? 
This “batched” dueling bandit problem was introduced recently in [2]. Here, the learning algorithm’s actions are partitioned into a limited number of rounds. In each round/batch, the algorithm commits to a fixed set of pairwise comparisons, and the feedback for all these comparisons is received 36th Conference on Neural Information Processing Systems (NeurIPS 2022). simultaneously. Then, the algorithm uses the feedback from the current batch of comparisons to choose comparisons for the next batch. [2] studied this problem under two different conditions: (i) the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition, which enforces a certain linear ordering over the arms; (ii) the Condorcet condition, which requires one arm to be superior to all others. Under SST+STI, their work provided almost tight upper and lower bounds on the trade-off between number of rounds and regret; in particular, they showed that one can achieve worst-case regret of O(K log2 T ) using ⇥(log T ) rounds (T is the time-horizon).1 Under the Condorcet condition, which is more general than SST+STI, they achieved a regret upper bound of O(K2 log T ) in O(log T ) rounds. Previous work [54, 35] on fully sequential algorithms has shown that it is possible to achieve an asymptotic upper bound of O(K2 +K log T ) under the Condorcet condition. Very recently, [43] improved the sequential regret bound even further by obtaining regret O(K log T ), which is the best possible even in the special case of SST+STI [49]. In the batched setting, the upper bound of [2] does not achieve this asymptotic optimality, irrespective of the number of batches, due to the presence of a K2 multiplicative factor in the regret bound. Their work left open the possibility of obtaining a batched algorithm achieving asymptotic optimality under the Condorcet condition. In this paper, we nearly resolve this question, by providing an algorithm with O(K2 log2 K +K log T ) regret in ⇥(log T ) rounds, under the Condorcet condition. 1.1 Contributions • We design an algorithm, denoted C2B, for the batched dueling bandit problem, and analyze its regret under the Condorcet condition. This algorithm achieves a smooth trade-off between the expected regret and the number of batches, B. • Crucially, when B = log(T ), our regret bounds nearly match the best regret bounds [35, 54] known in the fully sequential setting. Hence, our results show that O(log T ) rounds are sufficient to achieve asymptotically optimal regret as a function of T . • Our results rely on new ideas for showing that the Condorcet winner arm can be ‘trapped’ using few adaptive rounds with high (constant) probability while incurring a reasonable amount of regret. We can then integrate over this space of probabilities to obtain a bound on the expected regret (in the same vein as [54]). Once the Condorcet arm is ‘trapped’, we can quickly eliminate all other ‘sub-optimal’ arms and minimize regret in the process. • Finally, we run computational experiments to validate our theoretical results. We show that C2B, using O(log T ) batches, achieves almost the same performance as fully sequential algorithms (which effectively use T batches) over a variety of real datasets. 1.2 Preliminaries The K-armed dueling bandit problem [49] is an online optimization problem, where the goal is to find the best among K bandits B = {1, . . . ,K} using noisy pairwise comparisons with low regret. 
In each time-step, a noisy comparison between two arms (possibly the same), say (i, j), is performed. The outcome of the comparison is an independent random variable, and the probability of picking i over j is denoted p_{i,j} = 1/2 + Δ_{i,j}, where Δ_{i,j} ∈ (−1/2, 1/2). Here, Δ_{i,j} can be thought of as a measure of distinguishability between the two arms, and we write i ≻ j when Δ_{i,j} > 0. We also refer to Δ_{i,j} as the gap between i and j. This problem has been studied under various conditions on the pairwise probabilities p_{i,j}'s. One such condition is the strong stochastic transitivity and stochastic triangle inequality (SST+STI) condition, where there exists an ordering over arms, denoted by ⪰, such that for every triple i ⪰ j ⪰ k we have Δ_{i,k} ≥ max{Δ_{i,j}, Δ_{j,k}} and Δ_{i,k} ≤ Δ_{i,j} + Δ_{j,k} [49, 51]. In this paper, we work under the well-studied Condorcet winner condition, which is much more general than the SST+STI condition [47, 54, 35]. We say that arm i is a Condorcet winner if, and only if, p_{i,j} > 1/2 for all j ∈ B \ {i}. The Condorcet condition means that there exists a Condorcet winner. Throughout the paper, we let a* refer to the Condorcet arm. To further simplify notation, we define Δ_j = Δ_{a*,j}; that is, the gap between a* and j. We define the regret per time-step as follows: suppose arms i_t and j_t are chosen in time-step t; then the regret is r(t) = (Δ_{i_t} + Δ_{j_t})/2. The cumulative regret up to time T is R(T) = Σ_{t=1}^T r(t), where T is the time horizon, and it is assumed that K ≤ T. The cumulative regret can be equivalently stated as R(T) = (1/2) Σ_{j=1}^K T_j Δ_j, where T_j denotes the number of comparisons involving arm j. The goal of an algorithm is to minimize the cumulative regret R(T). We define Δ_min = min_{j: Δ_j > 0} Δ_j to be the smallest non-zero gap of any arm with a*.

¹They also gave a more complicated algorithm with regret O(K log^2 K · log T) in O(log T + log K · log log K) rounds, under the SST+STI condition.

1.3 Batch Policies

In traditional bandit settings, actions are performed sequentially, utilizing the results of all prior actions in determining the next action. In the batched setting, the algorithm must commit to a round (or batch) of actions to be performed in parallel, and can only observe the results after all actions in the batch have been performed. More formally, given a number B of batches, the algorithm proceeds as follows. In each batch r = 1, 2, . . . , B, the algorithm first decides on the comparisons to be performed; then, all outcomes of the batch-r comparisons are received simultaneously. The algorithm can then, adaptively, select the next batch of comparisons. Note that even the size of the next batch can be adaptively decided based on the observations in previous batches. Finally, the total number of comparisons (across all batches) must sum to T. We assume that the values of T and B are known. Observe that when T = B, we recover the fully sequential setting.

1.4 Results and Techniques

We provide an overview of our results and prior results in Table 1. Given any integer B ≥ 1, we obtain a B-round algorithm for the dueling bandit problem. We provide both high-probability and expected regret bounds, stated in the following theorems.

Theorem 1.1. For any integer B ≥ 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds with the following guarantee. For any δ > 0, with probability at least 1 − δ − 1/T, its regret under the Condorcet condition is at most
R(T) ≤ O( T^{1/B} · (K^2 log(K) / Δ_min^2) · log(log K / Δ_min) ) + O( T^{2/B} · K^2 · sqrt(1/δ) ) + Σ_{j ≠ a*} O( T^{1/B} · log(KT) / Δ_j ).

Theorem 1.2.
For any integer B ≥ 1, there is an algorithm for the K-armed dueling bandit problem that uses at most B rounds, with expected regret under the Condorcet condition at most
E[R(T)] = O( T^{1/B} · (K^2 log(K) / Δ_min^2) · log(log K / Δ_min) ) + O( T^{2/B} · K^2 ) + Σ_{j ≠ a*} O( T^{1/B} · log(KT) / Δ_j ).

When the number of rounds B = log(T), we obtain a batched algorithm that achieves asymptotic optimality (in terms of T), even relative to sequential algorithms. We formalize this observation in the following corollary.

Corollary 1.3. There is an algorithm for the K-armed dueling bandit problem that uses at most log(T) rounds, with expected regret under the Condorcet condition at most
E[R(T)] = O( (K^2 log(K) / Δ_min^2) · log(log K / Δ_min) ) + Σ_{j ≠ a*} O( log(KT) / Δ_j ).
(A short derivation of this specialization is given at the end of this subsection.)

By a lower-bound result from [2], it follows that no algorithm can achieve O((K/Δ_min) · poly(log T)) regret using o(log T / log log T) rounds, even under the SST+STI condition. So, the O(log T) rounds required to achieve asymptotic optimality in Corollary 1.3 are nearly the best possible.

Technical Challenges. The only prior approach for batched dueling bandits (under the Condorcet condition) is the algorithm PCOMP from [2], which performs all-pairs comparisons among arms in an active set. Such an approach cannot achieve regret better than O(K^2 log T) because the active set may remain large throughout. In order to achieve better regret bounds, [2] focus on the stronger SST+STI condition. In this setting, their main idea is to first sample a seed set, and use this seed set to eliminate sub-optimal arms. Their algorithm proceeds by performing all pairwise comparisons between the seed set and the set of active arms. However, the analysis of these 'seeded comparison' algorithms crucially relies on the total ordering imposed by the SST and STI assumptions. Unfortunately, there is no such structure to exploit in the Condorcet setting: if the seed set does not contain the Condorcet winner, we immediately incur high regret. The existing fully sequential algorithms such as RUCB [54] and RMED [35] are highly adaptive in nature. For instance, RUCB plays each candidate arm against an optimistic competitor arm using upper confidence bounds (UCB) on pairwise probabilities. This allows RUCB to quickly filter out candidates and uncover the Condorcet arm. Similarly, RMED plays each arm against a carefully selected competitor arm that is likely to beat this arm. However, such competitors can change frequently over trials in both RUCB and RMED. Since the batched setting requires comparisons to be predetermined, we do not have the flexibility to adapt to such changes in competitors. Hence, these existing fully sequential algorithms cannot be easily implemented in our setting. Furthermore, we might also be tempted to consider an explore-then-exploit strategy where we first explore to find the Condorcet arm and then exploit by playing this arm for the remaining trials. However, this strategy is likely to fail because identifying the Condorcet arm with high probability might involve performing many comparisons, directly leading to high (Ω(K^2 log T)) regret; on the other hand, if the Condorcet winner is not identified with high probability, the exploit phase becomes expensive. This motivated us to consider algorithms that allow some form of recourse; that is, unless an arm is found to be sub-optimal, it must be given the opportunity to participate in the comparisons (as it could be the Condorcet winner).
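As referenced after Corollary 1.3 above, the specialization of Theorem 1.2 at B = log T is a one-line calculation. The following sketch spells it out, taking logarithms base 2 for concreteness (any fixed base only changes the constants); it is a reading aid, not an additional result.

```latex
% With B = \log_2 T batches, the growth base q = T^{1/B} is an absolute constant:
%   T^{1/B} = 2^{\log_2 T / \log_2 T} = 2, and hence T^{2/B} = 4.
% Substituting into the bound of Theorem 1.2, the T^{1/B} and T^{2/B} factors
% are absorbed into constants, leaving
\[
  \mathbb{E}[R(T)]
    = O\!\Big(\frac{K^2 \log K}{\Delta_{\min}^2}\,\log\frac{\log K}{\Delta_{\min}}\Big)
      + O(K^2)
      + \sum_{j \neq a^{*}} O\!\Big(\frac{\log(KT)}{\Delta_j}\Big),
\]
% and the middle O(K^2) term is dominated by the first term, which gives Corollary 1.3.
```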
The idea behind our algorithm is to identify the Condorcet winner a⇤ in a small expected number of rounds, after which it uses this arm as an “anchor” to eliminate sub-optimal arms while incurring low regret. To identify the best arm, in each round we define a candidate arm and compare it against arms that it “defeats”. Arms that are not defeated by the candidate arm are compared to all active arms: this step ensures that the Condorcet winner is eventually discovered. We show that a⇤ becomes the candidate, and defeats all other arms within a small number of rounds (though the algorithm may not know if this has occurred). Additionally, once this condition is established, it remains invariant in future rounds. This allows us to eliminate sub-optimal arms and achieve low regret. Comparison to RUCB. Initially, RUCB puts all arms in a pool of potential champions, and “optimistically” (using a upper confidence bound) performs all pairwise comparisons. Using these, it constructs a set of candidates C. If |C|= 1, then that arm is the hypothesised Condorcet winner and placed in a set B. Then, a randomized strategy is employed to choose a champion arm ac (from sets C and B) which is compared to arm ad which is most likely to beat it. The pair (ac, ad) is compared, the probabilities are updated and the algorithm continues. Although our algorithm also seeks to identify the best arm, we do not employ the UCB approach nor do we use any randomness. In their analysis, [54] show that the best arm eventually enters the set B, and remains in B: we also show a similar property for our algorithm in the analysis. Finally, similar to the analysis of [54], we first give a high-probability regret bound for our algorithm which we then convert to a bound on the expected regret. 2 Related Work The K-armed dueling bandit problem has been widely studied in recent years (we refer the reader to [46] for a comprehensive survey). Here, we survey the works that are most closely related to our setting. This problem was first studied in [49] under the SST and STI setting. The authors obtained a worst-case regret upper bound of eO(K log T/ min) and provided a matching lower bound. [51] considered a slightly more general version of the SST and STI setting and achieved an instance-wise optimal regret upper bound of P j: j>0 O (log(T )/ j). Since, the SST+STI condition imposes a total order over the arms and might not hold for real-world datasets, [47] initiated the study of dueling bandits under the Condorcet winner condition. [47] proved a O(K2 log T/ min) regret upper bound under the Condorcet condition, which was improved by [54] to O(K2/ 2min) +P j: j>0 O(log T/ 2 j ). [35] achieved a similar but tighter KL divergence-based bound, which is shown to be asymptotically instance-wise optimal (even in terms constant factors). There are also other works that improve the dependence on K in the upper bound, but suffer a worse dependence on j’s [53]. This problem has also been studied under other noise models such as utility based models [5] and other notions of regret [18]. Alternate notions of winners such as Borda winner [32], Copeland winner [52, 36, 48], and von Neumann winner [21] have also been considered. There are also several works on extensions of dueling bandits that allow multiple arms to be compared at once [45, 3, 44]. All of the aforementioned works on the dueling bandits problem are limited to the sequential setting. Recently, [2] initiated the study of the batched version of the K-armed dueling bandits. 
Their main results are under the SST and STI setting. They give two algorithms, called SCOMP and SCOMP2, for the batched K-armed dueling bandit problem. For any integer B, SCOMP uses at most B + 1 batches and has expected regret bounded by P j: j>0 O( p KT 1/B log(T )/ j). When B = log(T ), this nearly matches (up to a factor of p K) the best known instance-dependent regret bound ofP j: j>0 O(log(T )/ j) obtained by [49]. SCOMP2 aims to achieve better worst-case regret: it uses at most 2B + 1 batches, and has regret O KBT 1/B log(T )/ min . Thus, when B = log(T ), the expected worst-case regret is O K log2(T )/ min , matching the best known result in the sequential setting up to an additional logarithmic factor. Under the Condorcet condition, [2] give a straightforward pairwise comparison algorithm (PCOMP), that achieves expected regret bounded by O(K2 log(T )/ min) in log(T ) batches. They also provide a nearly matching lower bound of ⌦( KT 1/B B2 min ) for any B-batched algorithm. This implies that our bound (for B-round algorithms) in Theorem 1.2 cannot be significantly improved. Recently, [43] designed a fully adaptive algorithm achieving an optimal regret of P j: j>0 O(log T ) j for dueling bandits under the Condorcet setting. This algorithm is based on the idea of dueling two classical bandit (MAB) algorithms against each other in a repeated zero-sum game with carefully designed rewards. The reward for one algorithm depends on the actions of the other; hence, these algorithms need to achieve best-of-both-worlds guarantee for both stochastic and adversarial settings. However, the approach of [43] is not directly applicable to the batched setting that we consider. This is because, as shown by [23], any B-round algorithm for batched MAB in the adversarial setting has regret ⌦(T/B). There has also been substantial work on best-arm or top-k identification using pairwise comparisons with limited adaptivity. [15, 14, 19] considered this problem under the noisy pairwise comparison setting, which is a special case of SST+STI. They showed that constant number of rounds of adaptivity are sufficient to solve these problem with the optimal sample complexity. [4] showed that one can also solve this problem under SST in constant number of rounds with the optimal sample complexity. However, these existing results focus on the SST setting, whereas we focus on the more general Condorcet winner setting. Moreover, these existing works focus on sample complexity for best-arm identification whereas our goal is regret minimization. 3 The Batched Algorithm In this section, we describe a B-round algorithm for the K-armed dueling bandit problem under the Condorcet condition. Recall that given a set of K arms, B = {1, . . . ,K}, and a positive integer B log(T ), we wish to find a sequence of B batches of noisy comparisons with low regret. Given arms i and j, recall that pi,j = 12 + i,j denotes the probability of i winning over j where i,j 2 ( 1/2, 1/2). We use a⇤ to denote the Condorcet winner; recall that a⇤ is a Condorcet winner if pa⇤,j 1/2 for all j 2 B. To simplify notation, we use j = a⇤,j . Before describing our algorithm, we first define some notation. We use A to denote the current set of active arms; i.e., the arms that have not been eliminated. We will use index r for rounds or batches. If pair (i, j) is compared in round r, it is compared qr = bqrc times where q = T 1/B . 
We define the following quantities at the end of each round r: • Ni,j(r) is the total number of times the pair (i, j) has been compared. • bpi,j(r) is the frequentist estimate of pi,j , i.e., bpi,j(r) = # i wins against j until end of round r Ni,j(r) . (1) • Two confidence-interval radii for each (i, j) pair: ci,j(r) = s 2 log(2K2qr) Ni,j(r) and i,j(r) = s log(K2BT ) 2Ni,j(r) (2) We now describe our B-round algorithm, called CATCHING THE CONDORCET WINNER IN BATCHES (or, C2B). At a high-level, the algorithm identifies the best arm a⇤ in a small expected number of rounds, after which it uses this arm as an “anchor” to eliminate sub-optimal arms while incurring low regret. In every round r, we do the following: 1. We define a defeated set Dr(i) for every active arm i; this set comprises arms that are defeated with confidence by i. Specifically, j 2 Dr(i) if bpi,j(r 1) > 1/2 + ci,j(r 1). 2. Then, we define a candidate ir as the arm that defeats the most number of arms; that is, ir = argmaxi2A|Dr(i)|. 3. For every arm i 6= ir: • If i 2 Dr(ir), then we compare i to ir for qr times. The idea here is to use ir as an anchor against i. We will show that a⇤ becomes the candidate ir in a small number of rounds. Then, this step ensures that we eliminate arms efficiently using a⇤ as an anchor. • If i /2 Dr(ir), then i is compared to all arms in A for qr times. This step crucially protects the algorithm against cases where a sub-optimal arm becomes the candidate (and continues to become the candidate). For example, suppose K = [5] and the arms are linearly ordered as 1 2 · · · 5. Furthermore suppose that in some round r, we have that (a) 2 defeats 3, 4, 5 and (b) 1 (best arm) defeats 2 but not the others. So, 2 is the candidate in round r; if 1 is not compared to 3, 4, 5, then 2 would continue to be the candidate (leading to high regret). 4. If, for any arm j, there is arm i such that bpi,j(r) > 12 + i,j(r), then j is eliminated from A. This continues until T total comparisons are performed. See Algorithm 1 for a formal description. The main result of this section is to show that C2B achieves the guarantees stated in Theorems 1.1 and 1.2. Overview of the Analysis. We provide a brief outline of the proofs of our main results. Towards proving Theorem 1.1, we first define two events: • The first event, denoted G, ensures that a⇤ is not eliminated during the execution of C2B. We show that P(G) 1 1/T . • The second event, denoted E( ), says that there exists a round C( ) (defined later) such that for all r > C( ), the estimate bpi,j(r 1) satisfies the confidence interval of ci,j(r 1). Moreover, P(E( )) 1 . By union bound, P(G \ E( )) 1 1/T . Together, we use G and E( ) to argue that: Algorithm 1 C2B (CATCHING THE CONDORCET WINNER IN BATCHES) 1: Input: Arms B, time-horizon T , integer B 1 2: active arms A B, r 1, emprical probabilities bpi,j(0) = 12 for all i, j 2 B 2 3: while number of comparisons T do 4: if A = {i} for some i then play (i, i) for remaining trials 5: Dr(i) {j 2 A : bpi,j(r 1) > 12 + ci,j(r 1)} 6: ir argmaxi2A|Dr(i)| 7: for i 2 A \ {ir} do 8: if i 2 Dr(ir) then 9: compare (ir, i) for qr times 10: else 11: for each j 2 A, compare (i, j) for qr times 12: compute bpi,j(r) values 13: if 9i, j : bpi,j(r) > 12 + i,j(r) then 14: A A \ {j} 15: r r + 1 • the best arm, a⇤, is not defeated by any arm i in any round r > C( ) (formalized in Lemma 3.5), • and that there exists a round r( ) C( ) such that for every round after r( ), arm a⇤ defeats every other arm (formalized in Lemma 3.7). 
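Before turning to how these two observations are combined, the round structure of Algorithm 1 can be summarized as executable pseudocode. The following Python sketch is illustrative only: the simulator interface (a preference matrix P and a duel helper), the simplified budget accounting, and all variable names are assumptions rather than the authors' implementation; it uses the Hoeffding-style radii of Eq. (2), whereas the experiments in Section 4 swap in a KL-based elimination test.

```python
import math
import numpy as np

def c2b(P, T, B, rng=np.random.default_rng(0)):
    """Sketch of C2B on a K x K preference matrix P, where P[i, j] = Pr[arm i beats arm j]."""
    K = len(P)
    q = T ** (1.0 / B)
    wins = np.zeros((K, K))     # wins[i, j]: number of times i beat j so far
    N = np.zeros((K, K))        # N[i, j]:    number of comparisons of the pair (i, j)
    active = set(range(K))
    used = 0

    def duel(i, j, m):
        """Perform m independent comparisons of (i, j) and record the outcomes."""
        nonlocal used
        w = rng.binomial(int(m), P[i, j])
        wins[i, j] += w
        wins[j, i] += m - w
        N[i, j] += m
        N[j, i] += m
        used += m

    def phat():
        return np.divide(wins, N, out=np.full((K, K), 0.5), where=N > 0)

    for r in range(1, B + 1):
        if used >= T or len(active) == 1:
            break   # a single surviving arm would simply be played against itself
        q_r = int(math.floor(q ** r))
        # Step 1: defeated sets D_r(i), using estimates and radii from the previous round
        c_prev = np.sqrt(2 * np.log(2 * K * K * q ** (r - 1)) / np.maximum(N, 1))
        p_prev = phat()
        defeats = {i: {j for j in active if j != i and p_prev[i, j] > 0.5 + c_prev[i, j]}
                   for i in active}
        # Step 2: candidate arm i_r maximizing |D_r(i)|
        i_r = max(active, key=lambda i: len(defeats[i]))
        # Step 3: anchored comparisons for defeated arms, all-pairs for the rest
        for i in list(active):
            if i == i_r:
                continue
            if i in defeats[i_r]:
                duel(i_r, i, q_r)
            else:
                for j in active:
                    if j != i:
                        duel(i, j, q_r)
        # Step 4: eliminate any arm beaten with confidence radius gamma_{i,j}(r)
        gamma = np.sqrt(np.log(K * K * B * T) / (2 * np.maximum(N, 1)))
        p_now = phat()
        for j in list(active):
            if any(p_now[i, j] > 0.5 + gamma[i, j] for i in active if i != j):
                active.discard(j)
    return active
```

Each iteration mirrors Steps 1-4 of Algorithm 1: build defeated sets from the previous round's estimates, pick the candidate i_r, schedule anchored or all-pairs comparisons, and then eliminate arms that are beaten with confidence.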
Intuitively, these observations imply that our algorithm identifies the best arm after r( ) rounds. Thus, beyond round r( ), we only perform pairwise comparisons of the form (a⇤, i) for i 6= a⇤: thus, a⇤ is used as an anchor to eliminate sub-optimal arms. We then analyze the regret in two parts: (i) regret incurred up to round r( ), which is upper bounded by K2 P rr( ) q r and (ii) regret after r( ), which is the regret incurred in eliminating sub-optimal arms using a⇤ as an anchor. Finally, we can use the high-probability bound to also obtain a bound on the expected regret, proving Theorem 1.2. We provide some details of the proof of Theorem 1.1. We defer the proof of Theorem 1.2 to Appendix D. 3.1 The Analysis In this section, we give high-probability and expected regret bounds for C2B. Recall that q = T 1/B , and that q 2. The following lemma is used to prove that a⇤ is never eliminated. We defer the proofs of the Lemmas 3.1, 3.2 to Appendix B. Lemma 3.1. For any batch r 2 [B], and for any pair (i, j), we have P (|bpi,j(r) pi,j |> i,j(r)) 2⌘, where ⌘ = 1/K2BT . We first define the good event G as follows. Definition 3.1 (Event G). An estimate bpi,j(r) at the end of batch r is strongly-correct if |bpi,j(r) pi,j | i,j(r). We say that event G occurs if every estimate in every batch r 2 [B] is strongly-correct. The following two lemmas show that G occurs with high probability and that a⇤ is not eliminated under G. Lemma 3.2. The probability that every estimate in every batch of C2B is strongly-correct is at least 1 1/T . Lemma 3.3. Conditioned on G, a⇤ is never eliminated from A in the elimination step of C2B. Proof. In C2B, an arm j is deleted in batch r iff there is an arm i 2 A with bpi,j(r) > 12+ i,j(r). If a ⇤ is eliminated due to some arm j, then by definition of event G, we get pj,a⇤ bpi,j(r) i,j(r) > 12 , a contradiction. 3.1.1 High-probability Regret Bound In this section, we give details required to prove Theorem 1.1. Fix any > 0. We first define another good event as follows. Definition 3.2 (Event E( )). An estimate bpi,j(r) in batch r is weakly-correct if |bpi,j(r) pi,j | ci,j(r). Let C( ) := d 12 logq(1/ )e. We say that event E( ) occurs if for each batch r C( ), every estimate is weakly-correct. The next lemma shows that E( ) occurs with probability at least 1 . Lemma 3.4. For all > 0, we have P(¬E( )) = P (9r C( ), i, j : |bpi,j(r) pi,j |> ci,j(r)) . We will analyze our algorithm under both events G and E( ). Note that event G is required to ensure that a⇤ is not eliminated in rounds before C( ) (where the Lemma 3.4 does not apply). Lemma 3.5. Conditioned on G and E( ), for any round r > C( ), arm a⇤ is not defeated by any other arm, i.e., a ⇤ /2 [i 6=a⇤Dr(i). To proceed, we need the following definitions. Definition 3.3. The candidate ir of round r is called the champion if |Dr(ir)|= |A| 1; that is, if ir defeats every other active arm. Definition 3.4. Let r( ) C( ) + 1 be the smallest integer such that q r( ) 2A logA, where A := 32 2min · log(2K2). We use the following inequality based on this choice of r( ). Lemma 3.6. The above choice of r( ) satisfies q r > 8 2min · log 2K2qr , 8r r( ). Proof of Lemma 3.6. Using the fact that qr qr, it suffices to show qr 8 2min · log(2K2) + log qr . Moreover, log(2K2) + log qr 1 + log(2K2) · (1 + log qr) 4 · log(2K2) · log qr, where the last inequality uses K 2, r r( ) 1 and q 2. So, it suffices to show: q r > A · log(qr), 8r r( ), where A = 32 2min · log(2K2) (3) Below, let x = qr, R := 2A logA and function f(x) := x A log x. 
We will show that f(x) > 0 for all x R, which would imply (3) because qr( ) R. As R A, and f is increasing for x A, it suffices to show that f(R) 0. Indeed, f(R) A = 2 logA log(2A logA) = logA log(2 logA) > 0, where the inequality uses A 8. Then, we have the following. Lemma 3.7. Conditioned on G and E( ), the best arm a⇤ is the champion in every round r > r( ). We now have all components required to prove Theorem 1.1; its proof, and the proofs of the aforementioned lemmas can be found in Appendix C. 4 Computational Results In this section, we provide details of our computational experiments. The goal of our experiments is to answer the following questions: (i) How does the regret of C2B using B = blog(T )c batches compare to that of existing fully sequential as well as batched algorithms? and (ii) Can the regret of C2B match the regret of the best known sequential algorithms; if yes, then how many rounds suffice to achieve this? Towards answering (i), we compare C2B to a representative set of sequential algorithms for dueling bandits using the library due to [35]. We compare C2B to the sequential algorithms RUCB [54], RMED [35], and BEAT-THE-MEAN (BTM) [51]. We allow these algorithms to work as prescribed; that is, they work in B = T batches. The reason that we chose these sequential algorithms is that our batched algorithm (C2B) is based on a similar paradigm, and such a comparison demonstrates the power of adaptivity in this context. We also compare C2B to the batched algorithm SCOMP2 [2]. We plot the cumulative regret R(t) incurred by the algorithms against time t. We set B = blog(T )c for C2B and SCOMP2 in this experiment. For (ii), we increased B by a small amount; we found that the performance of C2B improves noticeably when given a constant number of additional rounds (we use B = blog(T )c+ 6 in this experiment). We perform these experiments using the following real-world datasets. Six rankers. This dataset is based on the 6 retrieval functions used in the engine of ArXiv.org. Sushi. The Sushi dataset is based on the Sushi preference dataset [34] that contains the preference data regarding 100 types of Sushi. A preference dataset using the top-16 most popular types of sushi is obtained. Irish election data. The Irish election data for Dublin and Meath is available at preflib.org. It contains partial preference orders over candidates. As in [3], these are transformed into preference matrices by selecting a subset of candiates to ensure that a Condorcet winner exists. There are 12 candidates in the Irish-Meath dataset, and 8 in the Irish-Dublin dataset. MSLR and Yahoo! data. We also run experiments on two web search ranking datasets: the Microsoft Learning to Rank (MSLR) dataset [40] and the Yahoo! Learning to Rank Challenge Set 1 [16]. These datasets have been used in prior work on online ranker evaluation [53, 37]. We use preference matrices generated using the “navigational” configuration (see [37] for details). The MSLR dataset has 136 rankers and the Yahoo! dataset has 700 rankers. We sample 30 rankers from each dataset while ensuring the existence of a Condorcet winner. In this way, we obtain two datasets, denoted MSLR30 and Yahoo30. Note that there exists a Condorcet winner in all datasets. We repeat each experiment 20 times and report the average regret. In our algorithm, we use the KL-divergence based confidence bound due to [35] for elimination as it performs much better empirically, and our theoretical bounds continue to hold (see §E). 
This KL-divergence based elimination criterion eliminates an arm i in round r if Ii(r) I⇤(r) > log(T ) + f(K) where Ii(r) = P j:bpi,j(r)< 12 Ni,j(r) · DKL(bpi,j(r), 12 ) and I ⇤(r) = minj2[K] Ii(r). Computational Results. As mentioned earlier, we compare our algorithms against a representative set of sequential dueling bandits algorithms (RUCB, RMED, and BTM). We set ↵ = 0.51 for RUCB, and f(K) = 0.3K1.01 for RMED and C2B, and = 1.3 for BTM: these parameters are known to perform well both theoretically and empirically [35]. We set T = 106 for MSLR30 and Yahoo30 datasets (as they have larger number of arms), and T = 105 for the remaining four. For the first set of experiments, we set B = blog(T )c. We observe that C2B always outperforms BTM and beats SCOMP2 on most of the datasets. We observe that even when SCOMP2 beats C2B it has a slightly linear curve (implying that its regret would keep increasing as T increases) while the regret curve of C2B is mostly flat. Furthermore, C2B performs comparably to RUCB in all datasets except Yahoo30. We plot the results in Figure 1. In the second set of experiments, we set B = blog(T )c + 6. We observe that C2B always outperforms RUCB and, in fact, performs comparably to RMED on all datasets except Yahoo30. We plot the results in Figure 2. Finally, we note that SCOMP2 exhibits varying performance across runs (even on the same dataset) and we think that this is due to the randomness involved in selecting the “seed set”. 5 Conclusion In this paper, we proposed a batched algorithm, named C2B, for the K-armed dueling bandit problem. Assuming the existence of a Condorcet winner, we show both high-probability and expected regret bounds for C2B that trade-off smoothly with the number of batches. Furthermore, we obtain asymptotic regret of O(K2 log2(K))+O(K log(T )) in O(log(T )) batches, nearly matching the best regret bounds known in the fully sequential setting under the Condorcet condition. Our computational results show that C2B, using O(log(T )) batches, achieves almost the same performance as fully sequential algorithms over a variety of real-world datasets. A direction for future research is to design batched algorithms for the K-armed dueling bandit problem when a Condorcet winner does not exist; for example, designing an algorithm for a more general concept of winner, such as Copeland winner [48] or von Neumann winner [22].
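As a supplement to the experimental details in Section 4, the KL-divergence-based elimination test can be written down in a few lines. This is a hedged sketch, not the authors' released code; the helper names and calling convention are assumptions, and f(K) = 0.3 · K^{1.01} follows the setting reported above.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for numerical safety."""
    p = min(max(p, eps), 1.0 - eps)
    q = min(max(q, eps), 1.0 - eps)
    return p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))

def kl_elimination(phat, N, active, T, f_K):
    """Keep the arms that survive the empirical-divergence test of Section 4.

    I_i(r) = sum over j with phat[i, j] < 1/2 of N[i, j] * KL(phat[i, j], 1/2);
    arm i is eliminated when I_i(r) - min_j I_j(r) > log(T) + f(K).
    """
    I = {i: sum(N[i, j] * kl_bernoulli(phat[i, j], 0.5)
                for j in active if j != i and phat[i, j] < 0.5)
         for i in active}
    threshold = min(I.values()) + np.log(T) + f_K
    return {i for i in active if I[i] <= threshold}

# Example call with hypothetical statistics for K = 12 arms:
# keep = kl_elimination(phat, N, active, T=10**5, f_K=0.3 * 12 ** 1.01)
```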
1. What is the focus of the paper regarding dueling bandit problems?
2. What are the strengths and weaknesses of the proposed algorithm in terms of batched learning settings and theoretical guarantees?
3. Are there any missing references or related literature regarding dueling bandit settings with different objectives?
4. How does the reviewer assess the experimental study, and what improvements could be made?
5. Does the paper adequately discuss the usage of two different confidence intervals, and how do they relate to the key design ideas of the algorithm?
6. Could the paper better connect the high-level description in Section 1.1 with the textual description in Section 3 by reusing specific terms?
7. Why didn't the theoretical analysis include the KL-divergence-based confidence intervals despite their empirical performance and identical theoretical results?
8. Were there any minor errors or typos in the paper, such as those mentioned by the reviewer?
9. How might the paper address potential negative social impacts, and what limitations does it acknowledge?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies the dueling bandit problem with regret minimization as the objective under the assumption of an existing Condorcet winner. In contrast to most existing work dealing with fully adaptive learning situations, this work considers the situation of batched learning, where the learner does not decide before each learning round which pairwise comparison to perform, but decides this only in a limited number of rounds, i.e., it performs a batch of the same pairwise comparison at certain learning rounds. For this learning scenario, a learning algorithm is proposed that has a near-optimal upper bound on its cumulative regret using a nearly optimal number of batches. This improves upon existing methods which additionally need stronger assumptions than merely the existence of a Condorcet winner. The suggested algorithm is compared in a numerical study on real-world data sets with existing regret minimizing learning algorithms for the dueling bandit setting. Strengths And Weaknesses Strengths Batched learning settings are definitely relevant for practical applications and have also been studied recently in the standard multi-armed bandit setting. Thus, the scope of the paper is relevant to the ML community. The paper is generally well-written. The notation is well thought out and the main ideas underlying the suggested learning algorithm are described in sufficient detail. Although the learning setting is not novel, as it has been considered by other authors before, the suggested learning algorithm comes with stronger theoretical guarantees than existing methods. The proofs seem to be sound as far as I can tell. Weaknesses Related literature There is some missing related literature: Mark Braverman, Jieming Mao, and S. Matthew Weinberg. Parallel algorithms for select and partition with noisy comparisons. In Proceedings of the ACM symposium on Theory of Computing, pages 851- 862, 2016. This paper also studies the question of round/batch complexity of learners in the dueling bandit setting, but for the objective of finding the top-k arms. Chuang-Chieh Lin and Chi-Jen Lu. Efficient mechanisms for peer grading and dueling bandits. In Asian Conference on Machine Learning (ACML), pages 740-755, 2018. This paper studies the question of round/batch complexity of learners in the multi-dueling bandit setting (with utilities of the arms), but for the objective of finding the best arm (NB: the Borda winner scenario is considered as well). Although the objectives of the two papers are different from the considered regret minimization objective, I think they are definitely worth mentioning. As a side note: These two papers are discussed in the following survey paper on dueling bandits (Section 3.1.18, 3.2.6, and 4.2.8): Bengs, V., Busa-Fekete, R., El Mesaoudi-Paul, A., & Hüllermeier, E. (2021). Preference-based Online Learning with Dueling Bandits: A Survey. Journal of Machine Learning Research, 22(7), 1-108. Moreover, there are some missing references (lines 166-167) regarding the multiple arm comparison for the regret minimization objective (see Chapter 6 of Bengs et al. (2021)). Experimental study The experimental study leaves room for improvement. First, the considered competing algorithms BTM and RUCB are out of date and Double Thompson Sampling and Self-Sparring are currently the state-of-art (together with the considered RMED). 
Next, it is a bit doubtful that the elimination strategy is changed for the experiments, although the theoretical analysis uses a Hoeffding-based elimination strategy. While I agree with the author’s statement that the theoretical guarantee will continue to hold, it is nevertheless something that needs to be shown. Thus, the authors should include the original algorithm in the experiments and call the one using KL elimination differently in the experiments. Finally, it would be nice if the authors could comment on whether the data sets satisfy SST or STI, i.e., the assumptions needed for SCOMP and SCOMP2, and also report the number of batches used by C2B and SCOMP2. NB: There is also a quite recent Python package for dueling bandits: https://duelpy.gitlab.io/duelpy/index.html Questions Although the main ideas of the algorithm are described quite well, I would have liked to see a remark on the usage of two different confidence intervals in Section 3. I was wondering if this is technically not even one of the key design ideas which improve the theoretical results? Moreover, the textual description in Section 3 could be connected to the high-level description in Section 1.1 by reusing the term “trapped” and “recourse”, as it does not appear in Section 3 anymore. Finally, I am wondering why the theoretical analysis is not carried out for the KL-divergence-based confidence intervals if it (a) leads to the same theoretical results and (b) performs better empirically? Minor things: Typo in line 165: von Neumann. There is a typo in [35]. In the math display of Proof of Lemma 3.1. it should be p ^ i , j ( r ) The idea of the proof of Theorem 1.2 goes back to the proof of RUCB’s expected regret bound. I think the RUCB authors deserve some more credit. Limitations Potential negative social impact As this is a theoretical work, there is no seemingly potential negative social impact. Limitations The authors clearly state that their algorithm is based on the Condorcet winner assumption and mention that it would be desirable to consider learning settings without such an assumption, e.g. Copeland winner or von Neumann winner settings.
NIPS
Title Removing Hidden Confounding by Experimental Grounding Abstract Observational data is increasingly used as a means for making individual-level causal predictions and intervention recommendations. The foremost challenge of causal inference from observational data is hidden confounding, whose presence cannot be tested in data and can invalidate any causal conclusion. Experimental data does not suffer from confounding but is usually limited in both scope and scale. We introduce a novel method of using limited experimental data to correct the hidden confounding in causal effect models trained on larger observational data, even if the observational data does not fully overlap with the experimental data. Our method makes strictly weaker assumptions than existing approaches, and we prove conditions under which it yields a consistent estimator. We demonstrate our method’s efficacy using real-world data from a large educational experiment. 1 Introduction In domains such as healthcare, education, and marketing there is growing interest in using observational data to draw causal conclusions about individual-level effects; for example, using electronic healthcare records to determine which patients should get what treatments, using school records to optimize educational policy interventions, or using past advertising campaign data to refine targeting and maximize lift. Observational datasets, due to their often very large number of samples and exhaustive scope (many measured covariates) in comparison to experimental datasets, offer a unique opportunity to uncover fine-grained effects that may apply to many target populations. However, a significant obstacle when attempting to draw causal conclusions from observational data is the problem of hidden confounders: factors that affect both treatment assignment and outcome, but are unmeasured in the observational data. Example cases where hidden confounders arise include physicians prescribing medication based on indicators not present in the health record, or classes being assigned a teacher’s aide because of special efforts by a competent school principal. Hidden confounding can lead to no-vanishing bias in causal estimates even in the limit of infinite samples [Pea09]. In an observational study, one can never prove that there is no hidden confounding [Pea09]. However, a possible fix can be found if there exists a Randomized Controlled Trial (RCT) testing the effect of the intervention in question. For example, if a Health Management Organization (HMO) is considering the effect of a medication on its patient population, it might look at an RCT which tested this medication. The problem with using RCTs is that often their participants do not fully reflect the target population. As an example, an HMO in California might have to use an RCT from Switzerland, conducted perhaps several years ago, on a much smaller population. The problem of generalizing conclusions from an RCT to a different target population is known as the problem of external validity [Rot05, AO17], or more specifically, transportability [BP13, PB14]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. In this paper, we are interested in the case where fine-grained causal inference is sought, in the form of Conditional Average Treatment Effects (CATE), where we consider a large set of covariates, enough to identify each unit. We aim at using a large observational sample and a possibly much smaller experimental sample. 
The typical use case we have in mind is of a user who wishes to estimate CATE and has a relatively large observational sample that covers their population of interest. This observational sample might suffer from hidden confounding, as all observational data will to some extent, but they also have a smaller sample from an experiment, albeit one that might not directly reflect their population of interest. For example, consider The Women’s Health Initiative [Ros02] where there was a big previous observational study and a smaller RCT to study hormone replacement therapy. The studies ended up with opposite results and there is intense discussion about confounding and external validity: the RCT was limited due to covering a fundamentally different (healthier and younger) population compared with the observational study [HAL+08, Van09]. Differently from previous work on estimating CATE from observational data, our approach does not assume that all confounders have been measured, and we only assume that the support of the experimental study has some overlap with the support of the observational study. The major assumption we do make is that we can learn the structure of the hidden confounding by comparing the observational and experimental samples. Specifically, rather than assuming that effects themselves have a parametric structure – a questionable assumption that is bound to lead to dangerous extrapolation from small experiments – we only assume that this hidden confounding function has a parametric structure that we can extrapolate. Thus we limit ourselves to a parametric correction of a possibly complex effect function learned on the observational data. We discuss why this assumption is possibly reasonable. Specifically, as long as the parametric family includes the zero function, this assumption is strictly weaker than assuming that all confounders in the observational study have been observed. One way to view our approach is that we bring together an unbiased but high-variance estimator from the RCT (possibly infinite-variance when the RCT has zero overlap with the target population) and a biased but low-variance estimator from the observational study. This achieves a consistent (vanishing bias and variance) CATE estimator. Finally, we run experiments on both simulation and real-world data and show our method outperforms the standard approaches to this problem. In particular, we use data from a large-scale RCT measuring the effect of small classrooms and teacher’s aids [WJB+90, Kru99] to obtain ground-truth estimates of causal effects, which we then try and reproduce from a confounded observational study. 2 Setup We focus on studying a binary treatment, which we interpret as the presence or absence of an intervention of interest. To study its fine-grained effects on individuals, we consider having treatmentoutcome data from two sources: an observational study that may be subject to hidden confounding, and an unconfounded study, typically coming from an experiment. The observational data consists of baseline covariates XConfi ∈ Rd, assigned treatments TConfi ∈ {0, 1}, and observed outcomes Y Confi ∈ R for i = 1, . . . , nConf. Similarly, the unconfounded data consists of XUnci , TUnci , Y Unci for i = 1, . . . , nUnc. 
Conceptually, we focus on the setting where (1) the observational data is of much larger scale nUnc nConf and/or (2) the support of the unconfounded data Support(XUnci ) = {x : P ( ‖XUnci − x‖ ≤ δ ) > 0 ∀δ > 0}, does not include the population about which we want to make causal conclusions and targeted interventions. This means that the observational data has both the scale and the scope we want but the presence of confounding limits the study of causal effects, while the unconfounded experimental data has unconfoundedness but does not have the scale and/or scope necessary to study the individual-level effects of interest. The unconfounded data usually comes from an RCT that was conducted on a smaller scale on a different population, as presented in the previous section. Alternatively, and equivalently for our formalism, it can arise from recognizing a latent unconfounded sub-experiment within the observational study. For example, we may have information from the data generation process that indicates that treatment for certain units was actually assigned purely as a (possibly stochastic) function of the observed covariates x. Two examples of this would be when certain prognoses dictate a strict rule-based treatment assignment or in situations of known equipoise after a certain prognosis, where there is no evidence guiding treatment one way or the other and its assignment is as if at random based on the individual who ends up administering it. Regardless if the unconfounded data came from a secondary RCT (more common) or from within the observational dataset, our mathematical set up remains the same. Formally, we consider each dataset to be iid draws from two different super-populations, indicated by the event E taking either the value EConf or EUnc. The observational data are iid draws from the population given by conditioning on the event EConf: XConfi , T Conf i , Y Conf i ∼ (X,T, Y | EConf) iid. Similarly, XUnci , T Unc i , Y Unc i ∼ (X,T, Y | EUnc). Using potential outcome notation, assuming the standard Stable Unit Treatment Value Assumption (SUTVA), which posits no interference and consistency between observed and potential outcomes, we let Y (0), Y (1) be the potential outcomes of administering each of the two treatments and Y = Y (T ) = TY (1) + (1− T )Y (0). The quantity we are interested in is the Conditional Average Treatment Effect (CATE): Definition 1 (CATE). Let τ(x) = E [Y (1)− Y (0)|X = x]. The key assumption we make about the unconfounded data is its unconfoundedness: Assumption 1. [Unconfounded experiment] (i) Y (0), Y (1) ⊥ T | X, EUnc (ii) Y (0), Y (1) ⊥ EUnc | X. This assumption holds if the unconfounded data was generated in a randomized control trial. More generally, it is functionally equivalent to assuming that the unconfounded data was generated by running a logging policy on a contextual bandit, that is, first covariates are drawn from the unconfounded population X | EUnc and revealed, then a treatment T is chosen, the outcomes are drawn based on the covariates Y (0), Y (1) | X , but only the outcome corresponding to the chosen treatment Y = Y (T ) is revealed. The second part of the assumption means that merely being in the unconfounded study does not affect the potential outcomes conditioned on the covariates X . It implies that the functional relationship between the unobserved confounders and the potential outcomes is the same in both studies. This will fail if for example knowing you are part of a study causes you to react differently to the same treatment. 
We note that this assumption is strictly weaker than the standard ignorability assumption in observational studies. This assumption implies that for covariates within the domain of the experiment, we can identify the value of CATE using regression. Specifically, if x ∈ Support(X | EUnc), that is, if P ( ‖X − x‖ ≤ δ | EUnc ) > 0 ∀δ > 0, then τ(x) = E [ Y | T = 1, X = x,EUnc ] − E [ Y | T = 0, X = x,EUnc ] , where E [ Y | T = t,X = x,EUnc ] can be identified by regressing observed outcomes on treatment and covariates in the unconfounded data. However, this identification of CATE is (i) limited to the restricted domain of the experiment and (ii) hindered by the limited amount of data available in the unconfounded sample. The hope is to overcome these obstacles using the observational data. Importantly, however, the unconfoundedness assumption is not assumed to hold for the observational data, which may be subject to unmeasured confounding. That is, both selection into the observational study and the selection of the treatment may be confounded with the potential outcomes of any one treatment. Let us denote the difference in conditional average outcomes in the observational data by ω(x) = E [ Y | T = 1, X = x,EConf ] − E [ Y | T = 0, X = x,EConf ] . Note that due to confounding factors, ω(x) 6= τ(x) for any x, whether in the support of the observational study or not. The difference between these two quantities is precisely the confounding effect, which we denote η(x) = τ(x)− ω(x). Another way to express this term is: η(x) = {E [Y (1)|x]− E [Y (1)|x, T = 1]} − {E [Y (0)|x]− E [Y (0)|x, T = 0]}. Note that if the observational study were unconfounded then we would have η(x) = 0. Further note that a standard assumption in the vast majority of methodological literature makes the assumption that η(x) ≡ 0, even though it is widely acknowledged that this assumption isn’t realistic, and is at best an approximation. Example. In order to better understand the function η(x), consider the following case: Assume there are two equally likely types of patients,“dutiful” and “negligent”. Dutiful patients take care of their general health and are more likely to seek treatment, while negligent patients do not. Assume T = 1 is a medical treatment that requires the patient to see a physician, do lab tests, and obtain a prescription if indeed needed, while T = 0 means no treatment. Let Y be some measure of health, say blood pressure. In this scenario, where patients are self-selected into treatment (to a certain degree), we would expect that both potential outcomes would be greater for the treated over the control: E [Y (1)|T = 1] > E [Y (1)|T = 0], E [Y (0)|T = 1] > E [Y (0)|T = 0]. Since E [Y (1)] = E [Y (1)|T = 1] p(T = 1)+E [Y (1)|T = 0] p(T = 0) we also have that E [Y (1)] > E [Y (1)|T = 1], and E [Y (0)] > E [Y (0)|T = 0] unless p(T = 0) = 1. Taken together, this shows that in the above scenario, we expect η < 0, if we haven’t measured any X . This logic carries through in the plausible scenario where we have measured some X , but do not have access to all the variables X that allows us to tell apart “dutiful” from “negligent” patients. To sum up, this example shows that in cases where some units are selected such as those more likely to be treated are those whose potential outcomes are higher (resp. lower) anyway, we can expect η to be negative (resp. positive). 3 Method Given data from both the unconfounded and confounded studies, we propose the following recipe for removing the hidden confounding. 
First, we learn a function ω̂ over the observational sample {XConfi , TConfi , Y Confi }n Conf i=1 . This can be done using any CATE estimation method such as learning two regression functions for the treated and control and taking their difference, or specially constructed methods such as Causal Forest [WA17]. Since we assume this sample has hidden confounding, ω is not equal to the true CATE and correspondingly ω̂ does not estimate the true CATE. We then learn a correction term which interpolates between ω̂ evaluated on the RCT samples XUnci , and the RCT outcomes Y Unci . This is a correction term for hidden confounding, which is our estimate of η. The correction term allows us to extrapolate τ over the confounded sample, using the identity τ(X) = ω(X) + η(X). Note that we could not have gone the other way round: if we were to start with estimating τ over the unconfounded sample, and then estimate η using the samples from the confounded study, we would end up constructing an estimate of ω(x), which is not the quantity of interest. Moreover, doing so would be difficult as the unconfounded sample is not expected to cover the confounded one. Specifically, the way we use the RCT samples relies on a simple identity. Let eUnc(x) = P ( T = 1 | X = x,EUnc ) be the propensity score on the unconfounded sample. If this sample is an RCT then typically eUnc(x) = q for some constant, often q = 0.5. Let q(XUnci ) = TUnci eUnc(XUnci ) − 1−T Unc i 1−eUnc(XUnci ) be a signed re-weighting function. We have: Lemma 1. E [ q(XUnci )Y Unc i |XUnci , EUnc ] = τ(XUnci ). (1) What Lemma 1 shows us is that q(XUnci )Y Unc i is an unbiased estimate of τ(X Unc i ). We now use this fact to learn η as follows: θ̂ = argmin θ nUnc∑ i=1 ( q(XUnci )Y Unc i − ω̂(XUnci )− θ>XUnci )2 (2) Let τ̂(x) = ω̂(x) + θ̂>x. (3) The method is summarized in Algorithm 1. Let us contrast our approach with two existing ones. The first, is to simply learn the treatment effect function directly from the unconfounded data, and extrapolate it to the observational sample. This is guaranteed to be unconfounded, and with a large enough unconfounded sample the CATE function can be learned [CHIM08, Pea15]. This approach is presented for example by [BP13] for ATE, as the transport formula. However, extending this approach to CATE in our case is not as straightforward. The reason is that we assume that the confounded study does not fully overlap with the unconfounded study, which requires extrapolating the estimated CATE function into a region of sample space outside the region where it was fit. This requires strong parametric assumptions about the CATE Algorithm 1 Remove hidden confounding with unconfounded sample 1: Input: Unconfounded sample with propensity scores DUnc = {XUnci , TUnci , Y Unci , eUnc(XUnci )}n Unc i=1 . Confounded sample D Conf = {XConfi , TConfi , Y Confi }n Conf i=1 . Algorithm Q for fitting CATE. 2: Run Q on DConf, obtain CATE estimate ω̂. 3: Let θ̂ be the solution of the optimization problem in Equation (2). 4: Set function τ̂(x) := ω̂(x) + θ̂>x 5: Return: τ̂ , an estimate of CATE over DConf. function. On the other hand, we do have samples from the target region, they are simply confounded. One way to view our approach is that we move the extrapolation a step back: instead of extrapolating the CATE function, we merely extrapolate a correction due to hidden confounding. In the case that the CATE function does actually extrapolate well, we do no harm - we learn η ≈ 0. 
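To make the recipe above concrete, here is a minimal Python sketch of Algorithm 1. It is a sketch under stated assumptions, not the authors' implementation: the base CATE learner Q is instantiated as two separate outcome regressions (any of the options mentioned above, such as a causal forest, could be substituted), scikit-learn estimators are used for convenience, and all function names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def fit_omega_hat(X_conf, T_conf, Y_conf):
    """Base learner Q on the confounded sample: difference of two outcome regressions."""
    mu1 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 1], Y_conf[T_conf == 1])
    mu0 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 0], Y_conf[T_conf == 0])
    return lambda X: mu1.predict(X) - mu0.predict(X)

def fit_tau_hat(X_conf, T_conf, Y_conf, X_unc, T_unc, Y_unc, e_unc):
    """Algorithm 1: correct a confounded CATE estimate with a parametric eta fit on the RCT."""
    omega_hat = fit_omega_hat(X_conf, T_conf, Y_conf)
    # Lemma 1: the signed re-weighting q(X) * Y is an unbiased (but noisy) estimate of tau(X)
    # on the unconfounded sample, since e_unc is the known propensity score there.
    qY = (T_unc / e_unc - (1 - T_unc) / (1 - e_unc)) * Y_unc
    # Eq. (2): theta_hat = argmin_theta sum_i (q(X_i) Y_i - omega_hat(X_i) - theta^T X_i)^2.
    eta_model = LinearRegression(fit_intercept=False).fit(X_unc, qY - omega_hat(X_unc))
    # Eq. (3): tau_hat(x) = omega_hat(x) + theta_hat^T x.
    return lambda X: omega_hat(X) + eta_model.predict(X)

# Usage sketch on a confounded target population with held-out covariates X_target:
# tau_hat = fit_tau_hat(Xc, Tc, Yc, Xu, Tu, Yu, e_unc=0.5)
# cate_predictions = tau_hat(X_target)
```

In practice one may append a constant column to X (or enable the intercept) so that the linear correction can also absorb a constant confounding offset; the theory only requires the chosen parametric family for eta to be identifiable on the experimental support.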
The second alternative relies on re-weighting the RCT population so as to make it similar to the target, observational population [SCBL11, HGRS15, AO17]. These approaches suffer from two important drawbacks from our point of view: (i) they assume the observational study has no unmeasured confounders, which is often an unrealistic assumption; (ii) they assume that the support of the observational study is contained within the support of the experimental study, which again is unrealistic as the experimental studies are often smaller and on somewhat different populations. If we were to apply these approaches to our case, we would be re-weighting by the inverse of weights which are close to, or even identical to, 0. 4 Theoretical guarantee We prove that under conditions of parametric identification of η, Algorithm 1 recovers a consistent estimate of τ(x) over the EConf, at a rate which is governed by the rate of estimating ω by ω̂. For the sake of clarity, we focus on a linear specification of η. Other parametric specifications can easily be accommodated given that the appropriate identification criteria hold (for linear this is the non-singularity of the design matrix). Note that this result is strictly stronger than results about CATE identification which rely on ignorability: what enables the improvement is of course the presence of the unconfounded sample EUnc. Also note that this result is strictly stronger than the transport formula [BP13] and re-weighting such as [AO17]. Theorem 1. Suppose 1. ω̂ is a consistent estimator on the observational data (on which it’s trained): E[(ω̂(X)− ω(X))2 | EConf] = O(r(n)) for r(n) = o(1) 2. The covariates in the confounded data cover those in the unconfounded data (strong one-way overlap): ∃κ > 0 : P ( EUnc | X ) ≤ κP ( EConf | X ) 3. η is linear: ∃θ0 : η(x) = θ>0 x 4. Identifiability of θ0: E[XX> | EUnc] is non-singular 5. X , Y , and ω̂(X) have finite fourth moments in the experimental data: E[‖X‖42 | EUnc] <∞, E[Y 4 | EUnc] <∞, E[ω̂(X)4 | EUnc] <∞ 6. Strong overlap between treatments in unconfounded data: ∃ν > 0 : ν ≤ eUnc(X) ≤ 1− ν Then θ̂ is consistent ‖θ̂ − θ0‖22 = Op(r(n) + 1/n) and τ̂ is consistent on its target population ((τ̂(X)− τ(X))2 | EConf) = Op(r(n) + 1/n) There are a few things to note about the result and its conditions. First, we note that if the so-called confounded observational sample is in fact unconfounded then we immediately get that the linear specification of η is correct with θ0 = 0 because we simply have η(x) = 0. Therefore, our conditions are strictly weaker than imposing unconfoundedness on the observational data. Condition 1 requires that our base method for learning ω is consistent just as a regression method. There are a few ways to guarantee this. For example, if we fit ω̂ by empirical risk minimization on weighted outcomes over a function class of finite capacity (such as a VC class) or if we fit as the difference of two regression functions each fit by empirical risk minimization on observed outcomes in each treatment group, then standard results in statistical learning [BM02] ensure the consistency of L2 risk and therefore the L2 convergence required in condition 1. Alternatively, any method for learning CATE that would have been consistent for CATE under unconfoundedness would actually still be consistent for ω if applied. 
Therefore we can also rely on such base method as causal forests [WA17] and other methods that target CATE as inputs to our method, even if they don’t actually learn CATE here due to confounding. Condition 2 captures our understanding of the observational dataset having a larger scope than the experimental dataset. The condition essentially requires a strong form of absolute continuity between the two covariate distributions. This condition could potentially be relaxed so long as there is enough intersection where we can learn η. So for example, if there is a subset of the experiment that the observational data covers, that would be sufficient so long as we can also ensure that condition 4 still remains valid on that subset so that we can learn the sufficient parameters for η. Condition 3, the linear specification of η, can be replaced with another one so long as it has finitely many parameters and they can be identified on the experimental dataset, i.e., condition 4 above would change appropriately. Since unconfoundedness implies η = 0, whenever the parametric specification of η contains the zero function (e.g., as in the linear case above since θ0 = 0 is allowed) condition 3 is strictly weaker than assuming unconfoundedness. In that sense, our method can consistently estimate CATE on a population where no experimental data exists under weaker conditions than existing methods, which assume the observational data is unconfounded. Condition 5 is trivially satisfied whenever outcomes and covariates are bounded. Similarly, we would expect that if the first two parts of condition 5 hold (about X and Y ) then the last one about ω̂ would also hold as it is predicting outcomes Y . That is, the last part of condition 5 is essentially a requirement on our ω̂-leaner base method that it’s not doing anything strange like adding unnecessary noise to Y thereby making it have fewer moments. For all base methods that we consider, this would come for free because they are only averaging outcomes Y . We also note that if we impose the existence of even higher moments as well as pointwise asymptotic normality of ω̂, one can easily transform the result to an asymptotic normality result. Standard error estimates will in turn require a variance estimate of ω̂. Finally, we note that condition 6, which requires strong overlap, only needs to hold in the unconfounded sample. This is important as it would be a rather strong requirement in the confounded sample where treatment choices may depend on high dimensional variables [DDF+17], but it is a weak condition for the experimental data. Specifically, if the unconfounded sample arose from an RCT then propensities would be constant and the condition would hold trivially. 5 Experiments In order to illustrate the validity and usefulness of our proposed method we conduct simulation experiments and experiments with real-world data taken from the Tennessee STAR study: a large longterm school study where students were randomized to different types of classes [WJB+90, Kru99]. 5.1 Simulation study We generate data simulating a situation where there exists an un-confounded dataset and a confounded dataset, with only partial overlap. Let X ∈ R be a measured covariate, T ∈ {0, 1} binary treatment assignment, U ∈ R an unmeasured confounder, and Y ∈ R the outcome. We are interested in τ(X). We generate the unconfounded sample as follows: XUnc ∼ Uniform [−1, 1], UUnc ∼ N (0, 1), TUnc ∼ Bernoulli(0.5). 
We generate the confounded sample as follows: we first sample TConf ∼ Bernoulli(0.5) and then sample XConf, UConf from a bivariate Gaussian (XConf, UConf) | TConf ∼ N([0, 0], [[1, TConf − 0.5], [TConf − 0.5, 1]]). This means that XConf, UConf come from a Gaussian mixture model where TConf denotes the mixture component and the components have equal means but different covariance structures. This also implies that η is linear. For both datasets we set Y = 1 + T + X + 2 · T · X + 0.5X² + 0.75 · T · X² + U + 0.5ε, where ε ∼ N(0, 1). The true CATE is therefore τ(X) = 0.75X² + 2X + 1. We have that the true ω = τ + E[U | X, T = 1] − E[U | X, T = 0], which leads to the true η = x. We then apply our method (with a CF base) to learn η. We plot the true and recovered η with our method (see Figure 1). Even with the limited un-confounded set (between −1 and 1) making the full scope of the X² term in Y inaccessible, we are able to reasonably estimate τ. Other methods would suffer under the strong unobserved confounding. 5.2 Real-world data Validating causal-inference methods is hard because we almost never have access to true counterfactuals. We approach this challenge by using data from a randomized controlled trial, the Tennessee STAR study [WJB+90, Kru99, MISN18]. When using an RCT, we have access to unbiased CATE estimates because we are guaranteed unconfoundedness. We then artificially introduce confounding by selectively removing a biased subset of samples. The data: The Tennessee Student/Teacher Achievement Ratio (STAR) experiment is a randomized experiment started in 1985 to measure the effect of class size on student outcomes, measured by standardized test scores. The experiment started monitoring students in kindergarten and followed students until third grade. Students and teachers were randomly assigned into conditions during the first school year, with the intention for students to continue in their class-size condition for the entirety of the experiment. We focus on two of the experiment conditions: small classes (13-17 pupils) and regular classes (22-25 pupils). Since many students only started the study at first grade, we took as treatment their class-type at first grade. Overall we have 4509 students with treatment assignment at first grade. The outcome Y is the sum of the listening, reading, and math standardized tests at the end of first grade. After removing students with missing outcomes (the correlation between missing outcome and treatment assignment is R² < 10⁻⁴), we remain with a randomized sample of 4218 students: 1805 assigned to treatment (small class, T = 1), and 2413 to control (regular size class, T = 0). In addition to treatment and outcome, we used the following covariates for each student: gender, race, birth month, birthday, birth year, free lunch given or not, teacher id. Our goal is to compute the CATE conditioned on this set of covariates, jointly denoted X. Computing ground-truth CATE: The STAR RCT allows us to obtain an unbiased estimate of the CATE. Specifically, we use the identity in Eq. (1), and the fact that in the study, the propensity scores e(Xi) were constant. We define a ground-truth sample {(Xi, Y GTi)}, i = 1, . . . , n, where Y GTi = Yi / (q + Ti − 1) and q = p(T = 1). By Eq. (1) we know that E[Y GTi | Xi] = τ(Xi) within the STAR study. Introducing hidden confounding: Now that we have the ground-truth CATE, we wish to emulate the scenario which motivates our work.
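For concreteness, the ground-truth construction just described can be written in a couple of lines; this is an illustrative sketch (assumed array names, not the study's code) for an RCT with a constant propensity score.

```python
import numpy as np

def ground_truth_cate_labels(Y, T):
    """Per-student unbiased CATE signal Y_GT from a constant-propensity RCT.

    Y_GT_i = Y_i / (q + T_i - 1) equals Y_i/q for treated units and -Y_i/(1-q)
    for controls, so E[Y_GT | X] = tau(X) by the re-weighting identity (Eq. 1).
    """
    q = np.mean(T)  # estimate of p(T = 1); assumed constant across students
    return Y / (q + T - 1.0)
```

These labels serve only as evaluation targets; they are noisy per student but unbiased conditional on X.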
We split the entire dataset (ALL) into a small unconfounded subset (UNC), and a larger, confounded subset (CONF) over a somewhat different population. We do this by splitting the population over a variable which is known to be a strong determinant of outcome [Kru99]: rural or inner-city (2811 students) vs. urban or suburban (1407 students). We generate UNC by randomly sampling a fraction q′ of the rural or inner-city students, where q′ ranges from 0.1 to 0.5. Over this sample, we know that treatment assignment was at random. When generating CONF, we wish to obtain two goals: (a) the support of CONF should have only a partial overlap with the support of UNC, and (b) treatment assignment should be confounded, i.e. the treated and control populations should be systematically different in their potential outcomes. In order to achieve these goals, we generate CONF as follows: From the rural or inner-city students, we take the controls (T = 0) that were not sampled in UNC, and only the treated (T = 1) whose outcomes were in the lower half of outcomes among treated rural or inner-city students. From the urban or suburban students, we take all of the controls, and only the treated whose outcomes were in the lower half of outcomes among treated urban or suburban students. This procedure results in UNC and CONF populations which do not fully overlap: UNC has only rural or inner-city students, while CONF has a substantial subset (roughly one half for q′ = 0.5) of urban and suburban students. It also creates confounding, by removing the students with the higher scores selectively from the treated population. This biases the naive treatment effect estimates downward. We further complicate matters by dropping the covariate indicating rural, inner-city, urban or suburban from all subsequent analysis. Therefore, we have significant unmeasured confounding in the CONF population, and also the unconfounded ground-truth in the original, ALL population. Metric: In our experiments, we assume we have access to samples from UNC and CONF. We use either UNC, CONF or both to fit various models for predicting CATE. We then evaluate how well the CATE predictions match Y GTi on a held-out sample from ALL \ UNC (the set ALL minus the set UNC), in terms of RMSE. Note that we are not evaluating on CONF, but on the unconfounded version of CONF, which is exactly ALL \ UNC. The reason we don’t evaluate on ALL is twofold: First, it will only make the task easier because of the nature of the UNC set; second, we are motivated by the scenario where we have a confounded observational study representing the target population of interest, and wish to be aided by a separate unconfounded study (typically an RCT) available for a different population. We focus on a held-out set in order to avoid giving too much of an advantage to methods which can simply fit the observed outcomes well. Baselines: As a baseline we fit CATE using standard methods on either the UNC set or the CONF set. Fitting on the UNC set is essentially a CATE version of applying the transport formula [PB14]. Fitting on the CONF set amounts to assuming ignorability (which is wrong in this case), and using standard methods. The methods we use to estimate CATE are: (i) Regression method fit on Y GTi over UNC (ii) Regression method fit separately on treated and control in CONF (iii) Regression method fit separately on treated and control in UNC. The regression methods we use in (i)-(iii) are Random Forest with 200 trees and Ridge Regression with cross-validation. 
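Stepping back to the UNC/CONF construction described above, a rough sketch of the split follows (hypothetical pandas column names; where the paper's wording leaves details open, for instance how treated UNC students are handled, this is only one reading):

```python
import pandas as pd

def make_unc_conf(df, q_prime, seed=0):
    # df: one row per student with columns "rural" (rural/inner-city indicator),
    # "T" (treatment) and "Y" (outcome).
    rural = df[df["rural"] == 1]
    urban = df[df["rural"] == 0]
    # UNC: a random fraction q' of rural/inner-city students (treatment was randomized here).
    unc = rural.sample(frac=q_prime, random_state=seed)

    def biased_subset(group, exclude=None):
        ctrl = group[group["T"] == 0]
        if exclude is not None:
            ctrl = ctrl.drop(index=exclude, errors="ignore")  # controls not already in UNC
        trt = group[group["T"] == 1]
        trt = trt[trt["Y"] <= trt["Y"].median()]  # keep only lower-half-outcome treated
        return pd.concat([ctrl, trt])

    # the geographic indicator is then dropped from all subsequent analysis (not shown)
    conf = pd.concat([biased_subset(rural, exclude=unc.index), biased_subset(urban)])
    return unc, conf
```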
In baselines (ii) and (iii), the CATE is estimated as the difference between the prediction of the model fit on the treated and the prediction of the model fit on the control. We also experimented extensively with Causal Forest [WA17], but found it to uniformly perform worse than the other methods, even when given unfair advantages such as access to the entire dataset (ALL). Results: Our two-step method requires a method for fitting ω̂ on the confounded dataset. We experiment with two methods, which parallel those used as baseline: A regression method fit separately on treated and control in CONF, where we use either Random Forest with 200 trees or Ridge Regression with cross-validation as regression methods. We see that our methods, 2-step RF and 2-step ridge, consistently produce more accurate estimates than the baselines. We see that our methods in particular are able to make use of larger unconfounded sets to produce better estimates of the CATE function.See Figure 2 for the performance of our method vs. the various baselines. 6 Discussion In this paper we address a scenario that is becoming more and more common: users with large observational datasets who wish to extract causal insights using their data and help from unconfounded experiments on different populations. One direction for future work is combining the current work with work that looks explicitly into the causal graph connecting the covariates, including unmeasured ones [TT15, MMC16]. Another direction includes cases where the outcomes or interventions are not directly comparable, but where the difference can be modeled. For example, experimental studies often only study short-term outcomes, whereas the observational study might track long-term outcomes which are of more interest [ACIK16]. Acknowledgements We wish to thank the anonymous reviewers for their helpful suggestions and comments. (NK) This material is based upon work supported by the National Science Foundation under Grant No. 1656996.
1. What is the focus of the paper regarding individual treatment effects? 2. What is the novel approach introduced by the authors? 3. What are the strengths of the proposed method, particularly in reducing variance? 4. What are the weaknesses or limitations of the paper, especially regarding the assumption of confounders' effects? 5. How does the reviewer assess the clarity and generalizability of the paper's content?
Review
Review This paper tackles the problem of predicting individual treatment effects given observational data and a small randomized trial. The unconfounded data can be used to estimate the effect, but the variance will be high, e.g., with importance sampling. So the authors reduce variance by including an estimate from the observational data, which no longer needs to be unbiased. I think the paper introduces an innovative approach that can be useful whenever unconfounded data is available. The experimental section is encouraging, except that I would have liked to see at least one other real-world dataset to show generalizability, and a comparison with the baselines that they mention in Section 3 (importance sampling and transportability). The catch, however, is the assumption that the effect of confounders can be expressed parametrically (linear in the current paper). I think this is a reasonable assumption, but the authors make a stronger claim: "strictly weaker than other assumptions". I suggest that the authors present a formal argument for this. The current justification in Section 4 is vague. The other suggestion I have is in terms of expressing the main idea. It seems that the method is really a variance reduction trick---where a low-variance but biased estimate from the confounded data is used to reduce variance on the high-variance unbiased estimate from the unconfounded data. Maybe the authors can discuss this interpretation, and how this could lead to other derivative methods from this general insight?
NIPS
Title Removing Hidden Confounding by Experimental Grounding Abstract Observational data is increasingly used as a means for making individual-level causal predictions and intervention recommendations. The foremost challenge of causal inference from observational data is hidden confounding, whose presence cannot be tested in data and can invalidate any causal conclusion. Experimental data does not suffer from confounding but is usually limited in both scope and scale. We introduce a novel method of using limited experimental data to correct the hidden confounding in causal effect models trained on larger observational data, even if the observational data does not fully overlap with the experimental data. Our method makes strictly weaker assumptions than existing approaches, and we prove conditions under which it yields a consistent estimator. We demonstrate our method’s efficacy using real-world data from a large educational experiment. 1 Introduction In domains such as healthcare, education, and marketing there is growing interest in using observational data to draw causal conclusions about individual-level effects; for example, using electronic healthcare records to determine which patients should get what treatments, using school records to optimize educational policy interventions, or using past advertising campaign data to refine targeting and maximize lift. Observational datasets, due to their often very large number of samples and exhaustive scope (many measured covariates) in comparison to experimental datasets, offer a unique opportunity to uncover fine-grained effects that may apply to many target populations. However, a significant obstacle when attempting to draw causal conclusions from observational data is the problem of hidden confounders: factors that affect both treatment assignment and outcome, but are unmeasured in the observational data. Example cases where hidden confounders arise include physicians prescribing medication based on indicators not present in the health record, or classes being assigned a teacher’s aide because of special efforts by a competent school principal. Hidden confounding can lead to no-vanishing bias in causal estimates even in the limit of infinite samples [Pea09]. In an observational study, one can never prove that there is no hidden confounding [Pea09]. However, a possible fix can be found if there exists a Randomized Controlled Trial (RCT) testing the effect of the intervention in question. For example, if a Health Management Organization (HMO) is considering the effect of a medication on its patient population, it might look at an RCT which tested this medication. The problem with using RCTs is that often their participants do not fully reflect the target population. As an example, an HMO in California might have to use an RCT from Switzerland, conducted perhaps several years ago, on a much smaller population. The problem of generalizing conclusions from an RCT to a different target population is known as the problem of external validity [Rot05, AO17], or more specifically, transportability [BP13, PB14]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. In this paper, we are interested in the case where fine-grained causal inference is sought, in the form of Conditional Average Treatment Effects (CATE), where we consider a large set of covariates, enough to identify each unit. We aim at using a large observational sample and a possibly much smaller experimental sample. 
The typical use case we have in mind is of a user who wishes to estimate CATE and has a relatively large observational sample that covers their population of interest. This observational sample might suffer from hidden confounding, as all observational data will to some extent, but they also have a smaller sample from an experiment, albeit one that might not directly reflect their population of interest. For example, consider The Women’s Health Initiative [Ros02] where there was a big previous observational study and a smaller RCT to study hormone replacement therapy. The studies ended up with opposite results and there is intense discussion about confounding and external validity: the RCT was limited due to covering a fundamentally different (healthier and younger) population compared with the observational study [HAL+08, Van09]. Differently from previous work on estimating CATE from observational data, our approach does not assume that all confounders have been measured, and we only assume that the support of the experimental study has some overlap with the support of the observational study. The major assumption we do make is that we can learn the structure of the hidden confounding by comparing the observational and experimental samples. Specifically, rather than assuming that effects themselves have a parametric structure – a questionable assumption that is bound to lead to dangerous extrapolation from small experiments – we only assume that this hidden confounding function has a parametric structure that we can extrapolate. Thus we limit ourselves to a parametric correction of a possibly complex effect function learned on the observational data. We discuss why this assumption is possibly reasonable. Specifically, as long as the parametric family includes the zero function, this assumption is strictly weaker than assuming that all confounders in the observational study have been observed. One way to view our approach is that we bring together an unbiased but high-variance estimator from the RCT (possibly infinite-variance when the RCT has zero overlap with the target population) and a biased but low-variance estimator from the observational study. This achieves a consistent (vanishing bias and variance) CATE estimator. Finally, we run experiments on both simulation and real-world data and show our method outperforms the standard approaches to this problem. In particular, we use data from a large-scale RCT measuring the effect of small classrooms and teacher’s aids [WJB+90, Kru99] to obtain ground-truth estimates of causal effects, which we then try and reproduce from a confounded observational study. 2 Setup We focus on studying a binary treatment, which we interpret as the presence or absence of an intervention of interest. To study its fine-grained effects on individuals, we consider having treatmentoutcome data from two sources: an observational study that may be subject to hidden confounding, and an unconfounded study, typically coming from an experiment. The observational data consists of baseline covariates XConfi ∈ Rd, assigned treatments TConfi ∈ {0, 1}, and observed outcomes Y Confi ∈ R for i = 1, . . . , nConf. Similarly, the unconfounded data consists of XUnci , TUnci , Y Unci for i = 1, . . . , nUnc. 
Conceptually, we focus on the setting where (1) the observational data is of much larger scale nUnc nConf and/or (2) the support of the unconfounded data Support(XUnci ) = {x : P ( ‖XUnci − x‖ ≤ δ ) > 0 ∀δ > 0}, does not include the population about which we want to make causal conclusions and targeted interventions. This means that the observational data has both the scale and the scope we want but the presence of confounding limits the study of causal effects, while the unconfounded experimental data has unconfoundedness but does not have the scale and/or scope necessary to study the individual-level effects of interest. The unconfounded data usually comes from an RCT that was conducted on a smaller scale on a different population, as presented in the previous section. Alternatively, and equivalently for our formalism, it can arise from recognizing a latent unconfounded sub-experiment within the observational study. For example, we may have information from the data generation process that indicates that treatment for certain units was actually assigned purely as a (possibly stochastic) function of the observed covariates x. Two examples of this would be when certain prognoses dictate a strict rule-based treatment assignment or in situations of known equipoise after a certain prognosis, where there is no evidence guiding treatment one way or the other and its assignment is as if at random based on the individual who ends up administering it. Regardless if the unconfounded data came from a secondary RCT (more common) or from within the observational dataset, our mathematical set up remains the same. Formally, we consider each dataset to be iid draws from two different super-populations, indicated by the event E taking either the value EConf or EUnc. The observational data are iid draws from the population given by conditioning on the event EConf: XConfi , T Conf i , Y Conf i ∼ (X,T, Y | EConf) iid. Similarly, XUnci , T Unc i , Y Unc i ∼ (X,T, Y | EUnc). Using potential outcome notation, assuming the standard Stable Unit Treatment Value Assumption (SUTVA), which posits no interference and consistency between observed and potential outcomes, we let Y (0), Y (1) be the potential outcomes of administering each of the two treatments and Y = Y (T ) = TY (1) + (1− T )Y (0). The quantity we are interested in is the Conditional Average Treatment Effect (CATE): Definition 1 (CATE). Let τ(x) = E [Y (1)− Y (0)|X = x]. The key assumption we make about the unconfounded data is its unconfoundedness: Assumption 1. [Unconfounded experiment] (i) Y (0), Y (1) ⊥ T | X, EUnc (ii) Y (0), Y (1) ⊥ EUnc | X. This assumption holds if the unconfounded data was generated in a randomized control trial. More generally, it is functionally equivalent to assuming that the unconfounded data was generated by running a logging policy on a contextual bandit, that is, first covariates are drawn from the unconfounded population X | EUnc and revealed, then a treatment T is chosen, the outcomes are drawn based on the covariates Y (0), Y (1) | X , but only the outcome corresponding to the chosen treatment Y = Y (T ) is revealed. The second part of the assumption means that merely being in the unconfounded study does not affect the potential outcomes conditioned on the covariates X . It implies that the functional relationship between the unobserved confounders and the potential outcomes is the same in both studies. This will fail if for example knowing you are part of a study causes you to react differently to the same treatment. 
We note that this assumption is strictly weaker than the standard ignorability assumption in observational studies. This assumption implies that for covariates within the domain of the experiment, we can identify the value of CATE using regression. Specifically, if x ∈ Support(X | EUnc), that is, if P ( ‖X − x‖ ≤ δ | EUnc ) > 0 ∀δ > 0, then τ(x) = E [ Y | T = 1, X = x,EUnc ] − E [ Y | T = 0, X = x,EUnc ] , where E [ Y | T = t,X = x,EUnc ] can be identified by regressing observed outcomes on treatment and covariates in the unconfounded data. However, this identification of CATE is (i) limited to the restricted domain of the experiment and (ii) hindered by the limited amount of data available in the unconfounded sample. The hope is to overcome these obstacles using the observational data. Importantly, however, the unconfoundedness assumption is not assumed to hold for the observational data, which may be subject to unmeasured confounding. That is, both selection into the observational study and the selection of the treatment may be confounded with the potential outcomes of any one treatment. Let us denote the difference in conditional average outcomes in the observational data by ω(x) = E [ Y | T = 1, X = x,EConf ] − E [ Y | T = 0, X = x,EConf ] . Note that due to confounding factors, ω(x) 6= τ(x) for any x, whether in the support of the observational study or not. The difference between these two quantities is precisely the confounding effect, which we denote η(x) = τ(x)− ω(x). Another way to express this term is: η(x) = {E [Y (1)|x]− E [Y (1)|x, T = 1]} − {E [Y (0)|x]− E [Y (0)|x, T = 0]}. Note that if the observational study were unconfounded then we would have η(x) = 0. Further note that a standard assumption in the vast majority of methodological literature makes the assumption that η(x) ≡ 0, even though it is widely acknowledged that this assumption isn’t realistic, and is at best an approximation. Example. In order to better understand the function η(x), consider the following case: Assume there are two equally likely types of patients,“dutiful” and “negligent”. Dutiful patients take care of their general health and are more likely to seek treatment, while negligent patients do not. Assume T = 1 is a medical treatment that requires the patient to see a physician, do lab tests, and obtain a prescription if indeed needed, while T = 0 means no treatment. Let Y be some measure of health, say blood pressure. In this scenario, where patients are self-selected into treatment (to a certain degree), we would expect that both potential outcomes would be greater for the treated over the control: E [Y (1)|T = 1] > E [Y (1)|T = 0], E [Y (0)|T = 1] > E [Y (0)|T = 0]. Since E [Y (1)] = E [Y (1)|T = 1] p(T = 1)+E [Y (1)|T = 0] p(T = 0) we also have that E [Y (1)] > E [Y (1)|T = 1], and E [Y (0)] > E [Y (0)|T = 0] unless p(T = 0) = 1. Taken together, this shows that in the above scenario, we expect η < 0, if we haven’t measured any X . This logic carries through in the plausible scenario where we have measured some X , but do not have access to all the variables X that allows us to tell apart “dutiful” from “negligent” patients. To sum up, this example shows that in cases where some units are selected such as those more likely to be treated are those whose potential outcomes are higher (resp. lower) anyway, we can expect η to be negative (resp. positive). 3 Method Given data from both the unconfounded and confounded studies, we propose the following recipe for removing the hidden confounding. 
First, we learn a function ω̂ over the observational sample {(XConfi, TConfi, Y Confi)}, i = 1, . . . , nConf. This can be done using any CATE estimation method such as learning two regression functions for the treated and control and taking their difference, or specially constructed methods such as Causal Forest [WA17]. Since we assume this sample has hidden confounding, ω is not equal to the true CATE and correspondingly ω̂ does not estimate the true CATE. We then learn a correction term that captures the discrepancy between ω̂ evaluated on the RCT samples XUnci and the (appropriately re-weighted) RCT outcomes Y Unci. This is a correction term for hidden confounding, which is our estimate of η. The correction term allows us to extrapolate τ over the confounded sample, using the identity τ(X) = ω(X) + η(X). Note that we could not have gone the other way round: if we were to start with estimating τ over the unconfounded sample, and then estimate η using the samples from the confounded study, we would end up constructing an estimate of ω(x), which is not the quantity of interest. Moreover, doing so would be difficult as the unconfounded sample is not expected to cover the confounded one. Specifically, the way we use the RCT samples relies on a simple identity. Let eUnc(x) = P(T = 1 | X = x, EUnc) be the propensity score on the unconfounded sample. If this sample is an RCT then typically eUnc(x) = q for some constant, often q = 0.5. Let q(XUnci) = TUnci / eUnc(XUnci) − (1 − TUnci) / (1 − eUnc(XUnci)) be a signed re-weighting function. We have: Lemma 1. E[q(XUnci) Y Unci | XUnci, EUnc] = τ(XUnci). (1) What Lemma 1 shows us is that q(XUnci) Y Unci is an unbiased estimate of τ(XUnci). We now use this fact to learn η as follows: θ̂ = argminθ ∑i=1,…,nUnc (q(XUnci) Y Unci − ω̂(XUnci) − θᵀXUnci)². (2) Let τ̂(x) = ω̂(x) + θ̂ᵀx. (3) The method is summarized in Algorithm 1. Algorithm 1 (Remove hidden confounding with unconfounded sample): 1: Input: unconfounded sample with propensity scores DUnc = {(XUnci, TUnci, Y Unci, eUnc(XUnci))}, i = 1, . . . , nUnc; confounded sample DConf = {(XConfi, TConfi, Y Confi)}, i = 1, . . . , nConf; algorithm Q for fitting CATE. 2: Run Q on DConf, obtain CATE estimate ω̂. 3: Let θ̂ be the solution of the optimization problem in Equation (2). 4: Set the function τ̂(x) := ω̂(x) + θ̂ᵀx. 5: Return: τ̂, an estimate of CATE over DConf. Let us contrast our approach with two existing ones. The first is to simply learn the treatment effect function directly from the unconfounded data, and extrapolate it to the observational sample. This is guaranteed to be unconfounded, and with a large enough unconfounded sample the CATE function can be learned [CHIM08, Pea15]. This approach is presented for example by [BP13] for ATE, as the transport formula. However, extending this approach to CATE in our case is not as straightforward. The reason is that we assume that the confounded study does not fully overlap with the unconfounded study, which requires extrapolating the estimated CATE function into a region of sample space outside the region where it was fit. This requires strong parametric assumptions about the CATE function. On the other hand, we do have samples from the target region; they are simply confounded. One way to view our approach is that we move the extrapolation a step back: instead of extrapolating the CATE function, we merely extrapolate a correction due to hidden confounding. In the case that the CATE function does actually extrapolate well, we do no harm - we learn η ≈ 0.
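To make Algorithm 1 concrete, here is a minimal Python sketch of the two-step procedure (an illustration under assumed array names, not the authors' released code); it uses the difference of two random-forest regressions as the CATE base learner Q and solves Eq. (2) by ordinary least squares.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_omega_hat(X_conf, T_conf, Y_conf):
    # Step 2: run the base CATE learner Q on the confounded sample.
    # Here Q is the difference of two outcome regressions (any CATE method works).
    mu1 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 1], Y_conf[T_conf == 1])
    mu0 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 0], Y_conf[T_conf == 0])
    return lambda X: mu1.predict(X) - mu0.predict(X)

def fit_tau_hat(X_conf, T_conf, Y_conf, X_unc, T_unc, Y_unc, e_unc):
    omega_hat = fit_omega_hat(X_conf, T_conf, Y_conf)
    # Signed re-weighting q(X)Y: an unbiased signal for tau(X) on the RCT (Lemma 1).
    q = T_unc / e_unc - (1 - T_unc) / (1 - e_unc)
    # Step 3: least-squares fit of the linear correction eta(x) = theta^T x (Eq. 2).
    theta_hat, *_ = np.linalg.lstsq(X_unc, q * Y_unc - omega_hat(X_unc), rcond=None)
    # Step 4: corrected CATE estimate tau_hat(x) = omega_hat(x) + theta_hat^T x (Eq. 3).
    return lambda X: omega_hat(X) + X @ theta_hat
```

Calling tau_hat = fit_tau_hat(...) and then tau_hat(X_conf) yields corrected CATE estimates over the confounded population; if an intercept is desired in the linear correction, a constant column can be appended to the covariates.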
The second alternative relies on re-weighting the RCT population so as to make it similar to the target, observational population [SCBL11, HGRS15, AO17]. These approaches suffer from two important drawbacks from our point of view: (i) they assume the observational study has no unmeasured confounders, which is often an unrealistic assumption; (ii) they assume that the support of the observational study is contained within the support of the experimental study, which again is unrealistic as the experimental studies are often smaller and on somewhat different populations. If we were to apply these approaches to our case, we would be re-weighting by the inverse of weights which are close to, or even identical to, 0. 4 Theoretical guarantee We prove that under conditions of parametric identification of η, Algorithm 1 recovers a consistent estimate of τ(x) over EConf, at a rate which is governed by the rate of estimating ω by ω̂. For the sake of clarity, we focus on a linear specification of η. Other parametric specifications can easily be accommodated given that the appropriate identification criteria hold (for linear this is the non-singularity of the design matrix). Note that this result is strictly stronger than results about CATE identification which rely on ignorability: what enables the improvement is of course the presence of the unconfounded sample EUnc. Also note that this result is strictly stronger than the transport formula [BP13] and re-weighting such as [AO17]. Theorem 1. Suppose 1. ω̂ is a consistent estimator on the observational data (on which it’s trained): E[(ω̂(X) − ω(X))² | EConf] = O(r(n)) for r(n) = o(1); 2. the covariates in the confounded data cover those in the unconfounded data (strong one-way overlap): ∃κ > 0 : P(EUnc | X) ≤ κ P(EConf | X); 3. η is linear: ∃θ0 : η(x) = θ0ᵀx; 4. identifiability of θ0: E[XXᵀ | EUnc] is non-singular; 5. X, Y, and ω̂(X) have finite fourth moments in the experimental data: E[‖X‖₂⁴ | EUnc] < ∞, E[Y⁴ | EUnc] < ∞, E[ω̂(X)⁴ | EUnc] < ∞; 6. strong overlap between treatments in the unconfounded data: ∃ν > 0 : ν ≤ eUnc(X) ≤ 1 − ν. Then θ̂ is consistent, ‖θ̂ − θ0‖₂² = Op(r(n) + 1/n), and τ̂ is consistent on its target population, E[(τ̂(X) − τ(X))² | EConf] = Op(r(n) + 1/n). There are a few things to note about the result and its conditions. First, we note that if the so-called confounded observational sample is in fact unconfounded then we immediately get that the linear specification of η is correct with θ0 = 0 because we simply have η(x) = 0. Therefore, our conditions are strictly weaker than imposing unconfoundedness on the observational data. Condition 1 requires that our base method for learning ω is consistent just as a regression method. There are a few ways to guarantee this. For example, if we fit ω̂ by empirical risk minimization on weighted outcomes over a function class of finite capacity (such as a VC class), or if we fit ω̂ as the difference of two regression functions each fit by empirical risk minimization on observed outcomes in each treatment group, then standard results in statistical learning [BM02] ensure the consistency of the L2 risk and therefore the L2 convergence required in condition 1. Alternatively, any method for learning CATE that would have been consistent for CATE under unconfoundedness would actually still be consistent for ω if applied.
Therefore we can also rely on such base method as causal forests [WA17] and other methods that target CATE as inputs to our method, even if they don’t actually learn CATE here due to confounding. Condition 2 captures our understanding of the observational dataset having a larger scope than the experimental dataset. The condition essentially requires a strong form of absolute continuity between the two covariate distributions. This condition could potentially be relaxed so long as there is enough intersection where we can learn η. So for example, if there is a subset of the experiment that the observational data covers, that would be sufficient so long as we can also ensure that condition 4 still remains valid on that subset so that we can learn the sufficient parameters for η. Condition 3, the linear specification of η, can be replaced with another one so long as it has finitely many parameters and they can be identified on the experimental dataset, i.e., condition 4 above would change appropriately. Since unconfoundedness implies η = 0, whenever the parametric specification of η contains the zero function (e.g., as in the linear case above since θ0 = 0 is allowed) condition 3 is strictly weaker than assuming unconfoundedness. In that sense, our method can consistently estimate CATE on a population where no experimental data exists under weaker conditions than existing methods, which assume the observational data is unconfounded. Condition 5 is trivially satisfied whenever outcomes and covariates are bounded. Similarly, we would expect that if the first two parts of condition 5 hold (about X and Y ) then the last one about ω̂ would also hold as it is predicting outcomes Y . That is, the last part of condition 5 is essentially a requirement on our ω̂-leaner base method that it’s not doing anything strange like adding unnecessary noise to Y thereby making it have fewer moments. For all base methods that we consider, this would come for free because they are only averaging outcomes Y . We also note that if we impose the existence of even higher moments as well as pointwise asymptotic normality of ω̂, one can easily transform the result to an asymptotic normality result. Standard error estimates will in turn require a variance estimate of ω̂. Finally, we note that condition 6, which requires strong overlap, only needs to hold in the unconfounded sample. This is important as it would be a rather strong requirement in the confounded sample where treatment choices may depend on high dimensional variables [DDF+17], but it is a weak condition for the experimental data. Specifically, if the unconfounded sample arose from an RCT then propensities would be constant and the condition would hold trivially. 5 Experiments In order to illustrate the validity and usefulness of our proposed method we conduct simulation experiments and experiments with real-world data taken from the Tennessee STAR study: a large longterm school study where students were randomized to different types of classes [WJB+90, Kru99]. 5.1 Simulation study We generate data simulating a situation where there exists an un-confounded dataset and a confounded dataset, with only partial overlap. Let X ∈ R be a measured covariate, T ∈ {0, 1} binary treatment assignment, U ∈ R an unmeasured confounder, and Y ∈ R the outcome. We are interested in τ(X). We generate the unconfounded sample as follows: XUnc ∼ Uniform [−1, 1], UUnc ∼ N (0, 1), TUnc ∼ Bernoulli(0.5). 
We generate the confounded sample as follows: we first sample TConf ∼ Bernoulli(0.5) and then sample XConf, UConf from a bivariate Gaussian (XConf, UConf) | TConf ∼ N([0, 0], [[1, TConf − 0.5], [TConf − 0.5, 1]]). This means that XConf, UConf come from a Gaussian mixture model where TConf denotes the mixture component and the components have equal means but different covariance structures. This also implies that η is linear. For both datasets we set Y = 1 + T + X + 2 · T · X + 0.5X² + 0.75 · T · X² + U + 0.5ε, where ε ∼ N(0, 1). The true CATE is therefore τ(X) = 0.75X² + 2X + 1. We have that the true ω = τ + E[U | X, T = 1] − E[U | X, T = 0], which leads to the true η = x. We then apply our method (with a CF base) to learn η. We plot the true and recovered η with our method (see Figure 1). Even with the limited un-confounded set (between −1 and 1) making the full scope of the X² term in Y inaccessible, we are able to reasonably estimate τ. Other methods would suffer under the strong unobserved confounding. 5.2 Real-world data Validating causal-inference methods is hard because we almost never have access to true counterfactuals. We approach this challenge by using data from a randomized controlled trial, the Tennessee STAR study [WJB+90, Kru99, MISN18]. When using an RCT, we have access to unbiased CATE estimates because we are guaranteed unconfoundedness. We then artificially introduce confounding by selectively removing a biased subset of samples. The data: The Tennessee Student/Teacher Achievement Ratio (STAR) experiment is a randomized experiment started in 1985 to measure the effect of class size on student outcomes, measured by standardized test scores. The experiment started monitoring students in kindergarten and followed students until third grade. Students and teachers were randomly assigned into conditions during the first school year, with the intention for students to continue in their class-size condition for the entirety of the experiment. We focus on two of the experiment conditions: small classes (13-17 pupils) and regular classes (22-25 pupils). Since many students only started the study at first grade, we took as treatment their class-type at first grade. Overall we have 4509 students with treatment assignment at first grade. The outcome Y is the sum of the listening, reading, and math standardized tests at the end of first grade. After removing students with missing outcomes (the correlation between missing outcome and treatment assignment is R² < 10⁻⁴), we remain with a randomized sample of 4218 students: 1805 assigned to treatment (small class, T = 1), and 2413 to control (regular size class, T = 0). In addition to treatment and outcome, we used the following covariates for each student: gender, race, birth month, birthday, birth year, free lunch given or not, teacher id. Our goal is to compute the CATE conditioned on this set of covariates, jointly denoted X. Computing ground-truth CATE: The STAR RCT allows us to obtain an unbiased estimate of the CATE. Specifically, we use the identity in Eq. (1), and the fact that in the study, the propensity scores e(Xi) were constant. We define a ground-truth sample {(Xi, Y GTi)}, i = 1, . . . , n, where Y GTi = Yi / (q + Ti − 1) and q = p(T = 1). By Eq. (1) we know that E[Y GTi | Xi] = τ(Xi) within the STAR study. Introducing hidden confounding: Now that we have the ground-truth CATE, we wish to emulate the scenario which motivates our work.
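Referring back to the simulation design of Section 5.1 above, the following is a minimal sketch of the data-generating process (the seed and function names are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unconfounded(n):
    X = rng.uniform(-1.0, 1.0, n)
    U = rng.normal(0.0, 1.0, n)
    T = rng.binomial(1, 0.5, n)
    return X, U, T

def simulate_confounded(n):
    T = rng.binomial(1, 0.5, n)
    rho = T - 0.5  # off-diagonal of the conditional covariance of (X, U)
    X = rng.normal(0.0, 1.0, n)
    U = rho * X + np.sqrt(1.0 - rho**2) * rng.normal(0.0, 1.0, n)  # Corr(X, U | T) = T - 0.5
    return X, U, T

def outcome(X, U, T):
    eps = rng.normal(0.0, 1.0, X.shape[0])
    return 1 + T + X + 2*T*X + 0.5*X**2 + 0.75*T*X**2 + U + 0.5*eps

def true_cate(X):
    return 0.75*X**2 + 2*X + 1
```

Here simulate_unconfounded plays the role of the experimental sample and simulate_confounded the role of the observational one; both are passed through outcome to produce Y.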
We split the entire dataset (ALL) into a small unconfounded subset (UNC), and a larger, confounded subset (CONF) over a somewhat different population. We do this by splitting the population over a variable which is known to be a strong determinant of outcome [Kru99]: rural or inner-city (2811 students) vs. urban or suburban (1407 students). We generate UNC by randomly sampling a fraction q′ of the rural or inner-city students, where q′ ranges from 0.1 to 0.5. Over this sample, we know that treatment assignment was at random. When generating CONF, we wish to obtain two goals: (a) the support of CONF should have only a partial overlap with the support of UNC, and (b) treatment assignment should be confounded, i.e. the treated and control populations should be systematically different in their potential outcomes. In order to achieve these goals, we generate CONF as follows: From the rural or inner-city students, we take the controls (T = 0) that were not sampled in UNC, and only the treated (T = 1) whose outcomes were in the lower half of outcomes among treated rural or inner-city students. From the urban or suburban students, we take all of the controls, and only the treated whose outcomes were in the lower half of outcomes among treated urban or suburban students. This procedure results in UNC and CONF populations which do not fully overlap: UNC has only rural or inner-city students, while CONF has a substantial subset (roughly one half for q′ = 0.5) of urban and suburban students. It also creates confounding, by removing the students with the higher scores selectively from the treated population. This biases the naive treatment effect estimates downward. We further complicate matters by dropping the covariate indicating rural, inner-city, urban or suburban from all subsequent analysis. Therefore, we have significant unmeasured confounding in the CONF population, and also the unconfounded ground-truth in the original, ALL population. Metric: In our experiments, we assume we have access to samples from UNC and CONF. We use either UNC, CONF or both to fit various models for predicting CATE. We then evaluate how well the CATE predictions match Y GTi on a held-out sample from ALL \ UNC (the set ALL minus the set UNC), in terms of RMSE. Note that we are not evaluating on CONF, but on the unconfounded version of CONF, which is exactly ALL \ UNC. The reason we don’t evaluate on ALL is twofold: First, it will only make the task easier because of the nature of the UNC set; second, we are motivated by the scenario where we have a confounded observational study representing the target population of interest, and wish to be aided by a separate unconfounded study (typically an RCT) available for a different population. We focus on a held-out set in order to avoid giving too much of an advantage to methods which can simply fit the observed outcomes well. Baselines: As a baseline we fit CATE using standard methods on either the UNC set or the CONF set. Fitting on the UNC set is essentially a CATE version of applying the transport formula [PB14]. Fitting on the CONF set amounts to assuming ignorability (which is wrong in this case), and using standard methods. The methods we use to estimate CATE are: (i) Regression method fit on Y GTi over UNC (ii) Regression method fit separately on treated and control in CONF (iii) Regression method fit separately on treated and control in UNC. The regression methods we use in (i)-(iii) are Random Forest with 200 trees and Ridge Regression with cross-validation. 
In baselines (ii) and (iii), the CATE is estimated as the difference between the prediction of the model fit on the treated and the prediction of the model fit on the control. We also experimented extensively with Causal Forest [WA17], but found it to uniformly perform worse than the other methods, even when given unfair advantages such as access to the entire dataset (ALL). Results: Our two-step method requires a method for fitting ω̂ on the confounded dataset. We experiment with two methods, which parallel those used as baseline: A regression method fit separately on treated and control in CONF, where we use either Random Forest with 200 trees or Ridge Regression with cross-validation as regression methods. We see that our methods, 2-step RF and 2-step ridge, consistently produce more accurate estimates than the baselines. We see that our methods in particular are able to make use of larger unconfounded sets to produce better estimates of the CATE function.See Figure 2 for the performance of our method vs. the various baselines. 6 Discussion In this paper we address a scenario that is becoming more and more common: users with large observational datasets who wish to extract causal insights using their data and help from unconfounded experiments on different populations. One direction for future work is combining the current work with work that looks explicitly into the causal graph connecting the covariates, including unmeasured ones [TT15, MMC16]. Another direction includes cases where the outcomes or interventions are not directly comparable, but where the difference can be modeled. For example, experimental studies often only study short-term outcomes, whereas the observational study might track long-term outcomes which are of more interest [ACIK16]. Acknowledgements We wish to thank the anonymous reviewers for their helpful suggestions and comments. (NK) This material is based upon work supported by the National Science Foundation under Grant No. 1656996.
1. What is the main contribution of the paper regarding experimental data removal of bias in observational studies? 2. How practical and useful is the proposed method compared to other datasets? 3. How can the variance of the estimator be discussed, and how can a CLT be proved or argued heuristically for obtaining confidence intervals? 4. How does the precision of the estimator $\hat{\theta}$ relate to the sample size of the UNCONF dataset, and how does it compare to a naive estimator with lower variance thanks to a large sample size of CONF? 5. How can the tradeoff between bias and variance in the choice between the proposed estimator and a naive estimator that assumes unconfoundedness be discussed? 6. Can the insight from the paper about pushing the burden of extrapolation to the confounding term be emphasized more? 7. How can the language used in the paper be made more precise and less imprecise? 8. Can the example in lines 110-119 be made less confusing by being explicit about assuming dutiful patients are more likely to seek treatment? 9. Can the sentence on line 131 be revised to use the correct terminology for an estimator, and clarify what "unbiased" means in this context? 10. Can the meaning of the sentence on line 132 be clarified, and why Y is superscripted by CATE? 11. Can the proof of the lemma on lines 139-144 be eliminated, and the result stated as a lemma instead? 12. Would using single letters "C" and "U" instead of superscripts "Conf" and "Unc" improve readability? 13. Are there any typos or minor errors in the paper that should be corrected before publication?
Review
Review Summary: This paper proposes a new method for leveraging experimental data to remove the bias arising from potential confounders in a related observational study. I find the key idea of the paper to be simple but clever, interesting, and potentially useful. The simulation based on real data (Section 5.2) is in my opinion quite convincing. On the other hand, I find the general exposition to be imprecise and difficult to read. I think that this manuscript has the substance to be a good paper if major work is undertaken to tighten it up and clarify the exposition. Specific comments below: Substance: 1) I believe that the method you propose is sensible, but I am not yet convinced by its practical significance. It would be helpful if you could point to a number of real world datasets on which such a method could be used. I am not asking you to actually apply your method to other datasets, nor am I asking you to point to datasets you have access to — what I am asking is whether there are any concrete datasets you know of that could benefit from your method? 2) There is a general sense in which when estimating causal effects in observational studies, bias is often more of an issue than variance. I agree with that view, overall. Still, it would be useful if you could briefly discuss the variance of your estimator, and propose a way to estimate that variance. Proving a CLT (or if you can’t prove it, at least argue heuristically for it) would be also helpful for obtaining confidence intervals, etc… 3) The precision of your estimator $\hat{\theta}$ is limited by the sample size of the UNCONF dataset. Now if you look at the mean squared error of the estimator $\hat{\tau}$ that you construct, it is quite possible that because of the potential large variance of $\hat{\theta}$, it exceeds that of a naive estimator that will be biased due to confounding but will have much lower variance thanks to the large sample size of CONF. In effect, I am pointing to the fact that there is a bias / variance tradeoff at play here, in the choice between your estimator, and a naive estimator that assumes unconfoundedness. It would be useful to discuss this point. 4) In the introduction you write: “We discuss below why this [parametric correction] assumption is possibly reasonable […]”. I think that the insight from your paper — pushing the burden of extrapolation to the confounding term — is a very good one. You discuss this briefly on lines 156-159, but I think that you should emphasize this more. Clarity: I would usually include these remarks as minor comments, but in this particular case they are not minor. I am taking the time to be specific because I really believe that the paper could be improved dramatically if the writing was tighter. 1) The language you use is very imprecise: - l.68: what do you mean by “scale” and “scope”? - l.69: you talk about unconfoundedness, but you only define it on l.91. The sentence on l. 90: “the key assumption we make about the unconfounded data is its unconfoundedness” should alert you to the fact that something is wrong. - The example in l.110-119 is confusing. What you assume is that dutiful patients are more likely to seek treatment. You should be explicit about this. As an aside, it is a matter of personal preference but you could consider wrapping this into an “example” environment to set it apart from the main text. - l.131: \omega is not an estimator, and unbiased is not the right word for it. 
- l 132: “[…] which interpolates between \hat{omega} to the RCT outcomes” what does this sentence mean? - l. 238-242 is confusing. Why do you superscript Y by CATE? - l. 248: you mean that you are sampling a *fraction* q’ of the rural… etc…. Please go over your paper carefully eliminating these kinds of things. 2) l.139-144 should be a lemma. The proof does not add to the comprehension. 3) Please consider not using superscripts like “Conf” and “Unc”. Single letters “C” and “U” would do. Also, using the letter E is usually not a good idea when you take expectations (even if you use a slightly different symbol for expectations). 4) Typos: - l.40: “this observational sample which might suffer […]” -> remove the word “which”. - l.131 and l.132: its \omega not omega - l.138: “doing so would be difficuly” -> difficult - l 139: “[…] relies on a simply identity” -> simple etc…
NIPS
Title Removing Hidden Confounding by Experimental Grounding Abstract Observational data is increasingly used as a means for making individual-level causal predictions and intervention recommendations. The foremost challenge of causal inference from observational data is hidden confounding, whose presence cannot be tested in data and can invalidate any causal conclusion. Experimental data does not suffer from confounding but is usually limited in both scope and scale. We introduce a novel method of using limited experimental data to correct the hidden confounding in causal effect models trained on larger observational data, even if the observational data does not fully overlap with the experimental data. Our method makes strictly weaker assumptions than existing approaches, and we prove conditions under which it yields a consistent estimator. We demonstrate our method’s efficacy using real-world data from a large educational experiment. 1 Introduction In domains such as healthcare, education, and marketing there is growing interest in using observational data to draw causal conclusions about individual-level effects; for example, using electronic healthcare records to determine which patients should get what treatments, using school records to optimize educational policy interventions, or using past advertising campaign data to refine targeting and maximize lift. Observational datasets, due to their often very large number of samples and exhaustive scope (many measured covariates) in comparison to experimental datasets, offer a unique opportunity to uncover fine-grained effects that may apply to many target populations. However, a significant obstacle when attempting to draw causal conclusions from observational data is the problem of hidden confounders: factors that affect both treatment assignment and outcome, but are unmeasured in the observational data. Example cases where hidden confounders arise include physicians prescribing medication based on indicators not present in the health record, or classes being assigned a teacher’s aide because of special efforts by a competent school principal. Hidden confounding can lead to no-vanishing bias in causal estimates even in the limit of infinite samples [Pea09]. In an observational study, one can never prove that there is no hidden confounding [Pea09]. However, a possible fix can be found if there exists a Randomized Controlled Trial (RCT) testing the effect of the intervention in question. For example, if a Health Management Organization (HMO) is considering the effect of a medication on its patient population, it might look at an RCT which tested this medication. The problem with using RCTs is that often their participants do not fully reflect the target population. As an example, an HMO in California might have to use an RCT from Switzerland, conducted perhaps several years ago, on a much smaller population. The problem of generalizing conclusions from an RCT to a different target population is known as the problem of external validity [Rot05, AO17], or more specifically, transportability [BP13, PB14]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. In this paper, we are interested in the case where fine-grained causal inference is sought, in the form of Conditional Average Treatment Effects (CATE), where we consider a large set of covariates, enough to identify each unit. We aim at using a large observational sample and a possibly much smaller experimental sample. 
The typical use case we have in mind is of a user who wishes to estimate CATE and has a relatively large observational sample that covers their population of interest. This observational sample might suffer from hidden confounding, as all observational data will to some extent, but they also have a smaller sample from an experiment, albeit one that might not directly reflect their population of interest. For example, consider The Women’s Health Initiative [Ros02] where there was a big previous observational study and a smaller RCT to study hormone replacement therapy. The studies ended up with opposite results and there is intense discussion about confounding and external validity: the RCT was limited due to covering a fundamentally different (healthier and younger) population compared with the observational study [HAL+08, Van09]. Differently from previous work on estimating CATE from observational data, our approach does not assume that all confounders have been measured, and we only assume that the support of the experimental study has some overlap with the support of the observational study. The major assumption we do make is that we can learn the structure of the hidden confounding by comparing the observational and experimental samples. Specifically, rather than assuming that effects themselves have a parametric structure – a questionable assumption that is bound to lead to dangerous extrapolation from small experiments – we only assume that this hidden confounding function has a parametric structure that we can extrapolate. Thus we limit ourselves to a parametric correction of a possibly complex effect function learned on the observational data. We discuss why this assumption is possibly reasonable. Specifically, as long as the parametric family includes the zero function, this assumption is strictly weaker than assuming that all confounders in the observational study have been observed. One way to view our approach is that we bring together an unbiased but high-variance estimator from the RCT (possibly infinite-variance when the RCT has zero overlap with the target population) and a biased but low-variance estimator from the observational study. This achieves a consistent (vanishing bias and variance) CATE estimator. Finally, we run experiments on both simulation and real-world data and show our method outperforms the standard approaches to this problem. In particular, we use data from a large-scale RCT measuring the effect of small classrooms and teacher’s aids [WJB+90, Kru99] to obtain ground-truth estimates of causal effects, which we then try and reproduce from a confounded observational study. 2 Setup We focus on studying a binary treatment, which we interpret as the presence or absence of an intervention of interest. To study its fine-grained effects on individuals, we consider having treatmentoutcome data from two sources: an observational study that may be subject to hidden confounding, and an unconfounded study, typically coming from an experiment. The observational data consists of baseline covariates XConfi ∈ Rd, assigned treatments TConfi ∈ {0, 1}, and observed outcomes Y Confi ∈ R for i = 1, . . . , nConf. Similarly, the unconfounded data consists of XUnci , TUnci , Y Unci for i = 1, . . . , nUnc. 
Conceptually, we focus on the setting where (1) the observational data is of much larger scale nUnc nConf and/or (2) the support of the unconfounded data Support(XUnci ) = {x : P ( ‖XUnci − x‖ ≤ δ ) > 0 ∀δ > 0}, does not include the population about which we want to make causal conclusions and targeted interventions. This means that the observational data has both the scale and the scope we want but the presence of confounding limits the study of causal effects, while the unconfounded experimental data has unconfoundedness but does not have the scale and/or scope necessary to study the individual-level effects of interest. The unconfounded data usually comes from an RCT that was conducted on a smaller scale on a different population, as presented in the previous section. Alternatively, and equivalently for our formalism, it can arise from recognizing a latent unconfounded sub-experiment within the observational study. For example, we may have information from the data generation process that indicates that treatment for certain units was actually assigned purely as a (possibly stochastic) function of the observed covariates x. Two examples of this would be when certain prognoses dictate a strict rule-based treatment assignment or in situations of known equipoise after a certain prognosis, where there is no evidence guiding treatment one way or the other and its assignment is as if at random based on the individual who ends up administering it. Regardless if the unconfounded data came from a secondary RCT (more common) or from within the observational dataset, our mathematical set up remains the same. Formally, we consider each dataset to be iid draws from two different super-populations, indicated by the event E taking either the value EConf or EUnc. The observational data are iid draws from the population given by conditioning on the event EConf: XConfi , T Conf i , Y Conf i ∼ (X,T, Y | EConf) iid. Similarly, XUnci , T Unc i , Y Unc i ∼ (X,T, Y | EUnc). Using potential outcome notation, assuming the standard Stable Unit Treatment Value Assumption (SUTVA), which posits no interference and consistency between observed and potential outcomes, we let Y (0), Y (1) be the potential outcomes of administering each of the two treatments and Y = Y (T ) = TY (1) + (1− T )Y (0). The quantity we are interested in is the Conditional Average Treatment Effect (CATE): Definition 1 (CATE). Let τ(x) = E [Y (1)− Y (0)|X = x]. The key assumption we make about the unconfounded data is its unconfoundedness: Assumption 1. [Unconfounded experiment] (i) Y (0), Y (1) ⊥ T | X, EUnc (ii) Y (0), Y (1) ⊥ EUnc | X. This assumption holds if the unconfounded data was generated in a randomized control trial. More generally, it is functionally equivalent to assuming that the unconfounded data was generated by running a logging policy on a contextual bandit, that is, first covariates are drawn from the unconfounded population X | EUnc and revealed, then a treatment T is chosen, the outcomes are drawn based on the covariates Y (0), Y (1) | X , but only the outcome corresponding to the chosen treatment Y = Y (T ) is revealed. The second part of the assumption means that merely being in the unconfounded study does not affect the potential outcomes conditioned on the covariates X . It implies that the functional relationship between the unobserved confounders and the potential outcomes is the same in both studies. This will fail if for example knowing you are part of a study causes you to react differently to the same treatment. 
We note that this assumption is strictly weaker than the standard ignorability assumption in observational studies. This assumption implies that for covariates within the domain of the experiment, we can identify the value of CATE using regression. Specifically, if x ∈ Support(X | EUnc), that is, if P ( ‖X − x‖ ≤ δ | EUnc ) > 0 ∀δ > 0, then τ(x) = E [ Y | T = 1, X = x,EUnc ] − E [ Y | T = 0, X = x,EUnc ] , where E [ Y | T = t,X = x,EUnc ] can be identified by regressing observed outcomes on treatment and covariates in the unconfounded data. However, this identification of CATE is (i) limited to the restricted domain of the experiment and (ii) hindered by the limited amount of data available in the unconfounded sample. The hope is to overcome these obstacles using the observational data. Importantly, however, the unconfoundedness assumption is not assumed to hold for the observational data, which may be subject to unmeasured confounding. That is, both selection into the observational study and the selection of the treatment may be confounded with the potential outcomes of any one treatment. Let us denote the difference in conditional average outcomes in the observational data by ω(x) = E [ Y | T = 1, X = x,EConf ] − E [ Y | T = 0, X = x,EConf ] . Note that due to confounding factors, ω(x) 6= τ(x) for any x, whether in the support of the observational study or not. The difference between these two quantities is precisely the confounding effect, which we denote η(x) = τ(x)− ω(x). Another way to express this term is: η(x) = {E [Y (1)|x]− E [Y (1)|x, T = 1]} − {E [Y (0)|x]− E [Y (0)|x, T = 0]}. Note that if the observational study were unconfounded then we would have η(x) = 0. Further note that a standard assumption in the vast majority of methodological literature makes the assumption that η(x) ≡ 0, even though it is widely acknowledged that this assumption isn’t realistic, and is at best an approximation. Example. In order to better understand the function η(x), consider the following case: Assume there are two equally likely types of patients,“dutiful” and “negligent”. Dutiful patients take care of their general health and are more likely to seek treatment, while negligent patients do not. Assume T = 1 is a medical treatment that requires the patient to see a physician, do lab tests, and obtain a prescription if indeed needed, while T = 0 means no treatment. Let Y be some measure of health, say blood pressure. In this scenario, where patients are self-selected into treatment (to a certain degree), we would expect that both potential outcomes would be greater for the treated over the control: E [Y (1)|T = 1] > E [Y (1)|T = 0], E [Y (0)|T = 1] > E [Y (0)|T = 0]. Since E [Y (1)] = E [Y (1)|T = 1] p(T = 1)+E [Y (1)|T = 0] p(T = 0) we also have that E [Y (1)] > E [Y (1)|T = 1], and E [Y (0)] > E [Y (0)|T = 0] unless p(T = 0) = 1. Taken together, this shows that in the above scenario, we expect η < 0, if we haven’t measured any X . This logic carries through in the plausible scenario where we have measured some X , but do not have access to all the variables X that allows us to tell apart “dutiful” from “negligent” patients. To sum up, this example shows that in cases where some units are selected such as those more likely to be treated are those whose potential outcomes are higher (resp. lower) anyway, we can expect η to be negative (resp. positive). 3 Method Given data from both the unconfounded and confounded studies, we propose the following recipe for removing the hidden confounding. 
First, we learn a function ω̂ over the observational sample {(X_i^Conf, T_i^Conf, Y_i^Conf)}_{i=1}^{n^Conf}. This can be done using any CATE estimation method, such as learning two regression functions for the treated and control and taking their difference, or specially constructed methods such as Causal Forest [WA17]. Since we assume this sample has hidden confounding, ω is not equal to the true CATE and correspondingly ω̂ does not estimate the true CATE. We then learn a correction term which interpolates between ω̂ evaluated on the RCT samples X_i^Unc and the RCT outcomes Y_i^Unc. This is a correction term for hidden confounding, which is our estimate of η. The correction term allows us to extrapolate τ over the confounded sample, using the identity τ(X) = ω(X) + η(X). Note that we could not have gone the other way round: if we were to start by estimating τ over the unconfounded sample and then estimate η using the samples from the confounded study, we would end up constructing an estimate of ω(x), which is not the quantity of interest. Moreover, doing so would be difficult, as the unconfounded sample is not expected to cover the confounded one. Specifically, the way we use the RCT samples relies on a simple identity. Let e^Unc(x) = P(T = 1 | X = x, E^Unc) be the propensity score on the unconfounded sample. If this sample is an RCT, then typically e^Unc(x) = q for some constant, often q = 0.5. Let

q(X_i^Unc) = T_i^Unc / e^Unc(X_i^Unc) − (1 − T_i^Unc) / (1 − e^Unc(X_i^Unc))

be a signed re-weighting function. We have:

Lemma 1. E[ q(X_i^Unc) Y_i^Unc | X_i^Unc, E^Unc ] = τ(X_i^Unc).   (1)

What Lemma 1 shows us is that q(X_i^Unc) Y_i^Unc is an unbiased estimate of τ(X_i^Unc). We now use this fact to learn η as follows:

θ̂ = argmin_θ Σ_{i=1}^{n^Unc} ( q(X_i^Unc) Y_i^Unc − ω̂(X_i^Unc) − θ^⊤ X_i^Unc )².   (2)

Let τ̂(x) = ω̂(x) + θ̂^⊤ x.   (3)

The method is summarized in Algorithm 1.

Algorithm 1 Remove hidden confounding with unconfounded sample
1: Input: Unconfounded sample with propensity scores D^Unc = {(X_i^Unc, T_i^Unc, Y_i^Unc, e^Unc(X_i^Unc))}_{i=1}^{n^Unc}; confounded sample D^Conf = {(X_i^Conf, T_i^Conf, Y_i^Conf)}_{i=1}^{n^Conf}; algorithm Q for fitting CATE.
2: Run Q on D^Conf, obtain CATE estimate ω̂.
3: Let θ̂ be the solution of the optimization problem in Equation (2).
4: Set function τ̂(x) := ω̂(x) + θ̂^⊤ x.
5: Return: τ̂, an estimate of CATE over D^Conf.

Let us contrast our approach with two existing ones. The first is to simply learn the treatment effect function directly from the unconfounded data and extrapolate it to the observational sample. This is guaranteed to be unconfounded, and with a large enough unconfounded sample the CATE function can be learned [CHIM08, Pea15]. This approach is presented, for example, by [BP13] for the ATE, as the transport formula. However, extending this approach to CATE in our case is not as straightforward. The reason is that we assume that the confounded study does not fully overlap with the unconfounded study, which requires extrapolating the estimated CATE function into a region of sample space outside the region where it was fit. This requires strong parametric assumptions about the CATE function. On the other hand, we do have samples from the target region; they are simply confounded. One way to view our approach is that we move the extrapolation a step back: instead of extrapolating the CATE function, we merely extrapolate a correction due to hidden confounding. In the case that the CATE function does actually extrapolate well, we do no harm: we learn η ≈ 0.
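Before turning to the second alternative, the following is a minimal sketch of Algorithm 1 in Python with NumPy and scikit-learn. It is our illustrative reading of the recipe rather than the authors' released code: the helper names (fit_omega_hat, fit_eta_correction) and the choice of a random-forest T-learner for ω̂ are our own assumptions, and any consistent CATE learner (e.g., Causal Forest) can be substituted in step 2.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def fit_omega_hat(X_conf, T_conf, Y_conf):
    """Step 2: fit the (confounded) outcome-difference function omega-hat on the
    observational sample, here as a simple T-learner (two separate regressions)."""
    m1 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 1], Y_conf[T_conf == 1])
    m0 = RandomForestRegressor(n_estimators=200).fit(X_conf[T_conf == 0], Y_conf[T_conf == 0])
    return lambda X: m1.predict(X) - m0.predict(X)


def fit_eta_correction(X_unc, T_unc, Y_unc, e_unc, omega_hat):
    """Step 3: fit the linear confounding correction theta by least squares on the
    signed re-weighted RCT outcomes q(X) * Y (Lemma 1 / Eq. (2))."""
    q = T_unc / e_unc - (1 - T_unc) / (1 - e_unc)              # signed IPW weights
    residual = q * Y_unc - omega_hat(X_unc)                     # q(X) Y - omega_hat(X)
    theta, *_ = np.linalg.lstsq(X_unc, residual, rcond=None)    # argmin_theta of Eq. (2)
    return theta


def remove_hidden_confounding(X_conf, T_conf, Y_conf, X_unc, T_unc, Y_unc, e_unc):
    """Algorithm 1: returns tau_hat(x) = omega_hat(x) + theta^T x (Eq. (3))."""
    omega_hat = fit_omega_hat(X_conf, T_conf, Y_conf)
    theta = fit_eta_correction(X_unc, T_unc, Y_unc, e_unc, omega_hat)
    return lambda X: omega_hat(X) + X @ theta
```

Because η is assumed linear, step 3 reduces to an ordinary least-squares fit on the n^Unc experimental points, so the correction adds essentially no cost beyond fitting ω̂; an intercept term can be included by appending a constant column to X^Unc.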
The second alternative relies on re-weighting the RCT population so as to make it similar to the target, observational population [SCBL11, HGRS15, AO17]. These approaches suffer from two important drawbacks from our point of view: (i) they assume the observational study has no unmeasured confounders, which is often an unrealistic assumption; (ii) they assume that the support of the observational study is contained within the support of the experimental study, which again is unrealistic as the experimental studies are often smaller and on somewhat different populations. If we were to apply these approaches to our case, we would be re-weighting by the inverse of weights which are close to, or even identical to, 0. 4 Theoretical guarantee We prove that under conditions of parametric identification of η, Algorithm 1 recovers a consistent estimate of τ(x) over the EConf, at a rate which is governed by the rate of estimating ω by ω̂. For the sake of clarity, we focus on a linear specification of η. Other parametric specifications can easily be accommodated given that the appropriate identification criteria hold (for linear this is the non-singularity of the design matrix). Note that this result is strictly stronger than results about CATE identification which rely on ignorability: what enables the improvement is of course the presence of the unconfounded sample EUnc. Also note that this result is strictly stronger than the transport formula [BP13] and re-weighting such as [AO17]. Theorem 1. Suppose 1. ω̂ is a consistent estimator on the observational data (on which it’s trained): E[(ω̂(X)− ω(X))2 | EConf] = O(r(n)) for r(n) = o(1) 2. The covariates in the confounded data cover those in the unconfounded data (strong one-way overlap): ∃κ > 0 : P ( EUnc | X ) ≤ κP ( EConf | X ) 3. η is linear: ∃θ0 : η(x) = θ>0 x 4. Identifiability of θ0: E[XX> | EUnc] is non-singular 5. X , Y , and ω̂(X) have finite fourth moments in the experimental data: E[‖X‖42 | EUnc] <∞, E[Y 4 | EUnc] <∞, E[ω̂(X)4 | EUnc] <∞ 6. Strong overlap between treatments in unconfounded data: ∃ν > 0 : ν ≤ eUnc(X) ≤ 1− ν Then θ̂ is consistent ‖θ̂ − θ0‖22 = Op(r(n) + 1/n) and τ̂ is consistent on its target population ((τ̂(X)− τ(X))2 | EConf) = Op(r(n) + 1/n) There are a few things to note about the result and its conditions. First, we note that if the so-called confounded observational sample is in fact unconfounded then we immediately get that the linear specification of η is correct with θ0 = 0 because we simply have η(x) = 0. Therefore, our conditions are strictly weaker than imposing unconfoundedness on the observational data. Condition 1 requires that our base method for learning ω is consistent just as a regression method. There are a few ways to guarantee this. For example, if we fit ω̂ by empirical risk minimization on weighted outcomes over a function class of finite capacity (such as a VC class) or if we fit as the difference of two regression functions each fit by empirical risk minimization on observed outcomes in each treatment group, then standard results in statistical learning [BM02] ensure the consistency of L2 risk and therefore the L2 convergence required in condition 1. Alternatively, any method for learning CATE that would have been consistent for CATE under unconfoundedness would actually still be consistent for ω if applied. 
Therefore we can also rely on such base method as causal forests [WA17] and other methods that target CATE as inputs to our method, even if they don’t actually learn CATE here due to confounding. Condition 2 captures our understanding of the observational dataset having a larger scope than the experimental dataset. The condition essentially requires a strong form of absolute continuity between the two covariate distributions. This condition could potentially be relaxed so long as there is enough intersection where we can learn η. So for example, if there is a subset of the experiment that the observational data covers, that would be sufficient so long as we can also ensure that condition 4 still remains valid on that subset so that we can learn the sufficient parameters for η. Condition 3, the linear specification of η, can be replaced with another one so long as it has finitely many parameters and they can be identified on the experimental dataset, i.e., condition 4 above would change appropriately. Since unconfoundedness implies η = 0, whenever the parametric specification of η contains the zero function (e.g., as in the linear case above since θ0 = 0 is allowed) condition 3 is strictly weaker than assuming unconfoundedness. In that sense, our method can consistently estimate CATE on a population where no experimental data exists under weaker conditions than existing methods, which assume the observational data is unconfounded. Condition 5 is trivially satisfied whenever outcomes and covariates are bounded. Similarly, we would expect that if the first two parts of condition 5 hold (about X and Y ) then the last one about ω̂ would also hold as it is predicting outcomes Y . That is, the last part of condition 5 is essentially a requirement on our ω̂-leaner base method that it’s not doing anything strange like adding unnecessary noise to Y thereby making it have fewer moments. For all base methods that we consider, this would come for free because they are only averaging outcomes Y . We also note that if we impose the existence of even higher moments as well as pointwise asymptotic normality of ω̂, one can easily transform the result to an asymptotic normality result. Standard error estimates will in turn require a variance estimate of ω̂. Finally, we note that condition 6, which requires strong overlap, only needs to hold in the unconfounded sample. This is important as it would be a rather strong requirement in the confounded sample where treatment choices may depend on high dimensional variables [DDF+17], but it is a weak condition for the experimental data. Specifically, if the unconfounded sample arose from an RCT then propensities would be constant and the condition would hold trivially. 5 Experiments In order to illustrate the validity and usefulness of our proposed method we conduct simulation experiments and experiments with real-world data taken from the Tennessee STAR study: a large longterm school study where students were randomized to different types of classes [WJB+90, Kru99]. 5.1 Simulation study We generate data simulating a situation where there exists an un-confounded dataset and a confounded dataset, with only partial overlap. Let X ∈ R be a measured covariate, T ∈ {0, 1} binary treatment assignment, U ∈ R an unmeasured confounder, and Y ∈ R the outcome. We are interested in τ(X). We generate the unconfounded sample as follows: XUnc ∼ Uniform [−1, 1], UUnc ∼ N (0, 1), TUnc ∼ Bernoulli(0.5). 
We generate the confounded sample as follows: we first sample TConf ∼ Bernoulli(0.5) and then sample XConf, UConf from a bivariate Gaussian (XConf, UConf)|TConf ∼ N ( [0, 0], [ 1 TConf − 0.5 TConf − 0.5 1 ]) . This means that XConf, UConf come from a Gaussian mixture model where TConf denotes the mixture component and the components have equal means but different covariance structures. This also implies that η is linear. For both datasets we set Y = 1 + T +X + 2 · T ·X + 0.5X2 + 0.75 · T ·X2 + U + 0.5 , where ∼ N (0, 1). The true CATE is therefore τ(X) = 0.75X2 + 2X + 1. We have that the true ω = τ +E[U |X,T = 1]−E[U |X,T = 0], which leads to the true η = x. We then apply our method (with a CF base) to learn η. We plot (See Figure 1) here the true and recovered η with our method. Even with the limited un-confounded set (between −1, 1) making the full scope of the x2 term in Y inaccessible, we are able to reasonably estimate τ . Other methods would suffer under the strong unobserved confounding. 5.2 Real-world data Validating causal-inference methods is hard because we almost never have access to true counterfactuals. We approach this challenge by using data from a randomized controlled trial, the Tennessee STAR study [WJB+90, Kru99, MISN18]. When using an RCT, we have access to unbiased CATEestimates because we are guaranteed unconfoundedness. We then artificially introduce confounding by selectively removing a biased subset of samples. The data: The Tennessee Student/Teacher Achievement Ratio (STAR) experiment is a randomized experiment started in 1985 to measure the effect of class size on student outcomes, measured by standardized test scores. The experiment started monitoring students in kindergarten and followed students until third grade. Students and teachers were randomly assigned into conditions during the first school year, with the intention for students to continue in their class-size condition for the entirety of the experiment. We focus on two of the experiment conditions: small classes(13-17 pupils), and regular classes(22-25 pupils). Since many students only started the study at first grade, we took as treatment their class-type at first grade. Overall we have 4509 students with treatment assignment at first grade. The outcome Y is the sum of the listening, reading, and math standardized test at the end of first grade. After removing students with missing outcomes 1, we remain with a randomized sample of 4218 students: 1805 assigned to treatment (small class, T = 1), and 2413 to control (regular size class, T = 0). In addition to treatment and outcome, we used the following covariates for each student: gender, race, birth month, birthday, birth year, free lunch given or not, teacher id. Our goal is to compute the CATE conditioned on this set of covariates, jointly denoted X . Computing ground-truth CATE: The STAR RCT allows us to obtain an unbiased estimate of the CATE. Specifically, we use the identity in Eq. (1), and the fact that in the study, the propensity scores 1The correlation between missing outcome and treatment assignment is R2 < 10−4. e(Xi) were constant. We define a ground-truth sample {(Xi, Y GTi )}ni=1, where Y GTi = Yiq+Ti−1 , q = p(T = 1). By Eq. (1) we know that E [ Y GTi |Xi ] = τ(Xi) within the STAR study. Introducing hidden confounding: Now that we have the ground-truth CATE, we wish to emulate the scenario which motivates our work. 
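Returning briefly to the synthetic design of Section 5.1, a minimal NumPy sketch of that data-generating process (as we read it) is given below; the sample sizes and the seed are arbitrary choices and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate(n_unc=1000, n_conf=5000):
    # Unconfounded (RCT-like) sample: X ~ U[-1, 1], U ~ N(0, 1), T ~ Bernoulli(0.5).
    X_u = rng.uniform(-1, 1, n_unc)
    U_u = rng.normal(0, 1, n_unc)
    T_u = rng.binomial(1, 0.5, n_unc)

    # Confounded sample: draw T first, then (X, U) from a bivariate Gaussian whose
    # X-U correlation is T - 0.5, inducing hidden confounding through U.
    T_c = rng.binomial(1, 0.5, n_conf)
    X_c = np.empty(n_conf)
    U_c = np.empty(n_conf)
    for t in (0, 1):
        idx = T_c == t
        cov = np.array([[1.0, t - 0.5], [t - 0.5, 1.0]])
        XU = rng.multivariate_normal([0.0, 0.0], cov, size=idx.sum())
        X_c[idx], U_c[idx] = XU[:, 0], XU[:, 1]

    def outcome(X, T, U):
        eps = rng.normal(0, 1, X.shape[0])
        return 1 + T + X + 2 * T * X + 0.5 * X**2 + 0.75 * T * X**2 + U + 0.5 * eps

    Y_u, Y_c = outcome(X_u, T_u, U_u), outcome(X_c, T_c, U_c)
    tau = lambda x: 0.75 * x**2 + 2 * x + 1   # true CATE
    eta = lambda x: x                         # linear confounding function, as stated in Sec. 5.1
    return (X_u, T_u, Y_u), (X_c, T_c, Y_c), tau, eta
```

Feeding these two samples (with X reshaped to a column vector) into Algorithm 1 with any regression-based ω̂ should approximately recover the linear η and hence τ, despite the unconfounded sample covering only [−1, 1].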
We split the entire dataset (ALL) into a small unconfounded subset (UNC), and a larger, confounded subset (CONF) over a somewhat different population. We do this by splitting the population over a variable which is known to be a strong determinant of outcome [Kru99]: rural or inner-city (2811 students) vs. urban or suburban (1407 students). We generate UNC by randomly sampling a fraction q′ of the rural or inner-city students, where q′ ranges from 0.1 to 0.5. Over this sample, we know that treatment assignment was at random. When generating CONF, we wish to obtain two goals: (a) the support of CONF should have only a partial overlap with the support of UNC, and (b) treatment assignment should be confounded, i.e. the treated and control populations should be systematically different in their potential outcomes. In order to achieve these goals, we generate CONF as follows: From the rural or inner-city students, we take the controls (T = 0) that were not sampled in UNC, and only the treated (T = 1) whose outcomes were in the lower half of outcomes among treated rural or inner-city students. From the urban or suburban students, we take all of the controls, and only the treated whose outcomes were in the lower half of outcomes among treated urban or suburban students. This procedure results in UNC and CONF populations which do not fully overlap: UNC has only rural or inner-city students, while CONF has a substantial subset (roughly one half for q′ = 0.5) of urban and suburban students. It also creates confounding, by removing the students with the higher scores selectively from the treated population. This biases the naive treatment effect estimates downward. We further complicate matters by dropping the covariate indicating rural, inner-city, urban or suburban from all subsequent analysis. Therefore, we have significant unmeasured confounding in the CONF population, and also the unconfounded ground-truth in the original, ALL population. Metric: In our experiments, we assume we have access to samples from UNC and CONF. We use either UNC, CONF or both to fit various models for predicting CATE. We then evaluate how well the CATE predictions match Y GTi on a held-out sample from ALL \ UNC (the set ALL minus the set UNC), in terms of RMSE. Note that we are not evaluating on CONF, but on the unconfounded version of CONF, which is exactly ALL \ UNC. The reason we don’t evaluate on ALL is twofold: First, it will only make the task easier because of the nature of the UNC set; second, we are motivated by the scenario where we have a confounded observational study representing the target population of interest, and wish to be aided by a separate unconfounded study (typically an RCT) available for a different population. We focus on a held-out set in order to avoid giving too much of an advantage to methods which can simply fit the observed outcomes well. Baselines: As a baseline we fit CATE using standard methods on either the UNC set or the CONF set. Fitting on the UNC set is essentially a CATE version of applying the transport formula [PB14]. Fitting on the CONF set amounts to assuming ignorability (which is wrong in this case), and using standard methods. The methods we use to estimate CATE are: (i) Regression method fit on Y GTi over UNC (ii) Regression method fit separately on treated and control in CONF (iii) Regression method fit separately on treated and control in UNC. The regression methods we use in (i)-(iii) are Random Forest with 200 trees and Ridge Regression with cross-validation. 
In baselines (ii) and (iii), the CATE is estimated as the difference between the prediction of the model fit on the treated and the prediction of the model fit on the control. We also experimented extensively with Causal Forest [WA17], but found it to uniformly perform worse than the other methods, even when given unfair advantages such as access to the entire dataset (ALL). Results: Our two-step method requires a method for fitting ω̂ on the confounded dataset. We experiment with two methods, which parallel those used as baseline: A regression method fit separately on treated and control in CONF, where we use either Random Forest with 200 trees or Ridge Regression with cross-validation as regression methods. We see that our methods, 2-step RF and 2-step ridge, consistently produce more accurate estimates than the baselines. We see that our methods in particular are able to make use of larger unconfounded sets to produce better estimates of the CATE function.See Figure 2 for the performance of our method vs. the various baselines. 6 Discussion In this paper we address a scenario that is becoming more and more common: users with large observational datasets who wish to extract causal insights using their data and help from unconfounded experiments on different populations. One direction for future work is combining the current work with work that looks explicitly into the causal graph connecting the covariates, including unmeasured ones [TT15, MMC16]. Another direction includes cases where the outcomes or interventions are not directly comparable, but where the difference can be modeled. For example, experimental studies often only study short-term outcomes, whereas the observational study might track long-term outcomes which are of more interest [ACIK16]. Acknowledgements We wish to thank the anonymous reviewers for their helpful suggestions and comments. (NK) This material is based upon work supported by the National Science Foundation under Grant No. 1656996.
1. What is the focus of the paper regarding controlling for hidden confounding bias?
2. What is the proposed solution by the authors, and how does it differ from other approaches?
3. What are the concerns regarding the assumptions made in the paper, particularly about unobserved confounders?
4. How does the reviewer assess the technical content and validity of the paper's assumptions?
5. What are the issues with the data generating process used in the simulation study, and how do they affect the interpretation of the results?
6. Are there any minor errors or typos in the paper that the reviewer noticed?
7. What is the main contribution of the paper, and how does it expand the toolkit available to practitioners?
8. Can you identify potential applied areas that could benefit from this research?
Review
Review This paper tackles the challenging problem of controlling for hidden confounding bias by using a smaller, but still related, randomized experiment. More specifically, the authors propose an interesting methodology that first estimates the difference in conditional average outcomes in the observational data, and then estimates a "correction" term from the randomized experiment. Many authors have studied this problem; however, few have proposed such a solution. The paper adds a new perspective that has many potential applications. The technical content of the paper appears to be correct in the sense that, given their assumptions, their technical results hold. My primary concern in this area is the validity of their assumptions. In particular, for the randomized experiment to be useful, they require that the relationship between any unobserved confounders and the outcome remain constant in both the observational study and the randomized experiment. I think this assumption needs a little more discussion, as there are examples where it does not hold. Can you say anything more about this? I think the data generating process in their simulation study is rather strange. Usually, in observational studies, we assume that the treatment is assigned as a function of the covariates, rather than the covariates being generated as a function of the treatment. The end results are pretty much the same - there is a correlation between T and U; however, the interpretation is different. Overall, I found the paper well written and reasonably clear, although there are some minor errors; the main ones are:
- writing omega instead of \omega,
- not formally defining the event E,
- line 82 "population": I assume you mean a "super population" - this is a very important distinction in causal inference!
The main contribution of this paper is the method summarized in Algorithm 1, which describes how the unobserved confounding bias can be removed, and the accompanying theorem that provides theoretical guarantees. This is novel work and is something that expands the toolkit available to practitioners. I believe that this is a significant contribution, as it provides a new and underexplored avenue for further research. In particular, I can envision many applied areas that can utilize and expand this work.
NIPS
Title ISAAC Newton: Input-based Approximate Curvature for Newton's Method Abstract We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that conditions the gradient using selected second-order information and has an asymptotically vanishing computational overhead, assuming a batch size smaller than the number of neurons. We show that it is possible to compute a good conditioner based on only the input to a respective layer without a substantial computational overhead. The proposed method allows effective training even in small-batch stochastic regimes, which makes it competitive to first-order as well as quasi-Newton methods. 1 Introduction While second-order optimization methods are traditionally much less explored than first-order methods in large-scale machine learning (ML) applications due to their memory requirements and prohibitive computational cost per iteration, they have recently become more popular in ML mainly due to their fast convergence properties when compared to first-order methods [1]. The expensive computation of an inverse Hessian (also known as pre-conditioning matrix) in the Newton step has also been tackled via estimating the curvature from the change in gradients. Loosely speaking, these algorithms are known as quasi-Newton methods and a comprehensive treatment can be found in the textbook [2]. In addition, various new approximations to the pre-conditioning matrix have been proposed in the recent literature [3]–[6]. From a theoretical perspective, second-order optimization methods are not nearly as well understood as first-order methods. It is an active research direction to fill this gap [7], [8]. Motivated by the task of training neural networks, and the observation that invoking local curvature information associated with neural network objective functions can achieve much faster progress per iteration than standard first-order methods [9]–[11], several methods have been proposed. One of these methods, that received significant attention, is known as Kronecker-factored Approximate Curvature (K-FAC) [12], whose main ingredient is a sophisticated approximation to the generalized Gauss-Newton matrix and the Fisher information matrix quantifying the curvature of the underlying neural network objective function, which then can be inverted efficiently. Inspired by the K-FAC approximation and the Tikhonov regularization of the Newton method, we introduce a novel two parameter regularized Kronecker-factorized Newton update step.
The proposed29 scheme disentangles the classical Tikhonov regularization and allows us to condition the gradient30 using selected second-order information and has an asymptotically vanishing computational overhead.31 While this property makes the presented method highly attractive from the computational complexity32 perspective, we show that its achieved empirical performance on complicated high-dimensional33 Machine Learning problems remains comparable to existing state-of-the-art methods.34 The contributions of this paper can be summarized as follows: (i) we propose a novel two parameter35 regularized K-FAC approximated Gauss-Newton update step; (ii) we show that asymptotically—as36 Submitted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Do not distribute. both regularization parameters vanish—our method recovers the classical K-FAC scheme and in37 the opposite setting—as both regularization parameters grow—our method asymptotically reduces38 to classical gradient descent; (iii) we prove that for an arbitrary pair of regularization parameters,39 the proposed update direction is always a direction of decreasing loss; (iv) in the limit, as one40 regularization parameter grows, we obtain an efficient and effective conditioning of the gradient with41 an asymptotically vanishing overhead; (v) we empirically analyze the presented method and find that42 our efficient conditioning method maintains the performance of its more expensive counterpart; (vi)43 we demonstrate the effectiveness of the presented method in the setting of small-batch stochastic44 regimes and observe that it is competitive to first-order as well as quasi-Newton methods.45 2 Preliminaries46 In this section, we review aspects of second-order optimization, with a focus on generalized Gauss-47 Newton methods. In combination with Kronecker factorization, this leads us to a new regularized48 update scheme. We consider the training of an L-layer neural network f(x; θ) defined recursively as49 zi ← ai−1W (i) (pre-activations), ai ← ϕ(zi) (activations), (1) where a0 = x is the vector of inputs and aL = f(x; θ) is the vector of outputs. Unless noted otherwise,50 we assume these vectors to be row vectors (i.e., in R1×n) as this allows for a direct extension to the51 (batch) vectorized case (i.e., in Rb×n) introduced later. For any layer i, let W (i) ∈ Rdi−1×di be a52 weight matrix and let ϕ be an element-wise nonlinear function. We consider a convex loss function53 L(y, y′) that measures the discrepancy between y and y′. The training optimization problem is then54 argmin θ Ex,y [L(f(x; θ), y)] , (2) where θ = [ θ(1), . . . , θ(L) ] with θ(i) = vec(W (i)).55 The classical Newton method for solving (2) is expressed as the update rule56 θ′ = θ − ηH−1θ ∇θL(f(x; θ), y) , (3) where η > 0 denotes the learning rate and Hθ is the Hessian corresponding to the objective function57 in (2). The stability and efficiency of an estimation problem solved via the Newton method can be58 improved by adding a Tikhonov regularization term [13] leading to a regularized Newton method59 θ′ = θ − η (Hθ + λI)−1∇θL(f(x; θ), y) , (4) where λ > 0 is the so-called Tikhonov regularization parameter. It is well-known [14], [15], that60 under the assumption of approximating the model f with its first-order Taylor expansion, the Hessian61 corresponds with the so-called generalized Gauss-Newton (GGN) matrix Gθ, and hence (4) can be62 expressed as63 θ′ = θ − η (Gθ + λI)−1∇θL(f(x; θ), y) . 
(5) A major practical limitation of (5) is the computation of the inverse term. A method that alleviates this64 difficulty is known as Kronecker-Factored Approximate Curvature (K-FAC) [12] which approximates65 the block-diagonal (i.e., layer-wise) empirical Hessian or GGN matrix. Inspired by K-FAC, there66 have been other works discussing approximations of Gθ and its inverse [15]. In the following, we67 discuss a popular approach that allows for (moderately) efficient computation.68 The generalized Gauss-Newton matrix Gθ is defined as69 Gθ = E [ (Jθf(x; θ)) ⊤∇2fL(f(x; θ), y)Jθf(x; θ) ] , (6) where J and H denote the Jacobian and Hessian matrices, respectively. Correspondingly, the diagonal70 block of Gθ corresponding to the weights of the ith layer W (i) is71 GW (i)=E [ (JW (i)f(x; θ)) ⊤∇2fL(f(x; θ), y)JW (i)f(x; θ) ] . According to the backpropagation rule Jθ(i)f(x; θ) = Jzif(x; θ) ai−1, a ⊤b = a ⊗ b, and the72 mixed-product property, we can rewrite GW (i) as73 GW (i)=E [( (Jzif(x; θ) ai−1) ⊤(∇2fL(f(x; θ), y))1/2 )( (∇2fL(f(x; θ), y))1/2 Jzif(x; θ) ai−1 )] (7) = E [ (ḡ⊤ai−1) ⊤(ḡ⊤ai−1) ] = E [ (ḡ ⊗ ai−1)⊤(ḡ ⊗ ai−1) ] = E [ (ḡ⊤ḡ)⊗ (a⊤i−1 ⊗ ai−1) ] , (8) where74 ḡ = (Jzif(x; θ)) ⊤ (∇2fL(f(x; θ), y))1/2 . (9) Remark 1 (Monte-Carlo Low-Rank Approximation for ḡ⊤ḡ). As ḡ is a matrix of shape m × di75 where m is the dimension of the output of f , ḡ is generally expensive to compute. Therefore, [12] use76 a low-rank Monte-Carlo approximation to estimate HfL(f(x; θ), y) and thereby ḡ⊤ḡ. For this, we77 need to use the distribution underlying the probabilistic model of our loss L (e.g., Gaussian for MSE78 loss, or a categorical distribution for cross entropy). Specifically, by sampling from this distribution79 pf (x) defined by the network output f(x; θ), we can get an estimator of HfL(f(x; θ), y) via the80 identity81 HfL(f(x; θ), y) = Eŷ∼pf (x) [ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) ] . (10) An extensive reference for this (as well as alternatives) can be found in Appendix A.2 of Dangel et82 al. [15]. The respective rank-1 approximation (denoted by ≜) of HfL(f(x; θ)) is83 HfL(f(x; θ), y) ≜ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) , where ŷ ∼ pf (x). Respectively, we can estimate ḡ⊤ḡ using this rank-1 approximation with84 ḡ ≜ (Jzif(x; θ)) ⊤∇fL(f(x; θ), ŷ) = ∇ziL(f(x; θ), ŷ) . (11) In analogy to ḡ, we introduce the gradient of training objective with respect to pre-activations zi as85 gi = (Jzif(x; θ)) ⊤∇fL(f(x; θ), y) = ∇ziL(f(x; θ), y) . (12) In other words, for a given layer, let g ∈ R1×di denote the gradient of the loss between an output and86 the ground truth and let ḡ ∈ Rm×di denote the derivative of the network f times the square root of87 the Hessian of the loss function (which may be approximated according to Remark 1), each of them88 with respect to the output zi of the given layer i. Note that ḡ is not equal to g and that they require one89 backpropagation pass each (or potentially many for the case of ḡ). This makes computing ḡ costly.90 Applying the K-FAC [12] approximation to (8) the expectation of Kronecker products can be91 approximated as the Kronecker product of expectations as92 G = E((ḡ⊤ḡ)⊗ (a⊤a)) ≈ E(ḡ⊤ḡ)⊗ E(a⊤a) , (13) where, for clarity, we drop the index of ai−1 in (8) and denote it with a; similarly we denote GW (i)93 as G. While the expectation of Kronecker products is generally not equal to the Kronecker product94 of expectations, this K-FAC approximation (13) has been shown to be fairly accurate in practice95 and to preserve the “coarse structure” of the GGN matrix [12]. 
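The structure of this approximation, and why it is cheap to work with, can be illustrated with a small PyTorch snippet. The toy dimensions and the random a and ḡ below are arbitrary assumptions that only serve to show the computation; they say nothing about the quality of the approximation on real training data.

```python
import torch

torch.manual_seed(0)
b, d_in, d_out = 8, 16, 16      # toy batch size and layer widths
a = torch.randn(b, d_in)         # layer inputs
g_bar = torch.randn(b, d_out)    # per-example backpropagated factors (rank-1 case, r = 1)

# Batch estimate of the exact expectation of Kronecker products, Eq. (8).
G_exact = sum(torch.kron(torch.outer(g_bar[i], g_bar[i]),
                         torch.outer(a[i], a[i])) for i in range(b)) / b

# K-FAC: Kronecker product of the two batch-averaged factors, Eqs. (13)-(15).
G_kfac = torch.kron(g_bar.T @ g_bar / b, a.T @ a / b)

rel_err = (G_exact - G_kfac).norm() / G_exact.norm()
print(f"relative Frobenius error of the K-FAC factorization: {rel_err.item():.3f}")

# The computational payoff: only the two small factors are ever inverted,
# since (B ⊗ A)^{-1} = B^{-1} ⊗ A^{-1}.
lam = 1e-3
A_inv = torch.linalg.inv(a.T @ a / b + lam * torch.eye(d_in))
B_inv = torch.linalg.inv(g_bar.T @ g_bar / b + lam * torch.eye(d_out))
G_kfac_inv = torch.kron(B_inv, A_inv)
```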
The K-FAC decomposition in (13)96 is convenient as the Kronecker product has the favorable property that for two matrices A,B the97 identity (A⊗B)−1 = A−1 ⊗B−1 which significantly simplifies the computation of an inverse.98 In practice, E(ḡ⊤ḡ) and E(a⊤a) can be computed by averaging over a batch of size b as99 E(ḡ⊤ḡ) ≃ ḡ⊤ḡ/b, E(a⊤a) ≃ a⊤a/b, (14) where we denote batches of g, ḡ and a, as g ∈ Rb×di , ḡ ∈ Rrb×di and a ∈ Rb×di−1 , where our layer100 has di−1 inputs, di outputs, b is the batch size, and r is either the number of outputs m or the rank of101 an approximation according to Remark 1. Correspondingly, the K-FAC approximation of the GGN102 matrix and its inverse are concisely expressed as103 G ≈ (ḡ⊤ḡ)⊗ (a⊤a)/b2 G−1 ≈ ( ḡ⊤ḡ )−1⊗(a⊤a)−1 · b2 . (15) Equipped with the standard terminology and setting, we now introduce the novel, regularized update104 step. First, inspired by the K-FAC approximation (13), the Tikhonov regularized Gauss-Newton105 method (5) can be approximated by106 θ(i)′ = θ(i) − η(ḡ⊤ḡ/b+ λI)−1 ⊗ (a⊤a/b+ λI)−1∇θ(i)L(f(x; θ)), (16) with regularization parameter λ > 0. A key observation, which is motivated by the structure of107 the above update, is to disentangle the two occurrences of λ into two independent regularization108 parameters λg, λa > 0. By defining the Kronecker-factorized Gauss-Newton update step as109 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1∇θ(i)L(f(x; θ)), (17) we obtain the concise update equation110 θ(i)′ = θ(i) − η∗ζ . (18) This update (18) is equivalent to update (16) when in the case of η∗ = ηλgλa and λ = λg = λa. This111 equivalence does not restrict η∗, λg, λa in any way, and changing λg or λa does not mean that we112 change our learning rate or step size η∗. Parameterizing ζ in (17) with the multiplicative terms λgλa113 makes the formulation more convenient for analysis.114 In this paper, we investigate the theoretical and empirical properties of the iterative update rule (18)115 and in particular show how the regularization parameters λg, λa affect the Kronecker-factorized116 Gauss-Newton update step ζ . When analyzing the Kronecker-factorized Gauss-Newton update step117 ζ , a particularly useful tool is the vector product identity,118 (( ḡ⊤ḡ )−1 ⊗ (a⊤a)−1) vec(g⊤a) = vec((ḡ⊤ḡ)−1 g⊤a (a⊤a)−1) , (19) where the gradient with respect to the weight matrix is g⊤a.119 3 Theoretical Guarantees120 In this section, we investigate the theoretical properties of the Kronecker-factorized Gauss-Newton121 update direction ζ as defined in (17). We recall that ζ introduces a Tikonov regularization, as it is122 commonly done in implementations of second order-based methods. Not surprisingly, we show that123 by decreasing the regularization parameters λg, λa the update rule (18) collapses (in the limit) to the124 classical Gauss-Newton method, and hence in the regime of small λg, λa the variable ζ describes the125 Gauss-Newton direction. Moreover, by increasing the regularization strength, we converge (in the126 limit) to the conventional gradient descent update step.127 The key observation is that, as we disentangle the regularization of the two Kronecker factors ḡ⊤ḡ128 and a⊤a, and consider the setting where only one regularizer is large (λg → ∞ to be precise),129 we obtain an update direction that can be computed highly efficiently. We show that this setting130 describes an approximated Gauss-Newton update scheme, whose superior numerical performance is131 then empirically demonstrated in Section 4.132 Theorem 1 (Properties of ζ ). 
The K-FAC based update step ζ as defined in (17) can be expressed as133 ζ = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (20) Moreover, ζ admits the following asymptotic properties:134 (i) In the limit of λg, λa → 0, 1λgλaζ is the K-FAC approximation of the Gauss-Newton step, i.e.,135 limλg,λa→0 1 λgλa ζ ≈ G−1∇θ(i)L(f(x; θ)), where ≈ denotes the K-FAC approximation (15).136 (ii) In the limit of λg, λa →∞, ζ is the gradient, i.e., limλg,λa→∞ ζ = ∇θ(i)L(f(x; θ)).137 The Proof is deferred to the Supplementary Material.138 We want to show that ζ is well-defined and points in the correct direction, not only for λg and λa139 numerically close to zero because we want to explore the full spectrum of settings for λg and λa.140 Thus, we prove that ζ is a direction of increasing loss, independent of the choices of λg and λa.141 Theorem 2 (Correctness of ζ is independent of λg and λa). ζ is a direction of increasing loss,142 independent of the choices of λg and λa.143 Proof. Recall that (λgIm+ḡ⊤ḡ/b) and (λaIn+a⊤a/b) are positive semi-definite (PSD) matrices by144 definition. Their inverses (λgIm + ḡ⊤ḡ/b)−1 and (λaIn + a⊤a/b)−1 are therefore also PSD. As the145 Kronecker product of PSD matrices is PSD, the conditioning matrix ((λgIm + ḡ⊤ḡ/b)−1 ⊗ (λaIn +146 a⊤a/b)−1 ≈ G−1) is PSD, and therefore the direction of the update step remains correct.147 From our formulation of ζ , we can find that, in the limit for λg →∞, Equation (21) does not depend148 on ḡ . This is computationally very beneficial as computing ḡ is costly as it requires one or even149 many additional backpropagation passes. In addition, it allows conditioning the gradient update by150 multiplying a b× b matrix between g⊤ and a, which can be done very fast.151 Theorem 3 (Efficient Update Direction). In the limit of λg → ∞, the update step ζ converges to152 limλg→∞ ζ = ζ ∗, where153 ζ∗= g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (21) (i) Here, the update direction ζ∗ is based only on the inputs and does not require computing ḡ154 (which would require a second backpropagation pass), making it efficient.155 (ii) The computational cost of computing the update ζ∗ lies in O(bn2 + b2n+ b3), where n is the156 number of neurons in each layer. This comprises the conventional cost of computing the gradient157 ∇ = g⊤x lying inO(bn2), and the overhead of computing ζ∗ instead of∇ lying inO(b2n+b3).158 The overhead is vanishing, assuming n≫ b. For b > n the complexity lies in O(bn2 + n3).159 Proof. We first show the property (21). Note that according to (22), λg · ( λgIm + ḡ ⊤ḡ/b )−1 con-160 verges in the limit of λg →∞ to Im, and therefore (21) holds.161 (i) The statement follows from the fact that the term ḡ does not appear in the equivalent characteriza-162 tion (21) of ζ∗.163 (ii) We first note that the matrix aa⊤ is of dimension b × b, and can be computed in O(b2n) time.164 Next, the matrix165 ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) is of shape b× b and can be multiplied with a in O(b2n) time.166 Notably, (21) can be computed with a vanishing computational overhead and with only minor167 modifications to the implementation. Specifically, only the g⊤a expression has to be replaced by (21)168 in the backpropagation step. 
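A sketch of this replacement for a single fully-connected layer is given below (PyTorch; our own illustrative code under the paper's conventions, not the authors' implementation). It uses the observation that I_b − M(I_b + M)^{-1} = (I_b + M)^{-1} for M = aa^⊤/(bλ_a), so ζ* amounts to a single b×b linear solve on top of the usual gradient computation.

```python
import torch


def isaac_zeta_star(g: torch.Tensor, a: torch.Tensor, lam_a: float) -> torch.Tensor:
    """Sketch of the efficient update direction zeta* of Eq. (21) for one layer.

    g: (b, d_out) gradient of the loss w.r.t. the layer pre-activations z
    a: (b, d_in)  layer inputs
    Returns a (d_out, d_in) matrix that replaces the plain gradient g^T a.
    """
    b = a.shape[0]
    M = a @ a.T / (b * lam_a)                                   # (b, b), cost O(b^2 d_in)
    # I_b - M (I_b + M)^{-1} simplifies to (I_b + M)^{-1}, so one b x b solve suffices.
    eye = torch.eye(b, device=a.device, dtype=a.dtype)
    conditioned_a = torch.linalg.solve(eye + M, a)              # (b, d_in)
    return g.T @ conditioned_a                                  # (d_out, d_in)


# Hypothetical usage inside a manual backward pass of a linear layer with weight
# W of shape (d_in, d_out), as in Eq. (1), so the stored gradient is transposed:
# W.grad = isaac_zeta_star(grad_z, a_prev, lam_a=0.1).T
```

For b ≪ n this adds only the O(b²n + b³) work described in part (ii) of Theorem 3: the layer-wise weight gradient is simply computed against the conditioned inputs instead of the raw inputs.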
As this can be done independently for each layer, this lends itself also to169 applying it only to individual layers.170 As we see in the experimental section, in many cases in the mini-batch regime (i.e., b < n), the171 optimal (or a good) choice for λg actually lies in the limit to∞. This is a surprising result, leading to172 the efficient and effective ζ∗ = ζλg→∞ optimizer.173 Remark 2 (Relation between Update Direction ζ and ζ∗). When comparing the update direction174 ζ in (20) without regularization (i.e., λg → 0, λa → 0) with ζ∗ (i.e., λg → ∞) as given in (21), it175 can be directly seen that ζ∗ corresponds to a particular pre-conditioning of ζ , since ζ∗ = Mζ for176 M = 1bλg ḡ ⊤ḡ .177 As the last theoretical property of our proposed update direction ζ∗, we show that in specific networks178 ζ∗ coincides with the Gauss-Newton update direction.179 Theorem 4 (ζ∗ is Exact for the Last Layer). For the case of linear regression or, more generally, the180 last layer of networks, with the mean squared error, ζ∗ is the Gauss-Newton update direction.181 Proof. The Hessian matrix of the mean squared error loss is the identity matrix. Correspondingly,182 the expectation value of ḡ⊤ḡ is I. Thus, ζ∗ = ζ .183 Remark 3. The direction ζ∗ corresponds to the Gauss-Newton update direction with an approxima-184 tion of G that can be expressed as G ≈ E [ I⊗ (a⊤a) ] .185 Remark 4 (Extension to the Natural Gradient). In some cases, it might be more desirable to use the186 Fisher-based natural gradient instead of the Gauss-Newton method. The difference to this setting is187 that in (5) the GGN matrix G is replaced by the empirical Fisher information matrix F.188 We note that our theory also applies to F, and that ζ∗ also efficiently approximates the natural189 gradient update step F−1∇. The i-th diagonal block of F (Fθ(i) = E [ (g⊤i gi)⊗ (a⊤i−1 ⊗ ai−1) ] ),190 has the same form as a block of the GGN matrix G (Gθ(i) = E [ (ḡ⊤i ḡi)⊗ (a⊤i−1 ⊗ ai−1) ] ).191 Thus, we can replace ḡ with g in our theoretical results to obtain their counterparts for F.192 4 Experiments193 In the previous section, we discussed the theoretical properties of the proposed update directions194 ζ and ζ∗ with the aspect that ζ∗ would actually be “free” to compute in the mini-batch regime. In195 this section, we provide empirical evidence that ζ∗ is a good update direction, even in deep learning.196 Specifically, we demonstrate that197 (E1) ζ∗ achieves similar performance to K-FAC, while being substantially cheaper to compute.198 (E2) The performance of our proposed method can be empirically maintained in the mini-batch199 regime (n≫ b).200 (E3) ζ∗ may be used for individual layers, while for other layers only the gradient ∇ is used. This201 still leads to improved performance.202 (E4) ζ∗ also improves the performance for training larger models such as BERT and ResNet.203 (E5) The runtime and memory requirements of ζ∗ are comparable to those of gradient descent.204 E1: Impact of Regularization Parameters205 For (E1), we study the dependence of the model’s performance on the regularization parameters λg206 and λa. Here, we train a 5-layer deep neural network on the MNIST classification task [16] with a207 batch size of 60 for a total of 40 epochs or 40 000 steps.208 The plots in Figure 1 demonstrate that the advantage of training by conditioning with curvature209 information can be achieved by considering both layer inputs a and gradients with respect to random210 samples ḡ , but also using only layer inputs a. 
In the plot, we show the performance of ζ for different211 choices of λg and λa, each in the range from 10−6 to 106. The right column shows ζ ∗, i.e., λg =∞,212 for different λa. The bottom-right corner is gradient descent, which corresponds to λg = ∞ and213 λa =∞.214 Newton’s method or the general K-FAC approximation corresponds to the area with small λg and λa.215 The interesting finding here is that the performance does not suffer by increasing λg toward∞, i.e.,216 from left to right in the plot.217 In addition, in Figure 3, we consider the case of regression with an auto-encoder trained with the218 MSE loss on MNIST [16] and Fashion-MNIST [17]. Here, we follow the same principle as above219 and also find that ζ∗ performs well.220 -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 lo g 1 0 a (a) -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g (b) 5.5 5.0 4.5 4.0 3.5 3.0 2.5 Figure 3: Training an auto-encoder on MNIST (left) and FashionMNIST (right). The model is the same as used by Botev et al. [18], i.e., it is a ReLU-activated 6-layer fully connected model with dimensions 784-1000-500- 30-500-1000-784. Displayed is the logarithmic training loss. -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 lo g 1 0 a (a) -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g (b) Figure 4: Training a 5-layer ReLU network with 400 neurons per layer on the MNIST classification task (as in Figure 1) but with the Adam optimizer [19]. In Figure 7, we compare the loss for dif-221 ferent methods. Here, we distinguish222 between loss per time (left) and loss223 per number of steps (right). We can ob-224 serve that, for λ = 0.1, K-FAC, ζ , and225 ζ∗ are almost identical per update step226 (right), while ζ∗ is by a large margin227 the fastest, followed by ζ , and the con-228 ventional K-FAC implementation is the229 slowest (left). On the other hand, for230 λ = 0.01 we can achieve a faster con-231 vergence than with λ = 0.1, but here232 only the K-FAC and ζ methods are nu-233 merically stable, while ζ∗ is unstable in234 this case. This means in the regime of235 very small λ, ζ∗ is not as robust as K-236 FAC and ζ , however, it achieves good237 performance with small but moderate238 λ like λ = 0.1. For λ < 0.01, also239 K-FAC and ζ become numerically un-240 stable in this setting and, in general, we241 observed that the smallest valid λ for242 K-FAC is 0.01 or 0.001 depending on243 model and task. Under consideration244 of the runtime, ζ∗ performs best as it is245 almost as fast as gradient descent while246 performing equivalent to K-FAC and ζ .247 Specifically, a gradient descent step is248 only about 10% faster than ζ∗.249 E2: Minibatch Regime250 For (E2), in Figure 1, we can see that training251 performs well for n ∈ {100, 400, 1 600} neu-252 rons per layer at a batch size of only 60. Also, in253 all other experiments, we use small batch sizes254 of between 8 and 100.255 E3: ζ∗ in Individual Layers256 In Figure 5, we train the 5-layer fully connected257 model with 400 neurons per layer. Here, we258 consider the setting that we use ζ∗ in some of259 the layers while using the default gradient ∇260 in other layers. Specifically, we consider the261 settings, where all, the first, the final, the first three, the final three, the odd numbered, and the262 even numbered layers are updated by ζ∗. 
We observe that all settings with ζ∗ perform better than263 plain gradient descent, except for “ζ∗ for layers 3,4,5” which performs approximately equivalent to264 gradient descent.265 E4: Large-scale Models266 BERT To demonstrate the utility of ζ∗ also in large-scale models, we evaluate it for fine-tuning267 BERT [20] on three natural language tasks. In Table 1, we summarize the results for the BERT268 fine-tuning task. For the “Corpus of Linguistic Acceptability” (CoLA) [21] data set, we fine-tune269 both the BERT-Base and the BERT-Mini models and find that we outperform the gradient descent270 baseline in both cases. For the “Microsoft Research Paraphrase Corpus” (MRPC) [22] data set, we271 fine-tune the BERT-Base model and find that we outperform the baseline both in terms of accuracy272 and F1-score. Finally, on the “Semantic Textual Similarity Benchmark” (STS-B) [23] data set, we273 fine-tune the BERT-Mini model and achieve higher Pearson and Spearman correlations than the274 baseline. While for training with CoLA and MRPC, we were able to use the Adam optimizer [19]275 (which is recommended for this task and model) in conjunction with ζ∗ in place of the gradient,276 for STS-B Adam did not work well. Therefore, for STS-B, we evaluated it using the SGD with277 momentum optimizer. For each method, we performed a grid search over the hyperparameters. We278 note that we use a batch size of 8 in all BERT experiments.279 ResNet In addition, we conduct an experiment280 where we train the last layer of a ResNet with281 ζ∗, while the remainder of the model is up-282 dated using the gradient ∇. Here, we train a283 ResNet-18 [24] on CIFAR-10 [25] using SGD284 with a batch size of 100 in a vanilla setting, i.e.,285 without additional tricks employed in by He et286 al. [24] and others. Specifically, we use (i) a287 constant learning rate for each training (optimal288 from (1, 0.3, 0.1, 0.03, 0.01)) and (ii) vanilla289 SGD and not momentum-based SGD. The rea-290 son behind this is that we want a vanilla experi-291 ment and with aspects such as extensively tuning292 multiple parameters of learning rate scheduler293 would make the evaluation less transparent; how-294 ever, therefore, all accuracies are naturally lower than SOTA. In Figure 6, we plot the test accuracy295 against time. The results show that the proposed method outperforms vanilla SGD when applied296 to the last layer of a ResNet-18. To validate that the learning rate is not the cause for the better297 performance, we also plot the neighboring learning rates and find that even with a too small or too298 large learning rate ζ∗ outperforms gradient descent with the optimal learning rate.299 E5: Runtime and Memory300 Finally, we also evaluate the runtime and memory requirements of each method. The runtime301 evaluation is displayed in Table 2. We report both CPU and GPU runtime using PyTorch [26] and302 (for K-FAC) the backpack library [15]. Note that the CPU runtime is more representative of the303 pure computational cost, as for the first rows of the GPU runtime the overhead of calling the GPU304 is dominant. When comparing runtimes between the gradient and ζ∗ on the GPU, we can observe305 that we have an overhead of around 2.5 s independent of the model size. The overhead for CPU time306 is also very small at less than 1% for the largest model, and only 1.3 s for the smallest model. In307 contrast, the runtime of ζ∗ is around 4 times the runtime of the gradient, and K-FAC has an even308 substantially larger runtime. 
E5: Runtime and Memory
Finally, we also evaluate the runtime and memory requirements of each method. The runtime evaluation is displayed in Table 2. We report both CPU and GPU runtime using PyTorch [26] and (for K-FAC) the backpack library [15]. Note that the CPU runtime is more representative of the pure computational cost, as for the first rows of the GPU runtime the overhead of calling the GPU is dominant. When comparing runtimes between the gradient and ζ∗ on the GPU, we can observe that we have an overhead of around 2.5 s independent of the model size. The overhead for CPU time is also very small at less than 1% for the largest model, and only 1.3 s for the smallest model. In contrast, the runtime of ζ is around 4 times the runtime of the gradient, and K-FAC has an even substantially larger runtime. Regarding memory, ζ∗ (in contrast to the other approaches) also requires only a small additional footprint.
Remark 5 (Implementation). The implementation of ζ∗ can be done by replacing the backpropagation step of a respective layer by (21). As all “ingredients” are already available in popular deep learning frameworks, it requires only a small modification (in contrast to K-FAC and ζ, which require at least one additional backpropagation pass).
We will publish the source code of our implementation. In the appendix, we give a PyTorch [26] implementation of the proposed method (ζ∗).
5 Related Work
Our methods are related to K-FAC by Martens and Grosse [12]. K-FAC uses the approximation (13) to approximate the blocks of the Hessian of the empirical risk of neural networks. In most implementations of K-FAC, the off-diagonal blocks of the Hessian are also set to zero. One of the main claimed benefits of K-FAC is its speed (compared to stochastic gradient descent) for large-batch-size training. That said, recent empirical work has shown that this advantage of K-FAC disappears once the additional computational costs of hyperparameter tuning for large-batch training are accounted for. There is a line of work that extends the basic idea of K-FAC to convolutional layers [27]. Botev et al. [18] further extend these ideas to present KFLR, a Kronecker-factored low-rank approximation, and KFRA, a Kronecker-factored recursive approximation of the Gauss-Newton step. Singh and Alistarh [28] propose WoodFisher, a Woodbury-matrix-inverse-based estimate of the inverse Hessian, and apply it to neural network compression. Yao et al. [29] propose AdaHessian, a second-order optimizer that incorporates the curvature of the loss function via an adaptive estimation of the Hessian. Frantar et al. [6] propose M-FAC, a matrix-free approximation of the natural gradient through a queue of the (e.g., 1 000) most recent gradients. These works fundamentally differ from our approach in that their objective is to approximate the Fisher or Gauss-Newton matrix inverse vector products. In contrast, this work proposes to approximate the Gauss-Newton matrix by only one of its Kronecker factors, which we find to achieve good performance at a substantial computational speedup and reduction of memory footprint. For an overview of this area, we refer to Kunstner et al. [30] and Martens [31]. For an overview of the technical aspects of backpropagation of second-order quantities, we refer to Dangel et al. [15], [32].
Taking a step back, K-FAC is one of many Newton-type methods for training neural networks. Other prominent examples of such methods include subsampled Newton methods [33], [34] (which approximate the Hessian by subsampling the terms in the empirical risk function and evaluating the Hessian of the subsampled terms) and sketched Newton methods [3]–[5] (which approximate the Hessian by sketching, e.g., by projecting the Hessian to a lower-dimensional space by multiplying it with a random matrix). The main features that distinguish K-FAC from this group of methods are K-FAC’s superior empirical performance and K-FAC’s lack of theoretical justification.
6 Conclusion
In this work, we presented ISAAC Newton, a novel approximate curvature method based on layer inputs. We demonstrated it to be a special case of the regularization-generalized Gauss-Newton method and empirically demonstrated its utility.
Specifically, our method features an asymptotically348 vanishing computational overhead in the mini-batch regime, while achieving competitive empirical349 performance on various benchmark problems.350 References351 [1] N. Agarwal, B. Bullins, and E. Hazan, “Second-order stochastic optimization for machine352 learning in linear time,” Journal on Machine Learning Research, vol. 18, no. 1, pp. 4148–4187,353 2017.354 [2] J. Nocedal and S. J. Wright, Numerical Optimization, 2e. New York, NY, USA: Springer, 2006.355 [3] A. Gonen and S. Shalev-Shwartz, “Faster SGD using sketched conditioning,” arXiv preprint,356 arXiv:1506.02649, 2015.357 [4] M. Pilanci and M. J. Wainwright, “Newton sketch: A near linear-time optimization algorithm358 with linear-quadratic convergence,” SIAM Journal on Optimization, vol. 27, 2017.359 [5] M. A. Erdogdu and A. Montanari, “Convergence rates of sub-sampled Newton methods,” in360 Proc. Neural Information Processing Systems (NeurIPS), 2015.361 [6] E. Frantar, E. Kurtic, and D. Alistarh, “M-FAC: Efficient matrix-free approximations of362 second-order information,” in Proc. Neural Information Processing Systems (NeurIPS), 2021.363 [7] N. Doikov and Y. Nesterov, “Convex Optimization based on Global Lower Second-order364 Models,” in Proc. Neural Information Processing Systems (NeurIPS), Curran Associates, Inc.,365 2020.366 [8] Y. Nesterov and B. T. Polyak, “Cubic regularization of Newton method and its global perfor-367 mance,” Mathematical Programming, vol. 108, 2006.368 [9] S. Becker and Y. Lecun, “Improving the convergence of back-propagation learning with369 second-order methods,” 1989.370 [10] T. Schaul, S. Zhang, and Y. LeCun, “No more pesky learning rates,” in International Conference371 on Machine Learning (ICML), 2013.372 [11] Y. Ollivier, “Riemannian metrics for neural networks i: Feedforward networks,” Information373 and Inference, vol. 4, pp. 108–153, Jun. 2015.374 [12] J. Martens and R. Grosse, “Optimizing neural networks with Kronecker-factored approximate375 curvature,” in International Conference on Machine Learning (ICML), 2015.376 [13] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-posed problems. W.H. Winston, 1977.377 [14] P. Chen, “Hessian matrix vs. Gauss—Newton Hessian matrix,” SIAM Journal on Numerical378 Analysis, 2011.379 [15] F. Dangel, F. Kunstner, and P. Hennig, “Backpack: Packing more into backprop,” in Interna-380 tional Conference on Learning Representations, 2020.381 [16] Y. LeCun, C. Cortes, and C. Burges, “MNIST Handwritten Digit Database,” ATT Labs, 2010.382 [17] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking383 machine learning algorithms,” arXiv, 2017.384 [18] A. Botev, H. Ritter, and D. Barber, “Practical Gauss-Newton optimisation for deep learning,”385 in International Conference on Machine Learning (ICML), 2017.386 [19] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Confer-387 ence on Learning Representations (ICLR), 2015.388 [20] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional389 transformers for language understanding,” in North American Chapter of the Association for390 Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018.391 [21] A. Warstadt, A. Singh, and S. R. Bowman, “Neural network acceptability judgments,” Trans-392 actions of the Association for Computational Linguistics, vol. 7, 2019.393 [22] W. B. Dolan and C. 
Brockett, “Automatically constructing a corpus of sentential paraphrases,”394 in Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.395 [23] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “SemEval-2017 task 1: Semantic396 textual similarity multilingual and crosslingual focused evaluation,” in Proceedings of the397 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada:398 Association for Computational Linguistics, 2017.399 [24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in400 Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.401 [25] A. Krizhevsky, V. Nair, and G. Hinton, “Cifar-10 (Canadian Institute for Advanced Research),”402 2009.403 [26] A. Paszke, S. Gross, F. Massa, et al., “Pytorch: An imperative style, high-performance deep404 learning library,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.405 [27] R. Grosse and J. Martens, “A Kronecker-factored approximate Fisher matrix for convolution406 layers,” in International Conference on Machine Learning (ICML), 2016.407 [28] S. P. Singh and D. Alistarh, “Woodfisher: Efficient second-order approximation for neural408 network compression,” in Proc. Neural Information Processing Systems (NeurIPS), 2020.409 [29] Z. Yao, A. Gholami, S. Shen, M. Mustafa, K. Keutzer, and M. W. Mahoney, “Adahessian:410 An adaptive second order optimizer for machine learning,” in AAAI Conference on Artificial411 Intelligence, 2021.412 [30] F. Kunstner, L. Balles, and P. Hennig, “Limitations of the empirical Fisher approximation for413 natural gradient descent,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.414 [31] J. Martens, “New insights and perspectives on the natural gradient method,” Journal of Machine415 Learning Research, 2020.416 [32] F. Dangel, S. Harmeling, and P. Hennig, “Modular block-diagonal curvature approximations417 for feedforward architectures,” in International Conference on Artificial Intelligence and418 Statistics (AISTATS), 2020.419 [33] F. Roosta-Khorasani and M. W. Mahoney, “Sub-Sampled Newton Methods I: Globally Con-420 vergent Algorithms,” arXiv: 1601.04737, 2016.421 [34] P. Xu, J. Yang, F. Roosta, C. Ré, and M. W. Mahoney, “Sub-sampled Newton Methods with422 Non-uniform Sampling,” in Proc. Neural Information Processing Systems (NeurIPS), 2016.423 Checklist424 1. For all authors...425 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s426 contributions and scope? [Yes]427 (b) Did you describe the limitations of your work? [Yes]428 (c) Did you discuss any potential negative societal impacts of your work? [N/A]429 (d) Have you read the ethics review guidelines and ensured that your paper conforms to them?430 [Yes]431 2. If you are including theoretical results...432 (a) Did you state the full set of assumptions of all theoretical results? [Yes]433 (b) Did you include complete proofs of all theoretical results? [Yes]434 3. If you ran experiments...435 (a) Did you include the code, data, and instructions needed to reproduce the main experimental436 results (either in the supplemental material or as a URL)? [Yes] / [No] We include a437 Python / PyTorch implementation of the method in the supplementary material. We will438 publicly release full source code for the experiments.439 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were440 chosen)? 
[Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
A PyTorch Implementation
We display a PyTorch [26] implementation of ISAAC for a fully-connected layer below. The important part (i.e., the part beyond the autograd boilerplate, marked with a red rectangle in the original) is the computation of grad_1 in backward, which implements Eq. (21).

import torch

class ISAACLinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias, la, inv_type):
        ctx.save_for_backward(input, weight, bias)
        ctx.la = la
        if inv_type == 'cholesky_inverse':
            ctx.inverse = torch.cholesky_inverse
        elif inv_type == 'inverse':
            ctx.inverse = torch.inverse
        else:
            raise NotImplementedError(inv_type)
        return input @ weight.T + (bias if bias is not None else 0)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        if ctx.needs_input_grad[0]:
            grad_0 = grad_output @ weight          # plain gradient w.r.t. the layer input
        else:
            grad_0 = None
        if ctx.needs_input_grad[1]:
            # Conditioned weight gradient, cf. Eq. (21):
            # zeta* = g^T (I_b - aa^T/(b*la) (I_b + aa^T/(b*la))^{-1}) a
            aaT = input @ input.T / grad_output.shape[0]
            I_b = torch.eye(aaT.shape[0], device=aaT.device, dtype=aaT.dtype)
            aaT_IaaT_inv = aaT @ ctx.inverse(aaT / ctx.la + I_b)
            grad_1 = grad_output.T @ (
                I_b - 1. / ctx.la * aaT_IaaT_inv
            ) @ input
        else:
            grad_1 = None
        # One gradient per forward input: input, weight, bias, la, inv_type.
        return (
            grad_0,
            grad_1,
            grad_output.mean(0, keepdim=True) if bias is not None else None,
            None,
            None,
        )


class ISAACLinear(torch.nn.Linear):
    def __init__(self, in_features, out_features, la, inv_type='inverse', **kwargs):
        super(ISAACLinear, self).__init__(
            in_features=in_features,
            out_features=out_features,
            **kwargs
        )
        self.la = la                  # regularization parameter lambda_a
        self.inv_type = inv_type

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return ISAACLinearFunction.apply(
            input,
            self.weight,
            self.bias.unsqueeze(0) if self.bias is not None else None,
            self.la,
            self.inv_type
        )

B Implementation Details
Unless noted differently, for all experiments, we tune the learning rate on a grid of (1, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001). We verified this range to cover the full reasonable range of learning rates. Specifically, for every single experiment, we made sure that there is no learning rate outside this range which performs better.
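As a small usage example of the layer above (our own sketch, not part of the paper's released code), a fully-connected MNIST-style model similar to the one used in the experiments could be assembled from ISAACLinear layers and trained with a standard optimizer; the sizes, λa, and learning rate below are illustrative assumptions.

import torch

# Assumes ISAACLinear from the listing above is in scope.
model = torch.nn.Sequential(
    ISAACLinear(784, 400, la=0.1), torch.nn.ReLU(),
    ISAACLinear(400, 400, la=0.1), torch.nn.ReLU(),
    ISAACLinear(400, 10, la=0.1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(60, 784)                  # one dummy mini-batch (batch size 60)
y = torch.randint(0, 10, (60,))
optimizer.zero_grad()
loss_fn(model(x), y).backward()           # zeta*-conditioned gradients via ISAACLinearFunction
optimizer.step()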
For all language model experiments, we used the respective Huggingface PyTorch implementation. All other hyperparameter details are given in the main paper. The code will be made publicly available.
C Additional Proofs
Proof of Theorem 1. We first show that ζ as defined in (17) can be expressed as in (20). Indeed, by using (19), the Woodbury matrix identity, and by regularizing the inverses, we can see that
\begin{align*}
\zeta &= \lambda_g\lambda_a\,(\bar g^\top \bar g/b+\lambda_g I)^{-1}\otimes(a^\top a/b+\lambda_a I)^{-1}\, g^\top a\\
&= \lambda_g\lambda_a\cdot\big(\lambda_g I_m + \bar g^\top \bar g/b\big)^{-1}\, g^\top a\,\big(\lambda_a I_n + a^\top a/b\big)^{-1}\\
&= \lambda_g\lambda_a\cdot\Big(\tfrac{1}{\lambda_g} I_m - \tfrac{1}{b\lambda_g^2}\,\bar g^\top\big(I_b + \tfrac{1}{b\lambda_g}\bar g\bar g^\top\big)^{-1}\bar g\Big)\, g^\top a\,\Big(\tfrac{1}{\lambda_a} I_n - \tfrac{1}{b\lambda_a^2}\, a^\top\big(I_b + \tfrac{1}{b\lambda_a} a a^\top\big)^{-1} a\Big)\\
&= \Big(I_m - \tfrac{1}{b\lambda_g}\,\bar g^\top\big(I_b + \tfrac{1}{b\lambda_g}\bar g\bar g^\top\big)^{-1}\bar g\Big)\cdot g^\top\cdot a\cdot\Big(I_n - \tfrac{1}{b\lambda_a}\, a^\top\big(I_b + \tfrac{1}{b\lambda_a} a a^\top\big)^{-1} a\Big)\\
&= \Big(I_m - \tfrac{1}{b\lambda_g}\,\bar g^\top\big(I_b + \tfrac{1}{b\lambda_g}\bar g\bar g^\top\big)^{-1}\bar g\Big)\cdot g^\top\cdot\Big(a - \tfrac{1}{b\lambda_a}\, a a^\top\big(I_b + \tfrac{1}{b\lambda_a} a a^\top\big)^{-1} a\Big)\\
&= \Big(I_m - \tfrac{1}{b\lambda_g}\,\bar g^\top\big(I_b + \tfrac{1}{b\lambda_g}\bar g\bar g^\top\big)^{-1}\bar g\Big)\cdot g^\top\cdot\Big(I_b - \tfrac{1}{b\lambda_a}\, a a^\top\big(I_b + \tfrac{1}{b\lambda_a} a a^\top\big)^{-1}\Big)\cdot a\,.
\end{align*}
To show Assertion (i), we note that according to (17)
\begin{align*}
\lim_{\lambda_g,\lambda_a\to 0}\tfrac{1}{\lambda_g\lambda_a}\,\zeta
= \lim_{\lambda_g,\lambda_a\to 0}(\bar g^\top \bar g/b+\lambda_g I)^{-1}\otimes(a^\top a/b+\lambda_a I)^{-1}\, g^\top a
= (\bar g^\top \bar g/b)^{-1}\otimes(a^\top a/b)^{-1}\, g^\top a
\approx G^{-1} g^\top a,
\end{align*}
where the first equality uses the definition of ζ in (17), the second equality is due to the continuity of the matrix inversion, and the last approximate equality follows from the K-FAC approximation (15).
To show Assertion (ii), we consider the limits λg → ∞ and λa → ∞ independently, that is,
\begin{align*}
\lim_{\lambda_g\to\infty}\lambda_g\cdot\big(\lambda_g I_m + \bar g^\top \bar g/b\big)^{-1}
= \lim_{\lambda_g\to\infty}\Big(I_m + \tfrac{1}{b\lambda_g}\,\bar g^\top \bar g\Big)^{-1} = I_m, \tag{22}
\end{align*}
and
\begin{align*}
\lim_{\lambda_a\to\infty}\lambda_a\cdot\big(\lambda_a I_n + a^\top a/b\big)^{-1}
= \lim_{\lambda_a\to\infty}\Big(I_n + \tfrac{1}{b\lambda_a}\, a^\top a\Big)^{-1} = I_n. \tag{23}
\end{align*}
This then implies
\begin{align*}
\lim_{\lambda_g,\lambda_a\to\infty}\lambda_g\big(\lambda_g I_m + \bar g^\top \bar g/b\big)^{-1}\cdot g^\top\cdot a\cdot\lambda_a\big(\lambda_a I_n + a^\top a/b\big)^{-1}
= I_m\cdot g^\top a\cdot I_n = g^\top a, \tag{24}
\end{align*}
which concludes the proof.
D Additional Experiments
1. What is the focus of the paper regarding machine learning tasks?
2. What are the strengths of the proposed approach, particularly its practicality and efficiency?
3. What are the weaknesses of the paper, such as limited experiment scope?
4. Do you have any questions regarding typos or equations in the paper?
5. Is there any concern about the preconditioner used in the method?
6. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper addresses the problem of using second-order information in large-scale machine learning tasks (e.g., training neural networks). It proposes a Kronecker-Factored Approximate Curvature (K-FAC)-approximated, Tikhonov-regularized Gauss-Newton (GN) update step. Instead of a single regularization parameter (λ), the update step in the paper uses two independent regularization parameters (λg and λa). The paper relates different settings of λa and λg to other algorithms (i.e., classical GN and gradient descent). It shows that allowing λg to grow to infinity gives a computationally efficient update step. It provides empirical results on the proposed method.
Strengths And Weaknesses
Strengths:
- The update step ζ∗ is interesting and practical. The paper provides experiments to show that using ζ∗ can lead to lower computation time without sacrificing performance.
- Experiments are thorough and tested on some fairly large neural network architectures.
- The update step ζ is a nice, compact way to describe an algorithm for training neural networks. The analysis of ζ provides various ways of interpreting the algorithm in terms of different regularization parameter settings.
- The properties of ζ are interesting. The analysis is relatively thorough (e.g., showing that any combination of λg and λa will still result in a descent direction).
- The paper is well written and clear.
Weaknesses:
- Some of the experiments are not benchmarked against SOTA (which is acknowledged by the paper). This might affect people's willingness to adopt the method in practice.
Questions
- Line 70 refers to H in Equation 6. Typo?
- Is there an intuitive interpretation of the preconditioner M in Remark 2? Does this preconditioning help in terms of reducing the number of iterations needed to reach a solution?
Limitations
This work does not appear to me to have negative societal impact.
NIPS
Title ISAAC Newton: Input-based Approximate Curvature for Newton's Method Abstract We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that 1 conditions the gradient using selected second-order information and has an asymp2 totically vanishing computational overhead, assuming a batch size smaller than 3 the number of neurons. We show that it is possible to compute a good conditioner 4 based on only the input to a respective layer without a substantial computational 5 overhead. The proposed method allows effective training even in small-batch 6 stochastic regimes, which makes it competitive to first-order as well as quasi7 Newton methods. 8 N/A We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that1 conditions the gradient using selected second-order information and has an asymp-2 totically vanishing computational overhead, assuming a batch size smaller than3 the number of neurons. We show that it is possible to compute a good conditioner4 based on only the input to a respective layer without a substantial computational5 overhead. The proposed method allows effective training even in small-batch6 stochastic regimes, which makes it competitive to first-order as well as quasi-7 Newton methods.8 1 Introduction9 While second-order optimization methods are traditionally much less explored than first-order10 methods in large-scale machine learning (ML) applications due to their memory requirements and11 prohibitive computational cost per iteration, they have recently become more popular in ML mainly12 due to their fast convergence properties when compared to first-order methods [1]. The expensive13 computation of an inverse Hessian (also known as pre-conditioning matrix) in the Newton step has14 also been tackled via estimating the curvature from the change in gradients. Loosely speaking, these15 algorithms are known as quasi-Newton methods and a comprehensive treatment can be found in16 the textbook [2]. In addition, various new approximations to the pre-conditioning matrix have been17 proposed in the recent literature [3]–[6]. From a theoretical perspective, second-order optimization18 methods are not nearly as well understood as first-order methods. It is an active research direction to19 fill this gap [7], [8].20 Motivated by the task of training neural networks, and the observation that invoking local curvature21 information associated with neural network objective functions can achieve much faster progress22 per iteration than standard first-order methods [9]–[11], several methods have been proposed. One23 of these methods, that received significant attention, is known as Kronecker-factored Approximate24 Curvature (K-FAC) [12], whose main ingredient is a sophisticated approximation to the generalized25 Gauss-Newton matrix and the Fisher information matrix quantifying the curvature of the underlying26 neural network objective function, which then can be inverted efficiently.27 Inspired by the K-FAC approximation and the Tikhonov regularization of the Newton method, we28 introduce a novel two parameter regularized Kronecker-factorized Newton update step. 
The proposed29 scheme disentangles the classical Tikhonov regularization and allows us to condition the gradient30 using selected second-order information and has an asymptotically vanishing computational overhead.31 While this property makes the presented method highly attractive from the computational complexity32 perspective, we show that its achieved empirical performance on complicated high-dimensional33 Machine Learning problems remains comparable to existing state-of-the-art methods.34 The contributions of this paper can be summarized as follows: (i) we propose a novel two parameter35 regularized K-FAC approximated Gauss-Newton update step; (ii) we show that asymptotically—as36 Submitted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Do not distribute. both regularization parameters vanish—our method recovers the classical K-FAC scheme and in37 the opposite setting—as both regularization parameters grow—our method asymptotically reduces38 to classical gradient descent; (iii) we prove that for an arbitrary pair of regularization parameters,39 the proposed update direction is always a direction of decreasing loss; (iv) in the limit, as one40 regularization parameter grows, we obtain an efficient and effective conditioning of the gradient with41 an asymptotically vanishing overhead; (v) we empirically analyze the presented method and find that42 our efficient conditioning method maintains the performance of its more expensive counterpart; (vi)43 we demonstrate the effectiveness of the presented method in the setting of small-batch stochastic44 regimes and observe that it is competitive to first-order as well as quasi-Newton methods.45 2 Preliminaries46 In this section, we review aspects of second-order optimization, with a focus on generalized Gauss-47 Newton methods. In combination with Kronecker factorization, this leads us to a new regularized48 update scheme. We consider the training of an L-layer neural network f(x; θ) defined recursively as49 zi ← ai−1W (i) (pre-activations), ai ← ϕ(zi) (activations), (1) where a0 = x is the vector of inputs and aL = f(x; θ) is the vector of outputs. Unless noted otherwise,50 we assume these vectors to be row vectors (i.e., in R1×n) as this allows for a direct extension to the51 (batch) vectorized case (i.e., in Rb×n) introduced later. For any layer i, let W (i) ∈ Rdi−1×di be a52 weight matrix and let ϕ be an element-wise nonlinear function. We consider a convex loss function53 L(y, y′) that measures the discrepancy between y and y′. The training optimization problem is then54 argmin θ Ex,y [L(f(x; θ), y)] , (2) where θ = [ θ(1), . . . , θ(L) ] with θ(i) = vec(W (i)).55 The classical Newton method for solving (2) is expressed as the update rule56 θ′ = θ − ηH−1θ ∇θL(f(x; θ), y) , (3) where η > 0 denotes the learning rate and Hθ is the Hessian corresponding to the objective function57 in (2). The stability and efficiency of an estimation problem solved via the Newton method can be58 improved by adding a Tikhonov regularization term [13] leading to a regularized Newton method59 θ′ = θ − η (Hθ + λI)−1∇θL(f(x; θ), y) , (4) where λ > 0 is the so-called Tikhonov regularization parameter. It is well-known [14], [15], that60 under the assumption of approximating the model f with its first-order Taylor expansion, the Hessian61 corresponds with the so-called generalized Gauss-Newton (GGN) matrix Gθ, and hence (4) can be62 expressed as63 θ′ = θ − η (Gθ + λI)−1∇θL(f(x; θ), y) . 
(5) A major practical limitation of (5) is the computation of the inverse term. A method that alleviates this64 difficulty is known as Kronecker-Factored Approximate Curvature (K-FAC) [12] which approximates65 the block-diagonal (i.e., layer-wise) empirical Hessian or GGN matrix. Inspired by K-FAC, there66 have been other works discussing approximations of Gθ and its inverse [15]. In the following, we67 discuss a popular approach that allows for (moderately) efficient computation.68 The generalized Gauss-Newton matrix Gθ is defined as69 Gθ = E [ (Jθf(x; θ)) ⊤∇2fL(f(x; θ), y)Jθf(x; θ) ] , (6) where J and H denote the Jacobian and Hessian matrices, respectively. Correspondingly, the diagonal70 block of Gθ corresponding to the weights of the ith layer W (i) is71 GW (i)=E [ (JW (i)f(x; θ)) ⊤∇2fL(f(x; θ), y)JW (i)f(x; θ) ] . According to the backpropagation rule Jθ(i)f(x; θ) = Jzif(x; θ) ai−1, a ⊤b = a ⊗ b, and the72 mixed-product property, we can rewrite GW (i) as73 GW (i)=E [( (Jzif(x; θ) ai−1) ⊤(∇2fL(f(x; θ), y))1/2 )( (∇2fL(f(x; θ), y))1/2 Jzif(x; θ) ai−1 )] (7) = E [ (ḡ⊤ai−1) ⊤(ḡ⊤ai−1) ] = E [ (ḡ ⊗ ai−1)⊤(ḡ ⊗ ai−1) ] = E [ (ḡ⊤ḡ)⊗ (a⊤i−1 ⊗ ai−1) ] , (8) where74 ḡ = (Jzif(x; θ)) ⊤ (∇2fL(f(x; θ), y))1/2 . (9) Remark 1 (Monte-Carlo Low-Rank Approximation for ḡ⊤ḡ). As ḡ is a matrix of shape m × di75 where m is the dimension of the output of f , ḡ is generally expensive to compute. Therefore, [12] use76 a low-rank Monte-Carlo approximation to estimate HfL(f(x; θ), y) and thereby ḡ⊤ḡ. For this, we77 need to use the distribution underlying the probabilistic model of our loss L (e.g., Gaussian for MSE78 loss, or a categorical distribution for cross entropy). Specifically, by sampling from this distribution79 pf (x) defined by the network output f(x; θ), we can get an estimator of HfL(f(x; θ), y) via the80 identity81 HfL(f(x; θ), y) = Eŷ∼pf (x) [ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) ] . (10) An extensive reference for this (as well as alternatives) can be found in Appendix A.2 of Dangel et82 al. [15]. The respective rank-1 approximation (denoted by ≜) of HfL(f(x; θ)) is83 HfL(f(x; θ), y) ≜ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) , where ŷ ∼ pf (x). Respectively, we can estimate ḡ⊤ḡ using this rank-1 approximation with84 ḡ ≜ (Jzif(x; θ)) ⊤∇fL(f(x; θ), ŷ) = ∇ziL(f(x; θ), ŷ) . (11) In analogy to ḡ, we introduce the gradient of training objective with respect to pre-activations zi as85 gi = (Jzif(x; θ)) ⊤∇fL(f(x; θ), y) = ∇ziL(f(x; θ), y) . (12) In other words, for a given layer, let g ∈ R1×di denote the gradient of the loss between an output and86 the ground truth and let ḡ ∈ Rm×di denote the derivative of the network f times the square root of87 the Hessian of the loss function (which may be approximated according to Remark 1), each of them88 with respect to the output zi of the given layer i. Note that ḡ is not equal to g and that they require one89 backpropagation pass each (or potentially many for the case of ḡ). This makes computing ḡ costly.90 Applying the K-FAC [12] approximation to (8) the expectation of Kronecker products can be91 approximated as the Kronecker product of expectations as92 G = E((ḡ⊤ḡ)⊗ (a⊤a)) ≈ E(ḡ⊤ḡ)⊗ E(a⊤a) , (13) where, for clarity, we drop the index of ai−1 in (8) and denote it with a; similarly we denote GW (i)93 as G. While the expectation of Kronecker products is generally not equal to the Kronecker product94 of expectations, this K-FAC approximation (13) has been shown to be fairly accurate in practice95 and to preserve the “coarse structure” of the GGN matrix [12]. 
The K-FAC decomposition in (13)96 is convenient as the Kronecker product has the favorable property that for two matrices A,B the97 identity (A⊗B)−1 = A−1 ⊗B−1 which significantly simplifies the computation of an inverse.98 In practice, E(ḡ⊤ḡ) and E(a⊤a) can be computed by averaging over a batch of size b as99 E(ḡ⊤ḡ) ≃ ḡ⊤ḡ/b, E(a⊤a) ≃ a⊤a/b, (14) where we denote batches of g, ḡ and a, as g ∈ Rb×di , ḡ ∈ Rrb×di and a ∈ Rb×di−1 , where our layer100 has di−1 inputs, di outputs, b is the batch size, and r is either the number of outputs m or the rank of101 an approximation according to Remark 1. Correspondingly, the K-FAC approximation of the GGN102 matrix and its inverse are concisely expressed as103 G ≈ (ḡ⊤ḡ)⊗ (a⊤a)/b2 G−1 ≈ ( ḡ⊤ḡ )−1⊗(a⊤a)−1 · b2 . (15) Equipped with the standard terminology and setting, we now introduce the novel, regularized update104 step. First, inspired by the K-FAC approximation (13), the Tikhonov regularized Gauss-Newton105 method (5) can be approximated by106 θ(i)′ = θ(i) − η(ḡ⊤ḡ/b+ λI)−1 ⊗ (a⊤a/b+ λI)−1∇θ(i)L(f(x; θ)), (16) with regularization parameter λ > 0. A key observation, which is motivated by the structure of107 the above update, is to disentangle the two occurrences of λ into two independent regularization108 parameters λg, λa > 0. By defining the Kronecker-factorized Gauss-Newton update step as109 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1∇θ(i)L(f(x; θ)), (17) we obtain the concise update equation110 θ(i)′ = θ(i) − η∗ζ . (18) This update (18) is equivalent to update (16) when in the case of η∗ = ηλgλa and λ = λg = λa. This111 equivalence does not restrict η∗, λg, λa in any way, and changing λg or λa does not mean that we112 change our learning rate or step size η∗. Parameterizing ζ in (17) with the multiplicative terms λgλa113 makes the formulation more convenient for analysis.114 In this paper, we investigate the theoretical and empirical properties of the iterative update rule (18)115 and in particular show how the regularization parameters λg, λa affect the Kronecker-factorized116 Gauss-Newton update step ζ . When analyzing the Kronecker-factorized Gauss-Newton update step117 ζ , a particularly useful tool is the vector product identity,118 (( ḡ⊤ḡ )−1 ⊗ (a⊤a)−1) vec(g⊤a) = vec((ḡ⊤ḡ)−1 g⊤a (a⊤a)−1) , (19) where the gradient with respect to the weight matrix is g⊤a.119 3 Theoretical Guarantees120 In this section, we investigate the theoretical properties of the Kronecker-factorized Gauss-Newton121 update direction ζ as defined in (17). We recall that ζ introduces a Tikonov regularization, as it is122 commonly done in implementations of second order-based methods. Not surprisingly, we show that123 by decreasing the regularization parameters λg, λa the update rule (18) collapses (in the limit) to the124 classical Gauss-Newton method, and hence in the regime of small λg, λa the variable ζ describes the125 Gauss-Newton direction. Moreover, by increasing the regularization strength, we converge (in the126 limit) to the conventional gradient descent update step.127 The key observation is that, as we disentangle the regularization of the two Kronecker factors ḡ⊤ḡ128 and a⊤a, and consider the setting where only one regularizer is large (λg → ∞ to be precise),129 we obtain an update direction that can be computed highly efficiently. We show that this setting130 describes an approximated Gauss-Newton update scheme, whose superior numerical performance is131 then empirically demonstrated in Section 4.132 Theorem 1 (Properties of ζ ). 
The K-FAC based update step ζ as defined in (17) can be expressed as133 ζ = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (20) Moreover, ζ admits the following asymptotic properties:134 (i) In the limit of λg, λa → 0, 1λgλaζ is the K-FAC approximation of the Gauss-Newton step, i.e.,135 limλg,λa→0 1 λgλa ζ ≈ G−1∇θ(i)L(f(x; θ)), where ≈ denotes the K-FAC approximation (15).136 (ii) In the limit of λg, λa →∞, ζ is the gradient, i.e., limλg,λa→∞ ζ = ∇θ(i)L(f(x; θ)).137 The Proof is deferred to the Supplementary Material.138 We want to show that ζ is well-defined and points in the correct direction, not only for λg and λa139 numerically close to zero because we want to explore the full spectrum of settings for λg and λa.140 Thus, we prove that ζ is a direction of increasing loss, independent of the choices of λg and λa.141 Theorem 2 (Correctness of ζ is independent of λg and λa). ζ is a direction of increasing loss,142 independent of the choices of λg and λa.143 Proof. Recall that (λgIm+ḡ⊤ḡ/b) and (λaIn+a⊤a/b) are positive semi-definite (PSD) matrices by144 definition. Their inverses (λgIm + ḡ⊤ḡ/b)−1 and (λaIn + a⊤a/b)−1 are therefore also PSD. As the145 Kronecker product of PSD matrices is PSD, the conditioning matrix ((λgIm + ḡ⊤ḡ/b)−1 ⊗ (λaIn +146 a⊤a/b)−1 ≈ G−1) is PSD, and therefore the direction of the update step remains correct.147 From our formulation of ζ , we can find that, in the limit for λg →∞, Equation (21) does not depend148 on ḡ . This is computationally very beneficial as computing ḡ is costly as it requires one or even149 many additional backpropagation passes. In addition, it allows conditioning the gradient update by150 multiplying a b× b matrix between g⊤ and a, which can be done very fast.151 Theorem 3 (Efficient Update Direction). In the limit of λg → ∞, the update step ζ converges to152 limλg→∞ ζ = ζ ∗, where153 ζ∗= g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (21) (i) Here, the update direction ζ∗ is based only on the inputs and does not require computing ḡ154 (which would require a second backpropagation pass), making it efficient.155 (ii) The computational cost of computing the update ζ∗ lies in O(bn2 + b2n+ b3), where n is the156 number of neurons in each layer. This comprises the conventional cost of computing the gradient157 ∇ = g⊤x lying inO(bn2), and the overhead of computing ζ∗ instead of∇ lying inO(b2n+b3).158 The overhead is vanishing, assuming n≫ b. For b > n the complexity lies in O(bn2 + n3).159 Proof. We first show the property (21). Note that according to (22), λg · ( λgIm + ḡ ⊤ḡ/b )−1 con-160 verges in the limit of λg →∞ to Im, and therefore (21) holds.161 (i) The statement follows from the fact that the term ḡ does not appear in the equivalent characteriza-162 tion (21) of ζ∗.163 (ii) We first note that the matrix aa⊤ is of dimension b × b, and can be computed in O(b2n) time.164 Next, the matrix165 ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) is of shape b× b and can be multiplied with a in O(b2n) time.166 Notably, (21) can be computed with a vanishing computational overhead and with only minor167 modifications to the implementation. Specifically, only the g⊤a expression has to be replaced by (21)168 in the backpropagation step. 
As this can be done independently for each layer, this lends itself also to169 applying it only to individual layers.170 As we see in the experimental section, in many cases in the mini-batch regime (i.e., b < n), the171 optimal (or a good) choice for λg actually lies in the limit to∞. This is a surprising result, leading to172 the efficient and effective ζ∗ = ζλg→∞ optimizer.173 Remark 2 (Relation between Update Direction ζ and ζ∗). When comparing the update direction174 ζ in (20) without regularization (i.e., λg → 0, λa → 0) with ζ∗ (i.e., λg → ∞) as given in (21), it175 can be directly seen that ζ∗ corresponds to a particular pre-conditioning of ζ , since ζ∗ = Mζ for176 M = 1bλg ḡ ⊤ḡ .177 As the last theoretical property of our proposed update direction ζ∗, we show that in specific networks178 ζ∗ coincides with the Gauss-Newton update direction.179 Theorem 4 (ζ∗ is Exact for the Last Layer). For the case of linear regression or, more generally, the180 last layer of networks, with the mean squared error, ζ∗ is the Gauss-Newton update direction.181 Proof. The Hessian matrix of the mean squared error loss is the identity matrix. Correspondingly,182 the expectation value of ḡ⊤ḡ is I. Thus, ζ∗ = ζ .183 Remark 3. The direction ζ∗ corresponds to the Gauss-Newton update direction with an approxima-184 tion of G that can be expressed as G ≈ E [ I⊗ (a⊤a) ] .185 Remark 4 (Extension to the Natural Gradient). In some cases, it might be more desirable to use the186 Fisher-based natural gradient instead of the Gauss-Newton method. The difference to this setting is187 that in (5) the GGN matrix G is replaced by the empirical Fisher information matrix F.188 We note that our theory also applies to F, and that ζ∗ also efficiently approximates the natural189 gradient update step F−1∇. The i-th diagonal block of F (Fθ(i) = E [ (g⊤i gi)⊗ (a⊤i−1 ⊗ ai−1) ] ),190 has the same form as a block of the GGN matrix G (Gθ(i) = E [ (ḡ⊤i ḡi)⊗ (a⊤i−1 ⊗ ai−1) ] ).191 Thus, we can replace ḡ with g in our theoretical results to obtain their counterparts for F.192 4 Experiments193 In the previous section, we discussed the theoretical properties of the proposed update directions194 ζ and ζ∗ with the aspect that ζ∗ would actually be “free” to compute in the mini-batch regime. In195 this section, we provide empirical evidence that ζ∗ is a good update direction, even in deep learning.196 Specifically, we demonstrate that197 (E1) ζ∗ achieves similar performance to K-FAC, while being substantially cheaper to compute.198 (E2) The performance of our proposed method can be empirically maintained in the mini-batch199 regime (n≫ b).200 (E3) ζ∗ may be used for individual layers, while for other layers only the gradient ∇ is used. This201 still leads to improved performance.202 (E4) ζ∗ also improves the performance for training larger models such as BERT and ResNet.203 (E5) The runtime and memory requirements of ζ∗ are comparable to those of gradient descent.204 E1: Impact of Regularization Parameters205 For (E1), we study the dependence of the model’s performance on the regularization parameters λg206 and λa. Here, we train a 5-layer deep neural network on the MNIST classification task [16] with a207 batch size of 60 for a total of 40 epochs or 40 000 steps.208 The plots in Figure 1 demonstrate that the advantage of training by conditioning with curvature209 information can be achieved by considering both layer inputs a and gradients with respect to random210 samples ḡ , but also using only layer inputs a. 
1. What is the main contribution of the paper regarding the regularized K-FAC approximated Gauss-Newton type method called ISAAC? 2. What are the strengths of the proposed method compared to first-order and second-order methods? 3. What are the weaknesses of the paper regarding its readability, related literature, theoretical properties, numerical examples, and convergence result? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's discussion, references, equations, and extensions to convolutional neural networks?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
Inspired by the K-FAC approximation and the Tikhonov regularization of the Newton method, this paper proposes a regularized K-FAC-approximated Gauss-Newton-type method called ISAAC. The proposed method is based only on the input to a respective layer and therefore does not suffer from a substantial computational overhead. Numerical experiments show the advantages of the proposed method compared to both first-order and second-order methods.

Strengths And Weaknesses
Strengths:
- Regularization parameters are crucial for second-order methods. The authors first approximate the GGN matrix as a Kronecker product of two matrices and then treat the two regularization parameters independently. By letting the two regularization parameters vanish or go to infinity, one can obtain new directions and therefore new methods.
- The proposed method is more efficient than K-FAC, since only one matrix needs to be inverted and no extra backpropagation pass is needed to compute ḡ.

Weaknesses:
- This paper should be polished to make it easier to read.
- Some related literature on second-order methods should be mentioned, such as Shampoo [1], KBFGS [2], NG+ [3], and SENG [4].
- Some of the theoretical properties are natural and trivial. It would be better to formalize an algorithm and give a convergence result.
- It would be better to add some large-scale numerical examples, such as ResNet-50 on ImageNet-1k, and to compare with other second-order methods such as Shampoo [1], KBFGS [2], NG+ [3], and SENG [4].

[1] Gupta, V., Koren, T., Singer, Y.: Shampoo: Preconditioned stochastic tensor optimization. In: International Conference on Machine Learning.
[2] Goldfarb, D., Ren, Y., Bahamou, A.: Practical quasi-Newton methods for training deep neural networks. In: Advances in Neural Information Processing Systems.
[3] Yang, M., Xu, D., Cui, Q., Wen, Z., Xu, P. (2021). NG+: A Multi-Step Matrix-Product Natural Gradient Method for Deep Learning. arXiv preprint arXiv:2106.07454.
[4] Yang, M., Xu, D., Wen, Z., Chen, M., Xu, P. (2020). Sketch-based empirical natural gradient methods for deep learning. Journal of Scientific Computing.

Questions
- Line 60: change "[14], [15]" to "[14, 15]". It is better to put multiple references in the same brackets. Please check the other places as well.
- It would be better to add a discussion of the relationship between the GGN matrix and the Fisher matrix, which can refer to [1, 2] and the related literature.
- Please add a left bracket after η and a right bracket before the gradient in eq. (16), and add brackets in a suitable place in eq. (17).
- In eq. (19), the gradient should be g⊤a/b; the factor b is missing.
- In eq. (19), the authors use the same batch to approximate the GGN matrix and to compute the gradient. Can the batches for the approximation of the GGN matrix and for the computation of the gradient be different? In that case, what would the related theoretical properties be?
- When the objective function is convex, the GGN matrix is positive semi-definite. After adding a regularization parameter to the GGN matrix, it is easy to show that ζ is an ascent direction, which means Theorem 2 is trivial.
- The authors consider fully-connected neural networks in this paper. I believe this method can be extended to convolutional neural networks, and in the numerical part the authors have done some such experiments. I wonder whether we can obtain some theoretical explanation for this case.
- For the Kronecker approximation of the GGN matrix, the a⊤a part is more important than the ḡ⊤ḡ part; I once validated this numerically. One reason is that each entry of the matrix ḡ⊤ḡ is very small, so that the λg term dominates.

I have noticed one related work recently, named NG+ [3].

[1] Botev, A., Ritter, H., Barber, D. "Practical Gauss-Newton optimisation for deep learning." International Conference on Machine Learning. PMLR, 2017.
[2] Martens, J. "New insights and perspectives on the natural gradient method." The Journal of Machine Learning Research 21.1 (2020): 5776-5851.
[3] Yang, M., Xu, D., Cui, Q., Wen, Z., Xu, P. (2021). NG+: A Multi-Step Matrix-Product Natural Gradient Method for Deep Learning. arXiv preprint arXiv:2106.07454.

Limitations
Yes
NIPS
Title ISAAC Newton: Input-based Approximate Curvature for Newton's Method Abstract We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that 1 conditions the gradient using selected second-order information and has an asymp2 totically vanishing computational overhead, assuming a batch size smaller than 3 the number of neurons. We show that it is possible to compute a good conditioner 4 based on only the input to a respective layer without a substantial computational 5 overhead. The proposed method allows effective training even in small-batch 6 stochastic regimes, which makes it competitive to first-order as well as quasi7 Newton methods. 8 N/A We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that1 conditions the gradient using selected second-order information and has an asymp-2 totically vanishing computational overhead, assuming a batch size smaller than3 the number of neurons. We show that it is possible to compute a good conditioner4 based on only the input to a respective layer without a substantial computational5 overhead. The proposed method allows effective training even in small-batch6 stochastic regimes, which makes it competitive to first-order as well as quasi-7 Newton methods.8 1 Introduction9 While second-order optimization methods are traditionally much less explored than first-order10 methods in large-scale machine learning (ML) applications due to their memory requirements and11 prohibitive computational cost per iteration, they have recently become more popular in ML mainly12 due to their fast convergence properties when compared to first-order methods [1]. The expensive13 computation of an inverse Hessian (also known as pre-conditioning matrix) in the Newton step has14 also been tackled via estimating the curvature from the change in gradients. Loosely speaking, these15 algorithms are known as quasi-Newton methods and a comprehensive treatment can be found in16 the textbook [2]. In addition, various new approximations to the pre-conditioning matrix have been17 proposed in the recent literature [3]–[6]. From a theoretical perspective, second-order optimization18 methods are not nearly as well understood as first-order methods. It is an active research direction to19 fill this gap [7], [8].20 Motivated by the task of training neural networks, and the observation that invoking local curvature21 information associated with neural network objective functions can achieve much faster progress22 per iteration than standard first-order methods [9]–[11], several methods have been proposed. One23 of these methods, that received significant attention, is known as Kronecker-factored Approximate24 Curvature (K-FAC) [12], whose main ingredient is a sophisticated approximation to the generalized25 Gauss-Newton matrix and the Fisher information matrix quantifying the curvature of the underlying26 neural network objective function, which then can be inverted efficiently.27 Inspired by the K-FAC approximation and the Tikhonov regularization of the Newton method, we28 introduce a novel two parameter regularized Kronecker-factorized Newton update step. 
The proposed29 scheme disentangles the classical Tikhonov regularization and allows us to condition the gradient30 using selected second-order information and has an asymptotically vanishing computational overhead.31 While this property makes the presented method highly attractive from the computational complexity32 perspective, we show that its achieved empirical performance on complicated high-dimensional33 Machine Learning problems remains comparable to existing state-of-the-art methods.34 The contributions of this paper can be summarized as follows: (i) we propose a novel two parameter35 regularized K-FAC approximated Gauss-Newton update step; (ii) we show that asymptotically—as36 Submitted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Do not distribute. both regularization parameters vanish—our method recovers the classical K-FAC scheme and in37 the opposite setting—as both regularization parameters grow—our method asymptotically reduces38 to classical gradient descent; (iii) we prove that for an arbitrary pair of regularization parameters,39 the proposed update direction is always a direction of decreasing loss; (iv) in the limit, as one40 regularization parameter grows, we obtain an efficient and effective conditioning of the gradient with41 an asymptotically vanishing overhead; (v) we empirically analyze the presented method and find that42 our efficient conditioning method maintains the performance of its more expensive counterpart; (vi)43 we demonstrate the effectiveness of the presented method in the setting of small-batch stochastic44 regimes and observe that it is competitive to first-order as well as quasi-Newton methods.45 2 Preliminaries46 In this section, we review aspects of second-order optimization, with a focus on generalized Gauss-47 Newton methods. In combination with Kronecker factorization, this leads us to a new regularized48 update scheme. We consider the training of an L-layer neural network f(x; θ) defined recursively as49 zi ← ai−1W (i) (pre-activations), ai ← ϕ(zi) (activations), (1) where a0 = x is the vector of inputs and aL = f(x; θ) is the vector of outputs. Unless noted otherwise,50 we assume these vectors to be row vectors (i.e., in R1×n) as this allows for a direct extension to the51 (batch) vectorized case (i.e., in Rb×n) introduced later. For any layer i, let W (i) ∈ Rdi−1×di be a52 weight matrix and let ϕ be an element-wise nonlinear function. We consider a convex loss function53 L(y, y′) that measures the discrepancy between y and y′. The training optimization problem is then54 argmin θ Ex,y [L(f(x; θ), y)] , (2) where θ = [ θ(1), . . . , θ(L) ] with θ(i) = vec(W (i)).55 The classical Newton method for solving (2) is expressed as the update rule56 θ′ = θ − ηH−1θ ∇θL(f(x; θ), y) , (3) where η > 0 denotes the learning rate and Hθ is the Hessian corresponding to the objective function57 in (2). The stability and efficiency of an estimation problem solved via the Newton method can be58 improved by adding a Tikhonov regularization term [13] leading to a regularized Newton method59 θ′ = θ − η (Hθ + λI)−1∇θL(f(x; θ), y) , (4) where λ > 0 is the so-called Tikhonov regularization parameter. It is well-known [14], [15], that60 under the assumption of approximating the model f with its first-order Taylor expansion, the Hessian61 corresponds with the so-called generalized Gauss-Newton (GGN) matrix Gθ, and hence (4) can be62 expressed as63 θ′ = θ − η (Gθ + λI)−1∇θL(f(x; θ), y) . 
(5) A major practical limitation of (5) is the computation of the inverse term. A method that alleviates this64 difficulty is known as Kronecker-Factored Approximate Curvature (K-FAC) [12] which approximates65 the block-diagonal (i.e., layer-wise) empirical Hessian or GGN matrix. Inspired by K-FAC, there66 have been other works discussing approximations of Gθ and its inverse [15]. In the following, we67 discuss a popular approach that allows for (moderately) efficient computation.68 The generalized Gauss-Newton matrix Gθ is defined as69 Gθ = E [ (Jθf(x; θ)) ⊤∇2fL(f(x; θ), y)Jθf(x; θ) ] , (6) where J and H denote the Jacobian and Hessian matrices, respectively. Correspondingly, the diagonal70 block of Gθ corresponding to the weights of the ith layer W (i) is71 GW (i)=E [ (JW (i)f(x; θ)) ⊤∇2fL(f(x; θ), y)JW (i)f(x; θ) ] . According to the backpropagation rule Jθ(i)f(x; θ) = Jzif(x; θ) ai−1, a ⊤b = a ⊗ b, and the72 mixed-product property, we can rewrite GW (i) as73 GW (i)=E [( (Jzif(x; θ) ai−1) ⊤(∇2fL(f(x; θ), y))1/2 )( (∇2fL(f(x; θ), y))1/2 Jzif(x; θ) ai−1 )] (7) = E [ (ḡ⊤ai−1) ⊤(ḡ⊤ai−1) ] = E [ (ḡ ⊗ ai−1)⊤(ḡ ⊗ ai−1) ] = E [ (ḡ⊤ḡ)⊗ (a⊤i−1 ⊗ ai−1) ] , (8) where74 ḡ = (Jzif(x; θ)) ⊤ (∇2fL(f(x; θ), y))1/2 . (9) Remark 1 (Monte-Carlo Low-Rank Approximation for ḡ⊤ḡ). As ḡ is a matrix of shape m × di75 where m is the dimension of the output of f , ḡ is generally expensive to compute. Therefore, [12] use76 a low-rank Monte-Carlo approximation to estimate HfL(f(x; θ), y) and thereby ḡ⊤ḡ. For this, we77 need to use the distribution underlying the probabilistic model of our loss L (e.g., Gaussian for MSE78 loss, or a categorical distribution for cross entropy). Specifically, by sampling from this distribution79 pf (x) defined by the network output f(x; θ), we can get an estimator of HfL(f(x; θ), y) via the80 identity81 HfL(f(x; θ), y) = Eŷ∼pf (x) [ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) ] . (10) An extensive reference for this (as well as alternatives) can be found in Appendix A.2 of Dangel et82 al. [15]. The respective rank-1 approximation (denoted by ≜) of HfL(f(x; θ)) is83 HfL(f(x; θ), y) ≜ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) , where ŷ ∼ pf (x). Respectively, we can estimate ḡ⊤ḡ using this rank-1 approximation with84 ḡ ≜ (Jzif(x; θ)) ⊤∇fL(f(x; θ), ŷ) = ∇ziL(f(x; θ), ŷ) . (11) In analogy to ḡ, we introduce the gradient of training objective with respect to pre-activations zi as85 gi = (Jzif(x; θ)) ⊤∇fL(f(x; θ), y) = ∇ziL(f(x; θ), y) . (12) In other words, for a given layer, let g ∈ R1×di denote the gradient of the loss between an output and86 the ground truth and let ḡ ∈ Rm×di denote the derivative of the network f times the square root of87 the Hessian of the loss function (which may be approximated according to Remark 1), each of them88 with respect to the output zi of the given layer i. Note that ḡ is not equal to g and that they require one89 backpropagation pass each (or potentially many for the case of ḡ). This makes computing ḡ costly.90 Applying the K-FAC [12] approximation to (8) the expectation of Kronecker products can be91 approximated as the Kronecker product of expectations as92 G = E((ḡ⊤ḡ)⊗ (a⊤a)) ≈ E(ḡ⊤ḡ)⊗ E(a⊤a) , (13) where, for clarity, we drop the index of ai−1 in (8) and denote it with a; similarly we denote GW (i)93 as G. While the expectation of Kronecker products is generally not equal to the Kronecker product94 of expectations, this K-FAC approximation (13) has been shown to be fairly accurate in practice95 and to preserve the “coarse structure” of the GGN matrix [12]. 
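To make the structure of the approximation in (13) and (14) concrete, here is a minimal sketch (ours, not part of the paper; the batch size and layer widths are arbitrary illustrative values). It assembles both sides of (13) from a synthetic batch: the exact batch average of per-sample Kronecker products on the left, and the Kronecker product of the averaged factors on the right. It only illustrates how the quantities are formed from ḡ and a, not the accuracy observations reported in [12].

import torch

# Compare E[(g_bar^T g_bar) x (a^T a)] with E[g_bar^T g_bar] x E[a^T a] on synthetic data.
# b, n_in, n_out are illustrative; g_bar plays the role of the per-sample rows from Remark 1.
torch.manual_seed(0)
b, n_in, n_out = 8, 16, 12
a = torch.randn(b, n_in)        # layer inputs, one row per sample
g_bar = torch.randn(b, n_out)   # rows of g_bar, one row per sample

# Left-hand side of (13): average of the per-sample Kronecker products.
G_exact = torch.stack([
    torch.kron(torch.outer(g_bar[i], g_bar[i]), torch.outer(a[i], a[i]))
    for i in range(b)
]).mean(0)

# Right-hand side of (13), estimated as in (14): Kronecker product of the averaged factors.
G_kfac = torch.kron(g_bar.T @ g_bar / b, a.T @ a / b)

rel_err = (G_exact - G_kfac).norm() / G_exact.norm()
print(f"relative Frobenius error of the K-FAC factorization: {rel_err:.3f}")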
The K-FAC decomposition in (13)96 is convenient as the Kronecker product has the favorable property that for two matrices A,B the97 identity (A⊗B)−1 = A−1 ⊗B−1 which significantly simplifies the computation of an inverse.98 In practice, E(ḡ⊤ḡ) and E(a⊤a) can be computed by averaging over a batch of size b as99 E(ḡ⊤ḡ) ≃ ḡ⊤ḡ/b, E(a⊤a) ≃ a⊤a/b, (14) where we denote batches of g, ḡ and a, as g ∈ Rb×di , ḡ ∈ Rrb×di and a ∈ Rb×di−1 , where our layer100 has di−1 inputs, di outputs, b is the batch size, and r is either the number of outputs m or the rank of101 an approximation according to Remark 1. Correspondingly, the K-FAC approximation of the GGN102 matrix and its inverse are concisely expressed as103 G ≈ (ḡ⊤ḡ)⊗ (a⊤a)/b2 G−1 ≈ ( ḡ⊤ḡ )−1⊗(a⊤a)−1 · b2 . (15) Equipped with the standard terminology and setting, we now introduce the novel, regularized update104 step. First, inspired by the K-FAC approximation (13), the Tikhonov regularized Gauss-Newton105 method (5) can be approximated by106 θ(i)′ = θ(i) − η(ḡ⊤ḡ/b+ λI)−1 ⊗ (a⊤a/b+ λI)−1∇θ(i)L(f(x; θ)), (16) with regularization parameter λ > 0. A key observation, which is motivated by the structure of107 the above update, is to disentangle the two occurrences of λ into two independent regularization108 parameters λg, λa > 0. By defining the Kronecker-factorized Gauss-Newton update step as109 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1∇θ(i)L(f(x; θ)), (17) we obtain the concise update equation110 θ(i)′ = θ(i) − η∗ζ . (18) This update (18) is equivalent to update (16) when in the case of η∗ = ηλgλa and λ = λg = λa. This111 equivalence does not restrict η∗, λg, λa in any way, and changing λg or λa does not mean that we112 change our learning rate or step size η∗. Parameterizing ζ in (17) with the multiplicative terms λgλa113 makes the formulation more convenient for analysis.114 In this paper, we investigate the theoretical and empirical properties of the iterative update rule (18)115 and in particular show how the regularization parameters λg, λa affect the Kronecker-factorized116 Gauss-Newton update step ζ . When analyzing the Kronecker-factorized Gauss-Newton update step117 ζ , a particularly useful tool is the vector product identity,118 (( ḡ⊤ḡ )−1 ⊗ (a⊤a)−1) vec(g⊤a) = vec((ḡ⊤ḡ)−1 g⊤a (a⊤a)−1) , (19) where the gradient with respect to the weight matrix is g⊤a.119 3 Theoretical Guarantees120 In this section, we investigate the theoretical properties of the Kronecker-factorized Gauss-Newton121 update direction ζ as defined in (17). We recall that ζ introduces a Tikonov regularization, as it is122 commonly done in implementations of second order-based methods. Not surprisingly, we show that123 by decreasing the regularization parameters λg, λa the update rule (18) collapses (in the limit) to the124 classical Gauss-Newton method, and hence in the regime of small λg, λa the variable ζ describes the125 Gauss-Newton direction. Moreover, by increasing the regularization strength, we converge (in the126 limit) to the conventional gradient descent update step.127 The key observation is that, as we disentangle the regularization of the two Kronecker factors ḡ⊤ḡ128 and a⊤a, and consider the setting where only one regularizer is large (λg → ∞ to be precise),129 we obtain an update direction that can be computed highly efficiently. We show that this setting130 describes an approximated Gauss-Newton update scheme, whose superior numerical performance is131 then empirically demonstrated in Section 4.132 Theorem 1 (Properties of ζ ). 
The K-FAC based update step ζ as defined in (17) can be expressed as133 ζ = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (20) Moreover, ζ admits the following asymptotic properties:134 (i) In the limit of λg, λa → 0, 1λgλaζ is the K-FAC approximation of the Gauss-Newton step, i.e.,135 limλg,λa→0 1 λgλa ζ ≈ G−1∇θ(i)L(f(x; θ)), where ≈ denotes the K-FAC approximation (15).136 (ii) In the limit of λg, λa →∞, ζ is the gradient, i.e., limλg,λa→∞ ζ = ∇θ(i)L(f(x; θ)).137 The Proof is deferred to the Supplementary Material.138 We want to show that ζ is well-defined and points in the correct direction, not only for λg and λa139 numerically close to zero because we want to explore the full spectrum of settings for λg and λa.140 Thus, we prove that ζ is a direction of increasing loss, independent of the choices of λg and λa.141 Theorem 2 (Correctness of ζ is independent of λg and λa). ζ is a direction of increasing loss,142 independent of the choices of λg and λa.143 Proof. Recall that (λgIm+ḡ⊤ḡ/b) and (λaIn+a⊤a/b) are positive semi-definite (PSD) matrices by144 definition. Their inverses (λgIm + ḡ⊤ḡ/b)−1 and (λaIn + a⊤a/b)−1 are therefore also PSD. As the145 Kronecker product of PSD matrices is PSD, the conditioning matrix ((λgIm + ḡ⊤ḡ/b)−1 ⊗ (λaIn +146 a⊤a/b)−1 ≈ G−1) is PSD, and therefore the direction of the update step remains correct.147 From our formulation of ζ , we can find that, in the limit for λg →∞, Equation (21) does not depend148 on ḡ . This is computationally very beneficial as computing ḡ is costly as it requires one or even149 many additional backpropagation passes. In addition, it allows conditioning the gradient update by150 multiplying a b× b matrix between g⊤ and a, which can be done very fast.151 Theorem 3 (Efficient Update Direction). In the limit of λg → ∞, the update step ζ converges to152 limλg→∞ ζ = ζ ∗, where153 ζ∗= g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (21) (i) Here, the update direction ζ∗ is based only on the inputs and does not require computing ḡ154 (which would require a second backpropagation pass), making it efficient.155 (ii) The computational cost of computing the update ζ∗ lies in O(bn2 + b2n+ b3), where n is the156 number of neurons in each layer. This comprises the conventional cost of computing the gradient157 ∇ = g⊤x lying inO(bn2), and the overhead of computing ζ∗ instead of∇ lying inO(b2n+b3).158 The overhead is vanishing, assuming n≫ b. For b > n the complexity lies in O(bn2 + n3).159 Proof. We first show the property (21). Note that according to (22), λg · ( λgIm + ḡ ⊤ḡ/b )−1 con-160 verges in the limit of λg →∞ to Im, and therefore (21) holds.161 (i) The statement follows from the fact that the term ḡ does not appear in the equivalent characteriza-162 tion (21) of ζ∗.163 (ii) We first note that the matrix aa⊤ is of dimension b × b, and can be computed in O(b2n) time.164 Next, the matrix165 ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) is of shape b× b and can be multiplied with a in O(b2n) time.166 Notably, (21) can be computed with a vanishing computational overhead and with only minor167 modifications to the implementation. Specifically, only the g⊤a expression has to be replaced by (21)168 in the backpropagation step. 
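To illustrate the preceding point in code, here is a minimal sketch (ours, not from the paper; b, the layer widths, and λa are arbitrary values chosen so that b is smaller than the number of neurons). It computes the update of Theorem 3 in two algebraically equivalent ways: the λg → ∞ limit of (17), namely λa g⊤a (λa I_n + a⊤a/b)⁻¹, which inverts an n×n matrix on the input side, and the form (21), which only inverts a b×b matrix between g⊤ and a.

import torch

# zeta* computed in two equivalent ways; float64 keeps the comparison tight.
torch.manual_seed(0)
b, n_in, n_out, lam_a = 16, 256, 128, 0.1
a = torch.randn(b, n_in, dtype=torch.float64)    # layer inputs
g = torch.randn(b, n_out, dtype=torch.float64)   # gradients w.r.t. the pre-activations

# Input-side form: lam_a * g^T a (lam_a I_n + a^T a / b)^{-1}, an n_in x n_in inverse.
I_n = torch.eye(n_in, dtype=torch.float64)
naive = lam_a * (g.T @ a) @ torch.linalg.inv(lam_a * I_n + a.T @ a / b)

# Form (21): only a b x b matrix is inverted between g^T and a.
aaT = a @ a.T
I_b = torch.eye(b, dtype=torch.float64)
zeta_star = g.T @ (I_b - aaT @ torch.linalg.inv(b * lam_a * I_b + aaT)) @ a

print(torch.allclose(naive, zeta_star))  # True (up to floating-point error)

With b much smaller than n, the b×b inverse is the only work beyond forming g⊤a, which is the overhead accounted for in Theorem 3 (ii).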
As this can be done independently for each layer, this lends itself also to169 applying it only to individual layers.170 As we see in the experimental section, in many cases in the mini-batch regime (i.e., b < n), the171 optimal (or a good) choice for λg actually lies in the limit to∞. This is a surprising result, leading to172 the efficient and effective ζ∗ = ζλg→∞ optimizer.173 Remark 2 (Relation between Update Direction ζ and ζ∗). When comparing the update direction174 ζ in (20) without regularization (i.e., λg → 0, λa → 0) with ζ∗ (i.e., λg → ∞) as given in (21), it175 can be directly seen that ζ∗ corresponds to a particular pre-conditioning of ζ , since ζ∗ = Mζ for176 M = 1bλg ḡ ⊤ḡ .177 As the last theoretical property of our proposed update direction ζ∗, we show that in specific networks178 ζ∗ coincides with the Gauss-Newton update direction.179 Theorem 4 (ζ∗ is Exact for the Last Layer). For the case of linear regression or, more generally, the180 last layer of networks, with the mean squared error, ζ∗ is the Gauss-Newton update direction.181 Proof. The Hessian matrix of the mean squared error loss is the identity matrix. Correspondingly,182 the expectation value of ḡ⊤ḡ is I. Thus, ζ∗ = ζ .183 Remark 3. The direction ζ∗ corresponds to the Gauss-Newton update direction with an approxima-184 tion of G that can be expressed as G ≈ E [ I⊗ (a⊤a) ] .185 Remark 4 (Extension to the Natural Gradient). In some cases, it might be more desirable to use the186 Fisher-based natural gradient instead of the Gauss-Newton method. The difference to this setting is187 that in (5) the GGN matrix G is replaced by the empirical Fisher information matrix F.188 We note that our theory also applies to F, and that ζ∗ also efficiently approximates the natural189 gradient update step F−1∇. The i-th diagonal block of F (Fθ(i) = E [ (g⊤i gi)⊗ (a⊤i−1 ⊗ ai−1) ] ),190 has the same form as a block of the GGN matrix G (Gθ(i) = E [ (ḡ⊤i ḡi)⊗ (a⊤i−1 ⊗ ai−1) ] ).191 Thus, we can replace ḡ with g in our theoretical results to obtain their counterparts for F.192 4 Experiments193 In the previous section, we discussed the theoretical properties of the proposed update directions194 ζ and ζ∗ with the aspect that ζ∗ would actually be “free” to compute in the mini-batch regime. In195 this section, we provide empirical evidence that ζ∗ is a good update direction, even in deep learning.196 Specifically, we demonstrate that197 (E1) ζ∗ achieves similar performance to K-FAC, while being substantially cheaper to compute.198 (E2) The performance of our proposed method can be empirically maintained in the mini-batch199 regime (n≫ b).200 (E3) ζ∗ may be used for individual layers, while for other layers only the gradient ∇ is used. This201 still leads to improved performance.202 (E4) ζ∗ also improves the performance for training larger models such as BERT and ResNet.203 (E5) The runtime and memory requirements of ζ∗ are comparable to those of gradient descent.204 E1: Impact of Regularization Parameters205 For (E1), we study the dependence of the model’s performance on the regularization parameters λg206 and λa. Here, we train a 5-layer deep neural network on the MNIST classification task [16] with a207 batch size of 60 for a total of 40 epochs or 40 000 steps.208 The plots in Figure 1 demonstrate that the advantage of training by conditioning with curvature209 information can be achieved by considering both layer inputs a and gradients with respect to random210 samples ḡ , but also using only layer inputs a. 
In the plot, we show the performance of ζ for different choices of λg and λa, each in the range from 10⁻⁶ to 10⁶. The right column shows ζ∗, i.e., λg = ∞, for different λa. The bottom-right corner is gradient descent, which corresponds to λg = ∞ and λa = ∞. Newton's method or the general K-FAC approximation corresponds to the area with small λg and λa. The interesting finding here is that the performance does not suffer by increasing λg toward ∞, i.e., from left to right in the plot.
In addition, in Figure 3, we consider the case of regression with an auto-encoder trained with the MSE loss on MNIST [16] and Fashion-MNIST [17]. Here, we follow the same principle as above and also find that ζ∗ performs well.
Figure 3: Training an auto-encoder on MNIST (left) and Fashion-MNIST (right). The model is the same as used by Botev et al. [18], i.e., a ReLU-activated 6-layer fully connected model with dimensions 784-1000-500-30-500-1000-784. Displayed is the logarithmic training loss. (Heatmaps over log10 λg and log10 λa, each from −6 to +6.)
Figure 4: Training a 5-layer ReLU network with 400 neurons per layer on the MNIST classification task (as in Figure 1) but with the Adam optimizer [19]. (Heatmaps over log10 λg and log10 λa, each from −6 to +6.)
In Figure 7, we compare the loss for different methods. Here, we distinguish between loss per time (left) and loss per number of steps (right). We can observe that, for λ = 0.1, K-FAC, ζ, and ζ∗ are almost identical per update step (right), while ζ∗ is by a large margin the fastest, followed by ζ, and the conventional K-FAC implementation is the slowest (left). On the other hand, for λ = 0.01 we can achieve a faster convergence than with λ = 0.1, but here only the K-FAC and ζ methods are numerically stable, while ζ∗ is unstable in this case. This means that in the regime of very small λ, ζ∗ is not as robust as K-FAC and ζ; however, it achieves good performance with a small but moderate λ such as λ = 0.1. For λ < 0.01, K-FAC and ζ also become numerically unstable in this setting and, in general, we observed that the smallest valid λ for K-FAC is 0.01 or 0.001 depending on model and task. Taking the runtime into consideration, ζ∗ performs best as it is almost as fast as gradient descent while performing equivalently to K-FAC and ζ. Specifically, a gradient descent step is only about 10% faster than ζ∗.
E2: Minibatch Regime
For (E2), in Figure 1, we can see that training performs well for n ∈ {100, 400, 1600} neurons per layer at a batch size of only 60. Also, in all other experiments, we use small batch sizes of between 8 and 100.
E3: ζ∗ in Individual Layers
In Figure 5, we train the 5-layer fully connected model with 400 neurons per layer. Here, we consider the setting that we use ζ∗ in some of the layers while using the default gradient ∇ in other layers. Specifically, we consider the settings where all, the first, the final, the first three, the final three, the odd-numbered, and the even-numbered layers are updated by ζ∗.
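For readers who want to try such a mixed configuration, here is a minimal sketch (ours, not the paper's exact setup; the layer widths, la=1.0, the learning rate, and the choice of which layers use ζ∗ are illustrative assumptions). It combines the ISAACLinear module from Appendix A with ordinary nn.Linear layers so that only the first three layers are updated with ζ∗.

import torch
from torch import nn
# Assumes the ISAACLinear class from Appendix A is defined in this scope.
# Only the first three layers use the zeta*-conditioned gradient; the rest use the plain gradient.
model = nn.Sequential(
    ISAACLinear(784, 400, la=1.0), nn.ReLU(),
    ISAACLinear(400, 400, la=1.0), nn.ReLU(),
    ISAACLinear(400, 400, la=1.0), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(60, 784)              # dummy MNIST-sized batch (batch size 60 as above)
y = torch.randint(0, 10, (60,))
optimizer.zero_grad()
loss_fn(model(x), y).backward()       # ISAACLinear layers emit the conditioned weight gradient
optimizer.step()

Because the conditioning is applied inside the layer's backward pass, any standard optimizer can consume the resulting gradients, in line with Remark 5.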
We observe that all settings with ζ∗ perform better than263 plain gradient descent, except for “ζ∗ for layers 3,4,5” which performs approximately equivalent to264 gradient descent.265 E4: Large-scale Models266 BERT To demonstrate the utility of ζ∗ also in large-scale models, we evaluate it for fine-tuning267 BERT [20] on three natural language tasks. In Table 1, we summarize the results for the BERT268 fine-tuning task. For the “Corpus of Linguistic Acceptability” (CoLA) [21] data set, we fine-tune269 both the BERT-Base and the BERT-Mini models and find that we outperform the gradient descent270 baseline in both cases. For the “Microsoft Research Paraphrase Corpus” (MRPC) [22] data set, we271 fine-tune the BERT-Base model and find that we outperform the baseline both in terms of accuracy272 and F1-score. Finally, on the “Semantic Textual Similarity Benchmark” (STS-B) [23] data set, we273 fine-tune the BERT-Mini model and achieve higher Pearson and Spearman correlations than the274 baseline. While for training with CoLA and MRPC, we were able to use the Adam optimizer [19]275 (which is recommended for this task and model) in conjunction with ζ∗ in place of the gradient,276 for STS-B Adam did not work well. Therefore, for STS-B, we evaluated it using the SGD with277 momentum optimizer. For each method, we performed a grid search over the hyperparameters. We278 note that we use a batch size of 8 in all BERT experiments.279 ResNet In addition, we conduct an experiment280 where we train the last layer of a ResNet with281 ζ∗, while the remainder of the model is up-282 dated using the gradient ∇. Here, we train a283 ResNet-18 [24] on CIFAR-10 [25] using SGD284 with a batch size of 100 in a vanilla setting, i.e.,285 without additional tricks employed in by He et286 al. [24] and others. Specifically, we use (i) a287 constant learning rate for each training (optimal288 from (1, 0.3, 0.1, 0.03, 0.01)) and (ii) vanilla289 SGD and not momentum-based SGD. The rea-290 son behind this is that we want a vanilla experi-291 ment and with aspects such as extensively tuning292 multiple parameters of learning rate scheduler293 would make the evaluation less transparent; how-294 ever, therefore, all accuracies are naturally lower than SOTA. In Figure 6, we plot the test accuracy295 against time. The results show that the proposed method outperforms vanilla SGD when applied296 to the last layer of a ResNet-18. To validate that the learning rate is not the cause for the better297 performance, we also plot the neighboring learning rates and find that even with a too small or too298 large learning rate ζ∗ outperforms gradient descent with the optimal learning rate.299 E5: Runtime and Memory300 Finally, we also evaluate the runtime and memory requirements of each method. The runtime301 evaluation is displayed in Table 2. We report both CPU and GPU runtime using PyTorch [26] and302 (for K-FAC) the backpack library [15]. Note that the CPU runtime is more representative of the303 pure computational cost, as for the first rows of the GPU runtime the overhead of calling the GPU304 is dominant. When comparing runtimes between the gradient and ζ∗ on the GPU, we can observe305 that we have an overhead of around 2.5 s independent of the model size. The overhead for CPU time306 is also very small at less than 1% for the largest model, and only 1.3 s for the smallest model. In307 contrast, the runtime of ζ∗ is around 4 times the runtime of the gradient, and K-FAC has an even308 substantially larger runtime. 
Regarding memory, ζ∗ (contrasting the other approaches) also requires309 only a small additional footprint.310 Remark 5 (Implementation). The implementation of ζ∗ can be done by replacing the backpropagation311 step of a respective layer by (21). As all “ingredients” are already available in popular deep learning312 frameworks, it requires only little modification (contrasting K-FAC and ζ , which require at least one313 additional backpropagation.)314 We will publish the source code of our implementation. In the appendix, we give a PyTorch [26]315 implementation of the proposed method (ζ∗).316 5 Related Work317 Our methods are related to K-FAC by Martens and Grosse [12]. K-FAC uses the approximation318 (13) to approximate the blocks of the Hessian of the empirical risk of neural networks. In most319 implementations of K-FAC, the off-diagonal blocks of the Hessian are also set to zero. One of the320 main claimed benefits of K-FAC is its speed (compared to stochastic gradient descent) for large-batch321 size training. That said, recent empirical work has shown that this advantage of K-FAC disappears322 once the additional computational costs of hyperparameter tuning for large batch training is accounted323 for. There is a line of work that extends the basic idea of K-FAC to convolutional layers [27]. Botev et324 al. [18] further extend these ideas to present KFLR, a Kronecker factored low-rank approximation,325 and KFRA, a Kronecker factored recursive approximation of the Gauss-Newton step. Singh and326 Alistarh [28] propose WoodFisher, a Woodbury matrix inverse-based estimate of the inverse Hessian,327 and apply it to neural network compression. Yao et al. [29] propose AdaHessian, a second-order328 optimizer that incorporates the curvature of the loss function via an adaptive estimation of the Hessian.329 Frantar et al. [6] propose M-FAC, a matrix-free approximation of the natural gradient through a queue330 of the (e.g., 1 000) recent gradients. These works fundamentally differ from our approach in that their331 objective is to approximate the Fisher or Gauss-Newton matrix inverse vector products. In contrast,332 this work proposes to approximate the Gauss-Newton matrix by only one of its Kronecker factors,333 which we find to achieve good performance at a substantial computational speedup and reduction of334 memory footprint. For an overview of this area, we refer to Kunstner et al. [30] and Martens [31].335 For an overview of the technical aspects of backpropagation of second-order quantities, we refer to336 Dangel et al. [15], [32]337 Taking a step back, K-FAC is one of many Newton-type methods for training neural networks.338 Other prominent examples of such methods include subsampled Newton methods [33], [34] (which339 approximate the Hessian by subsampling the terms in the empirical risk function and evaluating the340 Hessian of the subsampled terms) and sketched Newton methods [3]–[5] (which approximate the341 Hessian by sketching, e.g., by projecting the Hessian to a lower-dimensional space by multiplying it342 with a random matrix). The main features that distinguish K-FAC from this group of methods are343 K-FAC’s superior empirical performance and K-FAC’s lack of theoretical justification.344 6 Conclusion345 In this work, we presented ISAAC Newton, a novel approximate curvature method based on layer-346 inputs. We demonstrated it to be a special case of the regularization-generalized Gauss-Newton347 method and empirically demonstrate its utility. 
Specifically, our method features an asymptotically348 vanishing computational overhead in the mini-batch regime, while achieving competitive empirical349 performance on various benchmark problems.350 References351 [1] N. Agarwal, B. Bullins, and E. Hazan, “Second-order stochastic optimization for machine352 learning in linear time,” Journal on Machine Learning Research, vol. 18, no. 1, pp. 4148–4187,353 2017.354 [2] J. Nocedal and S. J. Wright, Numerical Optimization, 2e. New York, NY, USA: Springer, 2006.355 [3] A. Gonen and S. Shalev-Shwartz, “Faster SGD using sketched conditioning,” arXiv preprint,356 arXiv:1506.02649, 2015.357 [4] M. Pilanci and M. J. Wainwright, “Newton sketch: A near linear-time optimization algorithm358 with linear-quadratic convergence,” SIAM Journal on Optimization, vol. 27, 2017.359 [5] M. A. Erdogdu and A. Montanari, “Convergence rates of sub-sampled Newton methods,” in360 Proc. Neural Information Processing Systems (NeurIPS), 2015.361 [6] E. Frantar, E. Kurtic, and D. Alistarh, “M-FAC: Efficient matrix-free approximations of362 second-order information,” in Proc. Neural Information Processing Systems (NeurIPS), 2021.363 [7] N. Doikov and Y. Nesterov, “Convex Optimization based on Global Lower Second-order364 Models,” in Proc. Neural Information Processing Systems (NeurIPS), Curran Associates, Inc.,365 2020.366 [8] Y. Nesterov and B. T. Polyak, “Cubic regularization of Newton method and its global perfor-367 mance,” Mathematical Programming, vol. 108, 2006.368 [9] S. Becker and Y. Lecun, “Improving the convergence of back-propagation learning with369 second-order methods,” 1989.370 [10] T. Schaul, S. Zhang, and Y. LeCun, “No more pesky learning rates,” in International Conference371 on Machine Learning (ICML), 2013.372 [11] Y. Ollivier, “Riemannian metrics for neural networks i: Feedforward networks,” Information373 and Inference, vol. 4, pp. 108–153, Jun. 2015.374 [12] J. Martens and R. Grosse, “Optimizing neural networks with Kronecker-factored approximate375 curvature,” in International Conference on Machine Learning (ICML), 2015.376 [13] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-posed problems. W.H. Winston, 1977.377 [14] P. Chen, “Hessian matrix vs. Gauss—Newton Hessian matrix,” SIAM Journal on Numerical378 Analysis, 2011.379 [15] F. Dangel, F. Kunstner, and P. Hennig, “Backpack: Packing more into backprop,” in Interna-380 tional Conference on Learning Representations, 2020.381 [16] Y. LeCun, C. Cortes, and C. Burges, “MNIST Handwritten Digit Database,” ATT Labs, 2010.382 [17] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking383 machine learning algorithms,” arXiv, 2017.384 [18] A. Botev, H. Ritter, and D. Barber, “Practical Gauss-Newton optimisation for deep learning,”385 in International Conference on Machine Learning (ICML), 2017.386 [19] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Confer-387 ence on Learning Representations (ICLR), 2015.388 [20] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional389 transformers for language understanding,” in North American Chapter of the Association for390 Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018.391 [21] A. Warstadt, A. Singh, and S. R. Bowman, “Neural network acceptability judgments,” Trans-392 actions of the Association for Computational Linguistics, vol. 7, 2019.393 [22] W. B. Dolan and C. 
Brockett, “Automatically constructing a corpus of sentential paraphrases,”394 in Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.395 [23] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “SemEval-2017 task 1: Semantic396 textual similarity multilingual and crosslingual focused evaluation,” in Proceedings of the397 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada:398 Association for Computational Linguistics, 2017.399 [24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in400 Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.401 [25] A. Krizhevsky, V. Nair, and G. Hinton, “Cifar-10 (Canadian Institute for Advanced Research),”402 2009.403 [26] A. Paszke, S. Gross, F. Massa, et al., “Pytorch: An imperative style, high-performance deep404 learning library,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.405 [27] R. Grosse and J. Martens, “A Kronecker-factored approximate Fisher matrix for convolution406 layers,” in International Conference on Machine Learning (ICML), 2016.407 [28] S. P. Singh and D. Alistarh, “Woodfisher: Efficient second-order approximation for neural408 network compression,” in Proc. Neural Information Processing Systems (NeurIPS), 2020.409 [29] Z. Yao, A. Gholami, S. Shen, M. Mustafa, K. Keutzer, and M. W. Mahoney, “Adahessian:410 An adaptive second order optimizer for machine learning,” in AAAI Conference on Artificial411 Intelligence, 2021.412 [30] F. Kunstner, L. Balles, and P. Hennig, “Limitations of the empirical Fisher approximation for413 natural gradient descent,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.414 [31] J. Martens, “New insights and perspectives on the natural gradient method,” Journal of Machine415 Learning Research, 2020.416 [32] F. Dangel, S. Harmeling, and P. Hennig, “Modular block-diagonal curvature approximations417 for feedforward architectures,” in International Conference on Artificial Intelligence and418 Statistics (AISTATS), 2020.419 [33] F. Roosta-Khorasani and M. W. Mahoney, “Sub-Sampled Newton Methods I: Globally Con-420 vergent Algorithms,” arXiv: 1601.04737, 2016.421 [34] P. Xu, J. Yang, F. Roosta, C. Ré, and M. W. Mahoney, “Sub-sampled Newton Methods with422 Non-uniform Sampling,” in Proc. Neural Information Processing Systems (NeurIPS), 2016.423 Checklist424 1. For all authors...425 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s426 contributions and scope? [Yes]427 (b) Did you describe the limitations of your work? [Yes]428 (c) Did you discuss any potential negative societal impacts of your work? [N/A]429 (d) Have you read the ethics review guidelines and ensured that your paper conforms to them?430 [Yes]431 2. If you are including theoretical results...432 (a) Did you state the full set of assumptions of all theoretical results? [Yes]433 (b) Did you include complete proofs of all theoretical results? [Yes]434 3. If you ran experiments...435 (a) Did you include the code, data, and instructions needed to reproduce the main experimental436 results (either in the supplemental material or as a URL)? [Yes] / [No] We include a437 Python / PyTorch implementation of the method in the supplementary material. We will438 publicly release full source code for the experiments.439 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were440 chosen)? 
[Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A PyTorch Implementation
We display a PyTorch [26] implementation of ISAAC for a fully-connected layer below. Here, we mark the important part (i.e., the part beyond the boilerplate) with a red rectangle.

import torch

class ISAACLinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias, la, inv_type):
        ctx.save_for_backward(input, weight, bias)
        ctx.la = la
        if inv_type == 'cholesky_inverse':
            ctx.inverse = torch.cholesky_inverse
        elif inv_type == 'inverse':
            ctx.inverse = torch.inverse
        else:
            raise NotImplementedError(inv_type)
        return input @ weight.T + (bias if bias is not None else 0)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        if ctx.needs_input_grad[0]:
            grad_0 = grad_output @ weight
        else:
            grad_0 = None
        if ctx.needs_input_grad[1]:
            aaT = input @ input.T / grad_output.shape[0]
            I_b = torch.eye(aaT.shape[0], device=aaT.device, dtype=aaT.dtype)
            aaT_IaaT_inv = aaT @ ctx.inverse(aaT / ctx.la + I_b)
            grad_1 = grad_output.T @ (
                I_b - 1. / ctx.la * aaT_IaaT_inv
            ) @ input
        else:
            grad_1 = None
        return (
            grad_0,
            grad_1,
            grad_output.mean(0, keepdim=True) if bias is not None else None,
            None,
            None,
            None,
        )

class ISAACLinear(torch.nn.Linear):
    def __init__(self, in_features, out_features, la, inv_type='inverse', **kwargs):
        super(ISAACLinear, self).__init__(
            in_features=in_features, out_features=out_features, **kwargs
        )
        self.la = la
        self.inv_type = inv_type

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return ISAACLinearFunction.apply(
            input,
            self.weight,
            self.bias.unsqueeze(0) if self.bias is not None else None,
            self.la,
            self.inv_type
        )

B Implementation Details
Unless noted differently, for all experiments, we tune the learning rate on a grid of (1, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001). We verified this range to cover the full reasonable range of learning rates.
Specifically, for every single experiment, we made sure that there is no learning rate467 outside this range which performs better.468 For all language model experiments, we used the respective Huggingface PyTorch implementation.469 All other hyperparameter details are given in the main paper.470 The code will be made publicly available.471 C Additional Proofs472 Proof of Theorem 1. We first show, that ζ as defined in (17) can be expressed as in (20). Indeed by473 using (19), the Woodbury matrix identity and by regularizing the inverses, we can see that474 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1g⊤a = λgλa · ( λgIm + ḡ ⊤ḡ/b )−1 g⊤a ( λaIn + a ⊤a/b )−1 = λgλa · ( 1 λg Im − 1 bλg 2 ḡ ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) g⊤a ( 1 λa In − 1 bλa 2 a ⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · a · ( In − 1 bλa a⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( a− 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a To show Assertion (i), we note that according to (17)475 lim λg,λa→0 1 λgλa ζ = lim λg,λa→0 (ḡ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1g⊤a = (ḡ⊤ḡ)−1 ⊗ (a⊤a)−1g⊤a ≈ G−1g⊤a, where the first equality uses the definition of ζ in (17). The second equality is due to the continuity of476 the matrix inversion and the last approximate equality follows from the K-FAC approximation (15).477 To show Assertion (ii), we consider limλg→∞ and limλa→∞ independently, that is478 lim λg→∞ λg · ( λgIm + ḡ ⊤ḡ/b )−1 (22) = lim λg→∞ ( Im + 1 bλg ḡ⊤ḡ )−1 = Im, and479 lim λa→∞ λa · ( λaIn + a ⊤a/b )−1 (23) = lim λa→∞ ( In + 1 bλa a⊤a )−1 = In. This then implies480 lim λg,λa→∞ λg ( λgIm + ḡ ⊤ḡ/b )−1 · g⊤ (24) · a · λa ( λaIn + a ⊤a/b )−1 = Im · g⊤a · In = g⊤a, which concludes the proof.481 D Additional Experiments482
1. What is the focus and contribution of the paper regarding improving computation efficiency in KFAC? 2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical guarantee and empirical results? 3. Do you have any concerns or questions regarding the motivation and novelty of the solution? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? 5. Are there any limitations or areas for improvement regarding the training settings and comparisons with other optimizers?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper proposes a K-FAC variant to improve computational efficiency. It introduces two Tikhonov regularization terms into K-FAC and makes gradient conditioning more efficient by letting λg go to infinity. The authors present a theoretical guarantee that the optimization still moves in the right direction even with such an adaptation. The authors also show that ISAAC is more computationally efficient in small-batch stochastic regimes. However, I feel the contributions of this paper are rather limited, and some of the motivation needs further justification.

Strengths And Weaknesses
Strengths: The paper is well written. Preliminaries and methods are clearly explained.

Weaknesses:
1. Contributions are limited. Reducing the computational cost of K-FAC is a good problem. However, I am not convinced that separating the regularization term and using a large regularization is a novel and effective solution. It only makes the optimizer behave more and more like SGD. As the authors state in Theorem 1, small λa, λg leads to K-FAC, while large λa, λg leads to vanilla gradient descent.
2. Empirical results need to be strengthened. The authors use several empirical results to support the claim that ISAAC still behaves like K-FAC even when using a strong regularization factor. However, I do not think the current results are strong enough to support this claim. The issues in the empirical evaluation also make me doubt the contributions of the paper.
2.1. The authors should provide more baselines (Adam, SGD with momentum) so that we can have a fair evaluation.
2.2. For Figure 1, my understanding is that the authors are trying to show that ISAAC still obtains good results (loss and accuracy) even with large regularization. If so, I do not think this is necessary: as the authors state in Theorem 1, larger regularization makes ISAAC behave more like SGD, so obtaining similar loss and accuracy is no surprise.
2.3. For Figure 2, there are several issues. First, the hyperparameters are not fully revealed; for instance, what is the weight decay for each optimizer, given that weight decay largely affects training convergence? Second, only the training curve is reported. The authors really should present both training and validation curves; in many cases we can have faster convergence on the training dataset but not on the validation dataset.
2.4. For Figure 6, the reported accuracy is obviously lower than the results we usually obtain on CIFAR-10. The reason might be that the authors use SGD without momentum. However, I am not convinced by such a setting: SGD with momentum is the default in many training tasks, and without momentum I do not think the results provide meaningful comparisons and conclusions.
3. Training with small batches should be further justified. I understand that the authors stress the computational benefits of ISAAC in the small-batch regime (Theorem 3 (ii)). However, considering more general settings, the authors should justify this well: is it worth accelerating training with small batches, given that large batches can also improve convergence performance even when using SGD? Therefore, not only should the authors provide results in small-batch regimes; training with large batches should also be evaluated and compared against other baselines.

Questions
See my comments above.

Limitations
No
NIPS
Title ISAAC Newton: Input-based Approximate Curvature for Newton's Method Abstract We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that 1 conditions the gradient using selected second-order information and has an asymp2 totically vanishing computational overhead, assuming a batch size smaller than 3 the number of neurons. We show that it is possible to compute a good conditioner 4 based on only the input to a respective layer without a substantial computational 5 overhead. The proposed method allows effective training even in small-batch 6 stochastic regimes, which makes it competitive to first-order as well as quasi7 Newton methods. 8 N/A We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that1 conditions the gradient using selected second-order information and has an asymp-2 totically vanishing computational overhead, assuming a batch size smaller than3 the number of neurons. We show that it is possible to compute a good conditioner4 based on only the input to a respective layer without a substantial computational5 overhead. The proposed method allows effective training even in small-batch6 stochastic regimes, which makes it competitive to first-order as well as quasi-7 Newton methods.8 1 Introduction9 While second-order optimization methods are traditionally much less explored than first-order10 methods in large-scale machine learning (ML) applications due to their memory requirements and11 prohibitive computational cost per iteration, they have recently become more popular in ML mainly12 due to their fast convergence properties when compared to first-order methods [1]. The expensive13 computation of an inverse Hessian (also known as pre-conditioning matrix) in the Newton step has14 also been tackled via estimating the curvature from the change in gradients. Loosely speaking, these15 algorithms are known as quasi-Newton methods and a comprehensive treatment can be found in16 the textbook [2]. In addition, various new approximations to the pre-conditioning matrix have been17 proposed in the recent literature [3]–[6]. From a theoretical perspective, second-order optimization18 methods are not nearly as well understood as first-order methods. It is an active research direction to19 fill this gap [7], [8].20 Motivated by the task of training neural networks, and the observation that invoking local curvature21 information associated with neural network objective functions can achieve much faster progress22 per iteration than standard first-order methods [9]–[11], several methods have been proposed. One23 of these methods, that received significant attention, is known as Kronecker-factored Approximate24 Curvature (K-FAC) [12], whose main ingredient is a sophisticated approximation to the generalized25 Gauss-Newton matrix and the Fisher information matrix quantifying the curvature of the underlying26 neural network objective function, which then can be inverted efficiently.27 Inspired by the K-FAC approximation and the Tikhonov regularization of the Newton method, we28 introduce a novel two parameter regularized Kronecker-factorized Newton update step. 
The proposed29 scheme disentangles the classical Tikhonov regularization and allows us to condition the gradient30 using selected second-order information and has an asymptotically vanishing computational overhead.31 While this property makes the presented method highly attractive from the computational complexity32 perspective, we show that its achieved empirical performance on complicated high-dimensional33 Machine Learning problems remains comparable to existing state-of-the-art methods.34 The contributions of this paper can be summarized as follows: (i) we propose a novel two parameter35 regularized K-FAC approximated Gauss-Newton update step; (ii) we show that asymptotically—as36 Submitted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Do not distribute. both regularization parameters vanish—our method recovers the classical K-FAC scheme and in37 the opposite setting—as both regularization parameters grow—our method asymptotically reduces38 to classical gradient descent; (iii) we prove that for an arbitrary pair of regularization parameters,39 the proposed update direction is always a direction of decreasing loss; (iv) in the limit, as one40 regularization parameter grows, we obtain an efficient and effective conditioning of the gradient with41 an asymptotically vanishing overhead; (v) we empirically analyze the presented method and find that42 our efficient conditioning method maintains the performance of its more expensive counterpart; (vi)43 we demonstrate the effectiveness of the presented method in the setting of small-batch stochastic44 regimes and observe that it is competitive to first-order as well as quasi-Newton methods.45 2 Preliminaries46 In this section, we review aspects of second-order optimization, with a focus on generalized Gauss-47 Newton methods. In combination with Kronecker factorization, this leads us to a new regularized48 update scheme. We consider the training of an L-layer neural network f(x; θ) defined recursively as49 zi ← ai−1W (i) (pre-activations), ai ← ϕ(zi) (activations), (1) where a0 = x is the vector of inputs and aL = f(x; θ) is the vector of outputs. Unless noted otherwise,50 we assume these vectors to be row vectors (i.e., in R1×n) as this allows for a direct extension to the51 (batch) vectorized case (i.e., in Rb×n) introduced later. For any layer i, let W (i) ∈ Rdi−1×di be a52 weight matrix and let ϕ be an element-wise nonlinear function. We consider a convex loss function53 L(y, y′) that measures the discrepancy between y and y′. The training optimization problem is then54 argmin θ Ex,y [L(f(x; θ), y)] , (2) where θ = [ θ(1), . . . , θ(L) ] with θ(i) = vec(W (i)).55 The classical Newton method for solving (2) is expressed as the update rule56 θ′ = θ − ηH−1θ ∇θL(f(x; θ), y) , (3) where η > 0 denotes the learning rate and Hθ is the Hessian corresponding to the objective function57 in (2). The stability and efficiency of an estimation problem solved via the Newton method can be58 improved by adding a Tikhonov regularization term [13] leading to a regularized Newton method59 θ′ = θ − η (Hθ + λI)−1∇θL(f(x; θ), y) , (4) where λ > 0 is the so-called Tikhonov regularization parameter. It is well-known [14], [15], that60 under the assumption of approximating the model f with its first-order Taylor expansion, the Hessian61 corresponds with the so-called generalized Gauss-Newton (GGN) matrix Gθ, and hence (4) can be62 expressed as63 θ′ = θ − η (Gθ + λI)−1∇θL(f(x; θ), y) . 
(5) A major practical limitation of (5) is the computation of the inverse term. A method that alleviates this64 difficulty is known as Kronecker-Factored Approximate Curvature (K-FAC) [12] which approximates65 the block-diagonal (i.e., layer-wise) empirical Hessian or GGN matrix. Inspired by K-FAC, there66 have been other works discussing approximations of Gθ and its inverse [15]. In the following, we67 discuss a popular approach that allows for (moderately) efficient computation.68 The generalized Gauss-Newton matrix Gθ is defined as69 Gθ = E [ (Jθf(x; θ)) ⊤∇2fL(f(x; θ), y)Jθf(x; θ) ] , (6) where J and H denote the Jacobian and Hessian matrices, respectively. Correspondingly, the diagonal70 block of Gθ corresponding to the weights of the ith layer W (i) is71 GW (i)=E [ (JW (i)f(x; θ)) ⊤∇2fL(f(x; θ), y)JW (i)f(x; θ) ] . According to the backpropagation rule Jθ(i)f(x; θ) = Jzif(x; θ) ai−1, a ⊤b = a ⊗ b, and the72 mixed-product property, we can rewrite GW (i) as73 GW (i)=E [( (Jzif(x; θ) ai−1) ⊤(∇2fL(f(x; θ), y))1/2 )( (∇2fL(f(x; θ), y))1/2 Jzif(x; θ) ai−1 )] (7) = E [ (ḡ⊤ai−1) ⊤(ḡ⊤ai−1) ] = E [ (ḡ ⊗ ai−1)⊤(ḡ ⊗ ai−1) ] = E [ (ḡ⊤ḡ)⊗ (a⊤i−1 ⊗ ai−1) ] , (8) where74 ḡ = (Jzif(x; θ)) ⊤ (∇2fL(f(x; θ), y))1/2 . (9) Remark 1 (Monte-Carlo Low-Rank Approximation for ḡ⊤ḡ). As ḡ is a matrix of shape m × di75 where m is the dimension of the output of f , ḡ is generally expensive to compute. Therefore, [12] use76 a low-rank Monte-Carlo approximation to estimate HfL(f(x; θ), y) and thereby ḡ⊤ḡ. For this, we77 need to use the distribution underlying the probabilistic model of our loss L (e.g., Gaussian for MSE78 loss, or a categorical distribution for cross entropy). Specifically, by sampling from this distribution79 pf (x) defined by the network output f(x; θ), we can get an estimator of HfL(f(x; θ), y) via the80 identity81 HfL(f(x; θ), y) = Eŷ∼pf (x) [ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) ] . (10) An extensive reference for this (as well as alternatives) can be found in Appendix A.2 of Dangel et82 al. [15]. The respective rank-1 approximation (denoted by ≜) of HfL(f(x; θ)) is83 HfL(f(x; θ), y) ≜ ∇fL(f(x; θ), ŷ)⊤∇fL(f(x; θ), ŷ) , where ŷ ∼ pf (x). Respectively, we can estimate ḡ⊤ḡ using this rank-1 approximation with84 ḡ ≜ (Jzif(x; θ)) ⊤∇fL(f(x; θ), ŷ) = ∇ziL(f(x; θ), ŷ) . (11) In analogy to ḡ, we introduce the gradient of training objective with respect to pre-activations zi as85 gi = (Jzif(x; θ)) ⊤∇fL(f(x; θ), y) = ∇ziL(f(x; θ), y) . (12) In other words, for a given layer, let g ∈ R1×di denote the gradient of the loss between an output and86 the ground truth and let ḡ ∈ Rm×di denote the derivative of the network f times the square root of87 the Hessian of the loss function (which may be approximated according to Remark 1), each of them88 with respect to the output zi of the given layer i. Note that ḡ is not equal to g and that they require one89 backpropagation pass each (or potentially many for the case of ḡ). This makes computing ḡ costly.90 Applying the K-FAC [12] approximation to (8) the expectation of Kronecker products can be91 approximated as the Kronecker product of expectations as92 G = E((ḡ⊤ḡ)⊗ (a⊤a)) ≈ E(ḡ⊤ḡ)⊗ E(a⊤a) , (13) where, for clarity, we drop the index of ai−1 in (8) and denote it with a; similarly we denote GW (i)93 as G. While the expectation of Kronecker products is generally not equal to the Kronecker product94 of expectations, this K-FAC approximation (13) has been shown to be fairly accurate in practice95 and to preserve the “coarse structure” of the GGN matrix [12]. 
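To make the two Kronecker factors concrete, here is a minimal PyTorch sketch that estimates them from a single batch and assembles the approximate GGN block of (13). All tensor sizes are illustrative placeholders rather than values from the paper, and the full Kronecker product is materialized only for demonstration.

```python
import torch

b, d_in, d_out = 16, 64, 32        # illustrative sizes: batch smaller than the layer widths
a = torch.randn(b, d_in)           # batch of layer inputs a (one row per sample)
g_bar = torch.randn(b, d_out)      # stand-in for ḡ (e.g., the rank-1 estimate of Remark 1)

# Batch estimates of the two expectations in (13), normalized by the batch size.
F_g = g_bar.T @ g_bar / b          # E[ḡ⊤ḡ]  ∈ R^{d_out × d_out}
F_a = a.T @ a / b                  # E[a⊤a]  ∈ R^{d_in × d_in}

# K-FAC approximation of this layer's GGN block: G ≈ E[ḡ⊤ḡ] ⊗ E[a⊤a].
# The full Kronecker product is formed here only for illustration; in practice the
# two small factors are kept (and inverted) separately.
G_kfac = torch.kron(F_g, F_a)
print(G_kfac.shape)                # torch.Size([2048, 2048]) = (d_out·d_in) × (d_out·d_in)
```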
The K-FAC decomposition in (13)96 is convenient as the Kronecker product has the favorable property that for two matrices A,B the97 identity (A⊗B)−1 = A−1 ⊗B−1 which significantly simplifies the computation of an inverse.98 In practice, E(ḡ⊤ḡ) and E(a⊤a) can be computed by averaging over a batch of size b as99 E(ḡ⊤ḡ) ≃ ḡ⊤ḡ/b, E(a⊤a) ≃ a⊤a/b, (14) where we denote batches of g, ḡ and a, as g ∈ Rb×di , ḡ ∈ Rrb×di and a ∈ Rb×di−1 , where our layer100 has di−1 inputs, di outputs, b is the batch size, and r is either the number of outputs m or the rank of101 an approximation according to Remark 1. Correspondingly, the K-FAC approximation of the GGN102 matrix and its inverse are concisely expressed as103 G ≈ (ḡ⊤ḡ)⊗ (a⊤a)/b2 G−1 ≈ ( ḡ⊤ḡ )−1⊗(a⊤a)−1 · b2 . (15) Equipped with the standard terminology and setting, we now introduce the novel, regularized update104 step. First, inspired by the K-FAC approximation (13), the Tikhonov regularized Gauss-Newton105 method (5) can be approximated by106 θ(i)′ = θ(i) − η(ḡ⊤ḡ/b+ λI)−1 ⊗ (a⊤a/b+ λI)−1∇θ(i)L(f(x; θ)), (16) with regularization parameter λ > 0. A key observation, which is motivated by the structure of107 the above update, is to disentangle the two occurrences of λ into two independent regularization108 parameters λg, λa > 0. By defining the Kronecker-factorized Gauss-Newton update step as109 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1∇θ(i)L(f(x; θ)), (17) we obtain the concise update equation110 θ(i)′ = θ(i) − η∗ζ . (18) This update (18) is equivalent to update (16) when in the case of η∗ = ηλgλa and λ = λg = λa. This111 equivalence does not restrict η∗, λg, λa in any way, and changing λg or λa does not mean that we112 change our learning rate or step size η∗. Parameterizing ζ in (17) with the multiplicative terms λgλa113 makes the formulation more convenient for analysis.114 In this paper, we investigate the theoretical and empirical properties of the iterative update rule (18)115 and in particular show how the regularization parameters λg, λa affect the Kronecker-factorized116 Gauss-Newton update step ζ . When analyzing the Kronecker-factorized Gauss-Newton update step117 ζ , a particularly useful tool is the vector product identity,118 (( ḡ⊤ḡ )−1 ⊗ (a⊤a)−1) vec(g⊤a) = vec((ḡ⊤ḡ)−1 g⊤a (a⊤a)−1) , (19) where the gradient with respect to the weight matrix is g⊤a.119 3 Theoretical Guarantees120 In this section, we investigate the theoretical properties of the Kronecker-factorized Gauss-Newton121 update direction ζ as defined in (17). We recall that ζ introduces a Tikonov regularization, as it is122 commonly done in implementations of second order-based methods. Not surprisingly, we show that123 by decreasing the regularization parameters λg, λa the update rule (18) collapses (in the limit) to the124 classical Gauss-Newton method, and hence in the regime of small λg, λa the variable ζ describes the125 Gauss-Newton direction. Moreover, by increasing the regularization strength, we converge (in the126 limit) to the conventional gradient descent update step.127 The key observation is that, as we disentangle the regularization of the two Kronecker factors ḡ⊤ḡ128 and a⊤a, and consider the setting where only one regularizer is large (λg → ∞ to be precise),129 we obtain an update direction that can be computed highly efficiently. We show that this setting130 describes an approximated Gauss-Newton update scheme, whose superior numerical performance is131 then empirically demonstrated in Section 4.132 Theorem 1 (Properties of ζ ). 
The K-FAC based update step ζ as defined in (17) can be expressed as133 ζ = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (20) Moreover, ζ admits the following asymptotic properties:134 (i) In the limit of λg, λa → 0, 1λgλaζ is the K-FAC approximation of the Gauss-Newton step, i.e.,135 limλg,λa→0 1 λgλa ζ ≈ G−1∇θ(i)L(f(x; θ)), where ≈ denotes the K-FAC approximation (15).136 (ii) In the limit of λg, λa →∞, ζ is the gradient, i.e., limλg,λa→∞ ζ = ∇θ(i)L(f(x; θ)).137 The Proof is deferred to the Supplementary Material.138 We want to show that ζ is well-defined and points in the correct direction, not only for λg and λa139 numerically close to zero because we want to explore the full spectrum of settings for λg and λa.140 Thus, we prove that ζ is a direction of increasing loss, independent of the choices of λg and λa.141 Theorem 2 (Correctness of ζ is independent of λg and λa). ζ is a direction of increasing loss,142 independent of the choices of λg and λa.143 Proof. Recall that (λgIm+ḡ⊤ḡ/b) and (λaIn+a⊤a/b) are positive semi-definite (PSD) matrices by144 definition. Their inverses (λgIm + ḡ⊤ḡ/b)−1 and (λaIn + a⊤a/b)−1 are therefore also PSD. As the145 Kronecker product of PSD matrices is PSD, the conditioning matrix ((λgIm + ḡ⊤ḡ/b)−1 ⊗ (λaIn +146 a⊤a/b)−1 ≈ G−1) is PSD, and therefore the direction of the update step remains correct.147 From our formulation of ζ , we can find that, in the limit for λg →∞, Equation (21) does not depend148 on ḡ . This is computationally very beneficial as computing ḡ is costly as it requires one or even149 many additional backpropagation passes. In addition, it allows conditioning the gradient update by150 multiplying a b× b matrix between g⊤ and a, which can be done very fast.151 Theorem 3 (Efficient Update Direction). In the limit of λg → ∞, the update step ζ converges to152 limλg→∞ ζ = ζ ∗, where153 ζ∗= g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a . (21) (i) Here, the update direction ζ∗ is based only on the inputs and does not require computing ḡ154 (which would require a second backpropagation pass), making it efficient.155 (ii) The computational cost of computing the update ζ∗ lies in O(bn2 + b2n+ b3), where n is the156 number of neurons in each layer. This comprises the conventional cost of computing the gradient157 ∇ = g⊤x lying inO(bn2), and the overhead of computing ζ∗ instead of∇ lying inO(b2n+b3).158 The overhead is vanishing, assuming n≫ b. For b > n the complexity lies in O(bn2 + n3).159 Proof. We first show the property (21). Note that according to (22), λg · ( λgIm + ḡ ⊤ḡ/b )−1 con-160 verges in the limit of λg →∞ to Im, and therefore (21) holds.161 (i) The statement follows from the fact that the term ḡ does not appear in the equivalent characteriza-162 tion (21) of ζ∗.163 (ii) We first note that the matrix aa⊤ is of dimension b × b, and can be computed in O(b2n) time.164 Next, the matrix165 ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) is of shape b× b and can be multiplied with a in O(b2n) time.166 Notably, (21) can be computed with a vanishing computational overhead and with only minor167 modifications to the implementation. Specifically, only the g⊤a expression has to be replaced by (21)168 in the backpropagation step. 
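As a concrete illustration of that replacement, the following PyTorch sketch evaluates (21) for one layer. It is a simplified stand-in (the function and variable names are ours, not the authors'), and it is mathematically equivalent to the conditioning performed in the Appendix A implementation.

```python
import torch

def isaac_update_direction(g, a, lam_a):
    """Sketch of (21): the conditioned replacement for the plain weight gradient g⊤a.

    g: (b, d_out) loss gradients w.r.t. the layer's pre-activations z_i.
    a: (b, d_in) layer inputs. Only a b×b system is solved, so the overhead
    vanishes when the batch size b is much smaller than the layer width.
    """
    b = g.shape[0]
    aaT = a @ a.T / (b * lam_a)                        # (1/(bλa)) aa⊤, a small b×b matrix
    I_b = torch.eye(b, dtype=a.dtype, device=a.device)
    cond = I_b - aaT @ torch.linalg.inv(I_b + aaT)     # I_b − (1/(bλa)) aa⊤ (I_b + (1/(bλa)) aa⊤)⁻¹
    return g.T @ cond @ a                              # same (d_out, d_in) shape as g⊤a

# Illustrative shapes only (batch smaller than layer width):
g, a = torch.randn(16, 32), torch.randn(16, 64)
print(isaac_update_direction(g, a, lam_a=0.1).shape)   # torch.Size([32, 64])
```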
As this can be done independently for each layer, this lends itself also to169 applying it only to individual layers.170 As we see in the experimental section, in many cases in the mini-batch regime (i.e., b < n), the171 optimal (or a good) choice for λg actually lies in the limit to∞. This is a surprising result, leading to172 the efficient and effective ζ∗ = ζλg→∞ optimizer.173 Remark 2 (Relation between Update Direction ζ and ζ∗). When comparing the update direction174 ζ in (20) without regularization (i.e., λg → 0, λa → 0) with ζ∗ (i.e., λg → ∞) as given in (21), it175 can be directly seen that ζ∗ corresponds to a particular pre-conditioning of ζ , since ζ∗ = Mζ for176 M = 1bλg ḡ ⊤ḡ .177 As the last theoretical property of our proposed update direction ζ∗, we show that in specific networks178 ζ∗ coincides with the Gauss-Newton update direction.179 Theorem 4 (ζ∗ is Exact for the Last Layer). For the case of linear regression or, more generally, the180 last layer of networks, with the mean squared error, ζ∗ is the Gauss-Newton update direction.181 Proof. The Hessian matrix of the mean squared error loss is the identity matrix. Correspondingly,182 the expectation value of ḡ⊤ḡ is I. Thus, ζ∗ = ζ .183 Remark 3. The direction ζ∗ corresponds to the Gauss-Newton update direction with an approxima-184 tion of G that can be expressed as G ≈ E [ I⊗ (a⊤a) ] .185 Remark 4 (Extension to the Natural Gradient). In some cases, it might be more desirable to use the186 Fisher-based natural gradient instead of the Gauss-Newton method. The difference to this setting is187 that in (5) the GGN matrix G is replaced by the empirical Fisher information matrix F.188 We note that our theory also applies to F, and that ζ∗ also efficiently approximates the natural189 gradient update step F−1∇. The i-th diagonal block of F (Fθ(i) = E [ (g⊤i gi)⊗ (a⊤i−1 ⊗ ai−1) ] ),190 has the same form as a block of the GGN matrix G (Gθ(i) = E [ (ḡ⊤i ḡi)⊗ (a⊤i−1 ⊗ ai−1) ] ).191 Thus, we can replace ḡ with g in our theoretical results to obtain their counterparts for F.192 4 Experiments193 In the previous section, we discussed the theoretical properties of the proposed update directions194 ζ and ζ∗ with the aspect that ζ∗ would actually be “free” to compute in the mini-batch regime. In195 this section, we provide empirical evidence that ζ∗ is a good update direction, even in deep learning.196 Specifically, we demonstrate that197 (E1) ζ∗ achieves similar performance to K-FAC, while being substantially cheaper to compute.198 (E2) The performance of our proposed method can be empirically maintained in the mini-batch199 regime (n≫ b).200 (E3) ζ∗ may be used for individual layers, while for other layers only the gradient ∇ is used. This201 still leads to improved performance.202 (E4) ζ∗ also improves the performance for training larger models such as BERT and ResNet.203 (E5) The runtime and memory requirements of ζ∗ are comparable to those of gradient descent.204 E1: Impact of Regularization Parameters205 For (E1), we study the dependence of the model’s performance on the regularization parameters λg206 and λa. Here, we train a 5-layer deep neural network on the MNIST classification task [16] with a207 batch size of 60 for a total of 40 epochs or 40 000 steps.208 The plots in Figure 1 demonstrate that the advantage of training by conditioning with curvature209 information can be achieved by considering both layer inputs a and gradients with respect to random210 samples ḡ , but also using only layer inputs a. 
In the plot, we show the performance of ζ for different211 choices of λg and λa, each in the range from 10−6 to 106. The right column shows ζ ∗, i.e., λg =∞,212 for different λa. The bottom-right corner is gradient descent, which corresponds to λg = ∞ and213 λa =∞.214 Newton’s method or the general K-FAC approximation corresponds to the area with small λg and λa.215 The interesting finding here is that the performance does not suffer by increasing λg toward∞, i.e.,216 from left to right in the plot.217 In addition, in Figure 3, we consider the case of regression with an auto-encoder trained with the218 MSE loss on MNIST [16] and Fashion-MNIST [17]. Here, we follow the same principle as above219 and also find that ζ∗ performs well.220 -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 lo g 1 0 a (a) -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g (b) 5.5 5.0 4.5 4.0 3.5 3.0 2.5 Figure 3: Training an auto-encoder on MNIST (left) and FashionMNIST (right). The model is the same as used by Botev et al. [18], i.e., it is a ReLU-activated 6-layer fully connected model with dimensions 784-1000-500- 30-500-1000-784. Displayed is the logarithmic training loss. -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 lo g 1 0 a (a) -6 -5 -4 -3 -2 -1 0 +1+2+3+4+5+6 log10 g (b) Figure 4: Training a 5-layer ReLU network with 400 neurons per layer on the MNIST classification task (as in Figure 1) but with the Adam optimizer [19]. In Figure 7, we compare the loss for dif-221 ferent methods. Here, we distinguish222 between loss per time (left) and loss223 per number of steps (right). We can ob-224 serve that, for λ = 0.1, K-FAC, ζ , and225 ζ∗ are almost identical per update step226 (right), while ζ∗ is by a large margin227 the fastest, followed by ζ , and the con-228 ventional K-FAC implementation is the229 slowest (left). On the other hand, for230 λ = 0.01 we can achieve a faster con-231 vergence than with λ = 0.1, but here232 only the K-FAC and ζ methods are nu-233 merically stable, while ζ∗ is unstable in234 this case. This means in the regime of235 very small λ, ζ∗ is not as robust as K-236 FAC and ζ , however, it achieves good237 performance with small but moderate238 λ like λ = 0.1. For λ < 0.01, also239 K-FAC and ζ become numerically un-240 stable in this setting and, in general, we241 observed that the smallest valid λ for242 K-FAC is 0.01 or 0.001 depending on243 model and task. Under consideration244 of the runtime, ζ∗ performs best as it is245 almost as fast as gradient descent while246 performing equivalent to K-FAC and ζ .247 Specifically, a gradient descent step is248 only about 10% faster than ζ∗.249 E2: Minibatch Regime250 For (E2), in Figure 1, we can see that training251 performs well for n ∈ {100, 400, 1 600} neu-252 rons per layer at a batch size of only 60. Also, in253 all other experiments, we use small batch sizes254 of between 8 and 100.255 E3: ζ∗ in Individual Layers256 In Figure 5, we train the 5-layer fully connected257 model with 400 neurons per layer. Here, we258 consider the setting that we use ζ∗ in some of259 the layers while using the default gradient ∇260 in other layers. Specifically, we consider the261 settings, where all, the first, the final, the first three, the final three, the odd numbered, and the262 even numbered layers are updated by ζ∗. 
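One way such a mixed configuration could be assembled is sketched below, reusing the ISAACLinear module listed in Appendix A for the selected layers and plain nn.Linear elsewhere. The widths, regularizer value, and default layer selection are illustrative assumptions, not the exact training setup of Figure 5.

```python
import torch.nn as nn
# ISAACLinear: the module listed in Appendix A, assumed to be defined/imported in scope;
# its argument `la` plays the role of the regularizer λa.

def make_mixed_mlp(widths=(784, 400, 400, 400, 400, 10), isaac_layers=(0, 1, 2), la=0.1):
    """5-layer MLP in which only the layers in `isaac_layers` are updated by ζ*."""
    layers = []
    for i, (d_in, d_out) in enumerate(zip(widths[:-1], widths[1:])):
        fc = ISAACLinear(d_in, d_out, la=la) if i in isaac_layers else nn.Linear(d_in, d_out)
        layers.append(fc)
        if i < len(widths) - 2:          # ReLU between hidden layers, none after the output
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

model = make_mixed_mlp(isaac_layers=(0, 1, 2))   # e.g., "ζ* for the first three layers"
```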
We observe that all settings with ζ∗ perform better than263 plain gradient descent, except for “ζ∗ for layers 3,4,5” which performs approximately equivalent to264 gradient descent.265 E4: Large-scale Models266 BERT To demonstrate the utility of ζ∗ also in large-scale models, we evaluate it for fine-tuning267 BERT [20] on three natural language tasks. In Table 1, we summarize the results for the BERT268 fine-tuning task. For the “Corpus of Linguistic Acceptability” (CoLA) [21] data set, we fine-tune269 both the BERT-Base and the BERT-Mini models and find that we outperform the gradient descent270 baseline in both cases. For the “Microsoft Research Paraphrase Corpus” (MRPC) [22] data set, we271 fine-tune the BERT-Base model and find that we outperform the baseline both in terms of accuracy272 and F1-score. Finally, on the “Semantic Textual Similarity Benchmark” (STS-B) [23] data set, we273 fine-tune the BERT-Mini model and achieve higher Pearson and Spearman correlations than the274 baseline. While for training with CoLA and MRPC, we were able to use the Adam optimizer [19]275 (which is recommended for this task and model) in conjunction with ζ∗ in place of the gradient,276 for STS-B Adam did not work well. Therefore, for STS-B, we evaluated it using the SGD with277 momentum optimizer. For each method, we performed a grid search over the hyperparameters. We278 note that we use a batch size of 8 in all BERT experiments.279 ResNet In addition, we conduct an experiment280 where we train the last layer of a ResNet with281 ζ∗, while the remainder of the model is up-282 dated using the gradient ∇. Here, we train a283 ResNet-18 [24] on CIFAR-10 [25] using SGD284 with a batch size of 100 in a vanilla setting, i.e.,285 without additional tricks employed in by He et286 al. [24] and others. Specifically, we use (i) a287 constant learning rate for each training (optimal288 from (1, 0.3, 0.1, 0.03, 0.01)) and (ii) vanilla289 SGD and not momentum-based SGD. The rea-290 son behind this is that we want a vanilla experi-291 ment and with aspects such as extensively tuning292 multiple parameters of learning rate scheduler293 would make the evaluation less transparent; how-294 ever, therefore, all accuracies are naturally lower than SOTA. In Figure 6, we plot the test accuracy295 against time. The results show that the proposed method outperforms vanilla SGD when applied296 to the last layer of a ResNet-18. To validate that the learning rate is not the cause for the better297 performance, we also plot the neighboring learning rates and find that even with a too small or too298 large learning rate ζ∗ outperforms gradient descent with the optimal learning rate.299 E5: Runtime and Memory300 Finally, we also evaluate the runtime and memory requirements of each method. The runtime301 evaluation is displayed in Table 2. We report both CPU and GPU runtime using PyTorch [26] and302 (for K-FAC) the backpack library [15]. Note that the CPU runtime is more representative of the303 pure computational cost, as for the first rows of the GPU runtime the overhead of calling the GPU304 is dominant. When comparing runtimes between the gradient and ζ∗ on the GPU, we can observe305 that we have an overhead of around 2.5 s independent of the model size. The overhead for CPU time306 is also very small at less than 1% for the largest model, and only 1.3 s for the smallest model. In307 contrast, the runtime of ζ∗ is around 4 times the runtime of the gradient, and K-FAC has an even308 substantially larger runtime. 
Regarding memory, ζ∗ (contrasting the other approaches) also requires309 only a small additional footprint.310 Remark 5 (Implementation). The implementation of ζ∗ can be done by replacing the backpropagation311 step of a respective layer by (21). As all “ingredients” are already available in popular deep learning312 frameworks, it requires only little modification (contrasting K-FAC and ζ , which require at least one313 additional backpropagation.)314 We will publish the source code of our implementation. In the appendix, we give a PyTorch [26]315 implementation of the proposed method (ζ∗).316 5 Related Work317 Our methods are related to K-FAC by Martens and Grosse [12]. K-FAC uses the approximation318 (13) to approximate the blocks of the Hessian of the empirical risk of neural networks. In most319 implementations of K-FAC, the off-diagonal blocks of the Hessian are also set to zero. One of the320 main claimed benefits of K-FAC is its speed (compared to stochastic gradient descent) for large-batch321 size training. That said, recent empirical work has shown that this advantage of K-FAC disappears322 once the additional computational costs of hyperparameter tuning for large batch training is accounted323 for. There is a line of work that extends the basic idea of K-FAC to convolutional layers [27]. Botev et324 al. [18] further extend these ideas to present KFLR, a Kronecker factored low-rank approximation,325 and KFRA, a Kronecker factored recursive approximation of the Gauss-Newton step. Singh and326 Alistarh [28] propose WoodFisher, a Woodbury matrix inverse-based estimate of the inverse Hessian,327 and apply it to neural network compression. Yao et al. [29] propose AdaHessian, a second-order328 optimizer that incorporates the curvature of the loss function via an adaptive estimation of the Hessian.329 Frantar et al. [6] propose M-FAC, a matrix-free approximation of the natural gradient through a queue330 of the (e.g., 1 000) recent gradients. These works fundamentally differ from our approach in that their331 objective is to approximate the Fisher or Gauss-Newton matrix inverse vector products. In contrast,332 this work proposes to approximate the Gauss-Newton matrix by only one of its Kronecker factors,333 which we find to achieve good performance at a substantial computational speedup and reduction of334 memory footprint. For an overview of this area, we refer to Kunstner et al. [30] and Martens [31].335 For an overview of the technical aspects of backpropagation of second-order quantities, we refer to336 Dangel et al. [15], [32]337 Taking a step back, K-FAC is one of many Newton-type methods for training neural networks.338 Other prominent examples of such methods include subsampled Newton methods [33], [34] (which339 approximate the Hessian by subsampling the terms in the empirical risk function and evaluating the340 Hessian of the subsampled terms) and sketched Newton methods [3]–[5] (which approximate the341 Hessian by sketching, e.g., by projecting the Hessian to a lower-dimensional space by multiplying it342 with a random matrix). The main features that distinguish K-FAC from this group of methods are343 K-FAC’s superior empirical performance and K-FAC’s lack of theoretical justification.344 6 Conclusion345 In this work, we presented ISAAC Newton, a novel approximate curvature method based on layer-346 inputs. We demonstrated it to be a special case of the regularization-generalized Gauss-Newton347 method and empirically demonstrate its utility. 
Specifically, our method features an asymptotically348 vanishing computational overhead in the mini-batch regime, while achieving competitive empirical349 performance on various benchmark problems.350 References351 [1] N. Agarwal, B. Bullins, and E. Hazan, “Second-order stochastic optimization for machine352 learning in linear time,” Journal on Machine Learning Research, vol. 18, no. 1, pp. 4148–4187,353 2017.354 [2] J. Nocedal and S. J. Wright, Numerical Optimization, 2e. New York, NY, USA: Springer, 2006.355 [3] A. Gonen and S. Shalev-Shwartz, “Faster SGD using sketched conditioning,” arXiv preprint,356 arXiv:1506.02649, 2015.357 [4] M. Pilanci and M. J. Wainwright, “Newton sketch: A near linear-time optimization algorithm358 with linear-quadratic convergence,” SIAM Journal on Optimization, vol. 27, 2017.359 [5] M. A. Erdogdu and A. Montanari, “Convergence rates of sub-sampled Newton methods,” in360 Proc. Neural Information Processing Systems (NeurIPS), 2015.361 [6] E. Frantar, E. Kurtic, and D. Alistarh, “M-FAC: Efficient matrix-free approximations of362 second-order information,” in Proc. Neural Information Processing Systems (NeurIPS), 2021.363 [7] N. Doikov and Y. Nesterov, “Convex Optimization based on Global Lower Second-order364 Models,” in Proc. Neural Information Processing Systems (NeurIPS), Curran Associates, Inc.,365 2020.366 [8] Y. Nesterov and B. T. Polyak, “Cubic regularization of Newton method and its global perfor-367 mance,” Mathematical Programming, vol. 108, 2006.368 [9] S. Becker and Y. Lecun, “Improving the convergence of back-propagation learning with369 second-order methods,” 1989.370 [10] T. Schaul, S. Zhang, and Y. LeCun, “No more pesky learning rates,” in International Conference371 on Machine Learning (ICML), 2013.372 [11] Y. Ollivier, “Riemannian metrics for neural networks i: Feedforward networks,” Information373 and Inference, vol. 4, pp. 108–153, Jun. 2015.374 [12] J. Martens and R. Grosse, “Optimizing neural networks with Kronecker-factored approximate375 curvature,” in International Conference on Machine Learning (ICML), 2015.376 [13] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-posed problems. W.H. Winston, 1977.377 [14] P. Chen, “Hessian matrix vs. Gauss—Newton Hessian matrix,” SIAM Journal on Numerical378 Analysis, 2011.379 [15] F. Dangel, F. Kunstner, and P. Hennig, “Backpack: Packing more into backprop,” in Interna-380 tional Conference on Learning Representations, 2020.381 [16] Y. LeCun, C. Cortes, and C. Burges, “MNIST Handwritten Digit Database,” ATT Labs, 2010.382 [17] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking383 machine learning algorithms,” arXiv, 2017.384 [18] A. Botev, H. Ritter, and D. Barber, “Practical Gauss-Newton optimisation for deep learning,”385 in International Conference on Machine Learning (ICML), 2017.386 [19] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Confer-387 ence on Learning Representations (ICLR), 2015.388 [20] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional389 transformers for language understanding,” in North American Chapter of the Association for390 Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018.391 [21] A. Warstadt, A. Singh, and S. R. Bowman, “Neural network acceptability judgments,” Trans-392 actions of the Association for Computational Linguistics, vol. 7, 2019.393 [22] W. B. Dolan and C. 
Brockett, “Automatically constructing a corpus of sentential paraphrases,”394 in Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.395 [23] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “SemEval-2017 task 1: Semantic396 textual similarity multilingual and crosslingual focused evaluation,” in Proceedings of the397 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada:398 Association for Computational Linguistics, 2017.399 [24] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in400 Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), 2016.401 [25] A. Krizhevsky, V. Nair, and G. Hinton, “Cifar-10 (Canadian Institute for Advanced Research),”402 2009.403 [26] A. Paszke, S. Gross, F. Massa, et al., “Pytorch: An imperative style, high-performance deep404 learning library,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.405 [27] R. Grosse and J. Martens, “A Kronecker-factored approximate Fisher matrix for convolution406 layers,” in International Conference on Machine Learning (ICML), 2016.407 [28] S. P. Singh and D. Alistarh, “Woodfisher: Efficient second-order approximation for neural408 network compression,” in Proc. Neural Information Processing Systems (NeurIPS), 2020.409 [29] Z. Yao, A. Gholami, S. Shen, M. Mustafa, K. Keutzer, and M. W. Mahoney, “Adahessian:410 An adaptive second order optimizer for machine learning,” in AAAI Conference on Artificial411 Intelligence, 2021.412 [30] F. Kunstner, L. Balles, and P. Hennig, “Limitations of the empirical Fisher approximation for413 natural gradient descent,” in Proc. Neural Information Processing Systems (NeurIPS), 2019.414 [31] J. Martens, “New insights and perspectives on the natural gradient method,” Journal of Machine415 Learning Research, 2020.416 [32] F. Dangel, S. Harmeling, and P. Hennig, “Modular block-diagonal curvature approximations417 for feedforward architectures,” in International Conference on Artificial Intelligence and418 Statistics (AISTATS), 2020.419 [33] F. Roosta-Khorasani and M. W. Mahoney, “Sub-Sampled Newton Methods I: Globally Con-420 vergent Algorithms,” arXiv: 1601.04737, 2016.421 [34] P. Xu, J. Yang, F. Roosta, C. Ré, and M. W. Mahoney, “Sub-sampled Newton Methods with422 Non-uniform Sampling,” in Proc. Neural Information Processing Systems (NeurIPS), 2016.423 Checklist424 1. For all authors...425 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s426 contributions and scope? [Yes]427 (b) Did you describe the limitations of your work? [Yes]428 (c) Did you discuss any potential negative societal impacts of your work? [N/A]429 (d) Have you read the ethics review guidelines and ensured that your paper conforms to them?430 [Yes]431 2. If you are including theoretical results...432 (a) Did you state the full set of assumptions of all theoretical results? [Yes]433 (b) Did you include complete proofs of all theoretical results? [Yes]434 3. If you ran experiments...435 (a) Did you include the code, data, and instructions needed to reproduce the main experimental436 results (either in the supplemental material or as a URL)? [Yes] / [No] We include a437 Python / PyTorch implementation of the method in the supplementary material. We will438 publicly release full source code for the experiments.439 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were440 chosen)? 
[Yes]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

A PyTorch Implementation
We display a PyTorch [26] implementation of ISAAC for a fully-connected layer below. Here, we mark the important part (i.e., the part beyond the boilerplate) with a comment in the backward pass.

```python
import torch


class ISAACLinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias, la, inv_type):
        ctx.save_for_backward(input, weight, bias)
        ctx.la = la
        if inv_type == 'cholesky_inverse':
            ctx.inverse = torch.cholesky_inverse
        elif inv_type == 'inverse':
            ctx.inverse = torch.inverse
        else:
            raise NotImplementedError(inv_type)
        return input @ weight.T + (bias if bias is not None else 0)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        if ctx.needs_input_grad[0]:
            grad_0 = grad_output @ weight
        else:
            grad_0 = None
        if ctx.needs_input_grad[1]:
            # Important part (beyond the boilerplate): condition the weight gradient
            # as in Eq. (21), using only the layer inputs (no extra backward pass).
            aaT = input @ input.T / grad_output.shape[0]
            I_b = torch.eye(aaT.shape[0], device=aaT.device, dtype=aaT.dtype)
            aaT_IaaT_inv = aaT @ ctx.inverse(aaT / ctx.la + I_b)
            grad_1 = grad_output.T @ (I_b - 1. / ctx.la * aaT_IaaT_inv) @ input
        else:
            grad_1 = None
        # One gradient per forward input: input, weight, bias, la, inv_type.
        return (
            grad_0,
            grad_1,
            grad_output.mean(0, keepdim=True) if bias is not None else None,
            None,
            None,
        )


class ISAACLinear(torch.nn.Linear):
    def __init__(self, in_features, out_features, la, inv_type='inverse', **kwargs):
        super(ISAACLinear, self).__init__(
            in_features=in_features, out_features=out_features, **kwargs
        )
        self.la = la
        self.inv_type = inv_type

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return ISAACLinearFunction.apply(
            input,
            self.weight,
            self.bias.unsqueeze(0) if self.bias is not None else None,
            self.la,
            self.inv_type,
        )
```

B Implementation Details
Unless noted otherwise, for all experiments, we tune the learning rate on a grid of (1, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001). We verified this range to cover the full reasonable range of learning rates.
Specifically, for every single experiment, we made sure that there is no learning rate467 outside this range which performs better.468 For all language model experiments, we used the respective Huggingface PyTorch implementation.469 All other hyperparameter details are given in the main paper.470 The code will be made publicly available.471 C Additional Proofs472 Proof of Theorem 1. We first show, that ζ as defined in (17) can be expressed as in (20). Indeed by473 using (19), the Woodbury matrix identity and by regularizing the inverses, we can see that474 ζ = λgλa(ḡ ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1g⊤a = λgλa · ( λgIm + ḡ ⊤ḡ/b )−1 g⊤a ( λaIn + a ⊤a/b )−1 = λgλa · ( 1 λg Im − 1 bλg 2 ḡ ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) g⊤a ( 1 λa In − 1 bλa 2 a ⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · a · ( In − 1 bλa a⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( a− 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1 a ) = ( Im − 1 bλg ḡ⊤ ( Ib + 1 bλg ḡḡ⊤ )−1 ḡ ) · g⊤ · ( Ib − 1 bλa aa⊤ ( Ib + 1 bλa aa⊤ )−1) · a To show Assertion (i), we note that according to (17)475 lim λg,λa→0 1 λgλa ζ = lim λg,λa→0 (ḡ⊤ḡ/b+ λgI) −1 ⊗ (a⊤a/b+ λaI)−1g⊤a = (ḡ⊤ḡ)−1 ⊗ (a⊤a)−1g⊤a ≈ G−1g⊤a, where the first equality uses the definition of ζ in (17). The second equality is due to the continuity of476 the matrix inversion and the last approximate equality follows from the K-FAC approximation (15).477 To show Assertion (ii), we consider limλg→∞ and limλa→∞ independently, that is478 lim λg→∞ λg · ( λgIm + ḡ ⊤ḡ/b )−1 (22) = lim λg→∞ ( Im + 1 bλg ḡ⊤ḡ )−1 = Im, and479 lim λa→∞ λa · ( λaIn + a ⊤a/b )−1 (23) = lim λa→∞ ( In + 1 bλa a⊤a )−1 = In. This then implies480 lim λg,λa→∞ λg ( λgIm + ḡ ⊤ḡ/b )−1 · g⊤ (24) · a · λa ( λaIn + a ⊤a/b )−1 = Im · g⊤a · In = g⊤a, which concludes the proof.481 D Additional Experiments482
1. What is the focus and contribution of the paper on efficiently approximating the Gauss-Newton update?
2. What are the strengths of the proposed approach, particularly in terms of computational efficiency?
3. What are the weaknesses of the paper regarding its claims and theoretical analysis?
4. Do you have any concerns or questions about the two-parameter damping mechanism or the special case mentioned in Remark 3?
5. What is the intuition behind Equation 21, and how does it compare to Equation 13?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In unconstrained optimisation, Newton's method uses the inverse Hessian to determine the update step. When applied to deep learning, the sheer size of the Hessian matrix makes Newton's method intractable. Among existing attempts to address this challenge, K-FAC is a promising approach. In practice, Tikhonov regularization/damping is used to avoid divergence. In this paper, the authors introduce an extension to the standard Tikhonov damping for K-FAC. The main idea is to use two independent damping parameters (hyper-parameters for regularisation) instead of a single one: one for the gradient factor and one for the activity factor, as shown in Equation (17). The authors explore the special cases where the two parameters tend to 0 or to infinity. They also discuss the special case of sending only one parameter to infinity, which effectively removes the gradient factor from K-FAC. One major claim is that this activity-only form reduces computational complexity, although I do not find a theoretical comparison with K-FAC in this respect; the claim of computational efficiency is mostly supported by empirical evaluation.

Strengths And Weaknesses
Strengths:
- The paper addresses an important problem: efficient approximation of the Gauss-Newton update.
- The exposition is mostly clear, especially if you are already familiar with the existing literature.
- The experiments seem comprehensive.
Weaknesses:
- It is not clear what the main contribution of this paper is. Is it the two-parameter damping mechanism (what advantage do we gain)? Or is it the special case mentioned in Remark 3 (why is this still a good approximation without the g⊤g factor)?
- Many of the theoretical results presented look rather trivial.

Questions
What is the intuition behind Eq. (21)? Compared with Eq. (13), why is it a good approximation?

Limitations
N/A
NIPS
Title Fully Parameterized Quantile Function for Distributional Reinforcement Learning Abstract Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. In this paper, we propose fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents. 1 Introduction Distributional reinforcement learning [Jaquette et al., 1973, Sobel, 1982, White, 1988, Morimura et al., 2010, Bellemare et al., 2017] differs from value-based reinforcement learning in that, instead of focusing only on the expectation of the return, distributional reinforcement learning also takes the intrinsic randomness of returns within the framework into consideration [Bellemare et al., 2017, Dabney et al., 2018b,a, Rowland et al., 2018]. The randomness comes from both the environment itself and agent’s policy. Distributional RL algorithms characterize the total return as random variable and estimate the distribution of such random variable, while traditional Q-learning algorithms estimate only the mean (i.e., traditional value function) of such random variable. The main challenge of distributional RL algorithm is how to parameterize and approximate the distribution. In Categorical DQN [Bellemare et al., 2017](C51), the possible returns are limited to a discrete set of fixed values, and the probability of each value is learned through interacting with environments. C51 out-performs all previous variants of DQN on a set of 57 Atari 2600 games in the Arcade Learning Environment (ALE) [Bellemare et al., 2013]. Another approach for distributional reinforcement learning is to estimate the quantile values instead. Dabney et al. [2018b] proposed QR∗Contributed during internship at Microsoft Research. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. DQN to compute the return quantiles on fixed, uniform quantile fractions using quantile regression and minimize the quantile Huber loss [Huber, 1964] between the Bellman updated distribution and current return distribution. Unlike C51, QR-DQN has no restrictions or bound for value and achieves significant improvements over C51. However, both C51 and QR-DQN approximate the distribution function or quantile function on fixed locations, either value or probability. Dabney et al. 
[2018a] propose learning the quantile values for sampled quantile fractions rather than fixed ones with an implicit quantile value network (IQN) that maps from quantile fractions to quantile values. With sufficient network capacity and infinite number of quantiles, IQN is able to approximate the full quantile function. However, it is impossible to have infinite quantiles in practice. With limited number of quantile fractions, efficiency and effectiveness of the samples must be reconsidered. The sampling method in IQN mainly helps training the implicit quantile value network rather than approximating the full quantile function, and thus there is no guarantee in that sampled probabilities would provide better quantile function approximation than fixed probabilities. In this work, we extend the method in Dabney et al. [2018b] and Dabney et al. [2018a] and propose to fully parameterize the quantile function. By fully parameterization, we mean that unlike QR-DQN and IQN where quantile fractions are fixed or sampled and only the corresponding quantile values are parameterized, both quantile fractions and corresponding quantile values in our algorithm are parameterized. In addition to a quantile value network similar to IQN that maps quantile fractions to corresponding quantile values, we propose a fraction proposal network that generates quantile fractions for each state-action pair. The fraction proposal network is trained so that as the true distribution is approximated, the 1-Wasserstein distance between the approximated distribution and the true distribution is minimized. Given the proposed fractions generated by the fraction proposal network, we can learn the quantile value network by quantile regression. With self-adjusting fractions, we can approximate the true distribution better than with fixed or sampled fractions. We begin with related works and backgrounds of distributional RL in Section 2. We describe our algorithm in Section 3 and provide experiment results of our algorithm on the ALE environment [Bellemare et al., 2013] in Section 4. At last, we discuss the future extension of our work, and conclude our work in Section 5. 2 Background and Related Work We consider the standard reinforcement learning setting where agent-environment interactions are modeled as a Markov Decision Process (X ,A, R, P, γ) [Puterman, 1994], where X and A denote state space and action space, P denotes the transition probability given state and action, R denotes state and action dependent reward function and γ ∈ (0, 1) denotes the reward discount factor. For a policy π, define the discounted return sum a random variable by Zπ(x, a) = ∑∞ t=0 γ tR(xt, at), where x0 = x, a0 = a, xt ∼ P (·|xt−1, at−1) and at ∼ π(·|xt). The objective in reinforcement learning can be summarized as finding the optimal π∗ that maximizes the expectation of Zπ, the action-value function Qπ(x, a) = E[Zπ(x, a)]. The most common approach is to find the unique fixed point of the Bellman optimality operator T [Bellman, 1957]: Q∗(x, a) = T Q∗(x, a) := E[R(x, a)] + γEP max a′ Q∗ (x′, a′) . To update Q, which is approximated by a neural network in most deep reinforcement learning studies, Q-learning [Watkins, 1989] iteratively trains the network by minimizing the squared temporal difference (TD) error defined by δ2t = [ rt + γ max a′∈A Q (xt+1, a ′)−Q (xt, at) ]2 along the trajectory observed while the agent interacts with the environment following -greedy policy. 
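As a small illustrative sketch of this objective (the Q-network, transition values, and indexing convention below are placeholders, not part of the original papers):

```python
import torch

def squared_td_error(q_net, x_t, a_t, r_t, x_next, gamma=0.99):
    """δ_t² = (r_t + γ · max_a' Q(x_{t+1}, a') − Q(x_t, a_t))² for a single transition.

    q_net is assumed to map a state to a 1-D tensor of action values.
    """
    with torch.no_grad():                       # the bootstrap target is not differentiated
        target = r_t + gamma * q_net(x_next).max()
    return (target - q_net(x_t)[a_t]) ** 2
```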
DQN [Mnih et al., 2015] uses a convolutional neural network to represent Q and achieves human-level play on the Atari-57 benchmark. 2.1 Distributional RL Instead of a scalar Qπ(x, a), distributional RL looks into the intrinsic randomness of Zπ by studying its distribution. The distributional Bellman operator for policy evaluation is Zπ(x, a) D = R(x, a) + γZπ (X ′, A′) , where X ′ ∼ P (·|x, a) and A′ ∼ π(·|X ′), A D= B denotes that random variable A and B follow the same distribution. Both theory and algorithms have been established for distributional RL. In theory, the distributional Bellman operator for policy evaluation is proved to be a contraction in the p-Wasserstein distance [Bellemare et al., 2017]. Bellemare et al. [2017] shows that C51 outperforms value-based RL, in addition Hessel et al. [2018] combined C51 with enhancements such as prioritized experience replay [Schaul et al., 2016], n-step updates [Sutton, 1988], and the dueling architecture [Wang et al., 2016], leading to the Rainbow agent, current state-of-the-art in Atari-57 for non-distributed agents, while the distributed algorithm proposed by Kapturowski et al. [2018] achieves state-of-the-art performance for all agents. From an algorithmic perspective, it is impossible to represent the full space of probability distributions with a finite collection of parameters. Therefore the parameterization of quantile functions is usually the most crucial part in a general distributional RL algorithm. In C51, the true distribution is projected to a categorical distribution [Bellemare et al., 2017] with fixed values for parameterization. QR-DQN fixes probabilities instead of values, and parameterizes the quantile values [Dabney et al., 2018a] while IQN randomly samples the probabilities [Dabney et al., 2018a]. We will introduce QR-DQN and IQN in Section 2.2, and extend from their work to ours. 2.2 Quantile Regression for Distributional RL In contrast to C51 which estimates probabilities for N fixed locations in return, QR-DQN [Dabney et al., 2018b] estimates the respected quantile values for N fixed, uniform probabilities. In QR-DQN, the distribution of the random return is approximated by a uniform mixture of N Diracs, Zθ(x, a) := 1 N N∑ i=1 δθi(x,a), with each θi assigned a quantile value trained with quantile regression. Based on QR-DQN, Dabney et al. [2018a] propose using probabilities sampled from a base distribution, e.g. τ ∈ U([0, 1]), rather than fixed probabilities. They further learn the quantile function that maps from embeddings of sampled probabilities to the corresponding quantiles, called implicit quantile value network (IQN). At the time of this writing, IQN achieves the state-or-the-art performance on Atari-57 benchmark, human-normalized mean and median of all agents that does not combine distributed RL, prioritized replay [Schaul et al., 2016] and n-step update. Dabney et al. [2018a] claimed that with enough network capacity, IQN is able to approximate to the full quantile function with infinite number of quantile fractions. However, in practice one needs to use a finite number of quantile fractions to estimate action values for decision making, e.g. 32 randomly sampled quantile fractions as in Dabney et al. [2018a]. With limited fractions, a natural question arises that, how to best utilize those fractions to find the closest approximation of the true distribution? 3 Our Algorithm We propose Fully parameterized Quantile Function (FQF) for Distributional RL. 
Our algorithm consists of two networks, the fraction proposal network that generates a set of quantile fractions for each state-action pair, and the quantile value network that maps probabilities to quantile values. We first describe the fully parameterized quantile function in Section 3.1, with variables on both probability axis and value axis. Then, we show how to train the fraction proposal network in Section 3.2, and how to train the quantile value network with quantile regression in Section 3.3. Finally, we present our algorithm and describe the implementation details in Section 3.4. 3.1 Fully Parameterized Quantile Function In FQF, we estimate N adjustable quantile values for N adjustable quantile fractions to approximate the quantile function. The distribution of the return is approximated by a weighted mixture of N Diracs given by Zθ,τ (x, a) := N−1∑ i=0 (τi+1 − τi)δθi(x,a), (1) where δz denotes a Dirac at z ∈ R, τ1, ...τN−1 represent the N-1 adjustable fractions satisfying τi−1 < τi, with τ0 = 0 and τN = 1 to simplify notation. Denote quantile function [Müller, 1997] F−1Z the inverse function of cumulative distribution function FZ(z) = Pr(Z < z). By definition we have F−1Z (p) := inf {z ∈ R : p ≤ FZ(z)} where p is what we refer to as quantile fraction. Based on the distribution in Eq.(1), denote Πθ,τ the projection operator that projects quantile function onto a staircase function supported by θ and τ , the projected quantile function is given by F−1,θ,τZ (ω) = Π θ,τF−1Z (ω) = θ0 + N−1∑ i=0 (θi+1 − θi)Hτi+1(ω), where H is the Heaviside step function and Hτ (ω) is the short for H(ω − τ). Figure 1 gives an example of such projection. For each state-action pair (x, a), we first generate the set of fractions τ using the fraction proposal network, and then obtain the quantiles values θ corresponding to τ using the quantile value network. To measure the distortion between approximated quantile function and the true quantile function, we use the 1-Wasserstein metric given by W1(Z, θ, τ) = N−1∑ i=0 ∫ τi+1 τi ∣∣F−1Z (ω)− θi∣∣ dω. (2) Unlike KL divergence used in C51 which considers only the probabilities of the outcomes, the p-Wasseretein metric takes both the probability and the distance between outcomes into consideration. Figure 1 illustrates the concept of how different approximations could affect W1 error, and shows an example of ΠW1 . However, note that in practice Eq.(2) can not be obtained without bias. 3.2 Training fraction proposal Network To achieve minimal 1-Wasserstein error, we start from fixing τ and finding the optimal corresponding quantile values θ. In QR-DQN, Dabney et al. [2018a] gives an explicit form of θ to achieve the goal. We extend it to our setting: Lemma 1. [Dabney et al., 2018a] For any τ1, ...τN−1 ∈ [0, 1] satisfying τi−1 < τi for i, with τ1 = 0 and τN = 1, and cumulative distribution function F with inverse F−1, the set of θ minimizing Eq.(2) is given by θi = F −1 Z ( τi + τi+1 2 ) (3) We can now substitute θi in Eq.(2) with equation Eq.(3) and find the optimal condition for τ to minimize W1(Z, τ). For simplicity, we denote τ̂i = τi+τi+1 2 . Proposition 1. For any continuous quantile function F−1Z that is non-decreasing, define the 1- Wasserstein loss of F−1Z and F −1,τ Z by W1(Z, τ) = N−1∑ i=0 ∫ τi+1 τi ∣∣F−1Z (ω)− F−1Z (τ̂i)∣∣ dω. (4) ∂W1 ∂τi is given by ∂W1 ∂τi = 2F−1Z (τi)− F −1 Z (τ̂i)− F −1 Z (τ̂i−1), (5) ∀i ∈ (0, N). Further more, ∀τi−1, τi+1 ∈ [0, 1], τi−1 < τi+1, ∃τi ∈ (τi−1, τi+1) s.t. ∂W1∂τi = 0. Proof of proposition 1 is given in the appendix. 
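To illustrate Proposition 1, the gradient formula (5) can be evaluated directly on a toy case where the true quantile function is known, here a standard normal return distribution; the fraction initialization below is an arbitrary illustrative choice.

```python
import torch
from torch.distributions import Normal

def w1_fraction_grads(tau, inv_cdf):
    """Eq. (5): ∂W1/∂τ_i = 2·F⁻¹(τ_i) − F⁻¹(τ̂_i) − F⁻¹(τ̂_{i−1}) for i = 1, …, N−1.

    tau: increasing fractions with tau[0] = 0 and tau[-1] = 1; inv_cdf: quantile function F⁻¹.
    """
    tau_hat = (tau[:-1] + tau[1:]) / 2                          # midpoints τ̂_0, …, τ̂_{N−1}
    return 2 * inv_cdf(tau[1:-1]) - inv_cdf(tau_hat[1:]) - inv_cdf(tau_hat[:-1])

tau = torch.linspace(0, 1, 9)                                   # N = 8 uniform fractions to start from
grads = w1_fraction_grads(tau, Normal(0.0, 1.0).icdf)
print(grads)       # gradient descent on the interior τ_i with these values decreases W1
```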
While computing W1 without bias is usually impractical, equation 5 provides us with a way to minimize W1 without computing it. Let w1 be the parameters of the fraction proposal network P , for an arbitrary quantile function F−1Z , we can minimize W1 by iteratively applying gradients descent to w1 according to Eq.(5) and convergence is guaranteed. As the true quantile function F−1Z is unknown to us in practice, we use the quantile value network F−1Z,w2 with parameters w2 for current state and action as true quantile function. The expected return, also known as action-value based on FQF is then given by Q(x, a) = N−1∑ i=0 (τi+1 − τi)F−1Z,w2(τ̂i), where τ0 = 0 and τN = 1. 3.3 Training quantile value network With the properly chosen probabilities, we combine quantile regression and distributional Bellman update on the optimized probabilities to train the quantile function. Consider Z a random variable denoting the action-value at (xt, at) and Z ′ the action-value random variable at (xt+1, at+1), the weighted temporal difference (TD) error for two probabilities τ̂i and τ̂j is defined by δtij = rt + γF −1 Z′,w1 (τ̂i)− F−1Z,w1(τ̂j) (6) Quantile regression is used in QR-DQN and IQN to stochastically adjust the quantile estimates so as to minimize the Wasserstein distance to a target distribution. We follow QR-DQN and IQN where quantile value networks are trained by minimizing the Huber quantile regression loss [Huber, 1964], with threshold κ, ρκτ (δij) = |τ − I {δij < 0}| Lκ (δij) κ , with Lκ (δij) = { 1 2δ 2 ij , if |δij | ≤ κ κ ( |δij | − 12κ ) , otherwise The loss of the quantile value network is then given by L(xt, at, rt, xt+1) = 1 N N−1∑ i=0 N−1∑ j=0 ρκτ̂j (δ t ij) (7) Note that F−1Z and its Bellman target share the same proposed quantile fractions τ̂ to reduce computation. We perform joint gradient update for w1 and w2, as illustrated in Algorithm 1. Algorithm 1: FQF update Parameter :N,κ Input: x, a, r, x′, γ ∈ [0, 1) // Compute proposed fractions for x, a τ ← Pw1(x); // Compute proposed fractions for x′, a′ for a′ ∈ A do τ ′ ← Pw1(x′); end // Compute greedy action Q(s′, a′)← ∑N−1 i=0 (τ ′ i+1 − τ ′i)F −1 Z′,w2 (τ̂i)a; a∗ ← argmax a′ Q(s′, a′); // Compute L for 0 ≤ i ≤ N − 1 do for 0 ≤ j ≤ N − 1 do δij ← r + γF−1Z′,w2(τ̂i)− F −1 Z,w2 (τ̂j) end end L = 1N ∑N−1 i=0 ∑N−1 j=0 ρ κ τ̂j (δij); // Compute ∂W1 ∂τi for i ∈ [1, N − 1] ∂W1 ∂τi = 2F−1Z,w2(τi)− F −1 Z,w2 (τ̂i)− F−1Z,w2(τ̂i−1); Update w1 with ∂W1∂τi ; Update w2 with∇L; Output: Q 3.4 Implementation Details Our fraction proposal network is represented by one fully-connected MLP layer. It takes the state embedding of original IQN as input and generates fraction proposal. Recall that in Proposition 1, we require τi−1 < τi and τ0 = 0, τN = 1. While it is feasible to have τ0 = 0, τN = 1 fixed and sort the output of τw1 , the sort operation would make the network hard to train. A more reasonable and practical way would be to let the neural network automatically have the output sorted using cumulated softmax. Let q ∈ RN denote the output of a softmax layer, we have qi ∈ (0, 1), i ∈ [0, N − 1] and∑N−1 i=0 qi = 1. Let τi = ∑i−1 j=0 qj , i ∈ [0, N ], then straightforwardly we have τi < τj for ∀i < j and τ0 = 0, τN = 1 in our fraction proposal network. Note that as W1 is not computed, we can’t directly perform gradient descent for the fraction proposal network. Instead, we use the grad_ys argument in the tensorflow operator tf.gradients to assign ∂W1∂τi to the optimizer. 
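The sketch below shows one possible form of this fraction proposal head, written in PyTorch for brevity although the authors use TensorFlow; the module name and sizes are placeholders, and the closing comment indicates the PyTorch analogue of the grad_ys trick.

```python
import torch
import torch.nn as nn

class FractionProposal(nn.Module):
    """Single fully-connected layer mapping a state embedding to monotone fractions τ_0, …, τ_N."""
    def __init__(self, embed_dim, n_fractions):
        super().__init__()
        self.fc = nn.Linear(embed_dim, n_fractions)

    def forward(self, state_embedding):
        q = torch.softmax(self.fc(state_embedding), dim=-1)   # q_i > 0 and sums to 1
        tau = torch.cumsum(q, dim=-1)                         # τ_1 < … < τ_N = 1 by construction
        tau_0 = torch.zeros_like(tau[..., :1])
        return torch.cat([tau_0, tau], dim=-1)                # prepend τ_0 = 0

# Because W1 itself is never evaluated, the analytic ∂W1/∂τ_i from (5) is assigned as the
# incoming gradient, e.g. tau.backward(gradient=dw1_dtau) in PyTorch — the analogue of
# passing grad_ys to tf.gradients as described above.
```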
In addition, one can use entropy of q as a regularization term H(q) = − ∑N−1 i=0 qi log qi to prevent the distribution from degenerating into a deterministic one. We borrow the idea of implicit representations from IQN to our quantile value network. To be specific, we compute the embedding of τ , denoted by φ(τ), with φj(τ) := ReLU ( n−1∑ i=0 cos(iπτ)wij + bj ) , where wij and bj are network parameters. We then compute the element-wise (Hadamard) product of state feature ψ(x) and embedding φ(τ). Let denote element-wise product, the quantile values are given by F−1Z (τ) ≈ F −1 Z,w2 (ψ(x) φ(τ)). In IQN, after the set of τ is sampled from a uniform distribution, instead of using differences between τ as probabilities of the quantiles, the mean of the quantile values is used to compute action-value Q. While in expectation, Q = ∑N−1 i=0 (τi+1 − τi)F −1 Z ( τi+τi+1 2 ) with τ0 = 0, τN = 1 and Q = 1N ∑N i=1 F −1 Z (τi) are equal, we use the former one to consist with our projection operation. 4 Experiments We test our algorithm on the Atari games from Arcade Learning Environment (ALE) Bellemare et al. [2013]. We select the most relative algorithm to ours, IQN [Dabney et al., 2018a], as baseline, and compare FQF with QR-DQN [Dabney et al., 2018b], C51 [Bellemare et al., 2017], prioritized experience replay [Schaul et al., 2016] and Rainbow [Hessel et al., 2018], the current state-of-art that combines the advantages of several RL algorithms including distributional RL. The baseline algorithm is implemented by Castro et al. [2018] in the Dopamine framework, with slightly lower performance than reported in IQN. We implement FQF based on the Dopamine framework. Unfortunately, we fail to test our algorithm on Surround and Defender as Surround is not supported by the Dopamine framework and scores of Defender is unreliable in Dopamine. Following the common practice [Van Hasselt et al., 2016], we use the 30-noop evaluation settings to align with previous works. Results of FQF and IQN using sticky action for evaluation proposed by Machado et al. [2018] are also provided in the appendix. In all, the algorithms are tested on 55 Atari games. Our hyper-parameter setting is aligned with IQN for fair comparison. The number of τ for FQF is 32. The weights of the fraction proposal network are initialized so that initial probabilities are uniform as in QR-DQN, also the learning rates are relatively small compared with the quantile value network to keep the probabilities relatively stable while training. We run all agents with 200 million frames. At the training stage, we use -greedy with = 0.01. For each evaluation stage, we test the agent for 0.125 million frames with = 0.001. For each algorithm we run 3 random seeds. All experiments are performed on NVIDIA Tesla V100 16GB graphics cards. Table 1 compares the mean and median human normalized scores across 55 Atari games with up to 30 random no-op starts, and the full score table is provided in the Appendix. It shows that FQF outperforms all existing distributional RL algorithms, including Rainbow [Hessel et al., 2018] that combines C51 with prioritized replay, and n-step updates. We also set a new record on the number of games where non-distributed RL agent performs better than human. Figure 2 shows the training curves of several Atari games. Even on games where FQF and IQN have similar performance such as Centipede , FQF is generally much faster thanks to self-adjusting fractions. 
However, one side effect of the full parameterization in FQF is that training is slower. With the same settings, FQF is roughly 20% slower than IQN due to the additional fraction proposal network. As the number of \tau increases, FQF slows down significantly, while IQN's training speed is not sensitive to the number of \tau samples.

5 Discussion and Conclusions

Building on previous work in distributional RL, we propose a more general, complete approximation of the return distribution. Compared with previous distributional RL algorithms, FQF focuses not only on learning the targets, e.g. probabilities for C51 and quantile values for QR-DQN and IQN, but also on which targets to learn, i.e. the quantile fractions. This allows FQF to learn a better approximation of the true distribution under the restrictions of network capacity. Experimental results show that FQF does achieve a significant improvement.

There are some open questions that we are not yet able to address in this paper, and we discuss them here. First, does the 1-Wasserstein error converge to its minimal value when the quantile function is not fixed? We cannot guarantee convergence of the fraction proposal network in deep neural networks where quantile regression and the Bellman update are involved. Second, though we believe so empirically, does the contraction mapping result for fixed probabilities given by Dabney et al. [2018b] also apply to self-adjusting probabilities? Third, while FQF does provide a potentially better distribution approximation with the same number of fractions, how does a better-approximated distribution affect the agent's policy and the training process? More generally, how important is quantile fraction selection during training?

As for future work, we believe that studying the trained quantile fractions will provide intriguing results, such as how sensitive the quantile fractions are to state and action, and how the quantile fractions evolve over a single run. Also, the combination of distributional RL and DDPG in D4PG [Barth-Maron et al., 2018] showed that distributional RL can be extended to continuous control settings; extending our algorithm to continuous settings is another interesting topic. Furthermore, our algorithm adopts the concept of selecting the best target to learn. Can this intuition be applied to areas other than RL? Finally, we also noticed that most of the games where we fail to reach human-level performance involve complex rules that require exploration-based policies, such as Montezuma's Revenge and Venture. Integrating distributional RL with exploration-based methods, as in [Tang and Agrawal, 2018], is another potential direction. In general, we believe that our algorithm can be viewed as a natural extension of existing distributional RL algorithms, and that distributional RL may integrate well with other algorithms to reach higher performance.
1. What is the focus of the review on the paper regarding semantic correspondence? 2. What are the strengths of the proposed approach, particularly in terms of neural representation? 3. What are the weaknesses of the paper, especially for the experiment section? 4. Do you have any concerns about the semantic correspondence representation? 5. What are the limitations regarding the NeMF approach? 6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 7. What is the contribution of the paper and the significance of the proposed modules? 8. What are the strengths of the proposed differentiable data generation pipeline? 9. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 10. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 11. What are the key contributions and novel aspects introduced by the paper in spin glass techniques? 12. What are the weaknesses of the paper compared to prior works? 13. What is the main contribution of the paper on dictionary learning? 14. What are the strengths of the paper, especially in the theoretical analysis? 15. Do you have any questions regarding the paper?
Review
Review POST-REBUTTAL I thank the authors for their detailed response. My main concern was the level of experimental detail provided in the submission, and I'm pleased that the authors have committed to including more of the details implicitly contained within the code in the paper itself. My overall recommendation remains the same; I think the paper should be published, and the strong Atari results will be of interest fairly widely. However, there were a few parts of the response I wasn't convinced by: (1) "(D) Inefficient Hyperparameter": I don't agree with the authors' claim that e.g. QR-DQN requires more hyperparameters than FQF (it seems to me that both algorithmically require the number of quantiles, and the standard hyperparameters associated with network architecture and training beyond that). (2) The IQN vs. FQF discussion. In particular: (2.i) "It is not difficult to extend FQF to support arbitrary number of optimal quantiles as in IQN: we can modify the quantile network into a recurrent network so that it generates a single τ each time and takes state, action and previously outputted τ s as input to generate next τ". This seems non-trivial as a modification to the architecture, and it's not clear to me that this proposed solution would work in practice. (2.ii) "Thus, intuitively the quantiles generated in FQF should be better than sampled quantiles (as in IQN) in terms of quantile function approximation in most cases." This seems speculative, and I don't think there's anything in the paper (beyond the Atari results) to back this up. -------------------- HIGH-LEVEL COMMENTS The paper contributes a new distributional RL algorithm, which parametrizes return distributions as mixtures of Dirac deltas, and learns both the locations of the Dirac deltas, and their probability masses, extending the parametrisations used in QR-DQN and C51. This adjustment is explained reasonably clearly, and the empirical evaluations on Atari are strong, and improve considerably on related earlier algorithms. I have checked the proofs in the appendix. Based on these results, I believe the paper should be accepted. However, the paper is lacking some experimental detail, such as precise architectures/optimizers used, hyperparameter sweeps undertaken, etc.. Whilst some of this can be discerned from the attached code, not all of it can, and it any case it would be clearer if it were presented an appendix of the paper. The code itself could do with better documentation; at the moment, several “TODO” notes are left, and blocks of unused code commented out etc. In Section 4, the following claim is made “We also set a new record on the number of 222 games where RL agent performs better than human.”, with Table 1 showing that FQF beats human performance in 44 Atari games. However, the agent in (Kapturowski et al., 2019) attains super-human performance on 52 of the 57 Atari games - can the authors comment on this? DETAILED COMMENTS Lines 44-54: I think it’s inaccurate (or at least unclear) to suggest that the distribution can be better approximated by FQF than IQN. If I understand correctly, the distributions FQF can express are restricted to the parametric form in Eqn (1), whereas IQN (at test time) can sample many more quantiles than were used in each stochastic gradient computation in training, and therefore is not limited to a particular parametric family of return distributions (although clearly it is limited by the expressivity of the network). 
Lines 104-108: I don't agree with this statement - there will be variance in the IQN gradient introduced by using a finite number of samples in each gradient estimate, but this does not affect the approximation error of the converged system. This is similar to the variance that may occur in general distributional RL updates due to using sampled transitions rather than integrating over the dynamics of the MDP. Lines 136-138: The reason Eqn (2) cannot be minimized with SGD is that it is not possible to obtain an unbiased gradient. Line 175: It’s not clear to me what these two sentences mean. In general, the Bellman target and the fitted distribution will not be equal, due to the approximation error incurred due to the network using a parametric family of distributions. Algorithm 1: there are several minor things here that could do with neatening up: Q(s’,a’) appears outside the loop over a’; Z’ and Z are not defined. Figure 2: the error bars/mean looks off for DoubleDunk? Line 204: It is unfortunate that results for Defender and Surround were not included; however, the authors have based their experiments on a commonly-used framework, so I do not count this against the submission. The experimental results the authors report are very strong. Can the authors clarify whether the shaded regions in Figure 2 are standard deviations across the seeds run? Lines 233-238: It is unclear what the hyperparameters that the authors refer to are. If I understand correctly, their method requires selection of the hyperparameter prescribing the number of atoms to be represented (as with several other distributional RL agents), and additionally requires tuning the training hyperparameters of an additional network, since the probabilities are output by a separate network. I also disagree with the claim that it has been *theoretically* shown that the algorithm proposed in this paper achieves better performance than IQN. As I understand it, the distributions expressible by IQN and FQF are completely different: IQN can express any distribution for which the quantile function is expressible by the network. In particular, all such distributions will be absolutely continuous. In contrast, any distribution that FQF can express must necessarily be finitely-supported. Appendix: main derivations: “W_1/\partial\tau_i”->”\partial W_1/\parital\tau_i”. 2nd line: in the second integral, the lower limit should be \hat{\tau}_{i-1}. Minor typos “Dopamine” should be capitalized throughout. Line 78: “A’ \sim \pi(\cdot|x’)” ->“A’ \sim \pi(\cdot|X’)” Line 253: “Motezuma” -> “Montezuma” References Kapturowski et al.. Recurrent Experience Replay in Distributed Reinforcement Learning. ICLR 2019.
NIPS
Title Fully Parameterized Quantile Function for Distributional Reinforcement Learning Abstract Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. In this paper, we propose fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents. 1 Introduction Distributional reinforcement learning [Jaquette et al., 1973, Sobel, 1982, White, 1988, Morimura et al., 2010, Bellemare et al., 2017] differs from value-based reinforcement learning in that, instead of focusing only on the expectation of the return, distributional reinforcement learning also takes the intrinsic randomness of returns within the framework into consideration [Bellemare et al., 2017, Dabney et al., 2018b,a, Rowland et al., 2018]. The randomness comes from both the environment itself and agent’s policy. Distributional RL algorithms characterize the total return as random variable and estimate the distribution of such random variable, while traditional Q-learning algorithms estimate only the mean (i.e., traditional value function) of such random variable. The main challenge of distributional RL algorithm is how to parameterize and approximate the distribution. In Categorical DQN [Bellemare et al., 2017](C51), the possible returns are limited to a discrete set of fixed values, and the probability of each value is learned through interacting with environments. C51 out-performs all previous variants of DQN on a set of 57 Atari 2600 games in the Arcade Learning Environment (ALE) [Bellemare et al., 2013]. Another approach for distributional reinforcement learning is to estimate the quantile values instead. Dabney et al. [2018b] proposed QR∗Contributed during internship at Microsoft Research. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. DQN to compute the return quantiles on fixed, uniform quantile fractions using quantile regression and minimize the quantile Huber loss [Huber, 1964] between the Bellman updated distribution and current return distribution. Unlike C51, QR-DQN has no restrictions or bound for value and achieves significant improvements over C51. However, both C51 and QR-DQN approximate the distribution function or quantile function on fixed locations, either value or probability. Dabney et al. 
[2018a] propose learning the quantile values for sampled quantile fractions rather than fixed ones with an implicit quantile value network (IQN) that maps from quantile fractions to quantile values. With sufficient network capacity and infinite number of quantiles, IQN is able to approximate the full quantile function. However, it is impossible to have infinite quantiles in practice. With limited number of quantile fractions, efficiency and effectiveness of the samples must be reconsidered. The sampling method in IQN mainly helps training the implicit quantile value network rather than approximating the full quantile function, and thus there is no guarantee in that sampled probabilities would provide better quantile function approximation than fixed probabilities. In this work, we extend the method in Dabney et al. [2018b] and Dabney et al. [2018a] and propose to fully parameterize the quantile function. By fully parameterization, we mean that unlike QR-DQN and IQN where quantile fractions are fixed or sampled and only the corresponding quantile values are parameterized, both quantile fractions and corresponding quantile values in our algorithm are parameterized. In addition to a quantile value network similar to IQN that maps quantile fractions to corresponding quantile values, we propose a fraction proposal network that generates quantile fractions for each state-action pair. The fraction proposal network is trained so that as the true distribution is approximated, the 1-Wasserstein distance between the approximated distribution and the true distribution is minimized. Given the proposed fractions generated by the fraction proposal network, we can learn the quantile value network by quantile regression. With self-adjusting fractions, we can approximate the true distribution better than with fixed or sampled fractions. We begin with related works and backgrounds of distributional RL in Section 2. We describe our algorithm in Section 3 and provide experiment results of our algorithm on the ALE environment [Bellemare et al., 2013] in Section 4. At last, we discuss the future extension of our work, and conclude our work in Section 5. 2 Background and Related Work We consider the standard reinforcement learning setting where agent-environment interactions are modeled as a Markov Decision Process (X ,A, R, P, γ) [Puterman, 1994], where X and A denote state space and action space, P denotes the transition probability given state and action, R denotes state and action dependent reward function and γ ∈ (0, 1) denotes the reward discount factor. For a policy π, define the discounted return sum a random variable by Zπ(x, a) = ∑∞ t=0 γ tR(xt, at), where x0 = x, a0 = a, xt ∼ P (·|xt−1, at−1) and at ∼ π(·|xt). The objective in reinforcement learning can be summarized as finding the optimal π∗ that maximizes the expectation of Zπ, the action-value function Qπ(x, a) = E[Zπ(x, a)]. The most common approach is to find the unique fixed point of the Bellman optimality operator T [Bellman, 1957]: Q∗(x, a) = T Q∗(x, a) := E[R(x, a)] + γEP max a′ Q∗ (x′, a′) . To update Q, which is approximated by a neural network in most deep reinforcement learning studies, Q-learning [Watkins, 1989] iteratively trains the network by minimizing the squared temporal difference (TD) error defined by δ2t = [ rt + γ max a′∈A Q (xt+1, a ′)−Q (xt, at) ]2 along the trajectory observed while the agent interacts with the environment following -greedy policy. 
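A minimal sketch of the squared TD error above, in a PyTorch-like style consistent with the listings later in this document; the network names, the target network, and the done-masking are standard DQN conventions assumed here rather than details given in the text.

```python
import torch

def squared_td_error(q_net, target_q_net, x, a, r, x_next, done, gamma=0.99):
    """delta_t^2 = (r_t + gamma * max_a' Q(x_{t+1}, a') - Q(x_t, a_t))^2 per transition."""
    q_sa = q_net(x).gather(1, a.unsqueeze(1)).squeeze(1)                 # Q(x_t, a_t)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_q_net(x_next).max(dim=1).values
    return (target - q_sa).pow(2)
```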
DQN [Mnih et al., 2015] uses a convolutional neural network to represent Q and achieves human-level play on the Atari-57 benchmark. 2.1 Distributional RL Instead of a scalar Qπ(x, a), distributional RL looks into the intrinsic randomness of Zπ by studying its distribution. The distributional Bellman operator for policy evaluation is Zπ(x, a) D = R(x, a) + γZπ (X ′, A′) , where X ′ ∼ P (·|x, a) and A′ ∼ π(·|X ′), A D= B denotes that random variable A and B follow the same distribution. Both theory and algorithms have been established for distributional RL. In theory, the distributional Bellman operator for policy evaluation is proved to be a contraction in the p-Wasserstein distance [Bellemare et al., 2017]. Bellemare et al. [2017] shows that C51 outperforms value-based RL, in addition Hessel et al. [2018] combined C51 with enhancements such as prioritized experience replay [Schaul et al., 2016], n-step updates [Sutton, 1988], and the dueling architecture [Wang et al., 2016], leading to the Rainbow agent, current state-of-the-art in Atari-57 for non-distributed agents, while the distributed algorithm proposed by Kapturowski et al. [2018] achieves state-of-the-art performance for all agents. From an algorithmic perspective, it is impossible to represent the full space of probability distributions with a finite collection of parameters. Therefore the parameterization of quantile functions is usually the most crucial part in a general distributional RL algorithm. In C51, the true distribution is projected to a categorical distribution [Bellemare et al., 2017] with fixed values for parameterization. QR-DQN fixes probabilities instead of values, and parameterizes the quantile values [Dabney et al., 2018a] while IQN randomly samples the probabilities [Dabney et al., 2018a]. We will introduce QR-DQN and IQN in Section 2.2, and extend from their work to ours. 2.2 Quantile Regression for Distributional RL In contrast to C51 which estimates probabilities for N fixed locations in return, QR-DQN [Dabney et al., 2018b] estimates the respected quantile values for N fixed, uniform probabilities. In QR-DQN, the distribution of the random return is approximated by a uniform mixture of N Diracs, Zθ(x, a) := 1 N N∑ i=1 δθi(x,a), with each θi assigned a quantile value trained with quantile regression. Based on QR-DQN, Dabney et al. [2018a] propose using probabilities sampled from a base distribution, e.g. τ ∈ U([0, 1]), rather than fixed probabilities. They further learn the quantile function that maps from embeddings of sampled probabilities to the corresponding quantiles, called implicit quantile value network (IQN). At the time of this writing, IQN achieves the state-or-the-art performance on Atari-57 benchmark, human-normalized mean and median of all agents that does not combine distributed RL, prioritized replay [Schaul et al., 2016] and n-step update. Dabney et al. [2018a] claimed that with enough network capacity, IQN is able to approximate to the full quantile function with infinite number of quantile fractions. However, in practice one needs to use a finite number of quantile fractions to estimate action values for decision making, e.g. 32 randomly sampled quantile fractions as in Dabney et al. [2018a]. With limited fractions, a natural question arises that, how to best utilize those fractions to find the closest approximation of the true distribution? 3 Our Algorithm We propose Fully parameterized Quantile Function (FQF) for Distributional RL. 
Our algorithm consists of two networks, the fraction proposal network that generates a set of quantile fractions for each state-action pair, and the quantile value network that maps probabilities to quantile values. We first describe the fully parameterized quantile function in Section 3.1, with variables on both probability axis and value axis. Then, we show how to train the fraction proposal network in Section 3.2, and how to train the quantile value network with quantile regression in Section 3.3. Finally, we present our algorithm and describe the implementation details in Section 3.4. 3.1 Fully Parameterized Quantile Function In FQF, we estimate N adjustable quantile values for N adjustable quantile fractions to approximate the quantile function. The distribution of the return is approximated by a weighted mixture of N Diracs given by Zθ,τ (x, a) := N−1∑ i=0 (τi+1 − τi)δθi(x,a), (1) where δz denotes a Dirac at z ∈ R, τ1, ...τN−1 represent the N-1 adjustable fractions satisfying τi−1 < τi, with τ0 = 0 and τN = 1 to simplify notation. Denote quantile function [Müller, 1997] F−1Z the inverse function of cumulative distribution function FZ(z) = Pr(Z < z). By definition we have F−1Z (p) := inf {z ∈ R : p ≤ FZ(z)} where p is what we refer to as quantile fraction. Based on the distribution in Eq.(1), denote Πθ,τ the projection operator that projects quantile function onto a staircase function supported by θ and τ , the projected quantile function is given by F−1,θ,τZ (ω) = Π θ,τF−1Z (ω) = θ0 + N−1∑ i=0 (θi+1 − θi)Hτi+1(ω), where H is the Heaviside step function and Hτ (ω) is the short for H(ω − τ). Figure 1 gives an example of such projection. For each state-action pair (x, a), we first generate the set of fractions τ using the fraction proposal network, and then obtain the quantiles values θ corresponding to τ using the quantile value network. To measure the distortion between approximated quantile function and the true quantile function, we use the 1-Wasserstein metric given by W1(Z, θ, τ) = N−1∑ i=0 ∫ τi+1 τi ∣∣F−1Z (ω)− θi∣∣ dω. (2) Unlike KL divergence used in C51 which considers only the probabilities of the outcomes, the p-Wasseretein metric takes both the probability and the distance between outcomes into consideration. Figure 1 illustrates the concept of how different approximations could affect W1 error, and shows an example of ΠW1 . However, note that in practice Eq.(2) can not be obtained without bias. 3.2 Training fraction proposal Network To achieve minimal 1-Wasserstein error, we start from fixing τ and finding the optimal corresponding quantile values θ. In QR-DQN, Dabney et al. [2018a] gives an explicit form of θ to achieve the goal. We extend it to our setting: Lemma 1. [Dabney et al., 2018a] For any τ1, ...τN−1 ∈ [0, 1] satisfying τi−1 < τi for i, with τ1 = 0 and τN = 1, and cumulative distribution function F with inverse F−1, the set of θ minimizing Eq.(2) is given by θi = F −1 Z ( τi + τi+1 2 ) (3) We can now substitute θi in Eq.(2) with equation Eq.(3) and find the optimal condition for τ to minimize W1(Z, τ). For simplicity, we denote τ̂i = τi+τi+1 2 . Proposition 1. For any continuous quantile function F−1Z that is non-decreasing, define the 1- Wasserstein loss of F−1Z and F −1,τ Z by W1(Z, τ) = N−1∑ i=0 ∫ τi+1 τi ∣∣F−1Z (ω)− F−1Z (τ̂i)∣∣ dω. (4) ∂W1 ∂τi is given by ∂W1 ∂τi = 2F−1Z (τi)− F −1 Z (τ̂i)− F −1 Z (τ̂i−1), (5) ∀i ∈ (0, N). Further more, ∀τi−1, τi+1 ∈ [0, 1], τi−1 < τi+1, ∃τi ∈ (τi−1, τi+1) s.t. ∂W1∂τi = 0. Proof of proposition 1 is given in the appendix. 
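Proposition 1 is easy to sanity-check numerically on a toy quantile function, in the spirit of the toy experiments mentioned in the discussion. The sketch below assumes a Uniform(0, 10) return distribution, so F^{-1}(p) = 10p, and deliberately uneven fractions; it compares the analytic gradient of Eq. (5) against a central finite difference of the 1-Wasserstein error in Eq. (4), both of which come out to -0.5 here.

```python
import numpy as np

def w1_error(inv_cdf, tau):
    """1-Wasserstein error of the staircase approximation (Eq. 4), with theta_i chosen
    optimally as in Lemma 1: theta_i = F^{-1}((tau_i + tau_{i+1}) / 2)."""
    total = 0.0
    for i in range(len(tau) - 1):
        theta_i = inv_cdf((tau[i] + tau[i + 1]) / 2.0)
        omega = np.linspace(tau[i], tau[i + 1], 2001)
        total += np.trapz(np.abs(inv_cdf(omega) - theta_i), omega)
    return total

def w1_grad(inv_cdf, tau, i):
    """Analytic derivative from Eq. (5): 2 F^{-1}(tau_i) - F^{-1}(tau_hat_i) - F^{-1}(tau_hat_{i-1})."""
    return (2.0 * inv_cdf(tau[i])
            - inv_cdf((tau[i] + tau[i + 1]) / 2.0)
            - inv_cdf((tau[i - 1] + tau[i]) / 2.0))

inv_cdf = lambda p: 10.0 * np.asarray(p)      # toy quantile function: Uniform(0, 10)
tau = np.array([0.0, 0.2, 0.5, 1.0])          # deliberately uneven fractions

eps = 1e-4
tau_plus, tau_minus = tau.copy(), tau.copy()
tau_plus[1] += eps
tau_minus[1] -= eps
finite_diff = (w1_error(inv_cdf, tau_plus) - w1_error(inv_cdf, tau_minus)) / (2 * eps)
print(w1_grad(inv_cdf, tau, 1), finite_diff)  # both are approximately -0.5
```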
1. What is the novel method introduced in the paper for optimizing the selection of quantiles in distributional RL? 2. How does the proposed method compare to previous methods in terms of accuracy and stability? 3. Are there any unreported experiments that demonstrate the instability of sampling-based methods? 4. How certain is the result that FQF outperforms other distributional RL algorithms, considering the variance of performance over random trials? 5. What is the measure used to compare the speed and stability of different algorithms? 6. How does minimizing the error in the approximated distribution affect the approximation of the mean? 7. How stable is the algorithm to the relative setting of learning rates, and how easy is it to select hyperparameters? 8. Does the 1-Wasserstein error converge to its minimal value when minimizing the surrogate function derived by its derivative? 9. What is the significance of the paper's contribution to the development of new distributional RL algorithms? 10. How important is it to have an accurate approximation of the return distribution to the performance of distributional RL algorithms?
Review
Review Originality: This work introduces a novel method for optimizing the selection of quantiles to minimize error in the return distribution. This method is then combined with IQN to produce a new RL algorithm that improves the ability of a network to accurately estimate the return distribution. Quality: The paper makes the following claims: 1) “Self-adjusting probabilities can approximate the true distribution better than fixed or sampled probabilities” -- line 53 2) “we believe that the former one would achieve better results since it approximates the quantile function with a stair case function, instead of viewing the process as sampling which leads to instability”-- line 194 3) “It shows that FQF out-performed all existing distributional RL algorithms” -- line 219 4)“FQF is generally much faster and stabler thanks to self-adjusting probabilities” -- line 224 In trying to evaluate these claims I have questions about their scope and justification. 1) In Figure 1, it is shown for some distribution optimizing the choice of quantiles can provide a better approximation to the distribution. However, the contribution of this paper is a method to better estimate the return distribution, but no experiment is conducted that shows how well the introduced method approximates the return distribution compared to previous methods. 2) Are there unreported experiments that show “sampling leads to instability”. This would be a nice small contribution to add to the paper that could influence future research. 3) The experiments were conducted with 3 trials. With the known high variance of performance over random trials how certain is the result that FQF is better than other distributional RL algorithms? Reproducibility worksheet indicates that there is a clear description of the central tendency and variation, but I do not see mention of the variance or uncertainty of the performance measurements. The error bands on the learn curve plots are also undefined. Additionally, FQF used the same hyperparameters as IQN. This comparison only shows that FQF obtained higher performance for these hyperparameters. Does this imply that FQF will always outperform IQN with the same hyperparameters? I believe the result needs to be qualified with the uncertainty of the performance estimate and to what extent the experimental data shows FQN being greater than the other algorithms. 4) This claim has two parts: faster and more stable. I assume (please correct me if I am wrong) that faster is with respect to how quickly the algorithm learns. Under what measure is the speed of each algorithm being compared? Similarly, for stability what is the measure that is being used to compare these algorithms? In my reading of the paper, I developed other questions that I could not find answers for and that would improve the quality of the paper if answered. If actions are selected via expectations, then does it matter how well the distribution is approximated so long as the mean is preserved? How does minimizing the error in the approximated distribution affect the approximation of the mean? It is mentioned that “learning rates are relatively small compared with the quantile network to keep the probabilities relatively stable while training”. How stable is this algorithm to the relative setting of learning rates? Is selecting hyperparameters easy? "First, does the 1-Wasserstein error converge to its minimal value when we minimize the surrogate function derived by its derivative? 
While the answer is yes on some toy problems we test with fixed quantile functions," This toy problem would be great to see! It would add a lot to the paper to see how well the distribution is approximated. This is the critical claim of the paper. Experiments demonstrating this are more valuable than getting SOTA performance results on Atari. These experiments could also be conducted on domains that require less computational than Atari. The discussion about the increased computational cost of the new method is appreciated (lines 228-231). Clarity: some minor notes to help the clarity of the writing Add mention that there is a proof of Proposition 1 in the appendix. In the discussion section, it states that: “Our algorithm, by the time of writing, is the first distributional RL algorithm removing all inefficient hyper-parameters that require manual tuning previously. What does “inefficient hyperparameters” mean? I am looking for a precise clarification of this statement as to which hyperparameters are removed. Significance: The main idea presented in this paper of optimizing the selection of quantiles could have significant impacts for distributional RL. Answering the question “How important is it to have an accurate approximation of return to the performance of distributional RL algorithms?” would provide a valuable contribution to the investigation and development of new distributional RL algorithms. This paper attempts to answer how performance is impacted but does not address to what extent the accuracy of the approximated distribution impacted performance. Thus, the knowledge generated by this paper is limited to the presentation of the idea and a performance measure. These are less actionable than furthering the understanding of how to best approximate the return distribution. ---Update I give my thanks to the authors for their clear response. I think my question about to what degree does having an accurate estimate of the distribution affect performance was not conveyed correctly. It was not about comparing distributional RL methods to regular, but about how accurate does the distribution approximation needs to be? If one can use say 3 quantiles versus 30 and still select the optimal action, then having a close approximation does not matter. Unless it allows for better representation learning and quicker policy improvement. Nevertheless, I appreciate the experiments detailing that FQF did learn a better approximation to the return distribution than some methods and updated my score. Please include these experiments with IQN and a description in the next version of the paper.
NIPS
Title Unsupervised Reinforcement Learning with Contrastive Intrinsic Control Abstract We introduce Contrastive Intrinsic Control (CIC), an unsupervised reinforcement learning (RL) algorithm that maximizes the mutual information between state-transitions and latent skill vectors. CIC utilizes contrastive learning between state-transitions and skill vectors to learn behaviour embeddings and maximizes the entropy of these embeddings as an intrinsic reward to encourage behavioural diversity. We evaluate our algorithm on the Unsupervised RL Benchmark (URLB) in the asymptotic state-based setting, which consists of a long reward-free pretraining phase followed by a short adaptation phase to downstream tasks with extrinsic rewards. We find that CIC improves over prior exploration algorithms in terms of adaptation efficiency to downstream tasks on state-based URLB. 1Project website and code: https://sites.google.com/view/cicneurips2022/ 2These categories for exploration algorithms were introduced by [10] and inspired by [11]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Introduction Deep RL is a powerful approach toward solving complex control tasks in the presence of extrinsic rewards. Successful applications include playing video games from pixels [1], mastering the game of Go [2, 3], robotic locomotion [4, 5, 6] and dexterous manipulation [7, 8, 9] policies.
While effective, the above advances produced agents that are unable to generalize to new downstream tasks beyond the one they were trained to solve. Humans and animals on the other hand are able to acquire skills with minimal supervision and apply them to solve a variety of downstream tasks. In this work, we seek to train agents that acquire skills without supervision with generalization capabilities by efficiently adapting these skills to downstream tasks. Over the last few years, unsupervised RL has emerged as a promising framework for developing RL agents that can generalize to new tasks. In the unsupervised RL setting, agents are first pre-trained with self-supervised intrinsic rewards and then finetuned to downstream tasks with extrinsic rewards. Unsupervised RL algorithms broadly fall into three categories - knowledge-based, data-based, and competence-based methods2. Knowledge-based methods maximize the error or uncertainty of a predictive model [12, 13, 14]. Data-based methods maximize the entropy of the agent’s visitation [15, 16]. Competence-based methods learn skills that generate diverse behaviors [17, 18]. This work falls into the latter category of competence-based exploration methods. Unlike knowledge-based and data-based algorithms, competence-based algorithms simultaneously address both the exploration challenge as well as distilling the generated experience in the form of reusable skills. This makes them particularly appealing, since the resulting skill-based policies (or skills themselves) can be finetuned to efficiently solve downstream tasks. While there are many self-supervised objectives that can be utilized, our work falls into a family of methods that learns skills by maximizing the mutual information between visited states and latent skill vectors. Many earlier works have investigated optimizing such objectives [17, 18, 19, 20]. However, competence-based methods have been empirically challenging to train and have under-performed when compared to knowledge and data-based methods [21]. In this work, we take a closer look at the challenges of pre-training agents with competence-based algorithms. We introduce Contrastive Intrinsic Control (CIC) – an exploration algorithm that uses a new estimator for the mutual information objective. CIC combines particle estimation for state entropy [22, 15] and noise contrastive estimation [23] for the conditional entropy which enables it to both generate diverse behaviors (explore) and discriminate high-dimensional continuous skills (exploit). To the best of our knowledge, CIC is the first exploration algorithm to utilize noise contrastive estimation to discriminate between state transitions and latent skill vectors. Empirically, we show that CIC adapts to downstream tasks more efficiently than prior exploration approaches on the state-based Unsupervised Reinforcement Learning Benchmark (URLB). CIC achieves 79% higher returns on downstream tasks than prior competence-based algorithms and 18% higher returns than the next-best exploration algorithm overall. 1 Background and Notation Markov Decision Process: We operate under the assumption that our system is described by a Markov Decision Process (MDP) [24].
An MDP consists of the tuple (S,A,P, r, γ) which has states s ∈ S , actions a ∈ A, transition dynamics p(s′|s, a) ∼ P , a reward function r, and a discount factor γ. In an MDP, at each timestep t, an agent observes the current state s, selects an action from a policy a ∼ π(·|s), and then observes the reward r and next state s′ once it acts in the environment. Note that usually r refers to an extrinsic reward. However, in this work we will first be pre-training an agent with intrinsic rewards rint and finetuning on extrinsic rewards rext. For convenience we also introduce the variable τ = (s, s′) which is a tuple denoting a transition between two consecutive states. Importantly, τ does not denote a state-action trajectory. In addition to the standard MDP notation, we will also be learning skills z ∈ Z where Z is the skill set, which can be a discrete or continuous real-valued vector space, and our policy will be skill-conditioned a ∼ π(·|s, z). Unsupervised Skill Discovery through Mutual Information Maximization: Most competencebased approaches to exploration maximize the mutual information between states and skills. Our work and a large body of prior research [17, 20, 18, 25, 26, 27] aims to maximize a mutual information objective with the following general form: I(τ ; z) = H(z)−H(z|τ) = H(τ)−H(τ |z) (1) Competence-based algorithms use different choices for τ and can condition on additional information such as actions or starting states. For a full summary of competence-based algorithms and their objectives see Table 2. Lower Bound Estimates of Mutual Information: The mutual information I(s; z) is intractable to compute directly. Since we wish to maximize I(s; z), we can approximate this objective by instead maximizing a lower bound estimate. Most known mutual information maximization algorithms use the variational lower bound introduced in [28]: I(τ ; z) = H(τ)−H(τ |z) ≥ H(τ) + Eτ,z[log q(τ |z)] (2) The variational lower bound can be applied to both decompositions of the mutual information. The design decisions of a competence-based algorithm therefore come down to (i) which decomposition of I(τ ; z) to use, (ii) whether to use discrete or continuous skills, (iii) how to estimate H(z) or H(τ), and finally (iv) how to estimate H(z|τ) or H(τ |z). 2 Motivation Results from the recent Unsupervised Reinforcement Learning Benchmark (URLB) [21] show that competence-based approaches underperform relative to knowledge-based and data-based baselines on DeepMind Control (DMC). We argue that the underlying issue with current competence-based algorithms when deployed on harder exploration environments like DMC has to do with the currently used estimators for I(τ ; z) rather than the objective itself. To produce structured skills that lead to diverse behaviors, I(τ ; z) estimators must (i) explicitly encourage diverse behaviors and (ii) have the capacity to discriminate between high-dimensional continuous skills. Current approaches do not satisfy both criteria. Competence-base algorithms do not ensure diverse behaviors: Most of the best known competencebased approaches [17, 18, 25, 26], optimize the first decomposition of the mutual information H(z)−H(z|τ). The issue with this decomposition is that while it ensures diversity of skill vectors it does not ensure diverse behavior from the policy, meaning maxH(z) does not imply maxH(τ). Of course, if H(z)−H(z|τ) is maximized and the skill dimension is sufficiently large, then H(τ) will also be maximized implicitly. 
Yet in practice, to learn an accurate discriminator q(z|τ), the above methods assume skill spaces that are much smaller than the state space (see Table 2), and thus behavioral diversity may not be guaranteed. In contrast, the decomposition I(τ ; z) = H(τ)−H(τ |z) ensures diverse behaviors through the entropy term H(τ). Methods that utilize this decomposition include [27, 20]. Why it is important to utilize high-dimensional skills: Once a policy is capable of generating diverse behaviors, it is important that the discriminator can distill these behaviors into distinct skills. If the set of behaviors outnumbers the set of skills, this will result in degenerate skills – when one skill maps to multiple different behaviors. It is therefore important that the discriminator can accommodate continuous skills of sufficiently high dimension. Empirically, the discriminators used in prior work utilize only low-dimensional continuous skill vectors. DIAYN [17] utilized 16 dimensional skills, DADS [20] utilizes continuous skills of dimension 2− 5, while APS [27], an algorithm that utilizes successor features [30, 31] for the discriminator, is only capable of learning continuous skills with dimension 10. We show how small skill spaces can lead to ineffective exploration in a simple gridworld setting in Appendix G and evidence that skill dimension affects performance in Fig. 5. On the importance of benchmarks for evaluation: While prior competence-based approaches such as DIAYN [17] were evaluated on OpenAI Gym [32], Gym environment episodes terminate when the agent loses balance thereby leaking some aspects of extrinsic signal to the exploration agent. On the other hand, DMC episodes have fixed length. We show in Fig 3 that this small difference in environments results in large performance differences. Specifically, we find that DIAYN is able to learn diverse skills in Gym but not in DMC, which is consistent with both observations from DIAYN and URLB papers. Due to fixed episode lengths, DMC tasks are harder for reward-free exploration since agents must learn to balance without supervision. 3 Method 3.1 Contrastive Intrinsic Control From Section 2 we are motivated to find a lower bound for I(τ ; z) with a discriminator that is capable of supporting high-dimensional continuous skills3. Additionally, we wish to increase the diversity of behaviors so that the discriminator can continue learning new skills throughout training. We choose the forward decomposition of MI I(τ ; z) = H(τ)−H(τ |z) similar to [27] and estimate the lower bound with Eq. 2. The entropy is estimated H(τ) with a particle-based estimator similar to [15]. As such, the primary technical contribution of this work is a novel estimator for the discriminator. To improve the discriminator, we propose to utilize noise contrastive estimation (NCE) [23] between state-transitions and latent skills as a lower bound for I(τ ; z). It has been shown previously that such estimators provide a valid lower bound for mutual information [33]. However, to the best of our knowledge, this is the first work to investigate contrastive representation learning for intrinsic control. Representation Learning: Specifically, we propose to learn embeddings by parameterizing the discriminator with a contrastive density estimator. This is a novel choice that differs from prior works which utilize a classifier [17] or non-contrastive density estimation [20]. log q(τ |z) := f(τ, z)− log 1 N N∑ j=1 exp(f(τj , z)). (3) where f(τ, z) is any real valued function. 
3In high-dimensional state-action spaces the number of distinct behaviors can be quite large. For our practical algorithm, we parameterize this function as f(τ, z) = gψ1(τ) ⊤gψ2(z)/∥gψ1(τ)∥∥gψ2(z)∥T where τ = (s, s′) is a transition tuple, gψk are neural encoders, and T is a temperature parameter. This inner product is similar to the one used in SimCLR [34]. The representation learning loss backpropagates gradients from the NCE loss which maximizes similarity between state-transitions and corresponding skills. FCIC(τ) = gψ1(τi) ⊤gψ2(zi) ∥gψ1(τi)∥∥gψ2(zi)∥T − log 1 N N∑ j=1 exp ( gψ1(τj) ⊤gψ2(zi) ∥gψ1(τj)∥∥gψ2(zi)∥T ) (4) whereN−1 is the number of negatives. The total number of elements in the summation isN because it includes one positive, so the index j includes the positive index i similar to the objective in [33]. We provide pseudocode for the CIC representation learning loss: 1 """ 2 PyTorch -like pseudocode for the CIC loss 3 """ 4 5 def cic_loss(s, s_next , z, temp): 6 """ 7 states: s, s_next (B, D), skills: z (B, D) 8 """ 9 10 tau = concat(s, s_next , dim=1) 11 12 query = query_net(z) 13 key = key_net(tau) 14 15 query = normalize(query , dim =1) 16 key = normalize(key , dim =1) 17 18 logits = matmul(query , key.T) / temp #logits are (B, B) 19 labels = arange(logits.shape [0]) # positives are on diagonal 20 21 # softmax_cross_entropy API same as in PyTorch docs 22 loss = softmax_cross_entropy(logits , labels) 23 24 return loss Listing 1: Pseudocode for the CIC loss. Intrinsic reward: Although we have a representation learning objective, we still need to specify the intrinsic reward for the algorithm for which there can be multiple choices. Prior works consider specifying an intrinsic reward that is proportional to state-transition entropy [15], the discriminator [17], a similarity score between states and skills [36], and the uncertainty of the discriminator [37]. We investigate each of these choices in Section 6 and find that an intrinsic reward that maximizes state-transition entropy coupled with representation learning via the CPC loss defined in Sec. 3.1 is the simplest variant that also performs well (see Table 1), which we use for all other experiments. For the intrinsic reward, we use a particle estimate [22, 38] as in [15] of the state-transition entropy. Similar to [15, 16] we estimate the entropy up to a proportionality constant, because we want the agent to maximize entropy rather than estimate its exact value. The APT particle entropy estimate is proportional to the distance between the current visited state transition and previously seen neighboring points. Hparticle(τ) ∝ 1 Nk Nk∑ h⋆i ∈Nk log ∥hi − h⋆i ∥ (5) where hi is an embedding of τi shown in Fig. 2, h∗i is a kNN embedding, Nk is the number of kNNs. Explore and Exploit: With these design choices the two components of the CIC algorithm can be interpreted as exploration with intrinsic rewards and exploitation using representation learning to distill behaviors into skills. The marginal entropy maximizes the diversity of state-transition embeddings while the contrastive discriminator log q(τ |z) encourages exploitation by ensuring that skills z lead to predictable states τ . Together the two terms incentivize the discovery of diverse yet predictable behaviors from the RL agent. While CIC shares a similar intrinsic reward structure to APT [15], we show that the new representation learning loss from the CIC estimator results in substantial performance gains in Sec 6. 
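For completeness, here is a minimal sketch of the particle-based entropy intrinsic reward of Eq. (5), in the same PyTorch-like style as Listing 1. The number of neighbours k and the small constant added inside the logarithm for numerical stability are assumptions in the spirit of APT-style implementations, not values fixed by the text.

```python
import torch

def particle_entropy_reward(h, k: int = 12, eps: float = 1e-6):
    """Intrinsic reward proportional to the particle entropy estimate of Eq. (5):
    the mean log-distance from each transition embedding to its k nearest
    neighbours within the batch.

    h: (B, D) embeddings of transitions tau = (s, s')."""
    dists = torch.cdist(h, h, p=2)                           # (B, B) pairwise distances
    knn_dists, _ = dists.topk(k + 1, dim=1, largest=False)   # k+1 smallest, including self
    knn_dists = knn_dists[:, 1:]                             # drop the zero self-distance
    return torch.log(knn_dists + eps).mean(dim=1)            # (B,) reward per transition
```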
4 Practical Implementation

Our practical implementation consists of two main components: the RL optimization algorithm and the CIC architecture. For fairness and clarity of comparison, we use the same RL optimization algorithm for our method and all baselines in this work. Since the baselines implemented in URLB [21] use a DDPG [29] as their backbone, we opt for the same DDPG architecture to optimize our method as well (see Appendix B). (Footnote 4: It was recently shown that a DDPG achieves state-of-the-art performance [39] on DeepMind Control [40] and is more stable than SAC [41] on this benchmark.)

CIC Architecture: We use a particle estimator as in [15] to estimate H(τ). To compute the variational density q(τ|z), we first sample skills from uniform noise z ∼ p(z), where p(z) is the uniform distribution over the [0, 1] interval. We then use two MLP encoders to embed g_ψ1(τ) and g_ψ2(z), and optimize the parameters ψ1, ψ2 with the CPC loss similar to SimCLR [34], since f(τ, z) = g_ψ1(τ)⊤ g_ψ2(z). We fix the hyperparameters across all domains and downstream tasks. We refer the reader to Appendices D and E for the full algorithm and a full list of hyperparameters.

Adapting to downstream tasks: To adapt to downstream tasks we follow the same procedure for competence-based method adaptation as in URLB [21]. During the first 4k environment interactions we populate the DDPG replay buffer with samples and use the extrinsic rewards collected during this period to finetune the skill vector z. While it's common to finetune skills with Cross Entropy Adaptation (CMA), given our limited budget of 4k samples (only 4 episodes) we find that a simple grid sweep of skills over the interval [0, 1] produces the best results (see Fig. 5). After this, we fix the skill z and finetune the DDPG actor-critic parameters against the extrinsic reward for the remaining 96k steps. Note that competence-based methods in URLB also finetune their skills during the first 4k finetuning steps, ensuring a fair comparison between the methods. The full adaptation procedure is detailed in Appendix D.

5 Experimental Setup

Environments: We evaluate our approach on tasks from URLB, which consists of twelve downstream tasks across three challenging continuous control domains for exploration algorithms – walker, quadruped, and Jaco arm. Walker requires a biped constrained to a 2D vertical plane to perform locomotion tasks while balancing. Quadruped is more challenging due to a higher-dimensional state-action space and requires a quadruped in a 3D environment to learn locomotion skills. Jaco arm is a 6-DOF robotic arm with a three-finger gripper to move and manipulate objects without locking. All three environments are challenging in the absence of an extrinsic reward.

Baselines: We implemented CIC using the URLB [21] codebase (see footnote 5) and compare CIC to baselines included in URLB across all three exploration categories. Knowledge-based baselines include ICM [12], Disagreement [13], and RND [14]. Data-based baselines include APT [15] and ProtoRL [16]. Competence-based baselines include DIAYN [17], SMM [26], and APS [27]. The closest baselines to CIC are APT, which is similar to CIC but without state-skill CPC representation learning (no discriminator), and APS, which uses the same decomposition of mutual information as CIC and also uses a particle entropy estimate for H(τ). The main difference between APS and CIC is that APS uses successor features while CIC uses a contrastive estimator for the discriminator.
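As a concrete illustration of the skill-adaptation step described in Section 4 above, the sketch below sweeps candidate skill values over the uniform prior [0, 1] and keeps the best one. The helper evaluate_skill (rolling out the frozen pre-trained policy conditioned on a candidate skill and returning its extrinsic episode return) is a hypothetical name used only for this example and is not part of the released code; the candidate grid is one plausible instantiation of the sweep.

# Sketch (assumed interface): select a skill by sweeping the uniform prior [0, 1].
import numpy as np

def select_skill(evaluate_skill, skill_dim=64, num_candidates=11):
    """evaluate_skill(z) -> extrinsic return of an episode run with the policy conditioned on z."""
    best_z, best_return = None, -np.inf
    for v in np.linspace(0.0, 1.0, num_candidates):  # v = 0.0, 0.1, ..., 1.0
        z = np.full(skill_dim, v)                    # set every coordinate of the skill to v
        ret = evaluate_skill(z)
        if ret > best_return:
            best_z, best_return = z, ret
    return best_z  # the selected skill is then frozen while the DDPG actor-critic is finetuned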
For further details regarding baselines we refer the reader to Appendix C.

Evaluation: We follow an identical evaluation to the 2M pre-training setup in URLB. First, we pre-train each RL agent with the intrinsic rewards for 2M steps. Then, we finetune each agent to the downstream task with extrinsic rewards for 100k steps. All baselines were run for 10 seeds per downstream task for each algorithm using the code and hyperparameters provided by URLB [21]. Built on top of URLB, CIC is also run for 10 seeds per task. A total of 1080 = 9 algorithms × 12 tasks × 10 seeds experiments were run for the main results. Importantly, all baselines and CIC use a DDPG agent as their backbone. To ensure that our evaluation statistics are unbiased, we use stratified bootstrap confidence intervals, as described in Rliable [35], to report aggregate statistics across M runs with N seeds for our main results in Fig. 4. Our primary success metrics are the interquartile mean (IQM) and the Optimality Gap (OG). IQM discards the top and bottom 25% of runs and then computes the mean. It is less susceptible to outliers than the mean and was shown to be the most reliable statistic for reporting results for RL experiments in [35]. OG measures how far a policy is from optimal (expert) performance. To define expert performance we use the convention in URLB, which is the score achieved by a randomly initialized DDPG after 2M steps of finetuning (20x more steps than our finetuning budget).

(Footnote 5: URLB is open-sourced under an MIT license: https://github.com/rll-research/url_benchmark/blob/main/LICENSE.)

6 Results

We investigate empirical answers to the following research questions: (Q1) How does CIC adaptation efficiency compare to prior competence-based algorithms and exploration algorithms more broadly? (Q2) Which intrinsic reward instantiation of CIC performs best? (Q3) How do the two terms in the CIC objective affect algorithm performance? (Q4) How does skill selection affect the quality of the pre-trained policy? (Q5) Which architecture details matter most?

Adaptation efficiency of CIC and exploration baselines: Expert-normalized scores of CIC and exploration algorithms from URLB are shown in Fig. 4. We find that CIC substantially outperforms prior competence-based algorithms (DIAYN, SMM, APS), achieving a 79% higher IQM than the next best competence-based method (APS) and, more broadly, an 18% higher IQM than the next best overall baseline (ProtoRL). In further ablations, we find that the main contributing factor to CIC's performance is its ability to accommodate substantially larger continuous skill spaces than prior competence-based methods.

Intrinsic reward specification: The intrinsic reward for competence-based algorithms can be instantiated in many different ways. Here, we analyze intrinsic rewards for CIC with the form r_int = H(τ) + D(τ, z), where D is some function of (τ, z). Prior works select D to be (i) the discriminator [27], (ii) a cosine similarity between embeddings [36], (iii) the uncertainty of the discriminator [37], and (iv) just the entropy, D(τ, z) = 0 [15]. We run CIC with each of these variants on the walker and quadruped tasks and measure the final mean performance across the downstream tasks (see Tab. 1). The results show that the entropy-only intrinsic reward performs best, followed by an uncertainty-based intrinsic reward.
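To make the four variants above concrete, the following sketch shows one way the combined reward r_int = H(τ) + D(τ, z) could be assembled. The callables passed in (for example the particle_entropy_reward sketch given earlier, or a discriminator, similarity, or uncertainty scorer) are illustrative placeholders supplied by the caller, not the paper's actual API.

# Sketch (hypothetical helpers supplied by the caller) of r_int = H(tau) + D(tau, z).
def intrinsic_reward(tau_embed, skill, entropy_fn, d_fn=None):
    """entropy_fn(tau_embed) -> per-transition entropy estimate, the H(tau) term.
    d_fn(tau_embed, skill)   -> the chosen D term: discriminator (i), similarity (ii),
    or uncertainty (iii); pass d_fn=None for the entropy-only variant (iv)."""
    r = entropy_fn(tau_embed)
    if d_fn is not None:
        r = r + d_fn(tau_embed, skill)
    return r

Under this sketch, the entropy-only variant (iv), which Table 1 reports as the strongest, corresponds simply to calling intrinsic_reward(tau_embed, skill, particle_entropy_reward).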
We hypothesize that the reason why a simple entropy-only intrinsic reward works well is that state-skill CPC representation learning clusters similar behaviors together. Since similar behaviors are clustered, maximizing the entropy of state-transition embeddings produces increasingly diverse behaviors.

The importance of representation learning: To what extent does representation learning with the state-skill CIC loss affect the agent's exploration capability? To answer this question, we train the CIC agent with the entropy intrinsic reward with and without the representation learning auxiliary loss for 2M steps. The zero-shot reward plotted in Fig. 6 indicates that without representation learning the policy collapses. With representation learning, the agent is able to discover diverse skills, as evidenced by the non-zero reward. This result suggests that state-skill CPC representation learning is a critical part of CIC.

Qualitative analysis of CIC behaviors: Qualitatively, we find that CIC is able to learn locomotion behaviors in DMC without extrinsic information such as the early termination present in OpenAI Gym. While most skills are higher entropy and thus more chaotic, we show in Fig. 1 that structured behaviors can be isolated by fixing a particular skill vector. For example, in the walker and quadruped domains, balancing, walking, and flipping skills can be isolated. For more qualitative investigations we refer the reader to Appendix H.

Skill architecture and adaptation ablations: We find that projecting the skill to a latent space before inputting it as the key for the contrastive loss is an important design decision (see Fig. 5a), most likely because this reduces the diversity of the skill vector, making the discriminator's task simpler. We also find empirically that the skill dimension is an important hyperparameter and that larger skill dimensions result in better zero-shot performance (see Fig. 5b), which empirically supports the hypothesis posed in Section 2 and Appendix G that larger skill spaces are important for internalizing diverse behaviors. Interestingly, CIC zero-shot performance is poor in lower skill dimensions (e.g. dim(z) < 10), suggesting that when dim(z) is small, CIC performs no better than prior competence-based methods such as DIAYN, and that scaling to larger skills enables CIC to pre-train effectively. To measure the effect of the skill finetuning described in Section 4, we sweep mean skill values along the interval of the uniform prior [0, 1] with a budget of 4k total environment interactions and read out the performance on the downstream task. By sweeping, we mean simply iterating over the interval [0, 1] with a fixed step size (e.g. v = 0, 0.1, ..., 0.9, 1) and setting z_i = v for all i. This is not an optimal skill sampling strategy, but it works well due to the extremely limited number of samples available for skill selection. We evaluate this ablation on the Quadruped Stand and Run downstream tasks. The results shown in Fig. 5 indicate that skill selection can substantially affect zero-shot downstream task performance.

7 Related Work

The most closely related prior algorithms to CIC are APT [15] and APS [27]. Both CIC and APS use the H(τ) − H(τ|z) decomposition of the mutual information, and both use a particle estimator [22] to compute the state entropy as in [15]. The main difference between CIC and APS is the discriminator. APS uses successor features as in [31] for its discriminator, while CIC uses a noise contrastive estimator.
Unlike successor features, which empirically only accommodate low-dimensional continuous skill spaces (see Table 2), the noise contrastive discriminator is able to leverage higher-dimensional continuous skill vectors. Like APT, CIC has an intrinsic reward that maximizes H(τ). However, CIC also does contrastive skill learning to shape the embedding space and outputs a skill-conditioned policy.

The CIC discriminator is similar to the one used in DISCERN [36], a goal-conditioned unsupervised RL algorithm. Both methods use a contrastive discriminator by sampling negatives and computing an inner product between queries and keys. The main differences are (i) that DISCERN maximizes I(τ; g), where g are image goal embeddings, while CIC maximizes I(τ; z), where z are abstract skill vectors; (ii) DISCERN uses the DIAYN-style decomposition I(τ; g) = H(g) − H(g|τ), while CIC decomposes through H(τ) − H(τ|z); and (iii) DISCERN discards the H(g) term by sampling goals uniformly, while CIC explicitly maximizes H(τ). While DISCERN and CIC share similarities, DISCERN operates over image goals while CIC operates over abstract skill vectors, so the two methods are not directly comparable.

Finally, another algorithm similar to CIC is DADS [20], which also decomposes through H(τ) − H(τ|z). While CIC uses a contrastive density estimate for the discriminator, DADS uses a maximum likelihood estimator similar to DIAYN. DADS maximizes I(s′|s, z) and estimates the entropy H(s′|s) by marginalizing over z, such that H(s′|s) = − log Σ_i q(s′|s, z_i), while CIC uses a particle estimator.

Table 2: Competence-based Unsupervised Skill Discovery Algorithms.

Algorithm | Intrinsic Reward | Decomposition | Explicit max H(τ) | Skill Dim. | Skill Space
SSN4HRL [42] | log q_ψ(z|s_t) | H(z) − H(z|τ) | No | 6 | discrete one-hot
VIC [18] | log q_ψ(z|s_H) | H(z) − H(z|τ) | No | 60 | discrete one-hot
VALOR [25] | log q_ψ(z|s_{1:H}) | H(z) − H(z|τ) | No | 64 | discrete one-hot
DIAYN [17] | log q_ψ(z|s_t) | H(z) − H(z|τ) | No | 128 | discrete one-hot
DADS [20] | q_ψ(s′|z, s) − Σ_i log q(s′|z_i, s) | H(τ) − H(τ|z) | Yes | 5 | continuous
VISR [31] | log q_ψ(z|s_t) | H(z) − H(z|τ) | No | 10 | continuous
APS [27] | F_Successor(s|z) + H_particle(s) | H(τ) − H(τ|z) | Yes | 10 | continuous
CIC | F_CIC(s, s′|z) + H_particle(s, s′) | H(τ) − H(τ|z) | Yes | 64 | continuous

We describe the intrinsic reward optimized by each method and the decomposition of the mutual information it utilizes. We also note whether the method explicitly maximizes state-transition entropy. Finally, we note the maximal skill dimension used in each work and whether the skills are discrete or continuous. All methods prior to CIC only support small skill spaces, either because they are discrete or because they are continuous but low-dimensional.

8 Limitations and Impact

While CIC achieves leading results on state-based URLB, we would also like to address its limitations. First, in this paper we only consider MDPs (and not partially observed MDPs), where the full state is observable. We focus on MDPs because generating diverse behaviors in environments with large state spaces has been the primary bottleneck for competence-based exploration. Combining CIC with visual representation learning to scale this method to pixel-based inputs is a promising future direction for research not considered in this work. One issue with unsupervised RL algorithms (and hence CIC) in terms of potentially negative societal impact is that self-supervised exploration can be dangerous. Since self-supervised agents maximize intrinsic rewards, this can lead to destructive behavior.
For example, when deploying CIC on a Walker or Quadruped robot, it learns chaotic exploration behaviors that would most likely break the robot in real-world settings. Alignment of exploration agents to prevent them from learning dangerous policies is a promising direction for future work.

9 Conclusion

We have introduced a new competence-based algorithm – Contrastive Intrinsic Control (CIC) – which enables more effective exploration than prior unsupervised skill discovery algorithms by explicitly encouraging diverse behavior while distilling predictable behaviors into skills with a contrastive discriminator. We showed that CIC is the first competence-based approach to achieve leading performance on URLB. We hope that this encourages further research in developing RL agents capable of generalization.

10 Acknowledgements

We would like to thank Ademi Adeniji, Xinyang Geng, and Fangchen Liu for helpful discussions. We would also like to thank Phil Bachman for useful feedback. This work was partially supported by Berkeley DeepDrive, the NSF AI4OPT AI Institute for Advances in Optimization under NSF 2112533, and the Office of Naval Research grant N00014-21-1-2769.
1. What is the focus and contribution of the paper on exploration and skill discovery in unsupervised reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its combination of well-known techniques and lack of clear theoretical justification?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the questions raised by the reviewer regarding the use of CPC loss as a variational distribution, the definition and implementation of F_CIC, the intrinsic reward specified, and the connection to MI maximization?
5. How does the reviewer view the limitations of the work, including the limited novelty in the proposed method, potential negative societal impacts, and discrepancies between the theory and experiments?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper considers the problem of exploration and skill discovery in unsupervised reinforcement learning. To this end, the paper proposes CIC, an algorithm to explore novel states as well as distill the experience into reusable skills, which can be efficiently adapted to downstream tasks. This work builds upon the ideas of several previous works that maximize the mutual information between (some function of) the visited states and the latent skill vectors. These works differ in the different design choices for estimating the mutual information (generally it is a lower bound), as succinctly summarized in the paper in lines 74-76. CIC estimates the entropy term, H ( τ ) , using a particle estimate similar to some previous work. The contribution of the paper is the use of a contrastive density estimator, specifically the NCE loss proposed in CPC (Oord et al, 2018), for the variational term in the MI lower bound. The unsupervised pre-training phase involves optimizing the CPC loss for contrastive representation learning between state transitions and skills as well as maximizing the intrinsic reward. The fine-tuning phase involves training the agent (with a few steps) using only the external task reward. This approach is shown to outperform similar skill discovery methods on the URLB benchmark using DDPG as the backbone, and extensive experiments investigating some design and hyperparameter choices have been performed. Strengths And Weaknesses The problem of exploring novel states and generalizing to multiple downstream tasks are central challenges in RL, and skill discovery algorithms provide an appealing solution to both these problems. There are several works which aim to maximize (some lower bound on) the MI between states and skill vectors. In this regard, the paper has somewhat limited novelty. This work proposes maximizing MI explicitly using CPC loss for representation learning (in addition to the intrinsic reward) and the contrastive density estimator used to estimate the conditional entropy term is also well known. Nevertheless, a combination of well-known techniques can still be valuable, provided it is supported with substantial theoretical and empirical analysis. The paper performs many experiments to showcase and analyze the proposed method, however, the theoretical justification is lacking and unclear in a few places. The paper contextualizes their work well by providing a succinct and fairly complete overview of similar work (Table 1 is quite helpful). The authors provide intuition-based but clear motivation for their design choices in Section 2, which sets up the proposed method well. However, parts of Section 3 are not clear, particularly the use of CPC loss as the variational distribution, mismatch between the specification of F C I C and its computation in the pseudocode, and the connection to MI maximization. The intrinsic reward mentioned from Section 3 onwards is not defined until Section 6, which might be confusing for the reader. It is also not clear what intrinsic reward is used in the ‘default’ CIC algorithm (used in Fig 4, 5, 6) - I assume it's just the entropy term since it is shown to be the best performing variant in Section 6. If so, the discussion about the variational lower bound on MI and the contribution of using a contrastive density estimator are made redundant. The experimental results, notwithstanding the issues mentioned here, are extensive and the ablation studies for various design choices are insightful. 
Questions

The use of CPC loss as variational distribution: The noise contrastive density estimator used for q(τ|z) and the representation learning loss is exactly the same, which is the CPC loss from (Oord et al., 2018). The CPC loss is a lower bound on the MI, so it is unclear why this should be a good choice for the variational distribution. Moreover, replacing Eq. (3) in Eq. (2) contradicts the result of (Oord et al., 2018). Since CIC frames itself within the MI maximization family of skill discovery algorithms, it is crucial that the connection between the proposed method and MI maximization is presented as clearly as possible.

F_CIC definition vs. implementation: The notation in the definition of F_CIC is not clear, and there seem to be inconsistencies between this definition and the pseudocode provided for calculating this loss. The index i is not specified, but assuming it is the batch index, it would seem that F_CIC is calculated for some z_i with τ_i as the positive sample and the remaining τ_j in the batch as negative samples. The pseudocode, however, samples a batch of τ_i and a batch of z_i, similar to how most contrastive learning methods (SimCLR, CURL, etc.) implement this loss.

Intrinsic reward not specified clearly: The intrinsic reward is specified in the middle of the experimental results section as a written description, rather than with explicit mathematical definitions. Also, it is not clear which intrinsic reward is used when the authors compare CIC with the baselines or perform ablations (such as in Fig. 4, 5, 6). It is appreciable that the paper considers a variety of intrinsic rewards to understand which one works best; however, I believe that specifying them clearly would help improve the readability of the paper.

CIC used in experiments does not use intrinsic reward to maximize MI: The paper presents CIC as part of a family of methods that maximize the MI using an intrinsic reward (Table 2); however, their best performing model (and presumably most of their empirical results) considers only the entropy term as the intrinsic reward. This disconnect between the theory (which describes a novel method of estimating the variational lower bound on MI) and the experiments (which seem to not use this variational lower bound) significantly hurts the quality of the paper. If my understanding is correct, the intrinsic reward closest to the MI maximization objective would be (i) the discriminator (from lines 253-254), which is also the worst-performing variant of CIC. In addition, the contribution of using a contrastive density estimator in the variational lower bound on MI is made redundant.

Minor comments:
- Typo in Line 67, should direct to Table 2 in Appendix A.
- The CPC loss, referred to in line 165 and numerous other places, is not defined. I assume it refers to F_CIC, which is the same as the CPC loss.
- N is not defined for Eq. (3); assumed to be the batch size.
- The experiment plots, Fig. 4, 5, 6, are present on different pages, and going back and forth between Section 6 and the plots hurts the readability of the paper. I suggest moving the plots closer to the relevant text.

Limitations

The authors acknowledge some limitations of their work, particularly that the method is described for MDPs (not POMDPs), and that they do not consider visual inputs in their experiments. The authors have also given thought to the potential negative societal impacts of their work (really of unsupervised RL in general), which is appreciable.
The main limitations of the work as per my review are the somewhat limited novelty in the proposed method, some questions on the theoretical soundness, and the discrepancy between the method described in the theory section and the one used in the experiments. There are also some concerns regarding the notation and writing, which are relatively minor and easily fixable.
NIPS
Title Unsupervised Reinforcement Learning with Contrastive Intrinsic Control

Abstract We introduce Contrastive Intrinsic Control (CIC), an unsupervised reinforcement learning (RL) algorithm that maximizes the mutual information between state-transitions and latent skill vectors. CIC utilizes contrastive learning between state-transitions and skill vectors to learn behaviour embeddings and maximizes the entropy of these embeddings as an intrinsic reward to encourage behavioural diversity. We evaluate our algorithm on the Unsupervised RL Benchmark (URLB) in the asymptotic state-based setting, which consists of a long reward-free pre-training phase followed by a short adaptation phase to downstream tasks with extrinsic rewards. We find that CIC improves over prior exploration algorithms in terms of adaptation efficiency to downstream tasks on state-based URLB.

Deep RL is a powerful approach toward solving complex control tasks in the presence of extrinsic rewards. Successful applications include playing video games from pixels [1], mastering the game of Go [2, 3], robotic locomotion [4, 5, 6] and dexterous manipulation [7, 8, 9] policies. While effective, the above advances produced agents that are unable to generalize to new downstream tasks beyond the one they were trained to solve. Humans and animals on the other hand are able to acquire skills with minimal supervision and apply them to solve a variety of downstream tasks. In this work, we seek to train agents that acquire skills without supervision with generalization capabilities by efficiently adapting these skills to downstream tasks.

Over the last few years, unsupervised RL has emerged as a promising framework for developing RL agents that can generalize to new tasks. In the unsupervised RL setting, agents are first pre-trained with self-supervised intrinsic rewards and then finetuned to downstream tasks with extrinsic rewards. Unsupervised RL algorithms broadly fall into three categories - knowledge-based, data-based, and competence-based methods (footnote 2). Knowledge-based methods maximize the error or uncertainty of a predictive model [12, 13, 14]. Data-based methods maximize the entropy of the agent's visitation [15, 16]. Competence-based methods learn skills that generate diverse behaviors [17, 18]. This work falls into the latter category of competence-based exploration methods. Unlike knowledge-based and data-based algorithms, competence-based algorithms simultaneously address both the exploration challenge as well as distilling the generated experience in the form of reusable skills. This makes them particularly appealing, since the resulting skill-based policies (or skills themselves) can be finetuned to efficiently solve downstream tasks.

Footnote 1: Project website and code: https://sites.google.com/view/cicneurips2022/
Footnote 2: These categories for exploration algorithms were introduced by [10] and inspired by [11].

While there are many self-supervised objectives that can be utilized, our work falls into a family of methods that learns skills by maximizing the mutual information between visited states and latent skill vectors. Many earlier works have investigated optimizing such objectives [17, 18, 19, 20]. However, competence-based methods have been empirically challenging to train and have under-performed when compared to knowledge and data-based methods [21]. In this work, we take a closer look at the challenges of pre-training agents with competence-based algorithms. We introduce Contrastive Intrinsic Control (CIC) – an exploration algorithm that uses a new estimator for the mutual information objective. CIC combines particle estimation for state entropy [22, 15] and noise contrastive estimation [23] for the conditional entropy, which enables it to both generate diverse behaviors (explore) and discriminate high-dimensional continuous skills (exploit). To the best of our knowledge, CIC is the first exploration algorithm to utilize noise contrastive estimation to discriminate between state transitions and latent skill vectors. Empirically, we show that CIC adapts to downstream tasks more efficiently than prior exploration approaches on the state-based Unsupervised Reinforcement Learning Benchmark (URLB). CIC achieves 79% higher returns on downstream tasks than prior competence-based algorithms and 18% higher returns than the next-best exploration algorithm overall.

1 Background and Notation

Markov Decision Process: We operate under the assumption that our system is described by a Markov Decision Process (MDP) [24].
An MDP consists of the tuple (S, A, P, r, γ), which has states s ∈ S, actions a ∈ A, transition dynamics p(s′|s, a) ∼ P, a reward function r, and a discount factor γ. In an MDP, at each timestep t, an agent observes the current state s, selects an action from a policy a ∼ π(·|s), and then observes the reward r and next state s′ once it acts in the environment. Note that usually r refers to an extrinsic reward. However, in this work we will first be pre-training an agent with intrinsic rewards r_int and finetuning on extrinsic rewards r_ext. For convenience we also introduce the variable τ = (s, s′), which is a tuple denoting a transition between two consecutive states. Importantly, τ does not denote a state-action trajectory. In addition to the standard MDP notation, we will also be learning skills z ∈ Z, where Z is the skill set, which can be a discrete or continuous real-valued vector space, and our policy will be skill-conditioned, a ∼ π(·|s, z).

Unsupervised Skill Discovery through Mutual Information Maximization: Most competence-based approaches to exploration maximize the mutual information between states and skills. Our work and a large body of prior research [17, 20, 18, 25, 26, 27] aims to maximize a mutual information objective with the following general form:

I(τ; z) = H(z) − H(z|τ) = H(τ) − H(τ|z)    (1)

Competence-based algorithms use different choices for τ and can condition on additional information such as actions or starting states. For a full summary of competence-based algorithms and their objectives see Table 2.

Lower Bound Estimates of Mutual Information: The mutual information I(s; z) is intractable to compute directly. Since we wish to maximize I(s; z), we can approximate this objective by instead maximizing a lower bound estimate. Most known mutual information maximization algorithms use the variational lower bound introduced in [28]:

I(τ; z) = H(τ) − H(τ|z) ≥ H(τ) + E_{τ,z}[log q(τ|z)]    (2)

The variational lower bound can be applied to both decompositions of the mutual information. The design decisions of a competence-based algorithm therefore come down to (i) which decomposition of I(τ; z) to use, (ii) whether to use discrete or continuous skills, (iii) how to estimate H(z) or H(τ), and finally (iv) how to estimate H(z|τ) or H(τ|z).

2 Motivation

Results from the recent Unsupervised Reinforcement Learning Benchmark (URLB) [21] show that competence-based approaches underperform relative to knowledge-based and data-based baselines on DeepMind Control (DMC). We argue that the underlying issue with current competence-based algorithms when deployed on harder exploration environments like DMC has to do with the currently used estimators for I(τ; z) rather than the objective itself. To produce structured skills that lead to diverse behaviors, I(τ; z) estimators must (i) explicitly encourage diverse behaviors and (ii) have the capacity to discriminate between high-dimensional continuous skills. Current approaches do not satisfy both criteria.

Competence-based algorithms do not ensure diverse behaviors: Most of the best-known competence-based approaches [17, 18, 25, 26] optimize the first decomposition of the mutual information, H(z) − H(z|τ). The issue with this decomposition is that while it ensures diversity of skill vectors, it does not ensure diverse behavior from the policy, meaning max H(z) does not imply max H(τ). Of course, if H(z) − H(z|τ) is maximized and the skill dimension is sufficiently large, then H(τ) will also be maximized implicitly.
Yet in practice, to learn an accurate discriminator q(z|τ), the above methods assume skill spaces that are much smaller than the state space (see Table 2), and thus behavioral diversity may not be guaranteed. In contrast, the decomposition I(τ ; z) = H(τ)−H(τ |z) ensures diverse behaviors through the entropy term H(τ). Methods that utilize this decomposition include [27, 20]. Why it is important to utilize high-dimensional skills: Once a policy is capable of generating diverse behaviors, it is important that the discriminator can distill these behaviors into distinct skills. If the set of behaviors outnumbers the set of skills, this will result in degenerate skills – when one skill maps to multiple different behaviors. It is therefore important that the discriminator can accommodate continuous skills of sufficiently high dimension. Empirically, the discriminators used in prior work utilize only low-dimensional continuous skill vectors. DIAYN [17] utilized 16 dimensional skills, DADS [20] utilizes continuous skills of dimension 2− 5, while APS [27], an algorithm that utilizes successor features [30, 31] for the discriminator, is only capable of learning continuous skills with dimension 10. We show how small skill spaces can lead to ineffective exploration in a simple gridworld setting in Appendix G and evidence that skill dimension affects performance in Fig. 5. On the importance of benchmarks for evaluation: While prior competence-based approaches such as DIAYN [17] were evaluated on OpenAI Gym [32], Gym environment episodes terminate when the agent loses balance thereby leaking some aspects of extrinsic signal to the exploration agent. On the other hand, DMC episodes have fixed length. We show in Fig 3 that this small difference in environments results in large performance differences. Specifically, we find that DIAYN is able to learn diverse skills in Gym but not in DMC, which is consistent with both observations from DIAYN and URLB papers. Due to fixed episode lengths, DMC tasks are harder for reward-free exploration since agents must learn to balance without supervision. 3 Method 3.1 Contrastive Intrinsic Control From Section 2 we are motivated to find a lower bound for I(τ ; z) with a discriminator that is capable of supporting high-dimensional continuous skills3. Additionally, we wish to increase the diversity of behaviors so that the discriminator can continue learning new skills throughout training. We choose the forward decomposition of MI I(τ ; z) = H(τ)−H(τ |z) similar to [27] and estimate the lower bound with Eq. 2. The entropy is estimated H(τ) with a particle-based estimator similar to [15]. As such, the primary technical contribution of this work is a novel estimator for the discriminator. To improve the discriminator, we propose to utilize noise contrastive estimation (NCE) [23] between state-transitions and latent skills as a lower bound for I(τ ; z). It has been shown previously that such estimators provide a valid lower bound for mutual information [33]. However, to the best of our knowledge, this is the first work to investigate contrastive representation learning for intrinsic control. Representation Learning: Specifically, we propose to learn embeddings by parameterizing the discriminator with a contrastive density estimator. This is a novel choice that differs from prior works which utilize a classifier [17] or non-contrastive density estimation [20]. log q(τ |z) := f(τ, z)− log 1 N N∑ j=1 exp(f(τj , z)). (3) where f(τ, z) is any real valued function. 
3In high-dimensional state-action spaces the number of distinct behaviors can be quite large. For our practical algorithm, we parameterize this function as f(τ, z) = gψ1(τ) ⊤gψ2(z)/∥gψ1(τ)∥∥gψ2(z)∥T where τ = (s, s′) is a transition tuple, gψk are neural encoders, and T is a temperature parameter. This inner product is similar to the one used in SimCLR [34]. The representation learning loss backpropagates gradients from the NCE loss which maximizes similarity between state-transitions and corresponding skills. FCIC(τ) = gψ1(τi) ⊤gψ2(zi) ∥gψ1(τi)∥∥gψ2(zi)∥T − log 1 N N∑ j=1 exp ( gψ1(τj) ⊤gψ2(zi) ∥gψ1(τj)∥∥gψ2(zi)∥T ) (4) whereN−1 is the number of negatives. The total number of elements in the summation isN because it includes one positive, so the index j includes the positive index i similar to the objective in [33]. We provide pseudocode for the CIC representation learning loss: 1 """ 2 PyTorch -like pseudocode for the CIC loss 3 """ 4 5 def cic_loss(s, s_next , z, temp): 6 """ 7 states: s, s_next (B, D), skills: z (B, D) 8 """ 9 10 tau = concat(s, s_next , dim=1) 11 12 query = query_net(z) 13 key = key_net(tau) 14 15 query = normalize(query , dim =1) 16 key = normalize(key , dim =1) 17 18 logits = matmul(query , key.T) / temp #logits are (B, B) 19 labels = arange(logits.shape [0]) # positives are on diagonal 20 21 # softmax_cross_entropy API same as in PyTorch docs 22 loss = softmax_cross_entropy(logits , labels) 23 24 return loss Listing 1: Pseudocode for the CIC loss. Intrinsic reward: Although we have a representation learning objective, we still need to specify the intrinsic reward for the algorithm for which there can be multiple choices. Prior works consider specifying an intrinsic reward that is proportional to state-transition entropy [15], the discriminator [17], a similarity score between states and skills [36], and the uncertainty of the discriminator [37]. We investigate each of these choices in Section 6 and find that an intrinsic reward that maximizes state-transition entropy coupled with representation learning via the CPC loss defined in Sec. 3.1 is the simplest variant that also performs well (see Table 1), which we use for all other experiments. For the intrinsic reward, we use a particle estimate [22, 38] as in [15] of the state-transition entropy. Similar to [15, 16] we estimate the entropy up to a proportionality constant, because we want the agent to maximize entropy rather than estimate its exact value. The APT particle entropy estimate is proportional to the distance between the current visited state transition and previously seen neighboring points. Hparticle(τ) ∝ 1 Nk Nk∑ h⋆i ∈Nk log ∥hi − h⋆i ∥ (5) where hi is an embedding of τi shown in Fig. 2, h∗i is a kNN embedding, Nk is the number of kNNs. Explore and Exploit: With these design choices the two components of the CIC algorithm can be interpreted as exploration with intrinsic rewards and exploitation using representation learning to distill behaviors into skills. The marginal entropy maximizes the diversity of state-transition embeddings while the contrastive discriminator log q(τ |z) encourages exploitation by ensuring that skills z lead to predictable states τ . Together the two terms incentivize the discovery of diverse yet predictable behaviors from the RL agent. While CIC shares a similar intrinsic reward structure to APT [15], we show that the new representation learning loss from the CIC estimator results in substantial performance gains in Sec 6. 
4 Practical Implementation Our practical implementation consists of two main components: the RL optimization algorithm and the CIC architecture. For fairness and clarity of comparison, we use the same RL optimization algorithm for our method and all baselines in this work. Since the baselines implemented in URLB [21] use a DDPG4 [29] as their backbone, we opt for the same DDPG architecture to optimize our method as well (see Appendix B). CIC Architecture: We use a particle estimator as in [15] to estimate H(τ). To compute the variational density q(τ |z), we first sample skills from uniform noise z ∼ p(z) where p(z) is the uniform distribution over the [0, 1] interval. We then use two MLP encoders to embed gψ1(τ) and gψ2(z), 4It was recently was shown that a DDPG achieves state-of-the-art performance [39] on DeepMind Control [40] and is more stable than SAC [41] on this benchmark. and optimize the parameters ψ1, ψ2 with the CPC loss similar to SimCLR [34] since f(τ, z) = gψ1(τ) T gψ2(z). We fix the hyperparameters across all domains and downstream tasks. We refer the reader to the Appendices D and E for the full algorithm and a full list of hyperparameters. Adapting to downstream tasks: To adapt to downstream tasks we follow the same procedure for competence-based method adaptation as in URLB [21]. During the first 4k environment interactions we populate the DDPG replay buffer with samples and use the extrinsic rewards collected during this period to finetune the skill vector z. While it’s common to finetune skills with Cross Entropy Adaptation (CMA), given our limited budget of 4k samples (only 4 episodes) we find that a simple grid sweep of skills over the interval [0, 1] produces the best results (see Fig. 5). After this, we fix the skill z and finetune the DDPG actor-critic parameters against the extrinsic reward for the remaining 96k steps. Note that competence-based methods in URLB also finetune their skills during the first 4k finetuning steps ensuring a fair comparison between the methods. The full adaptation procedure is detailed in Appendix D. 5 Experimental Setup Environments We evaluate our approach on tasks from URLB, which consists of twelve downstream tasks across three challenging continuous control domains for exploration algorithms – walker, quadruped, and Jaco arm. Walker requires a biped constrained to a 2D vertical plane to perform locomotion tasks while balancing. Quadruped is more challenging due to a higher-dimensional state-action space and requires a quadruped to in a 3D environment to learn locomotion skills. Jaco arm is a 6-DOF robotic arm with a three-finger gripper to move and manipulate objects without locking. All three environments are challenging in the absence of an extrinsic reward. Baselines: We implemented CIC using the URLB [21] codebase 5 and compare CIC to baselines included in URLB across all three exploration categories. Knowledge-based basedlines include ICM [12], Disagreement [13], and RND [14]. Data-based baselines incude APT [15] and ProtoRL [16]. Competence-based baselines include DIAYN [17], SMM [26], and APS [27]. The closest baselines to CIC are APT, which is similar to CIC but without state-skill CPC representation learning (no discriminator), and APS which uses the same decomposition of mutual information as CIC and also uses a particle entropy estimate for H(τ). The main difference between APS and CIC is that APS uses successor features while CIC uses a contrastive estimator for the discriminator. 
For further details regarding baselines we refer the reader to Appendix C. Evaluation: We follow an identical evaluation to the 2M pre-training setup in URLB. First, we pre-train each RL agent with the intrinsic rewards for 2M steps. Then, we finetune each agent to the downstream task with extrinsic rewards for 100k steps. All baselines were run for 10 seeds per downstream task for each algorithm using the code and hyperparameters provided by URLB [21]. Built on top of URLB, CIC is also run for 10 seeds per task. A total of 1080 = 9 algorithms × 12 tasks × 10 seeds experiments were run for the main results. Importantly, all baselines and CIC use a DDPG agent as their backbone. To ensure that our evaluation statistics are unbiased we use stratified bootstrap confidence intervals to report aggregate statistics across M runs with N seeds as described in Rliable [35] to report statistics for our main results in Fig. 4. Our primary success metric is the interquartile mean (IQM) and the Optimality Gap (OG). IQM discards the top and bottom 25% of runs and then computes the mean. It is less susceptible to outliers than the mean and was shown to be the most reliable statistic for reporting results for RL experiments in [35]. OG measures how far a policy is from optimal (expert) performance. To define expert performance we use the convention in URLB, which is the score achieved by a randomly initialized DDPG after 2M steps of finetuning (20x more steps than our finetuning budget). 6 Results We investigate empirical answers to the following research questions: (Q1) How does CIC adaptation efficiency compare to prior competence-based algorithms and exploration algorithms more broadly? 5URLB is open-sourced under an MIT license https://github.com/rll-research/url_benchmark/ blob/main/LICENSE. (Q2) Which intrinsic reward instantiation of CIC performs best? (Q3) How do the two terms in the CIC objective affect algorithm performance? (Q4) How does skill selection affect the quality of the pre-trained policy? (Q5) Which architecture details matter most? Adaptation efficiency of CIC and exploration baslines: Expert normalized scores of CIC and exploration algorithms from URLB are shown in Fig. 4. We find that CIC substantially outperforms prior competence-based algorithms (DIAYN, SMM, APS) achieving a 79% higher IQM than the next best competence-based method (APS) and, more broadly, achieving a 18% higher IQM than the next best overall baseline (ProtoRL). In further ablations, we find that the contributing factors to CIC’s performance are its ability to accommodate substantially larger continuous skill spaces than prior competence-based methods. Intrinsic reward specification: The intrinsic reward for competence-based algorithms can be instantiated in many different ways. Here, we analyze intrinsic reward for CIC with the form rint = H(τ) + D(τ, z), where D is some function of (τ, z). Prior works, select D to be (i) the discriminator [27], (ii) a cosine similarity between embeddings [36], (iii) uncertainty of the discriminator [37], and (iv) just the entropy D(τ, z) = 0 [15]. We run CIC with each of these variants on the walker and quadruped tasks and measure the final mean performance across the downstream tasks (see Tab. 1). The results show that the entropy-only intrinsic reward performs best followed by an uncertainty-based intrinsic reward. 
We hypothesize that the reason why a simple entropy-only intrinsic reward works well is that state-skill CPC representation learning clusters similar behaviors together. Since similar behaviors are clustered, maximizing the entropy of state-transition embeddings produces increasingly diverse behaviors. The importance of representation learning: To what extent does representation learning with the state-skill CIC loss affect the agent’s exploration capability? To answer this question we train the CIC agent with the entropy intrinsic reward with and without the representation learning auxiliary loss for 2M steps. The zero-shot reward plotted in Fig. 6 indicates that without representation learning the policy collapses. With representation learning, the agent is able to discover diverse skills evidenced by the non-zero reward. This result suggests that state-skill CPC representation learning is a critical part of CIC. Qualitative analysis of CIC behaviors: Qualitatively, we find that CIC is able to learn locomotion behaviors in DMC without extrinsic information such as early termination as in OpenAI Gym. While most skills are higher entropy and thus more chaotic, we show in Fig 1 that structured behaviors can be isolated by fixing a particular skill vector. For example, in the walker and quadruped domains - balancing, walking, and flipping skills can be isolated. For more qualitative investigations we refer the reader to Appendix H. Skill architecture and adaptation ablations: We find that projecting the skill to a latent space before inputting it as the key for the contrastive loss is an important design decision (see Fig. 5a), most likely because this reduces the diversity of the skill vector making the discriminator task simpler. We also find empirically that the skill dimension is an important hyperparameter and that larger skills results in better zero-shot performance (see Fig. 5b), which empirically supports the hypothesis posed in Section 2 and Appendix G that larger skill spaces are important for internalizing diverse behaviors. Interestingly, CIC zero-shot performance is poor in lower skill dimensions (e.g. dim(z) < 10), suggesting that when dim(z) is small CIC performs no better than prior competence-based methods such as DIAYN, and that scaling to larger skills enables CIC to pre-train effectively. To measure the effect of skill finetuning described in Section 4, we sweep mean skill values along the interval of the uniform prior [0, 1] with a budget of 4k total environment interactions and read out the performance on the downstream task. By sweeping, we mean simply iterating over the interval [0, 1] with fixed step size (e.g. v = 0, 0.1, . . . , 0.9, 1) and setting zi = v for all i. This is not an optimal skill sampling strategy but works well due to the extremely limited number of samples for skill selection. We evaluate this ablation on the Quadruped Stand and Run downstream tasks. The results shown in Fig. 5 indicate that skill selection can substantially affect zero-shot downstream task performance. 7 Related Work The most closely related prior algorithms to CIC are APT [15] and APS [27]. Both CIC and APS use the H(τ) − H(τ |z) decomposition of the mutual information and both used a particle estimator [22] to compute the state entropy as in [15]. The main difference between CIC and APS is the discriminator. APS uses successor features as in [31] for its discriminator while CIC uses a noise contrastive estimator. 
Unlike successor features, which empirically only accommodate low-dimensional continuous skill spaces (see Table 2), the noise contrastive discriminator is able to leverage higher continuous dimensional skill vectors. Like APT, CIC has an intrinsic reward that maximizes H(τ). However, CIC also does contrastive skill learning to shape the embedding space and outputs a skill-conditioned policy. The CIC discriminator is similar to the one used in DISCERN [36], a goal-conditioned unsupervised RL algorithm. Both methods use a contrastive discriminator by sampling negatives and computing an inner product between queries and keys. The main differences are (i) that DISCERN maximizes I(τ ; g) where g are image goal embeddings while CIC maximizes I(τ ; z) where z are abstract skill vectors; (ii) DISCERN uses the DIAYN-style decomposition I(τ ; g) = H(g)−H(g|τ) while CIC decomposes through H(τ)−H(τ |z), and (iii) DISCERN discards the H(g) term by sampling goals uniformly while CIC explicitly maximizes H(τ). While DISCERN and CIC share similarities, DISCERN operates over image goals while CIC operates over abstract skill vectors so the two methods are not directly comparable. Finally, another similar algorithm to CIC is DADS [20] which also decomposes through H(τ) − H(τ |z). While CIC uses a contrastive density estimate for the discriminator, DADS uses a maximum likelihood estimator similar to DIAYN. DADS maximizes I(s′|s, z) and estimates entropy H(s′|s) by marginalizing over z such that H(s′|s) = − log ∑ i q(s ′|s, zi) while CIC uses a particle estimator. Table 2: Competence-based Unsupervised Skill Discovery Algorithms Algorithm Intrinsic Reward Decomposition Explicit maxH(τ) Skill Dim. Skill Space SSN4HRL [42] log qψ(z|st) H(z)−H(z|τ) No 6 discrete one-hot VIC [18] log qψ(z|sH)) H(z)−H(z|τ) No 60 discrete one-hot VALOR [25] log qψ(z|s1:H) H(z)−H(z|τ) No 64 discrete one-hot DIAYN [17] log qψ(z|st) H(z)−H(z|τ) No 128 discrete one-hot DADS [20] qψ(s′|z, s)− ∑ i log q(s ′|zi, s) H(τ)−H(τ |z) Yes 5 continuous VISR [31] log qψ(z|st) H(z)−H(z|τ) No 10 continuous APS [27] FSuccessor(s|z) +Hparticle(s) H(τ)−H(τ |z) Yes 10 continuous CIC FCIC(s, s′|z) +Hparticle(s, s′) H(τ)−H(τ |z) Yes 64 continuous Table 3: A list of competence-based algorithms. We describe the intrinsic reward optimized by each method and the decomposition of the mutual information utilized by the method. We also note whether the method explicitly maximizes state transition entropy. Finally, we note the maximal dimension used in each work and whether the skills are discrete or continuous. All methods prior to CIC only support small skill spaces, either because they are discrete or continuous but low-dimensional. 8 Limitations and Impact While CIC achieves leading results on state-based URLB, we would also like to address its limitations. First, in this paper we only consider MDPs (and not partially observed MDPs) where the full state is observable. We focus on MDPs because generating diverse behaviors in environments with large state spaces has been the primary bottleneck for competence-based exploration. Combining CIC with visual representation learning to scale this method to pixel-based inputs is a promising future direction for research not considered in this work. One issue with unsupervised RL algorithms (and hence CIC) in terms of potentially negative societal impact is that self-supervised exploration can be dangerous. Since self-supervised agents maximize intrinsic rewards, this can lead to destructive behavior. 
For example, when deploying CIC on a Walker or Quadruped robot it learns chaotic exploration behaviors 6 that would most likely break the robot in real-world settings. Alignment of exploration agents to prevent them from learning dangerous policies is a promising direction for future work. 9 Conclusion We have introduced a new competence-based algorithm – Contrastive Intrinsic Control (CIC) – which enables more effective exploration than prior unsupervised skill discovery algorithms by explicitly encouraging diverse behavior while distilling predictable behaviors into skills with a contrastive discriminator. We showed that CIC is the first competence-based approach to achieve leading performance on URLB. We hope that this encourages further research in developing RL agents capable of generalization. 10 Acknowledgements We would like to thank Ademi Adeniji, Xinyang Geng, Fangchen Liu for helpful discussions. We would also like to thank Phil Bachman for useful feedback. This work was partially supported by Berkeley DeepDrive, NSF AI4OPT AI Institute for Advances in Optimization under NSF 2112533, and the Office of Naval Research grant N00014-21-1-2769.
1. What is the main contribution of the paper regarding unsupervised reinforcement learning?
2. How does the proposed method encourage behavioral diversity in state transitions and skill vectors?
3. Can you provide more clarity on the mathematical notation used in the paper, particularly in lines 53, 55, 57, 60, 86-95, and footnote 4?
4. How do the experimental results support the effectiveness of the proposed method in adapting to downstream tasks?
5. Can you explain the significance of the work in addressing a problem for which most methods were not originally designed?
6. What are some potential negative societal impacts of the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper presents a contrastive-learning based method that encourages behavioural diversity for unsupervised RL (where unsupervised means without reward). Their method operates by maximizing the mutual information between state-transitions and skill vectors. The authors provide experimental evidence with the Unsupervised Reinforcement Learning Benchmark (URLB) for the efficiency of their proposed method by examining whether pre-training agents with their method can better adapt to downstream tasks.

Strengths And Weaknesses

Originality: This work is mostly an extension of the work of Liu & Abbeel 2021 ([15] and [27] in the paper), where the main contribution (as acknowledged by the authors) is in a novel discriminator (where the discrimination is between latent skills). Their proposed improvement is also based on existing work (e.g. the noise-contrastive estimator of Gutmann & Hyvärinen 2010 ([23] in the paper), the parameterization of the function for density estimation of Chen et al. 2020 ([34] in the paper)). However, in my opinion this is an original and novel combination of existing methods towards addressing a problem for which (most of) those methods were not originally designed.

Quality: Overall I feel the quality of the paper is reasonably good. There are a few issues with clarity that I mention below. I appreciated the ablations and sensitivity analyses done in Figure 5, as they really helped motivate the design choices made in the paper. Finally, there are a few questions related to experiment quality in the Questions section below.

Clarity: This is perhaps the weakest point of the paper (although Figure 2 is really nice!). The writing oscillates between proper mathematical notation and a github README file (e.g. "r, s′ ∼ env.step(a)" in line 57). Further, there are some improperly specified mathematical objects in the writing, which I list below:
- In line 53, you say "r" is "sampled" from env.step(a), but a few lines above you stated that r was the reward function.
- In line 55 it says "τ(s) ... refers to any function of the states s". This is ill-defined and should be more specific. τ is a function of what to what? In line 57 you're using τ as a tuple (e.g. τ = (s, s′)), so it's not at all clear what it's meant to represent.
- In line 60, Z has not been introduced.
- The paragraph in lines 86-95 is not at all clear because it's not clear what τ is. In footnote 4 on page 4 it says "Note that τ is not a trajectory but some function of states", which, again, does not make clear what it's meant to represent.
- It's not clear what is meant in lines 173-174. In particular, what are "negatives" and "positives"? Is N the same thing as N_k in the equation?
- One method of finetuning is presented in lines 195-204, but another one is presented in lines 223-224. It's not clear which one was actually used for the experiments, and if both were used, when each was used.
- The histogram in Figure 5(d) (and accompanying discussion) is not clear. You're evaluating over a pre-selected grid of latents, and the histogram shows the performance of each of these grids? It's not clear how this is measuring the effect of skill fine-tuning.

Significance: This work is addressing an important problem for a large portion of RL research (lifelong learning, zero-shot transfer, generalizability, etc.), and as such can have an important impact on the community. The empirical evaluations presented in Figure 5 are quite useful, as they help shed light on the most necessary components of the proposed algorithm.
Questions In equation (2), what is the expectation over? Is the second-to-last line of the pseudocode on page 5 correct? It's not clear that the cross_entropy call gives you F_CIC. In particular, it seems like it's missing the first term? In line 277 it says "most likely because this reduces the diversity of the skill vector" to justify the projection, but it's not clear why this projection is necessarily reducing the diversity of the skill vector? In the "skill architecture and adaptation ablations", did you try running DIAYN with larger skills? Limitations Both limitations and potential negative societal impacts were discussed adequately (in my opinion) in section 7.
NIPS
Title Unsupervised Reinforcement Learning with Contrastive Intrinsic Control Abstract We introduce Contrastive Intrinsic Control (CIC), an unsupervised reinforcement learning (RL) algorithm that maximizes the mutual information between statetransitions and latent skill vectors. CIC utilizes contrastive learning between state-transitions and skills vectors to learn behaviour embeddings and maximizes the entropy of these embeddings as an intrinsic reward to encourage behavioural diversity. We evaluate our algorithm on the Unsupervised RL Benchmark (URLB) in the asymptotic state-based setting, which consists of a long reward-free pretraining phase followed by a short adaptation phase to downstream tasks with extrinsic rewards. We find that CIC improves over prior exploration algorithms in terms of adaptation efficiency to downstream tasks on state-based URLB. Deep RL is a powerful approach toward solving complex control tasks in the presence of extrinsic rewards. Successful applications include playing video games from pixels [1], mastering the game of Go [2, 3], robotic locomotion [4, 5, 6] and dexterous manipulation [7, 8, 9] policies. While effective, the above advances produced agents that are unable to generalize to new downstream tasks beyond the one they were trained to solve. Humans and animals on the other hand are able to acquire skills with minimal supervision and apply them to solve a variety of downstream tasks. In this work, we seek to train agents that acquire skills without supervision with generalization capabilities by efficiently adapting these skills to downstream tasks. Over the last few years, unsupervised RL has emerged as a promising framework for developing RL agents that can generalize to new tasks. In the unsupervised RL setting, agents are first pre-trained with self-supervised intrinsic rewards and then finetuned to downstream tasks with extrinsic rewards. Unsupervised RL algorithms broadly fall into three categories - knowledge-based, data-based, and competence-based methods2. Knowledge-based methods maximize the error or uncertainty of a predictive model [12, 13, 14]. Data-based methods maximize the entropy of the agent’s visitation [15, 16]. Competence-based methods learn skills that generate diverse behaviors [17, 18]. This work falls into the latter category of competence-based exploration methods. Unlike knowledge-based and data-based algorithms, competence-based algorithms simultaneously address both the exploration challenge as well as distilling the generated experience in the form of reusable skills. This makes them particularly appealing, since the resulting skill-based policies (or skills themselves) can be finetuned to efficiently solve downstream tasks. While there are many self-supervised objectives that can be utilized, our work falls into a family of methods that learns skills by maximizing the mutual information between visited states and latent skill vectors. 1 Project website and code: https://sites.google.com/view/cicneurips2022/ 2 These categories for exploration algorithms were introduced by [10] and inspired by [11]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
Many earlier works have investigated optimizing such objectives [17, 18, 19, 20]. However, competence-based methods have been empirically challenging to train and have under-performed when compared to knowledge and data-based methods [21]. In this work, we take a closer look at the challenges of pre-training agents with competence-based algorithms. We introduce Contrastive Intrinsic Control (CIC) – an exploration algorithm that uses a new estimator for the mutual information objective. CIC combines particle estimation for state entropy [22, 15] and noise contrastive estimation [23] for the conditional entropy which enables it to both generate diverse behaviors (explore) and discriminate high-dimensional continuous skills (exploit). To the best of our knowledge, CIC is the first exploration algorithm to utilize noise contrastive estimation to discriminate between state transitions and latent skill vectors. Empirically, we show that CIC adapts to downstream tasks more efficiently than prior exploration approaches on the state-based Unsupervised Reinforcement Learning Benchmark (URLB). CIC achieves 79% higher returns on downstream tasks than prior competence-based algorithms and 18% higher returns than the next-best exploration algorithm overall. 1 Background and Notation Markov Decision Process: We operate under the assumption that our system is described by a Markov Decision Process (MDP) [24]. 
An MDP consists of the tuple (S,A,P, r, γ) which has states s ∈ S , actions a ∈ A, transition dynamics p(s′|s, a) ∼ P , a reward function r, and a discount factor γ. In an MDP, at each timestep t, an agent observes the current state s, selects an action from a policy a ∼ π(·|s), and then observes the reward r and next state s′ once it acts in the environment. Note that usually r refers to an extrinsic reward. However, in this work we will first be pre-training an agent with intrinsic rewards rint and finetuning on extrinsic rewards rext. For convenience we also introduce the variable τ = (s, s′) which is a tuple denoting a transition between two consecutive states. Importantly, τ does not denote a state-action trajectory. In addition to the standard MDP notation, we will also be learning skills z ∈ Z where Z is the skill set, which can be a discrete or continuous real-valued vector space, and our policy will be skill-conditioned a ∼ π(·|s, z). Unsupervised Skill Discovery through Mutual Information Maximization: Most competencebased approaches to exploration maximize the mutual information between states and skills. Our work and a large body of prior research [17, 20, 18, 25, 26, 27] aims to maximize a mutual information objective with the following general form: I(τ ; z) = H(z)−H(z|τ) = H(τ)−H(τ |z) (1) Competence-based algorithms use different choices for τ and can condition on additional information such as actions or starting states. For a full summary of competence-based algorithms and their objectives see Table 2. Lower Bound Estimates of Mutual Information: The mutual information I(s; z) is intractable to compute directly. Since we wish to maximize I(s; z), we can approximate this objective by instead maximizing a lower bound estimate. Most known mutual information maximization algorithms use the variational lower bound introduced in [28]: I(τ ; z) = H(τ)−H(τ |z) ≥ H(τ) + Eτ,z[log q(τ |z)] (2) The variational lower bound can be applied to both decompositions of the mutual information. The design decisions of a competence-based algorithm therefore come down to (i) which decomposition of I(τ ; z) to use, (ii) whether to use discrete or continuous skills, (iii) how to estimate H(z) or H(τ), and finally (iv) how to estimate H(z|τ) or H(τ |z). 2 Motivation Results from the recent Unsupervised Reinforcement Learning Benchmark (URLB) [21] show that competence-based approaches underperform relative to knowledge-based and data-based baselines on DeepMind Control (DMC). We argue that the underlying issue with current competence-based algorithms when deployed on harder exploration environments like DMC has to do with the currently used estimators for I(τ ; z) rather than the objective itself. To produce structured skills that lead to diverse behaviors, I(τ ; z) estimators must (i) explicitly encourage diverse behaviors and (ii) have the capacity to discriminate between high-dimensional continuous skills. Current approaches do not satisfy both criteria. Competence-base algorithms do not ensure diverse behaviors: Most of the best known competencebased approaches [17, 18, 25, 26], optimize the first decomposition of the mutual information H(z)−H(z|τ). The issue with this decomposition is that while it ensures diversity of skill vectors it does not ensure diverse behavior from the policy, meaning maxH(z) does not imply maxH(τ). Of course, if H(z)−H(z|τ) is maximized and the skill dimension is sufficiently large, then H(τ) will also be maximized implicitly. 
Yet in practice, to learn an accurate discriminator q(z|τ), the above methods assume skill spaces that are much smaller than the state space (see Table 2), and thus behavioral diversity may not be guaranteed. In contrast, the decomposition I(τ ; z) = H(τ)−H(τ |z) ensures diverse behaviors through the entropy term H(τ). Methods that utilize this decomposition include [27, 20]. Why it is important to utilize high-dimensional skills: Once a policy is capable of generating diverse behaviors, it is important that the discriminator can distill these behaviors into distinct skills. If the set of behaviors outnumbers the set of skills, this will result in degenerate skills – when one skill maps to multiple different behaviors. It is therefore important that the discriminator can accommodate continuous skills of sufficiently high dimension. Empirically, the discriminators used in prior work utilize only low-dimensional continuous skill vectors. DIAYN [17] utilized 16 dimensional skills, DADS [20] utilizes continuous skills of dimension 2− 5, while APS [27], an algorithm that utilizes successor features [30, 31] for the discriminator, is only capable of learning continuous skills with dimension 10. We show how small skill spaces can lead to ineffective exploration in a simple gridworld setting in Appendix G and evidence that skill dimension affects performance in Fig. 5. On the importance of benchmarks for evaluation: While prior competence-based approaches such as DIAYN [17] were evaluated on OpenAI Gym [32], Gym environment episodes terminate when the agent loses balance thereby leaking some aspects of extrinsic signal to the exploration agent. On the other hand, DMC episodes have fixed length. We show in Fig 3 that this small difference in environments results in large performance differences. Specifically, we find that DIAYN is able to learn diverse skills in Gym but not in DMC, which is consistent with both observations from DIAYN and URLB papers. Due to fixed episode lengths, DMC tasks are harder for reward-free exploration since agents must learn to balance without supervision. 3 Method 3.1 Contrastive Intrinsic Control From Section 2 we are motivated to find a lower bound for I(τ ; z) with a discriminator that is capable of supporting high-dimensional continuous skills3. Additionally, we wish to increase the diversity of behaviors so that the discriminator can continue learning new skills throughout training. We choose the forward decomposition of MI I(τ ; z) = H(τ)−H(τ |z) similar to [27] and estimate the lower bound with Eq. 2. The entropy is estimated H(τ) with a particle-based estimator similar to [15]. As such, the primary technical contribution of this work is a novel estimator for the discriminator. To improve the discriminator, we propose to utilize noise contrastive estimation (NCE) [23] between state-transitions and latent skills as a lower bound for I(τ ; z). It has been shown previously that such estimators provide a valid lower bound for mutual information [33]. However, to the best of our knowledge, this is the first work to investigate contrastive representation learning for intrinsic control. Representation Learning: Specifically, we propose to learn embeddings by parameterizing the discriminator with a contrastive density estimator. This is a novel choice that differs from prior works which utilize a classifier [17] or non-contrastive density estimation [20]. log q(τ |z) := f(τ, z)− log 1 N N∑ j=1 exp(f(τj , z)). (3) where f(τ, z) is any real valued function. 
3 In high-dimensional state-action spaces the number of distinct behaviors can be quite large. For our practical algorithm, we parameterize this function as f(τ, z) = gψ1(τ)⊤gψ2(z) / (∥gψ1(τ)∥ ∥gψ2(z)∥ T) where τ = (s, s′) is a transition tuple, gψk are neural encoders, and T is a temperature parameter. This inner product is similar to the one used in SimCLR [34]. The representation learning loss backpropagates gradients from the NCE loss which maximizes similarity between state-transitions and corresponding skills.

F_{CIC}(\tau_i) = \frac{g_{\psi_1}(\tau_i)^\top g_{\psi_2}(z_i)}{\|g_{\psi_1}(\tau_i)\|\,\|g_{\psi_2}(z_i)\|\,T} - \log \frac{1}{N}\sum_{j=1}^{N} \exp\left(\frac{g_{\psi_1}(\tau_j)^\top g_{\psi_2}(z_i)}{\|g_{\psi_1}(\tau_j)\|\,\|g_{\psi_2}(z_i)\|\,T}\right)    (4)

where N − 1 is the number of negatives. The total number of elements in the summation is N because it includes one positive, so the index j includes the positive index i, similar to the objective in [33]. We provide pseudocode for the CIC representation learning loss:

"""
PyTorch-like pseudocode for the CIC loss
"""

def cic_loss(s, s_next, z, temp):
    """
    states: s, s_next (B, D), skills: z (B, D)
    """
    tau = concat(s, s_next, dim=1)

    query = query_net(z)
    key = key_net(tau)

    query = normalize(query, dim=1)
    key = normalize(key, dim=1)

    logits = matmul(query, key.T) / temp  # logits are (B, B)
    labels = arange(logits.shape[0])      # positives are on the diagonal

    # softmax_cross_entropy API same as in PyTorch docs
    loss = softmax_cross_entropy(logits, labels)

    return loss

Listing 1: Pseudocode for the CIC loss.

Intrinsic reward: Although we have a representation learning objective, we still need to specify the intrinsic reward for the algorithm, for which there can be multiple choices. Prior works consider specifying an intrinsic reward that is proportional to state-transition entropy [15], the discriminator [17], a similarity score between states and skills [36], and the uncertainty of the discriminator [37]. We investigate each of these choices in Section 6 and find that an intrinsic reward that maximizes state-transition entropy coupled with representation learning via the CPC loss defined in Sec. 3.1 is the simplest variant that also performs well (see Table 1), which we use for all other experiments. For the intrinsic reward, we use a particle estimate [22, 38] as in [15] of the state-transition entropy. Similar to [15, 16] we estimate the entropy up to a proportionality constant, because we want the agent to maximize entropy rather than estimate its exact value. The APT particle entropy estimate is proportional to the distance between the current visited state transition and previously seen neighboring points.

H_{particle}(\tau) \propto \frac{1}{N_k}\sum_{h_i^{\star} \in N_k} \log \|h_i - h_i^{\star}\|    (5)

where h_i is an embedding of τ_i shown in Fig. 2, h*_i is a kNN embedding, and N_k is the number of kNNs. Explore and Exploit: With these design choices the two components of the CIC algorithm can be interpreted as exploration with intrinsic rewards and exploitation using representation learning to distill behaviors into skills. The marginal entropy maximizes the diversity of state-transition embeddings while the contrastive discriminator log q(τ|z) encourages exploitation by ensuring that skills z lead to predictable states τ. Together the two terms incentivize the discovery of diverse yet predictable behaviors from the RL agent. While CIC shares a similar intrinsic reward structure to APT [15], we show that the new representation learning loss from the CIC estimator results in substantial performance gains in Sec 6. 
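To make the intrinsic reward concrete, the following is a minimal NumPy sketch of a k-nearest-neighbour particle-entropy reward in the spirit of Eq. (5). It is an illustration under stated assumptions (brute-force kNN over a memory of embeddings, averaging log-distances over the k neighbours) and not the authors' implementation; the function and argument names are invented for this example.

import numpy as np

def particle_entropy_reward(h_batch, h_memory, k=12, eps=1e-6):
    """kNN particle-entropy intrinsic reward, up to a proportionality constant.

    h_batch:  (B, D) embeddings h_i of the current transitions tau = (s, s').
    h_memory: (M, D) embeddings of previously seen transitions.
    Returns a (B,) array with r_int[i] proportional to the average log-distance
    from h_i to its k nearest neighbours, as in Eq. (5).
    """
    # Pairwise Euclidean distances between current embeddings and the memory.
    dists = np.linalg.norm(h_batch[:, None, :] - h_memory[None, :, :], axis=-1)  # (B, M)
    # Keep the k smallest distances per row (the k nearest neighbours).
    knn_dists = np.sort(dists, axis=1)[:, :k]                                     # (B, k)
    # Average of log-distances; eps guards against log(0) for duplicate embeddings.
    return np.log(knn_dists + eps).mean(axis=1)

During pre-training this quantity would stand in for the extrinsic reward stored in the replay buffer, while the contrastive loss of Listing 1 shapes the embeddings that the kNN search operates on.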
4 Practical Implementation Our practical implementation consists of two main components: the RL optimization algorithm and the CIC architecture. For fairness and clarity of comparison, we use the same RL optimization algorithm for our method and all baselines in this work. Since the baselines implemented in URLB [21] use a DDPG4 [29] as their backbone, we opt for the same DDPG architecture to optimize our method as well (see Appendix B). CIC Architecture: We use a particle estimator as in [15] to estimate H(τ). To compute the variational density q(τ|z), we first sample skills from uniform noise z ∼ p(z) where p(z) is the uniform distribution over the [0, 1] interval. We then use two MLP encoders to embed gψ1(τ) and gψ2(z), and optimize the parameters ψ1, ψ2 with the CPC loss similar to SimCLR [34] since f(τ, z) = gψ1(τ)⊤gψ2(z). We fix the hyperparameters across all domains and downstream tasks. We refer the reader to Appendices D and E for the full algorithm and a full list of hyperparameters. 4 It was recently shown that a DDPG achieves state-of-the-art performance [39] on DeepMind Control [40] and is more stable than SAC [41] on this benchmark. Adapting to downstream tasks: To adapt to downstream tasks we follow the same procedure for competence-based method adaptation as in URLB [21]. During the first 4k environment interactions we populate the DDPG replay buffer with samples and use the extrinsic rewards collected during this period to finetune the skill vector z. While it’s common to finetune skills with Cross Entropy Adaptation (CMA), given our limited budget of 4k samples (only 4 episodes) we find that a simple grid sweep of skills over the interval [0, 1] produces the best results (see Fig. 5). After this, we fix the skill z and finetune the DDPG actor-critic parameters against the extrinsic reward for the remaining 96k steps. Note that competence-based methods in URLB also finetune their skills during the first 4k finetuning steps, ensuring a fair comparison between the methods. The full adaptation procedure is detailed in Appendix D. 5 Experimental Setup Environments We evaluate our approach on tasks from URLB, which consists of twelve downstream tasks across three challenging continuous control domains for exploration algorithms – walker, quadruped, and Jaco arm. Walker requires a biped constrained to a 2D vertical plane to perform locomotion tasks while balancing. Quadruped is more challenging due to a higher-dimensional state-action space and requires a quadruped in a 3D environment to learn locomotion skills. Jaco arm is a 6-DOF robotic arm with a three-finger gripper to move and manipulate objects without locking. All three environments are challenging in the absence of an extrinsic reward. Baselines: We implemented CIC using the URLB [21] codebase 5 and compare CIC to baselines included in URLB across all three exploration categories. Knowledge-based baselines include ICM [12], Disagreement [13], and RND [14]. Data-based baselines include APT [15] and ProtoRL [16]. Competence-based baselines include DIAYN [17], SMM [26], and APS [27]. The closest baselines to CIC are APT, which is similar to CIC but without state-skill CPC representation learning (no discriminator), and APS, which uses the same decomposition of mutual information as CIC and also uses a particle entropy estimate for H(τ). The main difference between APS and CIC is that APS uses successor features while CIC uses a contrastive estimator for the discriminator. 
For further details regarding baselines we refer the reader to Appendix C. Evaluation: We follow an identical evaluation to the 2M pre-training setup in URLB. First, we pre-train each RL agent with the intrinsic rewards for 2M steps. Then, we finetune each agent to the downstream task with extrinsic rewards for 100k steps. All baselines were run for 10 seeds per downstream task for each algorithm using the code and hyperparameters provided by URLB [21]. Built on top of URLB, CIC is also run for 10 seeds per task. A total of 1080 = 9 algorithms × 12 tasks × 10 seeds experiments were run for the main results. Importantly, all baselines and CIC use a DDPG agent as their backbone. To ensure that our evaluation statistics are unbiased, we use stratified bootstrap confidence intervals as described in Rliable [35] to report aggregate statistics across M runs with N seeds for our main results in Fig. 4. Our primary success metrics are the interquartile mean (IQM) and the Optimality Gap (OG). IQM discards the top and bottom 25% of runs and then computes the mean. It is less susceptible to outliers than the mean and was shown to be the most reliable statistic for reporting results for RL experiments in [35]. OG measures how far a policy is from optimal (expert) performance. To define expert performance we use the convention in URLB, which is the score achieved by a randomly initialized DDPG after 2M steps of finetuning (20x more steps than our finetuning budget). 6 Results We investigate empirical answers to the following research questions: (Q1) How does CIC adaptation efficiency compare to prior competence-based algorithms and exploration algorithms more broadly? (Q2) Which intrinsic reward instantiation of CIC performs best? (Q3) How do the two terms in the CIC objective affect algorithm performance? (Q4) How does skill selection affect the quality of the pre-trained policy? (Q5) Which architecture details matter most? 5 URLB is open-sourced under an MIT license https://github.com/rll-research/url_benchmark/blob/main/LICENSE. Adaptation efficiency of CIC and exploration baselines: Expert normalized scores of CIC and exploration algorithms from URLB are shown in Fig. 4. We find that CIC substantially outperforms prior competence-based algorithms (DIAYN, SMM, APS), achieving a 79% higher IQM than the next best competence-based method (APS) and, more broadly, achieving an 18% higher IQM than the next best overall baseline (ProtoRL). In further ablations, we find that a key contributing factor to CIC’s performance is its ability to accommodate substantially larger continuous skill spaces than prior competence-based methods. Intrinsic reward specification: The intrinsic reward for competence-based algorithms can be instantiated in many different ways. Here, we analyze the intrinsic reward for CIC with the form r_int = H(τ) + D(τ, z), where D is some function of (τ, z). Prior works select D to be (i) the discriminator [27], (ii) a cosine similarity between embeddings [36], (iii) uncertainty of the discriminator [37], and (iv) just the entropy, D(τ, z) = 0 [15]. We run CIC with each of these variants on the walker and quadruped tasks and measure the final mean performance across the downstream tasks (see Tab. 1). The results show that the entropy-only intrinsic reward performs best, followed by an uncertainty-based intrinsic reward. 
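As a side note on the evaluation protocol described in the Evaluation paragraph above (IQM with stratified bootstrap confidence intervals as in Rliable [35]), here is a minimal self-contained sketch of how these aggregate statistics can be computed. It is an illustrative reading of the procedure, not the authors' evaluation code, and the resampling details are assumptions.

import numpy as np

def iqm(scores):
    """Interquartile mean: discard the top and bottom 25% of runs, then average."""
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

def stratified_bootstrap_iqm(score_matrix, n_boot=2000, seed=0):
    """IQM with a stratified bootstrap confidence interval.

    score_matrix: (n_tasks, n_seeds) expert-normalized scores; seeds are
    resampled with replacement within each task (the stratum).
    """
    rng = np.random.default_rng(seed)
    n_tasks, n_seeds = score_matrix.shape
    boot_stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_seeds, size=(n_tasks, n_seeds))
        resampled = np.take_along_axis(score_matrix, idx, axis=1)
        boot_stats.append(iqm(resampled))
    lo, hi = np.percentile(boot_stats, [2.5, 97.5])
    return iqm(score_matrix), (lo, hi)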
We hypothesize that the reason why a simple entropy-only intrinsic reward works well is that state-skill CPC representation learning clusters similar behaviors together. Since similar behaviors are clustered, maximizing the entropy of state-transition embeddings produces increasingly diverse behaviors. The importance of representation learning: To what extent does representation learning with the state-skill CIC loss affect the agent’s exploration capability? To answer this question we train the CIC agent with the entropy intrinsic reward with and without the representation learning auxiliary loss for 2M steps. The zero-shot reward plotted in Fig. 6 indicates that without representation learning the policy collapses. With representation learning, the agent is able to discover diverse skills evidenced by the non-zero reward. This result suggests that state-skill CPC representation learning is a critical part of CIC. Qualitative analysis of CIC behaviors: Qualitatively, we find that CIC is able to learn locomotion behaviors in DMC without extrinsic information such as early termination as in OpenAI Gym. While most skills are higher entropy and thus more chaotic, we show in Fig 1 that structured behaviors can be isolated by fixing a particular skill vector. For example, in the walker and quadruped domains - balancing, walking, and flipping skills can be isolated. For more qualitative investigations we refer the reader to Appendix H. Skill architecture and adaptation ablations: We find that projecting the skill to a latent space before inputting it as the key for the contrastive loss is an important design decision (see Fig. 5a), most likely because this reduces the diversity of the skill vector making the discriminator task simpler. We also find empirically that the skill dimension is an important hyperparameter and that larger skills results in better zero-shot performance (see Fig. 5b), which empirically supports the hypothesis posed in Section 2 and Appendix G that larger skill spaces are important for internalizing diverse behaviors. Interestingly, CIC zero-shot performance is poor in lower skill dimensions (e.g. dim(z) < 10), suggesting that when dim(z) is small CIC performs no better than prior competence-based methods such as DIAYN, and that scaling to larger skills enables CIC to pre-train effectively. To measure the effect of skill finetuning described in Section 4, we sweep mean skill values along the interval of the uniform prior [0, 1] with a budget of 4k total environment interactions and read out the performance on the downstream task. By sweeping, we mean simply iterating over the interval [0, 1] with fixed step size (e.g. v = 0, 0.1, . . . , 0.9, 1) and setting zi = v for all i. This is not an optimal skill sampling strategy but works well due to the extremely limited number of samples for skill selection. We evaluate this ablation on the Quadruped Stand and Run downstream tasks. The results shown in Fig. 5 indicate that skill selection can substantially affect zero-shot downstream task performance. 7 Related Work The most closely related prior algorithms to CIC are APT [15] and APS [27]. Both CIC and APS use the H(τ) − H(τ |z) decomposition of the mutual information and both used a particle estimator [22] to compute the state entropy as in [15]. The main difference between CIC and APS is the discriminator. APS uses successor features as in [31] for its discriminator while CIC uses a noise contrastive estimator. 
Unlike successor features, which empirically only accommodate low-dimensional continuous skill spaces (see Table 2), the noise contrastive discriminator is able to leverage higher-dimensional continuous skill vectors. Like APT, CIC has an intrinsic reward that maximizes H(τ). However, CIC also does contrastive skill learning to shape the embedding space and outputs a skill-conditioned policy. The CIC discriminator is similar to the one used in DISCERN [36], a goal-conditioned unsupervised RL algorithm. Both methods use a contrastive discriminator by sampling negatives and computing an inner product between queries and keys. The main differences are (i) that DISCERN maximizes I(τ; g) where g are image goal embeddings while CIC maximizes I(τ; z) where z are abstract skill vectors; (ii) DISCERN uses the DIAYN-style decomposition I(τ; g) = H(g)−H(g|τ) while CIC decomposes through H(τ)−H(τ|z), and (iii) DISCERN discards the H(g) term by sampling goals uniformly while CIC explicitly maximizes H(τ). While DISCERN and CIC share similarities, DISCERN operates over image goals while CIC operates over abstract skill vectors so the two methods are not directly comparable. Finally, another similar algorithm to CIC is DADS [20] which also decomposes through H(τ) − H(τ|z). While CIC uses a contrastive density estimate for the discriminator, DADS uses a maximum likelihood estimator similar to DIAYN. DADS maximizes I(s′|s, z) and estimates entropy H(s′|s) by marginalizing over z such that H(s′|s) = − log Σ_i q(s′|s, z_i) while CIC uses a particle estimator.

Table 2: Competence-based Unsupervised Skill Discovery Algorithms
Algorithm | Intrinsic Reward | Decomposition | Explicit max H(τ) | Skill Dim. | Skill Space
SSN4HRL [42] | log qψ(z|s_t) | H(z)−H(z|τ) | No | 6 | discrete one-hot
VIC [18] | log qψ(z|s_H) | H(z)−H(z|τ) | No | 60 | discrete one-hot
VALOR [25] | log qψ(z|s_{1:H}) | H(z)−H(z|τ) | No | 64 | discrete one-hot
DIAYN [17] | log qψ(z|s_t) | H(z)−H(z|τ) | No | 128 | discrete one-hot
DADS [20] | qψ(s′|z, s) − Σ_i log q(s′|z_i, s) | H(τ)−H(τ|z) | Yes | 5 | continuous
VISR [31] | log qψ(z|s_t) | H(z)−H(z|τ) | No | 10 | continuous
APS [27] | F_Successor(s|z) + H_particle(s) | H(τ)−H(τ|z) | Yes | 10 | continuous
CIC | F_CIC(s, s′|z) + H_particle(s, s′) | H(τ)−H(τ|z) | Yes | 64 | continuous

Table 3: A list of competence-based algorithms. We describe the intrinsic reward optimized by each method and the decomposition of the mutual information utilized by the method. We also note whether the method explicitly maximizes state transition entropy. Finally, we note the maximal dimension used in each work and whether the skills are discrete or continuous. All methods prior to CIC only support small skill spaces, either because they are discrete or continuous but low-dimensional.

8 Limitations and Impact While CIC achieves leading results on state-based URLB, we would also like to address its limitations. First, in this paper we only consider MDPs (and not partially observed MDPs) where the full state is observable. We focus on MDPs because generating diverse behaviors in environments with large state spaces has been the primary bottleneck for competence-based exploration. Combining CIC with visual representation learning to scale this method to pixel-based inputs is a promising future direction for research not considered in this work. One issue with unsupervised RL algorithms (and hence CIC) in terms of potentially negative societal impact is that self-supervised exploration can be dangerous. Since self-supervised agents maximize intrinsic rewards, this can lead to destructive behavior. 
For example, when deploying CIC on a Walker or Quadruped robot it learns chaotic exploration behaviors that would most likely break the robot in real-world settings. Alignment of exploration agents to prevent them from learning dangerous policies is a promising direction for future work. 9 Conclusion We have introduced a new competence-based algorithm – Contrastive Intrinsic Control (CIC) – which enables more effective exploration than prior unsupervised skill discovery algorithms by explicitly encouraging diverse behavior while distilling predictable behaviors into skills with a contrastive discriminator. We showed that CIC is the first competence-based approach to achieve leading performance on URLB. We hope that this encourages further research in developing RL agents capable of generalization. 10 Acknowledgements We would like to thank Ademi Adeniji, Xinyang Geng, Fangchen Liu for helpful discussions. We would also like to thank Phil Bachman for useful feedback. This work was partially supported by Berkeley DeepDrive, NSF AI4OPT AI Institute for Advances in Optimization under NSF 2112533, and the Office of Naval Research grant N00014-21-1-2769.
1. What is the focus and contribution of the paper on unsupervised reinforcement learning? 2. What are the strengths of the proposed method, particularly in its experimental setup and ablations? 3. What are the weaknesses of the paper regarding originality and motivation? 4. How does the reviewer assess the connection between the motivation and contribution of the paper? 5. What are the concerns regarding the estimation of decomposed mutual information and its distribution across different methods? 6. Is there a trade-off between diverse behaviors and skills in the decomposition of mutual information? 7. How does the reviewer view the choice of skill dimension and its potential impact on hyperparameter tuning? 8. Are there any limitations addressed by the authors that the reviewer agrees with?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work proposes CIC, a competence-based unsupervised RL method that maximizes the mutual information between latent skills and state transitions. CIC outperforms related baselines on the Unsupervised RL benchmark (URLB) where the agent only has access to the extrinsic reward for a short adaptation phase in the downstream tasks. Strengths And Weaknesses Strengths This study has significant strengths in the well-established experimental section. Decent experimental setup The URLB is a well-established benchmark to evaluate the unsupervised adaptation performance of RL methods. I appreciate the comparison between Gym and DMControl. High statistical significance I appreciate the authors running experiments across as many as 10 seeds, and reporting IQM scores. This provides high statistical significance for the results compared to other methods that report only mean results with 3~5 seeds. Also they compared many relevant prior works. Ablations Because there are many overlapping contributions with existing studies (APS, APT, CURL, etc.), it is easy to give a mixed feeling of contribution. But the authors managed to prevent this through the ablations in Figure 5. Weaknesses The weaknesses of this work lie in the originality and motivation. As this is a completely experimental paper, I understand that theoretically, it is difficult to prove the strength of this study. I felt the connection between the motivation and contribution seemed a bit vague. Originality As the authors are aware, there were previous works that use particle-based entropy or contrastive learning for unsupervised learning (APT, APS, CURL). The novelty of CIC is that it is the first to use contrastive learning between state transitions and latent skills. The absence of a related work section may make it feel that novelty is lacking. From this point of view, I appreciate the comparison made in Appendix D. Maybe it would be better if the authors added such content to the main manuscript with a little more detail. Motivation The two points made in the motivation section ("Competence-based algorithms do not ensure diverse behaviors", "Why it is important to utilize high-dimensional skills") are indeed very important and useful points. But there seems to be a lack of a link between these motivations and the contrastive learning and particle-based entropy. Some questions remain about whether contrastive learning and particle-based entropy are the "only" or the "best" ways to estimate H(τ|z) and H(τ), respectively. Some of these concerns seem to be addressed in Appendix L, which I think would be better placed in the main manuscript. Questions I'm curious how the estimation of decomposed mutual information in Table 3 would distribute across different methods. And how CIC may outperform baselines. The decomposition of mutual information used by CIC encourages diverse behaviors through the entropy term H(τ). I'm curious whether there is a trade-off for losing some diversity of the skills H(z)? I see that previous works employ too small skill dimensions and it's great that CIC employs a larger skill dimension. But the skill dimension should not be too large as well, so there should be some sweet spot that may require hyperparameter tuning. Since the tasks targeted are unsupervised adaptation, does it make sense to tune the dimension without the knowledge of the diversity of the downstream tasks? Limitations The authors have adequately addressed the limitations.
NIPS
Title Generalization Bounds for Neural Networks via Approximate Description Length Abstract We investigate the sample complexity of networks with bounds on the magnitude of its weights. In particular, we consider the class N = {Wt ◦ ρ ◦ Wt−1 ◦ ρ . . . ◦ ρ ◦ W1 : W1, . . . ,Wt−1 ∈ Md×d, Wt ∈ M1,d} where the spectral norm of each Wi is bounded by O(1), the Frobenius norm is bounded by R, and ρ is the sigmoid function ex/(1 + ex) or the smoothened ReLU function ln(1 + ex). We show that for any depth t, if the inputs are in [−1, 1]d, the sample complexity of N is Õ(dR²/ε²). This bound is optimal up to log-factors, and substantially improves over the previous state of the art of Õ(d²R²/ε²), that was established in a recent line of work [9, 4, 7, 5, 2, 8]. We furthermore show that this bound remains valid if instead of considering the magnitude of the Wi’s, we consider the magnitude of Wi − W0i, where W0i are some reference matrices, with spectral norm of O(1). By taking the W0i to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of families H of predictors. We start by defining a new notion of a randomized approximate description of functions f : X → Rd. We then show that if there is a way to approximately describe functions in a class H using d bits, then d/ε² examples suffice to guarantee uniform convergence. Namely, that the empirical loss of all the functions in the class is ε-close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions. 1 Introduction We analyze the sample complexity of networks with bounds on the magnitude of their weights. 
Let us consider a prototypical case, where the input space is X = [−1, 1]d, the output space is R, the number of layers is t, all hidden layers has d neurons, and the activation function is ρ : R→ R. The class of functions computed by such an architecture is N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} As the class N is defined by (t − 1)d2 + d = O(d2) parameters, classical results (e.g. [1]) tell us that order of d2 examples are sufficient and necessary in order to learn a function from N (in a standard worst case analysis). However, modern networks often succeed to learn with substantially less examples. One way to provide alternative results, and a potential explanation to the phenomena, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. is to take into account the magnitude of the weights. This approach was a success story in the days of SVM [3] and Boosting [10], provided a nice explanation to generalization with sub-linear (in the number of parameters) number of examples, and was even the deriving force behind algorithmic progress. It seems just natural to adopt this approach in the context of modern networks. For instance, it is natural to consider the class NR = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ∀i, ‖Wi‖F ≤ R, ‖Wi‖ ≤ O(1)} where ‖W‖ = max‖x‖=1 ‖Wx‖ is the spectral norm and ‖W‖F = √∑d i,j=1W 2 ij is the Frobenius norm. This class has been analyzed in several recent works [9, 4, 7, 5, 2, 8]. Best known results show a sample complexity of Õ ( d2R2 2 ) (for the sake of simplicity, in the introduction, we ignore the dependence on the depth in the big-O notation). In this paper we prove, for various activations, a stronger bound of Õ ( dR2 2 ) , which is optimal, up to log factors, for constant depth networks. How good is this bound? Does it finally provide sub-linear bound in typical regimes of the parameters? To answer this question, we need to ask how large R is. While this question of course don’t have a definite answer, empirical studies (e.g. [12]) show that it is usually the case that the norm (spectral, Frobenius, and others) of the weight matrices is at the same order of magnitude as the norm of the matrix in the onset of the training process. In most standard training methods, the initial matrices are random matrices with independent (or almost independent) entries, with mean zero and variance of order 1d . The Frobenius norm of such a matrix is of order √ d. Hence, the magnitude of R is of order √ d. Going back to our Õ ( dR2 2 ) bound, we get a sample complexity of Õ ( d2 2 ) , which is unfortunately still linear in the number of parameters. Since our bound is almost optimal, we can ask whether this is the end of the story? Should we abandon the aforementioned approach to network sample complexity? A more refined examination of the training process suggests another hope for this approach. Indeed, the training process doesn’t start from the zero matrix, but rather form a random initialization matrix. Thus, it stands to reason that instead of considering the magnitude of the weight matrices Wi, we should consider the magnitude of Wi −W 0i , where W 0i is the initial weight matrix. Indeed, empirical studies [6] show that the Frobenius norm of Wi −W 0i is often order of magnitude smaller than the Frobenius norm of Wi. Following this perspective, it is natural to consider the class NR(W 01 , . . . ,W 0t ) = { Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖F ≤ R } For some fixed matrices, W 01 , . . 
. ,W 0 t of spectral norm 1 O(1). It is natural to expect that considering balls around the initial W 0i ’s instead of zero, shouldn’t change the sample complexity of the class at hand. In other words, we can expect that the sample complexity of NR(W 01 , . . . ,W 0t ) should be approximately the sample complexity of NR. Namely, we expect a sample complexity of Õ ( dR2 2 ) . Such a bound would finally be sub-linear, as in practice, it is often the case that R2 d. This approach was pioneered by [4] who considered the class N 2,1R (W 0 1 , . . . ,W 0 t ) = { Wt ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖2,1 ≤ R } where ‖W‖2,1 = ∑d i=1 √∑d j=1W 2 ij . For this class they proved a sample complexity bound of Õ ( dR2 2 ) . Since, ‖W‖2,1 ≤ √ d‖W‖F , this implies a sample complexity bound of Õ ( d2R2 2 ) on NR(W 01 , . . . ,W 0t ), which is still not sublinear2. In this paper we finally prove a sub-linear sample complexity bound of Õ ( dR2 2 ) on NR(W 01 , . . . ,W 0t ). To prove our results, we develop a new technique for bounding the sample complexity of function classes. Roughly speaking, we define a notion of approximate description of a function, and count 1The bound of O(1) on the spectral norm of the W 0i ’s and Wi −W 0i is again motivated by the practice of neural networks – the spectral norm of W 0i , with standard initializations, is O(1), and empirical studies [6, 12] show that the spectral norm of Wi −W 0i is usually very small. 2We note that ‖W‖2,1 = Θ( √ d) even if W is a random matrix with variance that is calibrated so that ‖W‖F = Θ(1) (namely, each entry has variance 1d2 ). how many bits are required in order to give an approximate description for the functions in the class under study. We then show that this number, called the approximate description length (ADL), gives an upper bound on the sample complexity. The advantage of our method over existing techniques is that it behaves nicely with compositions. That is, once we know the approximate description length of a class H of functions from X to Rd, we can also bound the ADL of ρ ◦ H, as well as L ◦ H, where L is a class of linear functions. This allows us to utilize the compositional structure of neural networks. 2 Preliminaries Notation We denote by med(x1, . . . , xk) the median of x1, . . . , xk ∈ R. For vectors x1, . . . ,xk ∈ Rd we denote med(x1, . . . ,xk) = ( med(x11, . . . , x k 1), . . . ,med(x 1 d, . . . , x k d) ) . We use log to denote log2, and ln to denote loge An expression of the form f(n) . g(n) means that there is a universal constant c > 0 for which f(n) ≤ cg(n). For a finite set A and f : A → R we let Ex∈A f = Ex∈A f(a) = 1|A| ∑ a∈A f(a). We denote BdM = {x ∈ Rd : ‖x‖ ≤ M} and Bd = Bd1. Likewise, we denote Sd−1 = {x ∈ Rd : ‖x‖ = 1} We denote the Frobenius norm of a matrix W by ‖W‖2F = ∑ ijW 2 ij , while the spectral norm is denoted by ‖W‖ = max‖x‖=1 ‖Wx‖. For a pair of vectors x,y ∈ Rd we denote by xy ∈ Rd their point-wise product xy = (x1y1, . . . , xdyd) Uniform Convergence and Covering Numbers Fix an instance space X , a label space Y and a loss ` : Rd × Y → [0,∞). We say that ` is Lipschitz / Bounded / etc. if for any y ∈ Y , `(·, y) is. Fix a class H from X to Rd. For a distribution D and a sample S ∈ (X × Y)m we define the representativeness of S as repD(S,H) = sup h∈H `D(h)−`S(h) for `D(h) = E (x,y)∼D `(h(x), y) and `S(h) = 1 m m∑ i=1 `(h(xi), yi) We note that if repD(S,H) ≤ then any algorithm that is guaranteed to return a function ĥ ∈ H will enjoy a generalization bound `D(h) ≤ `S(h) + . 
In particular, the ERM algorithm will return a function whose loss is optimal, up to an additive factor of . We will focus on bounds on repD(S,H) when S ∼ Dm. To this end, we will rely on the connection between representativeness and the covering numbers ofH. Definition 2.1. Fix a classH of functions from X to Rd, an integer m, > 0 and 1 ≤ p ≤ ∞. We define Np(H,m, ) as the minimal integer for which the following holds. For every A ⊂ X of size ≤ m there exists H̃ ⊂ ( Rd )X such that ∣∣∣H̃∣∣∣ ≤ Np(H,m, ) and for any h ∈ H there is h̃ ∈ H̃ with( Ex∈A ∥∥∥h(x)− h̃(x)∥∥∥p ∞ ) 1 p ≤ . For p = 2, we denote N(H,m, ) = N2(H,m, ) We conclude with a lemma, which will be useful in this paper. The proof can be found in the supplementary material. Lemma 2.2. Let ` : Rd × Y → R be L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded. Assume that for any 0 < ≤ 1, log (N(H,m, )) ≤ n 2 . Then ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m A Basic Inequality Lemma 2.3. Let X1, . . . , Xn be independent r.v. with that that are σ-estimators to µ. Then Pr (|med(X1, . . . , Xn)− µ| > kσ) < ( 2 k )n 3 Simplified Approximate Description Length To give a soft introduction to our techniques, we first consider a simplified version of it. We next define the approximate description length of a classH of functions from X to Rd, which quantifies the number of bits it takes to approximately describe a function fromH. We will use the following notion of approximation Definition 3.1. A random vector X ∈ Rd is a σ-estimator to x ∈ Rd if EX = x and ∀u ∈ Sd−1, VAR(〈u, X〉) = E 〈u, X − x〉2 ≤ σ2 A random function f̂ : X → Rd is a σ-estimator to f : X → Rd if for any x ∈ X , f̂(x) is a σ-estimator to f(x). A (σ, n)-compressor C for a classH takes as input a function h ∈ H, and outputs a (random) function Ch such that (i) Ch is a σ-estimator of h and (ii) it takes n bits to describe Ch. Formally, Definition 3.2. A (σ, n)-compressor forH is a triplet (C,Ω, µ) where µ is a probability measure on Ω, and C is a function C : Ω×H → ( Rd )X such that 1. For any h ∈ H and x ∈ X , (Cωh)(x), ω ∼ µ is a σ-estimator of h(x). 2. There are functions E : Ω×H → {±1}n and D : {±1}n → ( Rd )X for which C = D ◦E Definition 3.3. We say that a classH of functions from X to Rd has approximate description length n if there exists a (1, n)-compressor forH It is not hard to see that if (C,Ω, µ) is a (σ, n)-compressor forH, then (Cω1,...,ωkh)(x) := ∑k i=1(Cωih)(x) k is a ( σ√ k , kn ) -compressor forH. Hence, if the approximate description length ofH is n, then for any 1 ≥ > 0 there exists an ( , nd −2e ) -compressor forH. We next connect the approximate description length, to covering numbers and representativeness. We separate it into two lemmas, one for d = 1 and one for general d, as for d = 1 we can prove a slightly stronger bound. Lemma 3.4. Fix a classH of functions from X to R with approximate description length n. Then, log (N(H,m, )) ≤ n ⌈ −2 ⌉ . Hence, if ` : Rd ×Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m Lemma 3.5. Fix a classH of functions from X to Rd with approximate description length n. Then, log (N∞(H,m, )) ≤ log (N(H,m, )) ≤ n ⌈ 16 −2 ⌉ dlog(dm)e Hence, if ` : Rd × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n log(dm)√ m log(m). 
Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n log(dm)√ m log(m) +B √ 2 ln(2/δ) m 3.1 Linear Functions We next bound the approximate description length of linear functions with bounded Frobenius norm. Theorem 3.6. Let class Ld1,d2,M = { x ∈ Bd1 7→Wx : W is d2 × d1 matrix with ‖W‖F ≤M } has approximate description length n ≤ ⌈ 1 4 + 2M2 ⌉ 2 dlog (2d1d2(M + 1))e Hence, if ` : Rd2 × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) +B √ 2 ln (2/δ) m We remark that the above bounds on the representativeness coincides with standard bounds ([11] for instance), up to log factors. The advantage of these bound is that they remain valid for any output dimension d2. In order to prove theorem 3.6 we will use a randomized sketch of a matrix. Definition 3.7. Let w ∈ Rd be a vector. A random sketch of w is a random vector ŵ that is samples as follows. Choose i w.p. pi = w2i 2‖w‖2 + 1 2d . Then, w.p. wi pi − ⌊ wi pi ⌋ let b = 1 and otherwise b = 0. Finally, let ŵ = (⌊ wi pi ⌋ + b ) ei. A random k-sketch of w is an average of k-independent random sketches of w. A random sketch and a random k-sketch of a matrix is defined similarly, with the standard matrix basis instead of the standard vector basis. The following useful lemma shows that an sketch w is a √ 1 4 + 2‖w‖2-estimator of w. Lemma 3.8. Let ŵ be a random sketch of w ∈ Rd. Then, (1) E ŵ = w and (2) for any u ∈ Sd−1, E (〈u, ŵ〉 − 〈u,w〉)2 ≤ E 〈u, ŵ〉2 ≤ 14 + 2‖w‖ 2 Proof. (of theorem 3.6) We construct a compressor for Ld1,d2,M as follows. Given W , we will sample a k-sketch Ŵ of W for k = ⌈ 1 4 + 2M 2 ⌉ , and will return the function x 7→ Ŵx. We claim that that W 7→ Ŵ is a (1, 2k dlog(2d1d2(M + 1))e)-compressor for Ld1,d2,M . Indeed, to specify a sketch of W we need dlog(d1d2)e bits to describe the chosen index, as well as log (2d1d2M + 2) bits to describe the value in that index. Hence, 2k dlog(2d1d2(M + 1))e bits suffices to specify a k-sketch. It remains to show that for x ∈ Bd1 , Ŵx is a 1-estimator of Wx. Indeed, by lemma 3.8, E Ŵ = W and therefore E Ŵx = Wx. Likewise, for u ∈ Sd2−1. We have E (〈 u, Ŵx 〉 − 〈u,Wx〉 )2 = E (〈 Ŵ ,xuT 〉 − 〈 W,xuT 〉)2 ≤ 1 4 + 2M 2 k ≤ 1 3.2 Simplified Depth 2 Networks To demonstrate our techniques, we consider the following class of functions. We let the domain X to be Bd. We fix an activation function ρ : R→ R that is assumed to be a polynomial ρ(x) = ∑k i=0 aix i with ∑n n=1 |an| = 1. For any W ∈ Md,d we define hW (x) = 1√ d ∑d i=1 ρ(〈wi,x〉) Finally, we let H = { hW : ∀i, ‖wi‖ ≤ 12 } In order to build compressors for classes of networks, we will utilize to compositional structure of the classes. Specifically, we have that H = Λ ◦ ρ ◦ F where F = {x 7→Wx : W is d× d matrix with ‖wi‖ ≤ 1 for all i} and Λ(x) = 1√d ∑d i=1 xi. As F is a subset of Ld,d,√d, we know that there exists a (1, O (d log(d)))-compressor for it. We will use this compressor to build a compressor to ρ ◦ F , and then to Λ ◦ ρ ◦ F . We will start with the latter, linear case, which is simpler Lemma 3.9. Let X be a σ-estimator to x ∈ Rd1 . Let A ∈ Md2,d1 be a matrix of spectral norm ≤ r. Then, AX is a (rσ)-estimator to Ax. In particular, if C is a (1, n)-compressor to a classH of functions from X to Rd. Then C′ω(Λ ◦ h) = Λ ◦ Cωh is a (1, n)-compressor to Λ ◦ H We next consider the composition of F with the non-linear ρ. 
As opposed to composition with a linear function, we cannot just generate a compression version using F’s compressor and then compose with ρ. Indeed, if X is a σ-estimator to x, it is not true in general that ρ(X) is an estimator of ρ(x). For instance, consider the case that ρ(x) = x2, and X = (X1, . . . , Xd) is a vector of independent standard Gaussians. X is a 1-estimator of 0 ∈ Rd. On the other hand, ρ(X) = (X21 , . . . , X2n) is not an estimator of 0 = ρ(0). We will therefore take a different approach. Given f ∈ F , we will sample k independent estimators {Cωif}ki=1 from F’s compressor, and define the compressed version of σ ◦ h as C′ω1,...,ωkf = ∑d i=0 ai ∏i j=0 Cωif . This construction is analyzed in the following lemma Lemma 3.10. If C is a ( 1 2 , n ) -compressor of a classH of functions from X to [ − 12 , 1 2 ]d . Then C′ is a (1, n)-compressor of ρ ◦ H Combining theorem 3.6 and lemmas 3.9, 3.10 we have: Theorem 3.11. H has approximation length . d log(d). Hence, if ` : R × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . (L+B) √ d log(d)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ d log(d)√ m log(m) +B √ 2 ln (2/δ) m Lemma 3.10 is implied by the following useful lemma: Lemma 3.12. 1. If X is a σ-estimator of x then aX is a (|a|σ)-estimator of aX 2. Suppose that for n = 1, 2, 3, . . . Xn is a σn-estimator of xn ∈ Rd. Assume furthermore that ∑∞ n=1 xn and ∑∞ n=1 σn converge to x ∈ Rd and σ ∈ [0,∞). Then, ∑∞ n=1Xn is a σ-estimator of x 3. Suppose that {Xi}ki=1 are independent σi-estimators of xi ∈ Rd. Then ∏k i=1Xi is a σ′-estimator of ∏k i=1 xi for σ ′2 = ∏k i=1 ( σ2i + ‖xi‖ 2 ∞ ) − ∏k i=1 ‖xi‖ 2 ∞ We note that the bounds in the above lemma are all tight. 4 Approximation Description Length In this section we refine the definition of approximate description length that were given in section 3. We start with the encoding of the compressed version of the functions. Instead of standard strings, we will use what we call bracketed string. The reason for that often, in order to create a compressed version of a function, we concatenate compressed versions of other functions. This results with strings with a nested structure. For instance, consider the case that a function h is encoded by the concatenation of h1 and h2. Furthermore, assume that h1 is encoded by the string 01, while h2 is encoded by the concatenation of h3, h4 and h5 that are in turn encoded by the strings 101, 0101 and 1110. The encoding of h will then be [[01][[101][0101][1110]]]. We note that in section 3 we could avoid this issue since the length of the strings and the recursive structure were fixed, and did not depend on the function we try to compress. Formally, we define Definition 4.1. A bracketed string is a rooted tree S, such that (i) the children of each edge are ordered, (ii) there are no nodes with a singe child, and (iii) the leaves are labeled by {0, 1}. The length, len(S) of S is the number of its leaves. Let S be a bracketed string. There is a linear order on its leaves that is defined as follows. Fix a pair of leaves, v1 and v2, and let u be their LCA. Let u1 (resp. u2) be the child of u that lie on the path to v1 (resp. v2). We define v1 < v2 if u1 < u2 and v1 > v2 otherwise (note that necessarily u1 6= u2). Let v1, . . . , vn be the leaves of T , ordered according to the above order, and let b1, . . . , bn be the corresponding bits. The string associated with T is s = b1 . . . bn. 
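To make Definition 4.1 concrete, here is a small Python illustration (my own example, not part of the paper) that represents a bracketed string as nested lists of bits and recovers its length and associated string; it reproduces the example [[01][[101][0101][1110]]] used above.

def leaves(tree):
    """Yield the bits at the leaves, in the left-to-right order of Definition 4.1."""
    if isinstance(tree, int):      # a leaf labelled by a bit
        yield tree
    else:                          # an internal node with its ordered children
        for child in tree:
            yield from leaves(child)

def associated_string(tree):
    return "".join(str(b) for b in leaves(tree))

def length(tree):
    """len(S): the number of leaves of the bracketed string."""
    return sum(1 for _ in leaves(tree))

# The example from the text: h encoded by the concatenation of h1 and h2,
# i.e. the bracketed string [[01][[101][0101][1110]]].
h = [[0, 1], [[1, 0, 1], [0, 1, 0, 1], [1, 1, 1, 0]]]
assert associated_string(h) == "01" + "101" + "0101" + "1110"
assert length(h) == 13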
We denote by Sn the collection of bracketed strings of length ≤ n, and by S = ∪∞n=1Sn the collection of all bracketed strings. The following lemma shows that in log-scale, the number of bracketed strings of length ≤ n differ from standard strings of length ≤ n by only a constant factor Lemma 4.2. |Sn| ≤ 32n We next revisit the definition of a compressor for a classH. The definition of compressor will now have a third parameter, ns, in addition to σ and n. We will make three changes in the definition. The first, which is only for the sake of convenience, is that we will use bracketed strings rather than standard strings. The second change, is that the length of the encoding string will be bounded only in expectation. The final change is that the compressor can now output a seed. That is, given a function h ∈ H that we want to compress, the compressor can generate both a non-random seed Es(h) ∈ Sns and a random encoding E(ω, h) ∈ S with Eω∼µ len(E(ω, h)) ≤ n. Together, Es(h) and E(ω, h) encode a σ-estimator. Namely, there is a function D : Sns × S → ( Rd )X such that D(Es(h), E(ω, h)), ω ∼ µ is a σ-estimator of h. The advantage of using seeds is that it will allow us to generate many independent estimators, at a lower cost. In the case that n ns, the cost of generating k independent estimators of h ∈ H is ns + kn bits (in expectation) instead of k(ns + n) bits. Indeed, we can encode k estimators by a single seed Es(h) and k independent “regular" encodings E(ωk, h), . . . , E(ωk, h). The formal definition is given next. Definition 4.3. A (σ, ns, n)-compressor for H is a 5-tuple C = (Es, E,D,Ω, µ) where µ is a probability measure on Ω, and Es, E,D are functions Es : H → T ns , E : Ω × H → T , and D : T ns × T → ( Rd )X such that for any h ∈ H and x ∈ X (1) D(Es(h), E(ω, h)), ω ∼ µ is a σ-estimator of h and (2) Eω∼µ len(E(ω, h)) ≤ n We finally revisit the definition of approximate description length. We will add an additional parameter, to accommodate the use of seeds. Likewise, the approximate description length will now be a function of m – we will say thatH has approximate description length (ns(m), n(m)) if there is a (1, ns(m), n(m))-compressor for the restriction ofH to any set A ⊂ X of size at most m. Formally: Definition 4.4. We say that a classH of functions from X to Rd has approximate description length (ns(m), n(m)) if for any set A ⊂ X of size ≤ m there exists a (1, ns(m), n(m))-compressor for H|A It is not hard to see that if H has approximate description length (ns(m), n(m)), then for any 1 ≥ > 0 and a setA ⊂ X of size≤ m, there exists an ( , ns(m), n(m)d −2e ) -compressor forH|A. We next connect the approximate description length, to covering numbers and representativeness. The proofs are similar the the proofs of lemmas 3.4 and 3.5. Lemma 4.5. Fix a class H of functions from X to R with approximate description length (ns(m), n(m)). Then, log (N(H,m, )) . ns(m) + n(m) 2 Hence, if ` : R d × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . (L+B) √ ns(m) + n(m)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ ns(m) + n(m)√ m log(m) +B √ 2 ln (2/δ) m Lemma 4.6. Fix a class H of functions from X to Rd with approximate description length (ns(m), n(m)). Then, log (N(H,m, )) ≤ log (N∞(H,m, )) . ns(m) + n(m) log(dm) 2 . Hence, if ` : Rd × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . 
(L+B) √ ns(m) + n(m) log(dm)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ ns(m) + n(m) log(dm)√ m log(m) +B √ 2 ln (2/δ) m We next analyze the behavior of the approximate description length under various operations Lemma 4.7. LetH1,H2 be classes of functions from X to Rd with approximate description length of (n1s(m), n 1(m)) and (n2s(m), n 2(m)). Then H1 + H2 has approximate description length of (n1s(m) + n 2 s(m), 2n 1(m) + 2n2(m)) Lemma 4.8. Let H be a class of functions from X to Rd with approximate description length of (ns(m), n(m)). Let A be d2 × d1 matrix. Then A ◦ H1 has approximate description length( ns(m), ⌈ ‖A‖2 ⌉ n(m) ) Definition 4.11. A function f : R → R is B-strongly-bounded if for all n ≥ 1, ‖f (n)‖∞ ≤ n!Bn. Likewise, f is strongly-bounded if it is B-strongly-bounded for some B We note that Lemma 4.12. If f is B-strongly-bounded then f is analytic and its Taylor coefficients around any point are bounded by Bn The following lemma gives an example to a strongly bounded sigmoid function, as well as a strongly bounded smoothened version of the ReLU (see figure 1). Lemma 4.13. The functions ln (1 + ex) and e x 1+ex are strongly-bounded Lemma 4.14. Let H be a class of functions from X to Rd with approximate description length of (ns(m), n(m)). Let ρ : R→ R be B-strongly-bounded. Then, ρ ◦ H has approximate description length of ( ns(m) +O ( n(m)B2 log(md) ) , O ( n(m)B2 log(d) )) 5 Sample Complexity of Neural Networks Fix the instance space X to be the ball of radius √ d in Rd (in particular [−1, 1]d ⊂ X ) and a Bstrongly-bounded activation ρ. Fix matrices W 0i ∈Mdi,di−1 , i = 1, . . . , t. Consider the following class of depth-t networks Nr,R(W 01 , . . . ,W 0t ) = { Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ r, ‖Wi −W 0i ‖F ≤ R } We note that Nr,R(W 01 , . . . ,W 0t ) = Nr,R(W 0t ) ◦ . . . ◦ Nr,R(W 01 ) The following lemma analyzes the cost, in terms of approximate description length, when moving from a classH to Nr,R(W 0) ◦ H. Lemma 5.1. Let H be a class of functions from X to Rd1 with approximate description length (ns(m), n(m)) and ‖h(x)‖ ≤M for any x ∈ X and h ∈ H. Fix W 0 ∈Md2,d1 . Then, Nr,R(W 0t ) ◦ H has approximate description length of( ns(m) + n ′(m)B2 log(md2), n ′(m)B2 log(d2) ) for n′(m) = n(m)O(r2 + ‖W 0‖2 + 1) +O ( (d1 +M 2)(R2 + 1) log(Rd1d2 + 1) ) The lemma is follows by combining lemmas 4.7, 4.8, 4.10 and 4.14. We note that in the case that d1, d2 ≤ d, M = O( √ d1), B, r, ‖W 0‖ = O(1) (and hence R = O (√ d ) ) and R ≥ 1 we get that Nr,R(W 0) ◦ H has approximate description length of( ns(m) +O (n(m) log(md)) , O (n(m) log(d)) +O ( d1R 2 log2(d) )) By induction, the approximate description length of Nr,R(W 01 , . . . ,W 0t ) is( dR2O (log(d)) t log(md), dR2O (log(d)) t+1 )
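Putting the pieces together, the last display combined with Lemma 4.6 recovers the sample complexity stated in the abstract. The following reading of the final bound is only a sketch: constants, log factors, and the dependence on the depth t are suppressed, as in the introduction.

```latex
% With (n_s(m), n(m)) = (dR^2 O(\log d)^t \log(md),\; dR^2 O(\log d)^{t+1}),
% Lemma 4.6 gives, for an L-Lipschitz and B-bounded loss,
\[
  \mathop{\mathbb{E}}_{S\sim\mathcal{D}^m}
  \operatorname{rep}_{\mathcal{D}}\!\bigl(S,\mathcal{N}_{r,R}(W_1^0,\dots,W_t^0)\bigr)
  \;\lesssim\; (L+B)\,\frac{\sqrt{n_s(m)+n(m)\log(dm)}}{\sqrt{m}}\,\log(m)
  \;=\; \tilde{O}\!\Bigl((L+B)\sqrt{dR^2/m}\Bigr),
\]
% so the representativeness (hence the generalization gap) drops below \epsilon once
% m = \tilde{O}(dR^2/\epsilon^2), for constant depth t.
```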
1. What is the primary objective of the paper regarding sample complexity?
2. What is the novel aspect of the technique used in the paper?
3. How does the paper interpret the obtained results?
4. What is the potential impact of the paper's findings on practitioners using neural networks?
5. Who might be interested in the theoretical aspects presented in the paper?
Review
Review The major aim of this paper is to provide improved bounds on the sample complexity of neural networks. To this end, the authors resort to a technique that takes into consideration the magnitudes of the weights. Although similar techniques have been used previously in machine learning, the way the method is applied here appears to be new; the novelty stems from the use of the approximate description length in proving the theoretical results. The authors provide a fair interpretation of the obtained results. It is hard to anticipate the impact of these results on practitioners who use neural networks in their work, but the results presented in this manuscript are likely to be of interest to other researchers focused on theoretical aspects of neural networks. ******************************************* I have decided to keep the same score for this submission as in the first phase of the review process.
NIPS
Title Generalization Bounds for Neural Networks via Approximate Description Length Abstract We investigate the sample complexity of networks with bounds on the magnitude of its weights. In particular, we consider the class N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} where the spectral norm of each Wi is bounded by O(1), the Frobenius norm is bounded by R, and ρ is the sigmoid function e x 1+ex or the smoothened ReLU function ln (1 + e). We show that for any depth t, if the inputs are in [−1, 1], the sample complexity of N is Õ ( dR 2 ) . This bound is optimal up to log-factors, and substantially improves over the previous state of the art of Õ ( dR 2 ) , that was established in a recent line of work [9, 4, 7, 5, 2, 8]. We furthermore show that this bound remains valid if instead of considering the magnitude of the Wi’s, we consider the magnitude of Wi −W 0 i , where W 0 i are some reference matrices, with spectral norm of O(1). By taking the W 0 i to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of familiesH of predictors. We start by defining a new notion of a randomized approximate description of functions f : X → R. We then show that if there is a way to approximately describe functions in a classH using d bits, then d 2 examples suffices to guarantee uniform convergence. Namely, that the empirical loss of all the functions in the class is -close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions. N/A N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} where the spectral norm of each Wi is bounded by O(1), the Frobenius norm is bounded by R, and ρ is the sigmoid function e x 1+ex or the smoothened ReLU function ln (1 + ex). We show that for any depth t, if the inputs are in [−1, 1]d, the sample complexity of N is Õ ( dR2 2 ) . This bound is optimal up to log-factors, and substantially improves over the previous state of the art of Õ ( d2R2 2 ) , that was established in a recent line of work [9, 4, 7, 5, 2, 8]. We furthermore show that this bound remains valid if instead of considering the magnitude of the Wi’s, we consider the magnitude of Wi −W 0i , where W 0i are some reference matrices, with spectral norm of O(1). By taking the W 0i to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of familiesH of predictors. We start by defining a new notion of a randomized approximate description of functions f : X → Rd. We then show that if there is a way to approximately describe functions in a classH using d bits, then d 2 examples suffices to guarantee uniform convergence. Namely, that the empirical loss of all the functions in the class is -close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions. 1 Introduction We analyze the sample complexity of networks with bounds on the magnitude of their weights. 
Let us consider a prototypical case, where the input space is X = [−1, 1]d, the output space is R, the number of layers is t, all hidden layers has d neurons, and the activation function is ρ : R→ R. The class of functions computed by such an architecture is N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} As the class N is defined by (t − 1)d2 + d = O(d2) parameters, classical results (e.g. [1]) tell us that order of d2 examples are sufficient and necessary in order to learn a function from N (in a standard worst case analysis). However, modern networks often succeed to learn with substantially less examples. One way to provide alternative results, and a potential explanation to the phenomena, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. is to take into account the magnitude of the weights. This approach was a success story in the days of SVM [3] and Boosting [10], provided a nice explanation to generalization with sub-linear (in the number of parameters) number of examples, and was even the deriving force behind algorithmic progress. It seems just natural to adopt this approach in the context of modern networks. For instance, it is natural to consider the class NR = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ∀i, ‖Wi‖F ≤ R, ‖Wi‖ ≤ O(1)} where ‖W‖ = max‖x‖=1 ‖Wx‖ is the spectral norm and ‖W‖F = √∑d i,j=1W 2 ij is the Frobenius norm. This class has been analyzed in several recent works [9, 4, 7, 5, 2, 8]. Best known results show a sample complexity of Õ ( d2R2 2 ) (for the sake of simplicity, in the introduction, we ignore the dependence on the depth in the big-O notation). In this paper we prove, for various activations, a stronger bound of Õ ( dR2 2 ) , which is optimal, up to log factors, for constant depth networks. How good is this bound? Does it finally provide sub-linear bound in typical regimes of the parameters? To answer this question, we need to ask how large R is. While this question of course don’t have a definite answer, empirical studies (e.g. [12]) show that it is usually the case that the norm (spectral, Frobenius, and others) of the weight matrices is at the same order of magnitude as the norm of the matrix in the onset of the training process. In most standard training methods, the initial matrices are random matrices with independent (or almost independent) entries, with mean zero and variance of order 1d . The Frobenius norm of such a matrix is of order √ d. Hence, the magnitude of R is of order √ d. Going back to our Õ ( dR2 2 ) bound, we get a sample complexity of Õ ( d2 2 ) , which is unfortunately still linear in the number of parameters. Since our bound is almost optimal, we can ask whether this is the end of the story? Should we abandon the aforementioned approach to network sample complexity? A more refined examination of the training process suggests another hope for this approach. Indeed, the training process doesn’t start from the zero matrix, but rather form a random initialization matrix. Thus, it stands to reason that instead of considering the magnitude of the weight matrices Wi, we should consider the magnitude of Wi −W 0i , where W 0i is the initial weight matrix. Indeed, empirical studies [6] show that the Frobenius norm of Wi −W 0i is often order of magnitude smaller than the Frobenius norm of Wi. Following this perspective, it is natural to consider the class NR(W 01 , . . . ,W 0t ) = { Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖F ≤ R } For some fixed matrices, W 01 , . . 
. ,W 0 t of spectral norm 1 O(1). It is natural to expect that considering balls around the initial W 0i ’s instead of zero, shouldn’t change the sample complexity of the class at hand. In other words, we can expect that the sample complexity of NR(W 01 , . . . ,W 0t ) should be approximately the sample complexity of NR. Namely, we expect a sample complexity of Õ ( dR2 2 ) . Such a bound would finally be sub-linear, as in practice, it is often the case that R2 d. This approach was pioneered by [4] who considered the class N 2,1R (W 0 1 , . . . ,W 0 t ) = { Wt ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖2,1 ≤ R } where ‖W‖2,1 = ∑d i=1 √∑d j=1W 2 ij . For this class they proved a sample complexity bound of Õ ( dR2 2 ) . Since, ‖W‖2,1 ≤ √ d‖W‖F , this implies a sample complexity bound of Õ ( d2R2 2 ) on NR(W 01 , . . . ,W 0t ), which is still not sublinear2. In this paper we finally prove a sub-linear sample complexity bound of Õ ( dR2 2 ) on NR(W 01 , . . . ,W 0t ). To prove our results, we develop a new technique for bounding the sample complexity of function classes. Roughly speaking, we define a notion of approximate description of a function, and count 1The bound of O(1) on the spectral norm of the W 0i ’s and Wi −W 0i is again motivated by the practice of neural networks – the spectral norm of W 0i , with standard initializations, is O(1), and empirical studies [6, 12] show that the spectral norm of Wi −W 0i is usually very small. 2We note that ‖W‖2,1 = Θ( √ d) even if W is a random matrix with variance that is calibrated so that ‖W‖F = Θ(1) (namely, each entry has variance 1d2 ). how many bits are required in order to give an approximate description for the functions in the class under study. We then show that this number, called the approximate description length (ADL), gives an upper bound on the sample complexity. The advantage of our method over existing techniques is that it behaves nicely with compositions. That is, once we know the approximate description length of a class H of functions from X to Rd, we can also bound the ADL of ρ ◦ H, as well as L ◦ H, where L is a class of linear functions. This allows us to utilize the compositional structure of neural networks. 2 Preliminaries Notation We denote by med(x1, . . . , xk) the median of x1, . . . , xk ∈ R. For vectors x1, . . . ,xk ∈ Rd we denote med(x1, . . . ,xk) = ( med(x11, . . . , x k 1), . . . ,med(x 1 d, . . . , x k d) ) . We use log to denote log2, and ln to denote loge An expression of the form f(n) . g(n) means that there is a universal constant c > 0 for which f(n) ≤ cg(n). For a finite set A and f : A → R we let Ex∈A f = Ex∈A f(a) = 1|A| ∑ a∈A f(a). We denote BdM = {x ∈ Rd : ‖x‖ ≤ M} and Bd = Bd1. Likewise, we denote Sd−1 = {x ∈ Rd : ‖x‖ = 1} We denote the Frobenius norm of a matrix W by ‖W‖2F = ∑ ijW 2 ij , while the spectral norm is denoted by ‖W‖ = max‖x‖=1 ‖Wx‖. For a pair of vectors x,y ∈ Rd we denote by xy ∈ Rd their point-wise product xy = (x1y1, . . . , xdyd) Uniform Convergence and Covering Numbers Fix an instance space X , a label space Y and a loss ` : Rd × Y → [0,∞). We say that ` is Lipschitz / Bounded / etc. if for any y ∈ Y , `(·, y) is. Fix a class H from X to Rd. For a distribution D and a sample S ∈ (X × Y)m we define the representativeness of S as repD(S,H) = sup h∈H `D(h)−`S(h) for `D(h) = E (x,y)∼D `(h(x), y) and `S(h) = 1 m m∑ i=1 `(h(xi), yi) We note that if repD(S,H) ≤ then any algorithm that is guaranteed to return a function ĥ ∈ H will enjoy a generalization bound `D(h) ≤ `S(h) + . 
In particular, the ERM algorithm will return a function whose loss is optimal, up to an additive factor of . We will focus on bounds on repD(S,H) when S ∼ Dm. To this end, we will rely on the connection between representativeness and the covering numbers ofH. Definition 2.1. Fix a classH of functions from X to Rd, an integer m, > 0 and 1 ≤ p ≤ ∞. We define Np(H,m, ) as the minimal integer for which the following holds. For every A ⊂ X of size ≤ m there exists H̃ ⊂ ( Rd )X such that ∣∣∣H̃∣∣∣ ≤ Np(H,m, ) and for any h ∈ H there is h̃ ∈ H̃ with( Ex∈A ∥∥∥h(x)− h̃(x)∥∥∥p ∞ ) 1 p ≤ . For p = 2, we denote N(H,m, ) = N2(H,m, ) We conclude with a lemma, which will be useful in this paper. The proof can be found in the supplementary material. Lemma 2.2. Let ` : Rd × Y → R be L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded. Assume that for any 0 < ≤ 1, log (N(H,m, )) ≤ n 2 . Then ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m A Basic Inequality Lemma 2.3. Let X1, . . . , Xn be independent r.v. with that that are σ-estimators to µ. Then Pr (|med(X1, . . . , Xn)− µ| > kσ) < ( 2 k )n 3 Simplified Approximate Description Length To give a soft introduction to our techniques, we first consider a simplified version of it. We next define the approximate description length of a classH of functions from X to Rd, which quantifies the number of bits it takes to approximately describe a function fromH. We will use the following notion of approximation Definition 3.1. A random vector X ∈ Rd is a σ-estimator to x ∈ Rd if EX = x and ∀u ∈ Sd−1, VAR(〈u, X〉) = E 〈u, X − x〉2 ≤ σ2 A random function f̂ : X → Rd is a σ-estimator to f : X → Rd if for any x ∈ X , f̂(x) is a σ-estimator to f(x). A (σ, n)-compressor C for a classH takes as input a function h ∈ H, and outputs a (random) function Ch such that (i) Ch is a σ-estimator of h and (ii) it takes n bits to describe Ch. Formally, Definition 3.2. A (σ, n)-compressor forH is a triplet (C,Ω, µ) where µ is a probability measure on Ω, and C is a function C : Ω×H → ( Rd )X such that 1. For any h ∈ H and x ∈ X , (Cωh)(x), ω ∼ µ is a σ-estimator of h(x). 2. There are functions E : Ω×H → {±1}n and D : {±1}n → ( Rd )X for which C = D ◦E Definition 3.3. We say that a classH of functions from X to Rd has approximate description length n if there exists a (1, n)-compressor forH It is not hard to see that if (C,Ω, µ) is a (σ, n)-compressor forH, then (Cω1,...,ωkh)(x) := ∑k i=1(Cωih)(x) k is a ( σ√ k , kn ) -compressor forH. Hence, if the approximate description length ofH is n, then for any 1 ≥ > 0 there exists an ( , nd −2e ) -compressor forH. We next connect the approximate description length, to covering numbers and representativeness. We separate it into two lemmas, one for d = 1 and one for general d, as for d = 1 we can prove a slightly stronger bound. Lemma 3.4. Fix a classH of functions from X to R with approximate description length n. Then, log (N(H,m, )) ≤ n ⌈ −2 ⌉ . Hence, if ` : Rd ×Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m Lemma 3.5. Fix a classH of functions from X to Rd with approximate description length n. Then, log (N∞(H,m, )) ≤ log (N(H,m, )) ≤ n ⌈ 16 −2 ⌉ dlog(dm)e Hence, if ` : Rd × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n log(dm)√ m log(m). 
Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n log(dm)√ m log(m) +B √ 2 ln(2/δ) m 3.1 Linear Functions We next bound the approximate description length of linear functions with bounded Frobenius norm. Theorem 3.6. Let class Ld1,d2,M = { x ∈ Bd1 7→Wx : W is d2 × d1 matrix with ‖W‖F ≤M } has approximate description length n ≤ ⌈ 1 4 + 2M2 ⌉ 2 dlog (2d1d2(M + 1))e Hence, if ` : Rd2 × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) +B √ 2 ln (2/δ) m We remark that the above bounds on the representativeness coincides with standard bounds ([11] for instance), up to log factors. The advantage of these bound is that they remain valid for any output dimension d2. In order to prove theorem 3.6 we will use a randomized sketch of a matrix. Definition 3.7. Let w ∈ Rd be a vector. A random sketch of w is a random vector ŵ that is samples as follows. Choose i w.p. pi = w2i 2‖w‖2 + 1 2d . Then, w.p. wi pi − ⌊ wi pi ⌋ let b = 1 and otherwise b = 0. Finally, let ŵ = (⌊ wi pi ⌋ + b ) ei. A random k-sketch of w is an average of k-independent random sketches of w. A random sketch and a random k-sketch of a matrix is defined similarly, with the standard matrix basis instead of the standard vector basis. The following useful lemma shows that an sketch w is a √ 1 4 + 2‖w‖2-estimator of w. Lemma 3.8. Let ŵ be a random sketch of w ∈ Rd. Then, (1) E ŵ = w and (2) for any u ∈ Sd−1, E (〈u, ŵ〉 − 〈u,w〉)2 ≤ E 〈u, ŵ〉2 ≤ 14 + 2‖w‖ 2 Proof. (of theorem 3.6) We construct a compressor for Ld1,d2,M as follows. Given W , we will sample a k-sketch Ŵ of W for k = ⌈ 1 4 + 2M 2 ⌉ , and will return the function x 7→ Ŵx. We claim that that W 7→ Ŵ is a (1, 2k dlog(2d1d2(M + 1))e)-compressor for Ld1,d2,M . Indeed, to specify a sketch of W we need dlog(d1d2)e bits to describe the chosen index, as well as log (2d1d2M + 2) bits to describe the value in that index. Hence, 2k dlog(2d1d2(M + 1))e bits suffices to specify a k-sketch. It remains to show that for x ∈ Bd1 , Ŵx is a 1-estimator of Wx. Indeed, by lemma 3.8, E Ŵ = W and therefore E Ŵx = Wx. Likewise, for u ∈ Sd2−1. We have E (〈 u, Ŵx 〉 − 〈u,Wx〉 )2 = E (〈 Ŵ ,xuT 〉 − 〈 W,xuT 〉)2 ≤ 1 4 + 2M 2 k ≤ 1 3.2 Simplified Depth 2 Networks To demonstrate our techniques, we consider the following class of functions. We let the domain X to be Bd. We fix an activation function ρ : R→ R that is assumed to be a polynomial ρ(x) = ∑k i=0 aix i with ∑n n=1 |an| = 1. For any W ∈ Md,d we define hW (x) = 1√ d ∑d i=1 ρ(〈wi,x〉) Finally, we let H = { hW : ∀i, ‖wi‖ ≤ 12 } In order to build compressors for classes of networks, we will utilize to compositional structure of the classes. Specifically, we have that H = Λ ◦ ρ ◦ F where F = {x 7→Wx : W is d× d matrix with ‖wi‖ ≤ 1 for all i} and Λ(x) = 1√d ∑d i=1 xi. As F is a subset of Ld,d,√d, we know that there exists a (1, O (d log(d)))-compressor for it. We will use this compressor to build a compressor to ρ ◦ F , and then to Λ ◦ ρ ◦ F . We will start with the latter, linear case, which is simpler Lemma 3.9. Let X be a σ-estimator to x ∈ Rd1 . Let A ∈ Md2,d1 be a matrix of spectral norm ≤ r. Then, AX is a (rσ)-estimator to Ax. In particular, if C is a (1, n)-compressor to a classH of functions from X to Rd. Then C′ω(Λ ◦ h) = Λ ◦ Cωh is a (1, n)-compressor to Λ ◦ H We next consider the composition of F with the non-linear ρ. 
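Before moving on to the non-linear composition, it is instructive to see the description length of Theorem 3.6 as concrete numbers. The helper below (illustrative only) evaluates the bound n ≤ ⌈1/4 + 2M²⌉ · 2⌈log(2 d₁ d₂ (M + 1))⌉ from the theorem; the point is that the bit count scales with M² log(d₁ d₂) rather than with the d₁ d₂ entries of the matrix.

```python
import math

def sketch_description_bits(d1, d2, M):
    # Theorem 3.6: k = ceil(1/4 + 2 M^2) sketches, each encoded by an index into the
    # d1 x d2 entries plus the (randomly rounded) value stored at that index.
    k = math.ceil(0.25 + 2 * M ** 2)
    return k * 2 * math.ceil(math.log2(2 * d1 * d2 * (M + 1)))

for d, M in [(10 ** 3, 1), (10 ** 3, 10), (10 ** 6, 10)]:
    print(f"d1 = d2 = {d:>7}, M = {M:>2}: {sketch_description_bits(d, d, M):>6} bits "
          f"(vs {d * d} parameters)")
```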
As opposed to composition with a linear function, we cannot just generate a compression version using F’s compressor and then compose with ρ. Indeed, if X is a σ-estimator to x, it is not true in general that ρ(X) is an estimator of ρ(x). For instance, consider the case that ρ(x) = x2, and X = (X1, . . . , Xd) is a vector of independent standard Gaussians. X is a 1-estimator of 0 ∈ Rd. On the other hand, ρ(X) = (X21 , . . . , X2n) is not an estimator of 0 = ρ(0). We will therefore take a different approach. Given f ∈ F , we will sample k independent estimators {Cωif}ki=1 from F’s compressor, and define the compressed version of σ ◦ h as C′ω1,...,ωkf = ∑d i=0 ai ∏i j=0 Cωif . This construction is analyzed in the following lemma Lemma 3.10. If C is a ( 1 2 , n ) -compressor of a classH of functions from X to [ − 12 , 1 2 ]d . Then C′ is a (1, n)-compressor of ρ ◦ H Combining theorem 3.6 and lemmas 3.9, 3.10 we have: Theorem 3.11. H has approximation length . d log(d). Hence, if ` : R × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . (L+B) √ d log(d)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ d log(d)√ m log(m) +B √ 2 ln (2/δ) m Lemma 3.10 is implied by the following useful lemma: Lemma 3.12. 1. If X is a σ-estimator of x then aX is a (|a|σ)-estimator of aX 2. Suppose that for n = 1, 2, 3, . . . Xn is a σn-estimator of xn ∈ Rd. Assume furthermore that ∑∞ n=1 xn and ∑∞ n=1 σn converge to x ∈ Rd and σ ∈ [0,∞). Then, ∑∞ n=1Xn is a σ-estimator of x 3. Suppose that {Xi}ki=1 are independent σi-estimators of xi ∈ Rd. Then ∏k i=1Xi is a σ′-estimator of ∏k i=1 xi for σ ′2 = ∏k i=1 ( σ2i + ‖xi‖ 2 ∞ ) − ∏k i=1 ‖xi‖ 2 ∞ We note that the bounds in the above lemma are all tight. 4 Approximation Description Length In this section we refine the definition of approximate description length that were given in section 3. We start with the encoding of the compressed version of the functions. Instead of standard strings, we will use what we call bracketed string. The reason for that often, in order to create a compressed version of a function, we concatenate compressed versions of other functions. This results with strings with a nested structure. For instance, consider the case that a function h is encoded by the concatenation of h1 and h2. Furthermore, assume that h1 is encoded by the string 01, while h2 is encoded by the concatenation of h3, h4 and h5 that are in turn encoded by the strings 101, 0101 and 1110. The encoding of h will then be [[01][[101][0101][1110]]]. We note that in section 3 we could avoid this issue since the length of the strings and the recursive structure were fixed, and did not depend on the function we try to compress. Formally, we define Definition 4.1. A bracketed string is a rooted tree S, such that (i) the children of each edge are ordered, (ii) there are no nodes with a singe child, and (iii) the leaves are labeled by {0, 1}. The length, len(S) of S is the number of its leaves. Let S be a bracketed string. There is a linear order on its leaves that is defined as follows. Fix a pair of leaves, v1 and v2, and let u be their LCA. Let u1 (resp. u2) be the child of u that lie on the path to v1 (resp. v2). We define v1 < v2 if u1 < u2 and v1 > v2 otherwise (note that necessarily u1 6= u2). Let v1, . . . , vn be the leaves of T , ordered according to the above order, and let b1, . . . , bn be the corresponding bits. The string associated with T is s = b1 . . . bn. 
We denote by Sn the collection of bracketed strings of length ≤ n, and by S = ∪∞n=1Sn the collection of all bracketed strings. The following lemma shows that in log-scale, the number of bracketed strings of length ≤ n differ from standard strings of length ≤ n by only a constant factor Lemma 4.2. |Sn| ≤ 32n We next revisit the definition of a compressor for a classH. The definition of compressor will now have a third parameter, ns, in addition to σ and n. We will make three changes in the definition. The first, which is only for the sake of convenience, is that we will use bracketed strings rather than standard strings. The second change, is that the length of the encoding string will be bounded only in expectation. The final change is that the compressor can now output a seed. That is, given a function h ∈ H that we want to compress, the compressor can generate both a non-random seed Es(h) ∈ Sns and a random encoding E(ω, h) ∈ S with Eω∼µ len(E(ω, h)) ≤ n. Together, Es(h) and E(ω, h) encode a σ-estimator. Namely, there is a function D : Sns × S → ( Rd )X such that D(Es(h), E(ω, h)), ω ∼ µ is a σ-estimator of h. The advantage of using seeds is that it will allow us to generate many independent estimators, at a lower cost. In the case that n ns, the cost of generating k independent estimators of h ∈ H is ns + kn bits (in expectation) instead of k(ns + n) bits. Indeed, we can encode k estimators by a single seed Es(h) and k independent “regular" encodings E(ωk, h), . . . , E(ωk, h). The formal definition is given next. Definition 4.3. A (σ, ns, n)-compressor for H is a 5-tuple C = (Es, E,D,Ω, µ) where µ is a probability measure on Ω, and Es, E,D are functions Es : H → T ns , E : Ω × H → T , and D : T ns × T → ( Rd )X such that for any h ∈ H and x ∈ X (1) D(Es(h), E(ω, h)), ω ∼ µ is a σ-estimator of h and (2) Eω∼µ len(E(ω, h)) ≤ n We finally revisit the definition of approximate description length. We will add an additional parameter, to accommodate the use of seeds. Likewise, the approximate description length will now be a function of m – we will say thatH has approximate description length (ns(m), n(m)) if there is a (1, ns(m), n(m))-compressor for the restriction ofH to any set A ⊂ X of size at most m. Formally: Definition 4.4. We say that a classH of functions from X to Rd has approximate description length (ns(m), n(m)) if for any set A ⊂ X of size ≤ m there exists a (1, ns(m), n(m))-compressor for H|A It is not hard to see that if H has approximate description length (ns(m), n(m)), then for any 1 ≥ > 0 and a setA ⊂ X of size≤ m, there exists an ( , ns(m), n(m)d −2e ) -compressor forH|A. We next connect the approximate description length, to covering numbers and representativeness. The proofs are similar the the proofs of lemmas 3.4 and 3.5. Lemma 4.5. Fix a class H of functions from X to R with approximate description length (ns(m), n(m)). Then, log (N(H,m, )) . ns(m) + n(m) 2 Hence, if ` : R d × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . (L+B) √ ns(m) + n(m)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ ns(m) + n(m)√ m log(m) +B √ 2 ln (2/δ) m Lemma 4.6. Fix a class H of functions from X to Rd with approximate description length (ns(m), n(m)). Then, log (N(H,m, )) ≤ log (N∞(H,m, )) . ns(m) + n(m) log(dm) 2 . Hence, if ` : Rd × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,H) . 
(L+B) √ ns(m) + n(m) log(dm)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ ns(m) + n(m) log(dm)√ m log(m) +B √ 2 ln (2/δ) m We next analyze the behavior of the approximate description length under various operations Lemma 4.7. LetH1,H2 be classes of functions from X to Rd with approximate description length of (n1s(m), n 1(m)) and (n2s(m), n 2(m)). Then H1 + H2 has approximate description length of (n1s(m) + n 2 s(m), 2n 1(m) + 2n2(m)) Lemma 4.8. Let H be a class of functions from X to Rd with approximate description length of (ns(m), n(m)). Let A be d2 × d1 matrix. Then A ◦ H1 has approximate description length( ns(m), ⌈ ‖A‖2 ⌉ n(m) ) Definition 4.11. A function f : R → R is B-strongly-bounded if for all n ≥ 1, ‖f (n)‖∞ ≤ n!Bn. Likewise, f is strongly-bounded if it is B-strongly-bounded for some B We note that Lemma 4.12. If f is B-strongly-bounded then f is analytic and its Taylor coefficients around any point are bounded by Bn The following lemma gives an example to a strongly bounded sigmoid function, as well as a strongly bounded smoothened version of the ReLU (see figure 1). Lemma 4.13. The functions ln (1 + ex) and e x 1+ex are strongly-bounded Lemma 4.14. Let H be a class of functions from X to Rd with approximate description length of (ns(m), n(m)). Let ρ : R→ R be B-strongly-bounded. Then, ρ ◦ H has approximate description length of ( ns(m) +O ( n(m)B2 log(md) ) , O ( n(m)B2 log(d) )) 5 Sample Complexity of Neural Networks Fix the instance space X to be the ball of radius √ d in Rd (in particular [−1, 1]d ⊂ X ) and a Bstrongly-bounded activation ρ. Fix matrices W 0i ∈Mdi,di−1 , i = 1, . . . , t. Consider the following class of depth-t networks Nr,R(W 01 , . . . ,W 0t ) = { Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ r, ‖Wi −W 0i ‖F ≤ R } We note that Nr,R(W 01 , . . . ,W 0t ) = Nr,R(W 0t ) ◦ . . . ◦ Nr,R(W 01 ) The following lemma analyzes the cost, in terms of approximate description length, when moving from a classH to Nr,R(W 0) ◦ H. Lemma 5.1. Let H be a class of functions from X to Rd1 with approximate description length (ns(m), n(m)) and ‖h(x)‖ ≤M for any x ∈ X and h ∈ H. Fix W 0 ∈Md2,d1 . Then, Nr,R(W 0t ) ◦ H has approximate description length of( ns(m) + n ′(m)B2 log(md2), n ′(m)B2 log(d2) ) for n′(m) = n(m)O(r2 + ‖W 0‖2 + 1) +O ( (d1 +M 2)(R2 + 1) log(Rd1d2 + 1) ) The lemma is follows by combining lemmas 4.7, 4.8, 4.10 and 4.14. We note that in the case that d1, d2 ≤ d, M = O( √ d1), B, r, ‖W 0‖ = O(1) (and hence R = O (√ d ) ) and R ≥ 1 we get that Nr,R(W 0) ◦ H has approximate description length of( ns(m) +O (n(m) log(md)) , O (n(m) log(d)) +O ( d1R 2 log2(d) )) By induction, the approximate description length of Nr,R(W 01 , . . . ,W 0t ) is( dR2O (log(d)) t log(md), dR2O (log(d)) t+1 )
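One ingredient above that is easy to probe numerically is strong-boundedness (Definition 4.11), which is what lets Lemma 4.14 push the approximate description length through the activation. The snippet below is only a sanity check: it evaluates the first few derivatives of the two activations from the abstract on a finite grid and compares them to n!, i.e. it guesses B = 1, whereas Lemma 4.13 merely asserts that some finite B works.

```python
import numpy as np
import sympy as sp

x = sp.symbols("x")
grid = np.linspace(-20.0, 20.0, 4001)

for name, f in [("sigmoid", sp.exp(x) / (1 + sp.exp(x))),
                ("smoothened ReLU", sp.log(1 + sp.exp(x)))]:
    g = f
    for n in range(1, 6):
        g = sp.diff(g, x)                                    # n-th derivative, symbolically
        sup = np.max(np.abs(sp.lambdify(x, g, "numpy")(grid)))
        print(f"{name:>15}  n={n}  sup|f^(n)| = {sup:8.4f}   n! = {sp.factorial(n)}")
```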
1. What is the focus of the paper regarding generalization error bounds for norm-bounded neural networks?
2. What are the strengths of the proposed approach compared to prior works, particularly in improving the bound on the Frobenius norm?
3. Do you have any concerns or suggestions regarding the presentation of the technique and its relation to previous works?
4. How does the reviewer assess the novelty and significance of the improvement in the generalization bound?
5. Are there any other relevant works that could be discussed or compared in the paper?
Review
Review In this paper the authors establish upper bounds on the generalization error of classes of norm-bounded neural networks. There is a long line of literature on this exact question, and this paper claims to resolve an interesting open question in this area (at least when the depth of the network is viewed as a constant). In particular, the paper considers generalization bounds for a class of fully-connected networks of constant depth whose matrices are of bounded norm. Work by Bartlett et al. ("Spectrally normalized margin bounds on neural networks", ref [4] in the paper) proved an upper bound on generalization error that contains a factor growing as the (1,2)-matrix norm of any layer. If one further assumes that the depth as well as all the spectral norms are constants, then this is the dominant term (up to logarithmic factors) in their generalization bound. In this regime, the main result of this paper improves this (1,2)-matrix norm to the Frobenius norm (i.e., the (2,2)-matrix norm). The authors achieve this by defining a notion of approximate description length of a class of functions, which, roughly speaking, describes how many bits one needs to compress a function in this class into in order to be able to approximately recover the network from those bits. Overall the paper is rather clearly written. One concern I have is that there is not much discussion (e.g., in the introduction) of what in their technique allows them to improve upon the previous work of Bartlett et al., or in their language, to remove the additional factor of d (depth) from the sample complexity bounds. This would give more confidence in their result. In particular, something describing the following might be useful: their proof proceeds in a similar pattern to previous covering-number-based upper bounds on the generalization error of neural networks, except that it replaces the use of covering numbers with approximate description length. The key step in their proof that allows them to improve the bound of Bartlett et al. seems to be Lemma 3.6, which allows one to replace the (2,1)-norm in covering number bounds for linear maps with the Frobenius norm (using approximate description length instead of covering numbers). At least part of the reason for that improvement seems to be that they measure the covering number in the target with respect to the L_infty rather than the L_2 norm (Defn 2.1). If this is the case, I wonder whether the proof of the main result can be made to use only the language of covering numbers?
Finally, there is some work appearing after the Bartlett et al. paper [4] establishing generalization bounds for networks with various bounds on norms and parameters (e.g., [LLW+18] below). Some discussion of these papers might be useful.
[LLW+18] Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, and Tuo Zhao. On tighter generalization bound for deep neural networks: CNNs, Resnets, and beyond. arXiv:1806.05159, 2018.
I did not verify correctness of the proofs in the supplementary material. Overall, this feels like a somewhat significant contribution that introduces some new ideas to resolve an open question.
Miscellaneous comments:
- In the equation of Theorem 3.5, I think the middle term should be dropped (or flipped with the first term). (L_infty covering numbers are always at least the L_2 covering numbers.)
- Line 149: samples -> sampled
- Line 159: that that -> that
- Line 170, 2nd word: to -> the
- Line 186: I think the sigma at the beginning of the line should be rho?
Update: I was already quite positive about the paper, and nothing in the authors' response changed my opinion.
NIPS
Title Generalization Bounds for Neural Networks via Approximate Description Length Abstract We investigate the sample complexity of networks with bounds on the magnitude of its weights. In particular, we consider the class N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} where the spectral norm of each Wi is bounded by O(1), the Frobenius norm is bounded by R, and ρ is the sigmoid function e x 1+ex or the smoothened ReLU function ln (1 + e). We show that for any depth t, if the inputs are in [−1, 1], the sample complexity of N is Õ ( dR 2 ) . This bound is optimal up to log-factors, and substantially improves over the previous state of the art of Õ ( dR 2 ) , that was established in a recent line of work [9, 4, 7, 5, 2, 8]. We furthermore show that this bound remains valid if instead of considering the magnitude of the Wi’s, we consider the magnitude of Wi −W 0 i , where W 0 i are some reference matrices, with spectral norm of O(1). By taking the W 0 i to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of familiesH of predictors. We start by defining a new notion of a randomized approximate description of functions f : X → R. We then show that if there is a way to approximately describe functions in a classH using d bits, then d 2 examples suffices to guarantee uniform convergence. Namely, that the empirical loss of all the functions in the class is -close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions. N/A N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} where the spectral norm of each Wi is bounded by O(1), the Frobenius norm is bounded by R, and ρ is the sigmoid function e x 1+ex or the smoothened ReLU function ln (1 + ex). We show that for any depth t, if the inputs are in [−1, 1]d, the sample complexity of N is Õ ( dR2 2 ) . This bound is optimal up to log-factors, and substantially improves over the previous state of the art of Õ ( d2R2 2 ) , that was established in a recent line of work [9, 4, 7, 5, 2, 8]. We furthermore show that this bound remains valid if instead of considering the magnitude of the Wi’s, we consider the magnitude of Wi −W 0i , where W 0i are some reference matrices, with spectral norm of O(1). By taking the W 0i to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters. To establish our results we develop a new technique to analyze the sample complexity of familiesH of predictors. We start by defining a new notion of a randomized approximate description of functions f : X → Rd. We then show that if there is a way to approximately describe functions in a classH using d bits, then d 2 examples suffices to guarantee uniform convergence. Namely, that the empirical loss of all the functions in the class is -close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions. 1 Introduction We analyze the sample complexity of networks with bounds on the magnitude of their weights. 
Let us consider a prototypical case, where the input space is X = [−1, 1]d, the output space is R, the number of layers is t, all hidden layers has d neurons, and the activation function is ρ : R→ R. The class of functions computed by such an architecture is N = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : W1, . . . ,Wt−1 ∈Md×d,Wt ∈M1,d} As the class N is defined by (t − 1)d2 + d = O(d2) parameters, classical results (e.g. [1]) tell us that order of d2 examples are sufficient and necessary in order to learn a function from N (in a standard worst case analysis). However, modern networks often succeed to learn with substantially less examples. One way to provide alternative results, and a potential explanation to the phenomena, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. is to take into account the magnitude of the weights. This approach was a success story in the days of SVM [3] and Boosting [10], provided a nice explanation to generalization with sub-linear (in the number of parameters) number of examples, and was even the deriving force behind algorithmic progress. It seems just natural to adopt this approach in the context of modern networks. For instance, it is natural to consider the class NR = {Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ∀i, ‖Wi‖F ≤ R, ‖Wi‖ ≤ O(1)} where ‖W‖ = max‖x‖=1 ‖Wx‖ is the spectral norm and ‖W‖F = √∑d i,j=1W 2 ij is the Frobenius norm. This class has been analyzed in several recent works [9, 4, 7, 5, 2, 8]. Best known results show a sample complexity of Õ ( d2R2 2 ) (for the sake of simplicity, in the introduction, we ignore the dependence on the depth in the big-O notation). In this paper we prove, for various activations, a stronger bound of Õ ( dR2 2 ) , which is optimal, up to log factors, for constant depth networks. How good is this bound? Does it finally provide sub-linear bound in typical regimes of the parameters? To answer this question, we need to ask how large R is. While this question of course don’t have a definite answer, empirical studies (e.g. [12]) show that it is usually the case that the norm (spectral, Frobenius, and others) of the weight matrices is at the same order of magnitude as the norm of the matrix in the onset of the training process. In most standard training methods, the initial matrices are random matrices with independent (or almost independent) entries, with mean zero and variance of order 1d . The Frobenius norm of such a matrix is of order √ d. Hence, the magnitude of R is of order √ d. Going back to our Õ ( dR2 2 ) bound, we get a sample complexity of Õ ( d2 2 ) , which is unfortunately still linear in the number of parameters. Since our bound is almost optimal, we can ask whether this is the end of the story? Should we abandon the aforementioned approach to network sample complexity? A more refined examination of the training process suggests another hope for this approach. Indeed, the training process doesn’t start from the zero matrix, but rather form a random initialization matrix. Thus, it stands to reason that instead of considering the magnitude of the weight matrices Wi, we should consider the magnitude of Wi −W 0i , where W 0i is the initial weight matrix. Indeed, empirical studies [6] show that the Frobenius norm of Wi −W 0i is often order of magnitude smaller than the Frobenius norm of Wi. Following this perspective, it is natural to consider the class NR(W 01 , . . . ,W 0t ) = { Wt ◦ ρ ◦Wt−1 ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖F ≤ R } For some fixed matrices, W 01 , . . 
. ,W 0 t of spectral norm 1 O(1). It is natural to expect that considering balls around the initial W 0i ’s instead of zero, shouldn’t change the sample complexity of the class at hand. In other words, we can expect that the sample complexity of NR(W 01 , . . . ,W 0t ) should be approximately the sample complexity of NR. Namely, we expect a sample complexity of Õ ( dR2 2 ) . Such a bound would finally be sub-linear, as in practice, it is often the case that R2 d. This approach was pioneered by [4] who considered the class N 2,1R (W 0 1 , . . . ,W 0 t ) = { Wt ◦ ρ . . . ◦ ρ ◦W1 : ‖Wi −W 0i ‖ ≤ O(1), ‖Wi −W 0i ‖2,1 ≤ R } where ‖W‖2,1 = ∑d i=1 √∑d j=1W 2 ij . For this class they proved a sample complexity bound of Õ ( dR2 2 ) . Since, ‖W‖2,1 ≤ √ d‖W‖F , this implies a sample complexity bound of Õ ( d2R2 2 ) on NR(W 01 , . . . ,W 0t ), which is still not sublinear2. In this paper we finally prove a sub-linear sample complexity bound of Õ ( dR2 2 ) on NR(W 01 , . . . ,W 0t ). To prove our results, we develop a new technique for bounding the sample complexity of function classes. Roughly speaking, we define a notion of approximate description of a function, and count 1The bound of O(1) on the spectral norm of the W 0i ’s and Wi −W 0i is again motivated by the practice of neural networks – the spectral norm of W 0i , with standard initializations, is O(1), and empirical studies [6, 12] show that the spectral norm of Wi −W 0i is usually very small. 2We note that ‖W‖2,1 = Θ( √ d) even if W is a random matrix with variance that is calibrated so that ‖W‖F = Θ(1) (namely, each entry has variance 1d2 ). how many bits are required in order to give an approximate description for the functions in the class under study. We then show that this number, called the approximate description length (ADL), gives an upper bound on the sample complexity. The advantage of our method over existing techniques is that it behaves nicely with compositions. That is, once we know the approximate description length of a class H of functions from X to Rd, we can also bound the ADL of ρ ◦ H, as well as L ◦ H, where L is a class of linear functions. This allows us to utilize the compositional structure of neural networks. 2 Preliminaries Notation We denote by med(x1, . . . , xk) the median of x1, . . . , xk ∈ R. For vectors x1, . . . ,xk ∈ Rd we denote med(x1, . . . ,xk) = ( med(x11, . . . , x k 1), . . . ,med(x 1 d, . . . , x k d) ) . We use log to denote log2, and ln to denote loge An expression of the form f(n) . g(n) means that there is a universal constant c > 0 for which f(n) ≤ cg(n). For a finite set A and f : A → R we let Ex∈A f = Ex∈A f(a) = 1|A| ∑ a∈A f(a). We denote BdM = {x ∈ Rd : ‖x‖ ≤ M} and Bd = Bd1. Likewise, we denote Sd−1 = {x ∈ Rd : ‖x‖ = 1} We denote the Frobenius norm of a matrix W by ‖W‖2F = ∑ ijW 2 ij , while the spectral norm is denoted by ‖W‖ = max‖x‖=1 ‖Wx‖. For a pair of vectors x,y ∈ Rd we denote by xy ∈ Rd their point-wise product xy = (x1y1, . . . , xdyd) Uniform Convergence and Covering Numbers Fix an instance space X , a label space Y and a loss ` : Rd × Y → [0,∞). We say that ` is Lipschitz / Bounded / etc. if for any y ∈ Y , `(·, y) is. Fix a class H from X to Rd. For a distribution D and a sample S ∈ (X × Y)m we define the representativeness of S as repD(S,H) = sup h∈H `D(h)−`S(h) for `D(h) = E (x,y)∼D `(h(x), y) and `S(h) = 1 m m∑ i=1 `(h(xi), yi) We note that if repD(S,H) ≤ then any algorithm that is guaranteed to return a function ĥ ∈ H will enjoy a generalization bound `D(h) ≤ `S(h) + . 
In particular, the ERM algorithm will return a function whose loss is optimal, up to an additive factor of . We will focus on bounds on repD(S,H) when S ∼ Dm. To this end, we will rely on the connection between representativeness and the covering numbers ofH. Definition 2.1. Fix a classH of functions from X to Rd, an integer m, > 0 and 1 ≤ p ≤ ∞. We define Np(H,m, ) as the minimal integer for which the following holds. For every A ⊂ X of size ≤ m there exists H̃ ⊂ ( Rd )X such that ∣∣∣H̃∣∣∣ ≤ Np(H,m, ) and for any h ∈ H there is h̃ ∈ H̃ with( Ex∈A ∥∥∥h(x)− h̃(x)∥∥∥p ∞ ) 1 p ≤ . For p = 2, we denote N(H,m, ) = N2(H,m, ) We conclude with a lemma, which will be useful in this paper. The proof can be found in the supplementary material. Lemma 2.2. Let ` : Rd × Y → R be L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded. Assume that for any 0 < ≤ 1, log (N(H,m, )) ≤ n 2 . Then ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m A Basic Inequality Lemma 2.3. Let X1, . . . , Xn be independent r.v. with that that are σ-estimators to µ. Then Pr (|med(X1, . . . , Xn)− µ| > kσ) < ( 2 k )n 3 Simplified Approximate Description Length To give a soft introduction to our techniques, we first consider a simplified version of it. We next define the approximate description length of a classH of functions from X to Rd, which quantifies the number of bits it takes to approximately describe a function fromH. We will use the following notion of approximation Definition 3.1. A random vector X ∈ Rd is a σ-estimator to x ∈ Rd if EX = x and ∀u ∈ Sd−1, VAR(〈u, X〉) = E 〈u, X − x〉2 ≤ σ2 A random function f̂ : X → Rd is a σ-estimator to f : X → Rd if for any x ∈ X , f̂(x) is a σ-estimator to f(x). A (σ, n)-compressor C for a classH takes as input a function h ∈ H, and outputs a (random) function Ch such that (i) Ch is a σ-estimator of h and (ii) it takes n bits to describe Ch. Formally, Definition 3.2. A (σ, n)-compressor forH is a triplet (C,Ω, µ) where µ is a probability measure on Ω, and C is a function C : Ω×H → ( Rd )X such that 1. For any h ∈ H and x ∈ X , (Cωh)(x), ω ∼ µ is a σ-estimator of h(x). 2. There are functions E : Ω×H → {±1}n and D : {±1}n → ( Rd )X for which C = D ◦E Definition 3.3. We say that a classH of functions from X to Rd has approximate description length n if there exists a (1, n)-compressor forH It is not hard to see that if (C,Ω, µ) is a (σ, n)-compressor forH, then (Cω1,...,ωkh)(x) := ∑k i=1(Cωih)(x) k is a ( σ√ k , kn ) -compressor forH. Hence, if the approximate description length ofH is n, then for any 1 ≥ > 0 there exists an ( , nd −2e ) -compressor forH. We next connect the approximate description length, to covering numbers and representativeness. We separate it into two lemmas, one for d = 1 and one for general d, as for d = 1 we can prove a slightly stronger bound. Lemma 3.4. Fix a classH of functions from X to R with approximate description length n. Then, log (N(H,m, )) ≤ n ⌈ −2 ⌉ . Hence, if ` : Rd ×Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n√ m log(m). Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n√ m log(m) +B √ 2 ln(2/δ) m Lemma 3.5. Fix a classH of functions from X to Rd with approximate description length n. Then, log (N∞(H,m, )) ≤ log (N(H,m, )) ≤ n ⌈ 16 −2 ⌉ dlog(dm)e Hence, if ` : Rd × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y , ES∼Dm repD(S,H) . (L+B) √ n log(dm)√ m log(m). 
Furthermore, with probability at least 1− δ, repD(S,H) . (L+B) √ n log(dm)√ m log(m) +B √ 2 ln(2/δ) m 3.1 Linear Functions We next bound the approximate description length of linear functions with bounded Frobenius norm. Theorem 3.6. Let class Ld1,d2,M = { x ∈ Bd1 7→Wx : W is d2 × d1 matrix with ‖W‖F ≤M } has approximate description length n ≤ ⌈ 1 4 + 2M2 ⌉ 2 dlog (2d1d2(M + 1))e Hence, if ` : Rd2 × Y → R is L-Lipschitz w.r.t. ‖ · ‖∞ and B-bounded, then for any distribution D on X × Y E S∼Dm repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) Furthermore, with probability at least 1− δ, repD(S,Ld1,d2,M ) . (L+B) √ M2 log(d1d2M) log(d2m)√ m log(m) +B √ 2 ln (2/δ) m We remark that the above bounds on the representativeness coincides with standard bounds ([11] for instance), up to log factors. The advantage of these bound is that they remain valid for any output dimension d2. In order to prove theorem 3.6 we will use a randomized sketch of a matrix. Definition 3.7. Let w ∈ Rd be a vector. A random sketch of w is a random vector ŵ that is samples as follows. Choose i w.p. pi = w2i 2‖w‖2 + 1 2d . Then, w.p. wi pi − ⌊ wi pi ⌋ let b = 1 and otherwise b = 0. Finally, let ŵ = (⌊ wi pi ⌋ + b ) ei. A random k-sketch of w is an average of k-independent random sketches of w. A random sketch and a random k-sketch of a matrix is defined similarly, with the standard matrix basis instead of the standard vector basis. The following useful lemma shows that an sketch w is a √ 1 4 + 2‖w‖2-estimator of w. Lemma 3.8. Let ŵ be a random sketch of w ∈ Rd. Then, (1) E ŵ = w and (2) for any u ∈ Sd−1, E (〈u, ŵ〉 − 〈u,w〉)2 ≤ E 〈u, ŵ〉2 ≤ 14 + 2‖w‖ 2 Proof. (of theorem 3.6) We construct a compressor for Ld1,d2,M as follows. Given W , we will sample a k-sketch Ŵ of W for k = ⌈ 1 4 + 2M 2 ⌉ , and will return the function x 7→ Ŵx. We claim that that W 7→ Ŵ is a (1, 2k dlog(2d1d2(M + 1))e)-compressor for Ld1,d2,M . Indeed, to specify a sketch of W we need dlog(d1d2)e bits to describe the chosen index, as well as log (2d1d2M + 2) bits to describe the value in that index. Hence, 2k dlog(2d1d2(M + 1))e bits suffices to specify a k-sketch. It remains to show that for x ∈ Bd1 , Ŵx is a 1-estimator of Wx. Indeed, by lemma 3.8, E Ŵ = W and therefore E Ŵx = Wx. Likewise, for u ∈ Sd2−1. We have E (〈 u, Ŵx 〉 − 〈u,Wx〉 )2 = E (〈 Ŵ ,xuT 〉 − 〈 W,xuT 〉)2 ≤ 1 4 + 2M 2 k ≤ 1 3.2 Simplified Depth 2 Networks To demonstrate our techniques, we consider the following class of functions. We let the domain X to be Bd. We fix an activation function ρ : R→ R that is assumed to be a polynomial ρ(x) = ∑k i=0 aix i with ∑n n=1 |an| = 1. For any W ∈ Md,d we define hW (x) = 1√ d ∑d i=1 ρ(〈wi,x〉) Finally, we let H = { hW : ∀i, ‖wi‖ ≤ 12 } In order to build compressors for classes of networks, we will utilize to compositional structure of the classes. Specifically, we have that H = Λ ◦ ρ ◦ F where F = {x 7→Wx : W is d× d matrix with ‖wi‖ ≤ 1 for all i} and Λ(x) = 1√d ∑d i=1 xi. As F is a subset of Ld,d,√d, we know that there exists a (1, O (d log(d)))-compressor for it. We will use this compressor to build a compressor to ρ ◦ F , and then to Λ ◦ ρ ◦ F . We will start with the latter, linear case, which is simpler Lemma 3.9. Let X be a σ-estimator to x ∈ Rd1 . Let A ∈ Md2,d1 be a matrix of spectral norm ≤ r. Then, AX is a (rσ)-estimator to Ax. In particular, if C is a (1, n)-compressor to a classH of functions from X to Rd. Then C′ω(Λ ◦ h) = Λ ◦ Cωh is a (1, n)-compressor to Λ ◦ H We next consider the composition of F with the non-linear ρ. 
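The obstruction explained next, and the product-of-independent-estimators fix behind Lemma 3.10, can both be seen in a few lines of NumPy. The experiment below mirrors the ρ(x) = x² Gaussian example discussed in the text that follows and then a small polynomial with Σ|aᵢ| = 1; the specific numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 500_000

# Plugging a single unbiased estimator into rho(x) = x^2 gives a biased estimate:
X = rng.standard_normal(trials)              # a 1-estimator of x = 0
print(np.mean(X ** 2))                       # ~1.0, not rho(0) = 0

# The fix: one independent estimator per factor (cf. Lemma 3.12, item 3).
x = 0.3
X1 = x + rng.standard_normal(trials)         # two independent 1-estimators of x
X2 = x + rng.standard_normal(trials)
print(np.mean(X1 * X2), x ** 2)              # ~0.09 = rho(x): unbiased
print(np.mean(X1 ** 2), x ** 2)              # ~1.09: squaring a single estimator is not

# The same idea handles a general polynomial activation rho(x) = sum_i a_i x^i:
a = np.array([0.25, 0.25, 0.5])              # sum |a_i| = 1, as assumed in the text
F = x + rng.standard_normal((trials, len(a) - 1))
est = a[0] + np.cumprod(F, axis=1) @ a[1:]
print(est.mean(), np.polyval(a[::-1], x))    # both ~ rho(x)
```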
3.2 Simplified Depth 2 Networks

To demonstrate our techniques, we consider the following class of functions. We let the domain X be B^d. We fix an activation function ρ : R → R that is assumed to be a polynomial ρ(x) = Σ_{i=0}^k a_i x^i with Σ_n |a_n| = 1. For any W ∈ M_{d,d} we define h_W(x) = (1/√d) Σ_{i=1}^d ρ(⟨w_i, x⟩). Finally, we let H = { h_W : ‖w_i‖ ≤ 1/2 for all i }. In order to build compressors for classes of networks, we will utilize the compositional structure of the classes. Specifically, we have that H = Λ ∘ ρ ∘ F where F = {x ↦ Wx : W is a d × d matrix with ‖w_i‖ ≤ 1 for all i} and Λ(x) = (1/√d) Σ_{i=1}^d x_i. As F is a subset of L_{d,d,√d}, we know that there exists a (1, O(d log(d)))-compressor for it. We will use this compressor to build a compressor for ρ ∘ F, and then for Λ ∘ ρ ∘ F. We will start with the latter, linear case, which is simpler.

Lemma 3.9. Let X be a σ-estimator of x ∈ R^{d1}. Let A ∈ M_{d2,d1} be a matrix of spectral norm ≤ r. Then AX is an (rσ)-estimator of Ax. In particular, if C is a (1,n)-compressor for a class H of functions from X to R^d, then C′_ω(Λ ∘ h) = Λ ∘ C_ω h is a (1,n)-compressor for Λ ∘ H.

We next consider the composition of F with the non-linear ρ. As opposed to composition with a linear function, we cannot just generate a compressed version using F's compressor and then compose with ρ. Indeed, if X is a σ-estimator of x, it is not true in general that ρ(X) is an estimator of ρ(x). For instance, consider the case that ρ(x) = x², and X = (X_1, ..., X_d) is a vector of independent standard Gaussians. X is a 1-estimator of 0 ∈ R^d. On the other hand, ρ(X) = (X_1², ..., X_d²) is not an estimator of 0 = ρ(0). We will therefore take a different approach. Given f ∈ F, we will sample k independent estimators {C_{ω_i} f}_{i=1}^k from F's compressor, and define the compressed version of ρ ∘ f as C′_{ω_1,...,ω_k} f = Σ_{i=0}^k a_i Π_{j=1}^i C_{ω_j} f. This construction is analyzed in the following lemma.

Lemma 3.10. If C is a (1/2, n)-compressor of a class H of functions from X to [−1/2, 1/2]^d, then C′ is a (1,n)-compressor of ρ ∘ H.

Combining Theorem 3.6 and Lemmas 3.9 and 3.10 we have:

Theorem 3.11. H has approximate description length ≲ d log(d). Hence, if ℓ : R × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y, E_{S∼D^m} rep_D(S,H) ≲ (L+B)√(d log(d)) · log(m)/√m. Furthermore, with probability at least 1 − δ, rep_D(S,H) ≲ (L+B)√(d log(d)) · log(m)/√m + B√(2 ln(2/δ)/m).

Lemma 3.10 is implied by the following useful lemma:

Lemma 3.12.
1. If X is a σ-estimator of x then aX is a (|a|σ)-estimator of ax.
2. Suppose that for n = 1, 2, 3, ..., X_n is a σ_n-estimator of x_n ∈ R^d. Assume furthermore that Σ_{n=1}^∞ x_n and Σ_{n=1}^∞ σ_n converge to x ∈ R^d and σ ∈ [0,∞) respectively. Then Σ_{n=1}^∞ X_n is a σ-estimator of x.
3. Suppose that {X_i}_{i=1}^k are independent σ_i-estimators of x_i ∈ R^d. Then Π_{i=1}^k X_i is a σ′-estimator of Π_{i=1}^k x_i for σ′² = Π_{i=1}^k (σ_i² + ‖x_i‖_∞²) − Π_{i=1}^k ‖x_i‖_∞².

We note that the bounds in the above lemma are all tight.
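As a quick sanity check on item 3 (our own verification, not in the text), take d = 1 and k = 2: for independent X_1, X_2 with E X_i = x_i and Var(X_i) ≤ σ_i²,

$$ \operatorname{Var}(X_1X_2)\;=\;\mathbb{E}[X_1^2]\,\mathbb{E}[X_2^2]-x_1^2x_2^2\;\le\;(\sigma_1^2+x_1^2)(\sigma_2^2+x_2^2)-x_1^2x_2^2, $$

which is exactly σ′² with ‖x_i‖_∞ = |x_i|; equality holds when Var(X_i) = σ_i², which is the sense in which the bounds are tight.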
4 Approximate Description Length

In this section we refine the definition of approximate description length that was given in Section 3. We start with the encoding of the compressed version of the functions. Instead of standard strings, we will use what we call bracketed strings. The reason is that often, in order to create a compressed version of a function, we concatenate compressed versions of other functions. This results in strings with a nested structure. For instance, consider the case that a function h is encoded by the concatenation of h1 and h2. Furthermore, assume that h1 is encoded by the string 01, while h2 is encoded by the concatenation of h3, h4 and h5 that are in turn encoded by the strings 101, 0101 and 1110. The encoding of h will then be [[01][[101][0101][1110]]]. We note that in Section 3 we could avoid this issue since the length of the strings and the recursive structure were fixed, and did not depend on the function we try to compress. Formally, we define

Definition 4.1. A bracketed string is a rooted tree S, such that (i) the children of each node are ordered, (ii) there are no nodes with a single child, and (iii) the leaves are labeled by {0, 1}. The length len(S) of S is the number of its leaves.

Let S be a bracketed string. There is a linear order on its leaves that is defined as follows. Fix a pair of leaves, v1 and v2, and let u be their LCA. Let u1 (resp. u2) be the child of u that lies on the path to v1 (resp. v2). We define v1 < v2 if u1 < u2 and v1 > v2 otherwise (note that necessarily u1 ≠ u2). Let v1, ..., vn be the leaves of S, ordered according to the above order, and let b1, ..., bn be the corresponding bits. The string associated with S is s = b1 ... bn.

We denote by S_n the collection of bracketed strings of length ≤ n, and by S = ∪_{n=1}^∞ S_n the collection of all bracketed strings. The following lemma shows that in log-scale, the number of bracketed strings of length ≤ n differs from the number of standard strings of length ≤ n by only a constant factor.

Lemma 4.2. |S_n| ≤ 32^n.

We next revisit the definition of a compressor for a class H. The definition of a compressor will now have a third parameter, n_s, in addition to σ and n. We will make three changes in the definition. The first, which is only for the sake of convenience, is that we will use bracketed strings rather than standard strings. The second change is that the length of the encoding string will be bounded only in expectation. The final change is that the compressor can now output a seed. That is, given a function h ∈ H that we want to compress, the compressor can generate both a non-random seed E_s(h) ∈ S_{n_s} and a random encoding E(ω, h) ∈ S with E_{ω∼µ} len(E(ω, h)) ≤ n. Together, E_s(h) and E(ω, h) encode a σ-estimator. Namely, there is a function D : S_{n_s} × S → (R^d)^X such that D(E_s(h), E(ω, h)), ω ∼ µ, is a σ-estimator of h. The advantage of using seeds is that it will allow us to generate many independent estimators at a lower cost. In the case that n ≪ n_s, the cost of generating k independent estimators of h ∈ H is n_s + kn bits (in expectation) instead of k(n_s + n) bits. Indeed, we can encode k estimators by a single seed E_s(h) and k independent “regular" encodings E(ω_1, h), ..., E(ω_k, h). The formal definition is given next.

Definition 4.3. A (σ, n_s, n)-compressor for H is a 5-tuple C = (E_s, E, D, Ω, µ) where µ is a probability measure on Ω, and E_s, E, D are functions E_s : H → S_{n_s}, E : Ω × H → S, and D : S_{n_s} × S → (R^d)^X such that for any h ∈ H and x ∈ X, (1) D(E_s(h), E(ω, h)), ω ∼ µ, is a σ-estimator of h and (2) E_{ω∼µ} len(E(ω, h)) ≤ n.

We finally revisit the definition of approximate description length. We will add an additional parameter, to accommodate the use of seeds. Likewise, the approximate description length will now be a function of m: we will say that H has approximate description length (n_s(m), n(m)) if there is a (1, n_s(m), n(m))-compressor for the restriction of H to any set A ⊂ X of size at most m. Formally:

Definition 4.4. We say that a class H of functions from X to R^d has approximate description length (n_s(m), n(m)) if for any set A ⊂ X of size ≤ m there exists a (1, n_s(m), n(m))-compressor for H|_A.

It is not hard to see that if H has approximate description length (n_s(m), n(m)), then for any 1 ≥ ε > 0 and a set A ⊂ X of size ≤ m, there exists an (ε, n_s(m), n(m)⌈ε^{−2}⌉)-compressor for H|_A.
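For a sense of scale (hypothetical numbers, not from the paper): with n_s(m) = 10^4 seed bits, n(m) = 10^2 encoding bits and ε = 0.1, the (ε, n_s(m), n(m)⌈ε^{−2}⌉)-compressor above uses k = ⌈ε^{−2}⌉ = 100 independent encodings, so the expected cost is n_s + kn = 10^4 + 10^4 = 2·10^4 bits, whereas re-sending the seed with every encoding would cost k(n_s + n) = 1.01·10^6 bits. This is the saving that motivates seeds in the discussion above.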
We next connect the approximate description length to covering numbers and representativeness. The proofs are similar to the proofs of Lemmas 3.4 and 3.5.

Lemma 4.5. Fix a class H of functions from X to R with approximate description length (n_s(m), n(m)). Then log N(H,m,ε) ≲ n_s(m) + n(m)/ε². Hence, if ℓ : R × Y → R is L-Lipschitz and B-bounded, then for any distribution D on X × Y, E_{S∼D^m} rep_D(S,H) ≲ (L+B)√(n_s(m) + n(m)) · log(m)/√m. Furthermore, with probability at least 1 − δ, rep_D(S,H) ≲ (L+B)√(n_s(m) + n(m)) · log(m)/√m + B√(2 ln(2/δ)/m).

Lemma 4.6. Fix a class H of functions from X to R^d with approximate description length (n_s(m), n(m)). Then log N(H,m,ε) ≤ log N_∞(H,m,ε) ≲ n_s(m) + n(m) log(dm)/ε². Hence, if ℓ : R^d × Y → R is L-Lipschitz w.r.t. ‖·‖_∞ and B-bounded, then for any distribution D on X × Y, E_{S∼D^m} rep_D(S,H) ≲ (L+B)√(n_s(m) + n(m) log(dm)) · log(m)/√m. Furthermore, with probability at least 1 − δ, rep_D(S,H) ≲ (L+B)√(n_s(m) + n(m) log(dm)) · log(m)/√m + B√(2 ln(2/δ)/m).

We next analyze the behavior of the approximate description length under various operations.

Lemma 4.7. Let H1, H2 be classes of functions from X to R^d with approximate description length (n_s^1(m), n^1(m)) and (n_s^2(m), n^2(m)). Then H1 + H2 has approximate description length (n_s^1(m) + n_s^2(m), 2n^1(m) + 2n^2(m)).

Lemma 4.8. Let H be a class of functions from X to R^{d1} with approximate description length (n_s(m), n(m)). Let A be a d2 × d1 matrix. Then A ∘ H has approximate description length (n_s(m), ⌈‖A‖²⌉ n(m)).

Definition 4.11. A function f : R → R is B-strongly-bounded if for all n ≥ 1, ‖f^{(n)}‖_∞ ≤ n! B^n. Likewise, f is strongly-bounded if it is B-strongly-bounded for some B. We note that

Lemma 4.12. If f is B-strongly-bounded then f is analytic and its Taylor coefficients around any point are bounded by B^n.

The following lemma gives an example of a strongly bounded sigmoid function, as well as a strongly bounded smoothed version of the ReLU (see Figure 1).

Lemma 4.13. The functions ln(1 + e^x) and e^x/(1 + e^x) are strongly-bounded.

Lemma 4.14. Let H be a class of functions from X to R^d with approximate description length (n_s(m), n(m)). Let ρ : R → R be B-strongly-bounded. Then ρ ∘ H has approximate description length (n_s(m) + O(n(m)B² log(md)), O(n(m)B² log(d))).

5 Sample Complexity of Neural Networks

Fix the instance space X to be the ball of radius √d in R^d (in particular [−1, 1]^d ⊂ X) and a B-strongly-bounded activation ρ. Fix matrices W^0_i ∈ M_{d_i, d_{i−1}}, i = 1, ..., t. Consider the following class of depth-t networks
N_{r,R}(W^0_1, ..., W^0_t) = { W_t ∘ ρ ∘ W_{t−1} ∘ ρ ∘ ... ∘ ρ ∘ W_1 : ‖W_i − W^0_i‖ ≤ r, ‖W_i − W^0_i‖_F ≤ R }.
We note that N_{r,R}(W^0_1, ..., W^0_t) = N_{r,R}(W^0_t) ∘ ... ∘ N_{r,R}(W^0_1). The following lemma analyzes the cost, in terms of approximate description length, of moving from a class H to N_{r,R}(W^0) ∘ H.

Lemma 5.1. Let H be a class of functions from X to R^{d1} with approximate description length (n_s(m), n(m)) and ‖h(x)‖ ≤ M for any x ∈ X and h ∈ H. Fix W^0 ∈ M_{d2,d1}. Then N_{r,R}(W^0) ∘ H has approximate description length (n_s(m) + n′(m)B² log(md2), n′(m)B² log(d2)) for n′(m) = n(m)·O(r² + ‖W^0‖² + 1) + O((d1 + M²)(R² + 1) log(Rd1d2 + 1)).

The lemma follows by combining Lemmas 4.7, 4.8, 4.10 and 4.14. We note that in the case that d1, d2 ≤ d, M = O(√d1), B, r, ‖W^0‖ = O(1) (and hence R = O(√d)) and R ≥ 1, we get that N_{r,R}(W^0) ∘ H has approximate description length of (n_s(m) + O(n(m) log(md)), O(n(m) log(d)) + O(d1R² log²(d))). By induction, the approximate description length of N_{r,R}(W^0_1, ..., W^0_t) is
(dR²·O(log(d))^t·log(md), dR²·O(log(d))^{t+1}).
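Plugging the last display into Lemma 4.6 makes the resulting representativeness bound explicit (our own combination of the two displayed statements; constants and the assumptions d_i ≤ d, M = O(√d), B, r, ‖W^0_i‖ = O(1), R ≥ 1 are as stated above):

$$ \mathbb{E}_{S\sim\mathcal{D}^m}\ \operatorname{rep}_{\mathcal{D}}\big(S,\ \mathcal{N}_{r,R}(W^0_1,\dots,W^0_t)\big) \;\lesssim\; (L+B)\,\sqrt{\frac{d\,R^2\,O(\log d)^{t+1}\log(dm)}{m}}\,\log(m), $$

so the dependence on the network size enters through dR² (up to logarithmic factors) rather than through the full parameter count of roughly td².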
1. Can the proof of the proposed analysis be simplified by utilizing the property of contraction functions? 2. Is there a way to circumvent the description length argument using the property of contraction functions? 3. How does the reviewer assess the sophistication level of the proposed analysis? 4. What is the main concern of the reviewer regarding the proof presented in the paper? 5. Does the reviewer believe that the bounding method used in the paper is sub-linear to the number of parameters?
Review
Review Although the proposed analysis is quite elegant, a natural question is whether it has to be this sophisticated. A large portion of the effort was devoted to bounding the description length of composing an arbitrary function with the activation function \rho. I noticed that all activation functions studied in the paper are contraction functions. That is, for any t and t', |\rho(t) - \rho(t')| <= |t - t'|. I am wondering if the proof can be greatly simplified (or the description length argument can be completely circumvented) if this property is properly used. In particular, if matrix W' is an approximation of W such that ||W' - W|| <= \eps1, and two vectors x' and x satisfy ||x' - x|| <= \eps2, then ||\rho(W'x') - \rho(Wx)|| <= ||W'x' - Wx|| <= ||W' - W||*||x'|| + ||W||*||x - x'|| <= \eps1*||x'|| + \eps2*||W||. It means that we can upper bound the difference of the output (\rho(W'x') - \rho(Wx)) using an upper bound on the input (||x' - x||). By recursively applying this inequality layer by layer, it leads to a function approximation bound, and can be used to derive a covering number bound and finally a generalization bound. I have a feeling that this bound is also sub-linear to the number of parameters, but I am not sure. Hopefully the authors can convince me by the rebuttal that I am wrong. == I have read the authors' rebuttal. I am not totally convinced but I tend to believe that it's true. My score remains unchanged.
NIPS
Title Improved Error Bounds for Tree Representations of Metric Spaces Abstract Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov’s method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown. 1 Introduction Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into “simpler" spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13]. The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖`∞(X×X), is minimal over all possible tree metrics on X . This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the `∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask: Question 1. 
Given a set X , a metric dX , and an optimal tree metric toptX (or an optimal ultrametric uoptX ), can one find a nontrivial upper bound on ‖dX − t opt X ‖`∞(X×X) (resp. ‖dX − u opt X ‖`∞(X×X)) depending on properties of the metric dX? The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice" properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds. We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially. Remark 1. The trivial upper bound is ‖dX − toptX ‖`∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX . An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with `∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness" of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3]. We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. 
This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space. Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X , we have tX(x, x ′) = maxc mini Ψ(xi, xi+1), where c = {xi}ki=0 is a chain of points joining x to x′ and Ψ is some function of dX . A dual notion is to flip the order of the max,min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC. We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community. The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7. The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees. 2 Related Literature MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14]. A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. 
One variant of MDS that preserves clusters is the tree preserving embedding [20], where the goal is to preserve the single linkage (SL) dendrogram from the original data. This is especially important for certain types of biological data, for the following reasons: (1) due to speciation, many biological datasets are inherently “treelike", and (2) the SL dendrogram is a 2-approximation to the optimal ultrametric tree embedding [16], so intuitively, preserving the SL dendrogram preserves the “treeness" of the data. Preserving the treeness of a metric space is related to the notion of finding an optimal embedding into a tree, which ties back to the numerical taxonomy problem. The SL dendrogram is an embedding of a metric space into an ultrametric tree, and can be used to find the optimal ultrametric tree [8]. The quality of an embedding is measured by computing its distortion, which has different definitions in different domain areas. Typically, a tree embedding is defined to be an injective map f : X → Y between metric spaces (X, dX) and (Y, tY), where the target space is a tree. We have defined the additive distortion of a tree embedding in an ℓ∞ setting above, but ℓp notions, for p ∈ [1,∞), can also be defined. Past efforts to embed a metric into a tree with low additive distortion are described in [19, Chapter 7]. One can also define a multiplicative distortion [4, 10], but this is studied in the domain of discrete algorithms and is not our focus in the current work.

3 Preliminaries on metric spaces, distances, and doubling dimension

A finite metric space (X, dX) is a finite set X together with a function dX : X × X → R+ such that: (1) dX(x, x′) = 0 ⟺ x = x′, (2) dX(x, x′) = dX(x′, x), and (3) dX(x, x′) ≤ dX(x, x′′) + dX(x′′, x′) for any x, x′, x′′ ∈ X. A pointed metric space is a triple (X, dX, p), where (X, dX) is a finite metric space and p ∈ X. All the spaces we consider are assumed to be finite. For a metric space (X, dX), the diameter is defined to be diam(X, dX) := max_{x,x′∈X} dX(x, x′). The hyperbolicity of (X, dX) was defined by Gromov [12] as follows:
hyp(X, dX) := max_{x1,x2,x3,x4∈X} Ψ^hyp_X(x1, x2, x3, x4), where
Ψ^hyp_X(x1, x2, x3, x4) := 1/2 ( dX(x1, x2) + dX(x3, x4) − max( dX(x1, x3) + dX(x2, x4), dX(x1, x4) + dX(x2, x3) ) ).
A tree metric space (X, tX) is a finite metric space such that hyp(X, tX) = 0 [19]. In our work, we strengthen the preceding characterization of trees to the special class of ultrametric trees. Recall that an ultrametric space (X, uX) is a metric space satisfying the strong triangle inequality:
uX(x, x′) ≤ max( uX(x, x′′), uX(x′′, x′) ), for all x, x′, x′′ ∈ X.

Definition 1. We define the ultrametricity of a metric space (X, dX) as:
ult(X, dX) := max_{x1,x2,x3∈X} Ψ^ult_X(x1, x2, x3), where
Ψ^ult_X(x1, x2, x3) := dX(x1, x3) − max( dX(x1, x2), dX(x2, x3) ).

We introduce ultrametricity to quantify the deviation of a metric space from being ultrametric. Notice that (X, uX) is an ultrametric space if and only if ult(X, uX) = 0. One can verify that an ultrametric space is a tree metric space.
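Both statistics are easy to compute exactly on small examples; the following Python sketch (our own illustration, brute force over triples and 4-tuples, assuming D is a symmetric n×n distance matrix with zero diagonal) mirrors the two definitions above.

import itertools
import numpy as np

def hyperbolicity(D):
    # max over 4-tuples of 0.5*(d12 + d34 - max(d13 + d24, d14 + d23))
    n = len(D)
    best = 0.0
    for i, j, k, l in itertools.permutations(range(n), 4):
        val = 0.5 * (D[i, j] + D[k, l]
                     - max(D[i, k] + D[j, l], D[i, l] + D[j, k]))
        best = max(best, val)
    return best

def ultrametricity(D):
    # max over 3-tuples of d13 - max(d12, d23)
    n = len(D)
    best = 0.0
    for i, j, k in itertools.permutations(range(n), 3):
        best = max(best, D[i, k] - max(D[i, j], D[j, k]))
    return best

# Example: 4 points on a line with unit spacing.
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
print(hyperbolicity(D), ultrametricity(D))   # 0.0 (a path is a tree), 1.0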
We will denote the cardinality of a set X by writing |X|. Given a set X and two metrics dX, d′X defined on X × X, we denote the ℓ∞ distance between dX and d′X as follows:
‖dX − d′X‖_{ℓ∞(X×X)} := max_{x,x′∈X} |dX(x, x′) − d′X(x, x′)|.
We use the shorthand ‖dX − d′X‖_∞ to mean ‖dX − d′X‖_{ℓ∞(X×X)}. We write ≈ to mean “approximately equal to." Given two functions f, g : N → R, we will write f ≍ g to mean asymptotic tightness, i.e. that there exist constants c1, c2 such that c1|f(n)| ≤ |g(n)| ≤ c2|f(n)| for sufficiently large n ∈ N.

Induced metrics from Voronoi partitions. A key ingredient of our stability result involves a Voronoi partition construction. Given a metric space (X, dX) and a subset A ⊆ X, possibly with its own metric dA, we can define a new metric dAX on X × X using a Voronoi partition. First write A = {x1, ..., xn}. For each 1 ≤ i ≤ n, we define Ṽi := {x ∈ X : dX(x, xi) ≤ min_{j≠i} dX(x, xj)}. Then X = ∪_{i=1}^n Ṽi. Next we perform the following disjointification trick: V1 := Ṽ1, V2 := Ṽ2 \ Ṽ1, ..., Vn := Ṽn \ (∪_{i=1}^{n−1} Ṽi). Then X = ⊔_{i=1}^n Vi, a disjoint union of Voronoi cells Vi. Next define the nearest-neighbor map η : X → A by η(x) = xi for each x ∈ Vi. The map η simply sends each x ∈ X to its closest neighbor in A, up to a choice when there are multiple nearest neighbors. Then we can define a new (pseudo)metric dAX : X × X → R+ as follows: dAX(x, x′) := dA(η(x), η(x′)). Observe that dAX(x, x′) = 0 if and only if x, x′ ∈ Vi for some 1 ≤ i ≤ n. Symmetry also holds, as does the triangle inequality.

A special case of this construction occurs when A is an ε-net of X endowed with a restriction of the metric dX. Given a finite metric space (X, dX), an ε-net is a subset Xε ⊂ X such that: (1) for any x ∈ X, there exists s ∈ Xε such that dX(x, s) < ε, and (2) for any s, s′ ∈ Xε, we have dX(s, s′) ≥ ε [15]. For notational convenience, we write dεX to refer to d^{Xε}_X. In this case, we obtain:
‖dX − dεX‖_{ℓ∞(X×X)} = max_{x,x′∈X} |dX(x, x′) − dεX(x, x′)| = max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} |dX(x, x′) − dεX(x, x′)| = max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} |dX(x, x′) − dX(xi, xj)| ≤ max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} ( dX(x, xi) + dX(x′, xj) ) ≤ 2ε.   (1)

Covering numbers and doubling dimension. For a finite metric space (X, dX), the open ball of radius ε centered at x ∈ X is denoted B(x, ε). The ε-covering number of (X, dX) is defined as:
NX(ε) := min{ n ∈ N : X ⊂ ∪_{i=1}^n B(xi, ε) for some x1, ..., xn ∈ X }.
Notice that the ε-covering number of X is always bounded above by the cardinality of an ε-net. A related quantity is the doubling dimension ddim(X, dX) of a metric space (X, dX), which is defined to be the minimal value ρ such that any ε-ball in X can be covered by at most 2^ρ ε/2-balls [15]. The covering number and doubling dimension of a metric space (X, dX) are related as follows:

Lemma 2. Let (X, dX) be a finite metric space with doubling dimension bounded above by ρ > 0. Then for all ε ∈ (0, diam(X)], we have NX(ε) ≤ (8 diam(X)/ε)^ρ.

4 Duality between Gromov’s embedding and SLHC

Given a metric space (X, dX) and any two points x, x′ ∈ X, we define a chain from x to x′ over X as an ordered set of points in X starting at x and ending at x′: c = {x0, x1, x2, ..., xn : x0 = x, xn = x′, xi ∈ X for all 0 ≤ i ≤ n}. The set of all chains from x to x′ over X will be denoted CX(x, x′). The cost of a chain c = {x0, ..., xn} over X is defined to be costX(c) := max_{0≤i<n} dX(xi, xi+1). For any metric space (X, dX) and any p ∈ X, the Gromov product of X with respect to p is a map gX,p : X × X → R+ defined by: gX,p(x, x′) := 1/2 ( dX(x, p) + dX(x′, p) − dX(x, x′) ). We can define a map g^T_{X,p} : X × X → R+ as follows:
g^T_{X,p}(x, x′) := max_{c∈CX(x,x′)} min_{xi,xi+1∈c} gX,p(xi, xi+1).
This induces a new metric tX,p : X × X → R+: tX,p(x, x′) := dX(x, p) + dX(x′, p) − 2 g^T_{X,p}(x, x′). Gromov observed that the space (X, tX,p) is a tree metric space, and that tX,p(x, x′) ≤ dX(x, x′) for any x, x′ ∈ X [12].
This yields the trivial upper bound: ‖dX − tX‖∞ ≤ diam(X, dX). The Gromov embedding T is defined for any pointed metric space (X, dX , p) as T (X, dX , p) := (X, tX,p). Note that each choice of p ∈ X will yield a tree metric tX,p that depends, a priori, on p. Theorem 3 (Gromov’s embedding theorem [12]). Let (X, dX , p) be an n-point pointed metric space, and let (X, tX,p) = T (X, dX , p). Then, ‖tX,p − dX‖l∞(X×X) ≤ 2 log2(2n) hyp(X, dX). Gromov’s embedding is an MDS method where the target is a tree. We observe that its construction is dual—in the sense of swapping max and min operations—to the construction of the ultrametric space produced as an output of SLHC. Recall that the SLHC method H is defined for any metric space (X, dX) asH(X, dX) = (X,uX), where uX : X ×X → R+ is the ultrametric defined below: uX(x, x ′) := min c∈CX(x,x′) costX(c). As a consequence of this duality, we can bound the additive distortion of SLHC as below: Theorem 4. Let (X, dX) be an n-point metric space, and let (X,uX) = H(X, dX). Then we have: ‖dX − uX‖`∞(X×X) ≤ log2(2n) ult(X, dX). Moreover, this bound is asymptotically tight. The proof of Theorem 4 proceeds by invoking the definition of ultrametricity at various scales along chains of points; we provide details in Appendix B. We remark that the bounds in Theorems 3, 4 depend on both a local (ultrametricity/hyperbolicity) and a global property (cardinality); however, a natural improvement would be to exploit a global property that takes into account the metric structure of the underlying space. The first step in this improvement is to prove a set of stability theorems. 5 Stability of SLHC and Gromov’s embedding It is known that SLHC is robust to small perturbations of the input data with respect to the GromovHausdorff distance between metric spaces, whereas other HC methods, such as average linkage and complete linkage, do not enjoy this stability [6]. We prove a particular stability result for SLHC involving the `∞ distance, and then we exploit the duality observed in §4 to prove a similar stability result for Gromov’s embedding. Theorem 5. Let (X, dX) be a metric space, and let (A, dA) be any subspace with the restriction metric dA := dX |A×A. LetH denote the SLHC method. Write (X,uX) = H(X, dX) and (A, uA) = H(A, dA). Also write uAX(x, x′) := uA(η(x), η(x′)) for x, x′ ∈ X . Then we have: ‖H(X, dX)−H(A, dA)‖∞ := ‖uX − uAX‖∞ ≤ ‖dX − dAX‖∞. Theorem 6. Let (X, dX , p) be a pointed metric space, and let (A, dA, a) be any subspace with the restriction metric dA := dX |A×A such that η(p) = a. Let T denote the Gromov embedding. Write (X, tX,p) = T (X, dX , p) and (A, tA,a) = T (A, dA, a). Also write tAX,p(x, x′) := tA,a(η(x), η(x′)) for x, x′ ∈ X . Then we have: ‖T (X, dX , p)− T (A, dA, a)‖∞ := ‖tX,p − tAX,p‖∞ ≤ 5‖dX − dAX‖∞. The proofs for both of these results use similar techniques, and we present them in Appendix B. 6 Improvement via Doubling Dimension Our main theorems, providing additive distortion bounds for Gromov’s embedding and for SLHC, are stated below. The proofs for both theorems are similar, so we only present that of the former. Theorem 7. Let (X, dX) be a n-point metric space with doubling dimension ρ, hyperbolicity hyp(X, dX) = δ, and diameter D. Let p ∈ X , and write (X, tX) = T (X, dX , p). Then we obtain: Covering number bound: ‖dX − tX‖∞ ≤ min ε∈(0,D] ( 12ε+ 2δ log2(2NX(ε)) ) . (2) Also suppose D ≥ δρ6 ln 2 . Then, Doubling dimension bound: ‖dX − tX‖∞ ≤ 2δ + 2δρ ( 13 2 + log2 ( D δρ )) . (3) Theorem 8. 
Let (X, dX) be an n-point metric space with doubling dimension ρ, ultrametricity ult(X, dX) = ν, and diameter D. Write (X, uX) = H(X, dX). Then we obtain:
Covering number bound: ‖dX − uX‖∞ ≤ min_{ε∈(0,D]} ( 4ε + ν log2(2NX(ε)) ).   (4)
Also suppose D ≥ νρ/(4 ln 2). Then,
Doubling dimension bound: ‖dX − uX‖∞ ≤ ν + νρ ( 6 + log2(D/(νρ)) ).   (5)

Remark 9 (A remark on the NTP). We are now able to return to Question 1 and provide some answers. Consider a metric space (X, dX). We can upper bound ‖dX − u^opt_X‖∞ using the bounds in Theorem 8, and ‖dX − t^opt_X‖∞ using the bounds in Theorem 7.

Remark 10 (A remark on parameters). Notice that as hyperbolicity δ approaches 0 (or ultrametricity approaches 0), the doubling dimension bounds (hence the covering number bounds) approach 0. Also note that as ε ↓ 0, we get NX(ε) ↑ |X|, so Theorems 7, 8 reduce to Theorems 3, 4. Experiments lead us to believe that the interesting range of ε values is typically a subinterval of (0, D].

Proof of Theorem 7. Fix ε ∈ (0, D] and let Xε = {x1, x2, ..., xk} be a collection of k = NX(ε) points that form an ε-net of X. Then we may define dεX and tεX on X × X as in §3. Subsequent application of Theorem 3 and Lemma 2 gives the bound
‖dεX − tεX‖_{ℓ∞(X×X)} ≤ ‖dXε − tXε‖_{ℓ∞(Xε×Xε)} ≤ 2δ log2(2k) ≤ 2δ log2(2Cε^{−ρ}),
where we define C := (8D)^ρ. Then by the triangle inequality for the ℓ∞ distance, the stability of T (Theorem 6), and using the result that ‖dX − dεX‖_{ℓ∞(X×X)} ≤ 2ε (Inequality 1), we get:
‖dX − tX‖∞ ≤ ‖dX − dεX‖∞ + ‖dεX − tεX‖∞ + ‖tεX − tX‖∞ ≤ 6‖dX − dεX‖∞ + ‖dεX − tεX‖∞ ≤ 12ε + 2δ log2(2NX(ε)).
Since ε ∈ (0, D] was arbitrary, this suffices to prove Inequality 2. Applying Lemma 2 yields: ‖dX − tX‖∞ ≤ 12ε + 2δ log2(2Cε^{−ρ}). Notice that Cε^{−ρ} ≥ NX(ε) ≥ 1, so the term on the right of the inequality above is positive. Consider the function f(ε) = 12ε + 2δ + 2δ log2 C − 2δρ log2 ε. The minimizer of this function is obtained by taking a derivative with respect to ε: f′(ε) = 12 − 2δρ/(ε ln 2) = 0 ⟹ ε = δρ/(6 ln 2). Since ε takes values in (0, D], and lim_{ε→0} f(ε) = +∞, the value of f(ε) is minimized at min(D, δρ/(6 ln 2)). By assumption, D ≥ δρ/(6 ln 2). Since ‖dX − tX‖∞ ≤ f(ε) for all ε ∈ (0, D], it follows that:
‖dX − tX‖∞ ≤ f(δρ/(6 ln 2)) = 2δρ/ln 2 + 2δ + 2δρ log2(48D ln 2/(δρ)) ≤ 2δ + 2δρ( 13/2 + log2(D/(δρ)) ).
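To see when the doubling dimension bound (3) improves on Gromov's cardinality bound (Theorem 3), consider hypothetical values not taken from the paper: δ = 0.1, ρ = 2, D = 1 and n = 10^9 points. The assumption D ≥ δρ/(6 ln 2) ≈ 0.05 holds, and

$$ 2\delta + 2\delta\rho\Big(\tfrac{13}{2} + \log_2\tfrac{D}{\delta\rho}\Big) = 0.2 + 0.4\,(6.5 + \log_2 5) \approx 3.7, \qquad 2\delta\log_2(2n) \approx 0.2 \cdot 30.9 \approx 6.2, $$

so for treelike data whose hyperbolicity and doubling dimension stay fixed while the cardinality grows, the new bound is the smaller of the two.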
7 Tightness of our bounds in Theorems 7 and 8

By the construction provided below, we show that our covering number bound for the distortion of SLHC is asymptotically tight. A similar construction can be used to show that our covering number bound for Gromov's embedding is also asymptotically tight.

Proposition 11. There exists a sequence (Xn, dXn)_{n∈N} of finite metric spaces such that as n → ∞,
‖dXn − uXn‖∞ ≍ min_{ε∈(0,Dn]} ( 4ε + νn log2(2NXn(ε)) ) → 0.
Here we have written (Xn, uXn) = H(Xn, dXn), νn = ult(Xn, dXn), and Dn = diam(Xn, dXn).

Proof of Proposition 11. After defining Xn for n ∈ N below, we will denote the error term, our covering number upper bound, and our Gromov-style upper bound as follows:
En := ‖dXn − uXn‖∞, Bn := min_{ε∈(0,Dn]} ρ(n, ε), Gn := log2(2|Xn|) ult(Xn, dXn),
where ρ : N × [0,∞) → R is defined by ρ(n, ε) = 4ε + νn log2(2NXn(ε)). Here we write |S| to denote the cardinality of a set S. Recall that the separation of a finite metric space (X, dX) is the quantity sep(X, dX) := min_{x≠x′∈X} dX(x, x′). Let (V, uV) be the finite ultrametric space consisting of two equidistant points with common distance 1. For each n ∈ N, let Ln denote the line metric space obtained by choosing (n+1) equally spaced points with separation 1/n² from the interval [0, 1/n], and endowing this set with the restriction of the Euclidean metric, denoted dLn. One can verify that ult(Ln, dLn) ≈ 1/(2n). Finally, for each n ∈ N we define Xn := V × Ln, and endow Xn with the following metric:
dXn((v, l), (v′, l′)) := max( dV(v, v′), dLn(l, l′) ), for v, v′ ∈ V and l, l′ ∈ Ln.

Claim 1. ult(Xn, dXn) = ult(Ln, dLn) ≈ 1/(2n). For a proof, see Appendix B.

Claim 2. En ≍ diam(Ln, dLn) = 1/n. To see this, let n ∈ N, and let x = (v, l), x′ = (v′, l′) ∈ Xn be two points realizing En. Suppose first that v = v′. Then an optimal chain from (v, l) to (v, l′) only has to incur the cost of moving along the Ln coordinate. As such, we obtain uXn(x, x′) ≤ 1/n², with equality if and only if l ≠ l′. Then,
En = max_{x,x′∈Xn} |dXn(x, x′) − uXn(x, x′)| = max_{l,l′∈Ln} |dLn(l, l′) − 1/n²| = 1/n − 1/n² ≍ 1/n.
Note that the case v ≠ v′ is not allowed, because then we would obtain dXn(x, x′) = dV(v, v′) = uXn(x, x′), since sep(V, dV) ≥ diam(Ln, dLn) and all the points in V are equidistant. Thus we would obtain |dXn(x, x′) − uXn(x, x′)| = 0, which is a contradiction because we assumed that x, x′ realize En.

Claim 3. For each n ∈ N, ε ∈ (0, Dn], we have: NXn(ε) = NV(ε) if ε > sep(V, dV); NXn(ε) = |V| if diam(Ln, dLn) < ε ≤ sep(V, dV); and NXn(ε) = |V| NLn(ε) if ε ≤ diam(Ln, dLn). To see this, note that in the first two cases, any ε-ball centered at a point (v, l) automatically contains all of {v} × Ln, so NXn(ε) = NV(ε). Specifically, in the range diam(Ln, dLn) < ε ≤ sep(V, dV), we need exactly one ε-ball for each v ∈ V to cover Xn. Finally, in the third case, we need NLn(ε) ε-balls to cover {v} × Ln for each v ∈ V. This yields the stated estimate.

By the preceding claims, we now have the following for each n ∈ N, ε ∈ (0, Dn]:
ρ(n, ε) ≈ 4ε + (1/(2n)) log2(2NXn(ε)) = 4ε + (1/(2n)) log2(2NV(ε)) if ε > sep(V); 4ε + (1/(2n)) log2(2|V|) if diam(Ln) < ε ≤ sep(V); and 4ε + (1/(2n)) log2(2|V| NLn(ε)) if ε ≤ diam(Ln).
Notice that for sufficiently large n, inf_{ε>diam(Ln)} ρ(n, ε) = ρ(n, 1/n). Then we have:
1/n ≤ En ≤ Bn = min_{ε∈(0,Dn]} ρ(n, ε) ≤ ρ(n, 1/n) ≈ C/n,
for some constant C > 0. Here the first inequality follows from the proof of Claim 2, the second from Theorem 8, and the third from our observation above. It follows that En ≍ Bn ≍ 1/n → 0.

Remark 12. Given the setup of the preceding proof, note that the Gromov-style bound behaves as: Gn = ρ(n, 0) = (1/(2n)) log2(2|V|(n+1)) ≈ C′ log2(n+1)/n, for some constant C′ > 0. So Gn approaches 0 at a rate strictly slower than that of En and Bn.

8 Discussion

We are motivated by a particular aspect of the numerical taxonomy problem, namely, the distortion incurred when passing from a metric to its optimal tree embedding. We describe and explore a duality between a tree embedding method proposed by Gromov and the well known SLHC method for embedding a metric space into an ultrametric tree. Motivated by this duality, we propose a novel metric space statistic that we call ultrametricity, and give a novel, tight bound on the distortion of the SLHC method depending on cardinality and ultrametricity. We improve this Gromov-style bound by replacing the dependence on cardinality by a dependence on doubling dimension, and produce a family of examples proving tightness of this dimension-based bound. By invoking duality again, we are able to improve Gromov's original bound on the distortion of his tree embedding method.
More specifically, we replace the dependence on cardinality—a set-theoretic notion—by a dependence on doubling dimension, which is truly a metric notion. Through Proposition 11, we are able to prove that our bound is not just asymptotically tight, but that it is strictly better than the corresponding Gromov-style bound. Indeed, Gromov’s bound can perform arbitrarily worse than our dimension-based bound. We construct an explicit example to verify this claim in Appendix A, Remark 14, where we also provide a practical demonstration of our methods.
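As a concrete companion to the duality between SLHC and Gromov's embedding (§4), here is a short Python sketch (our own illustration, not part of the paper) that computes both objects from a distance matrix; the only difference between the two dynamic programs is the max-min / min-max flip.

import numpy as np

def slhc_ultrametric(D):
    # u(x, x') = min over chains of the max edge length (minimax paths),
    # computed with a Floyd-Warshall-style update.
    u = D.astype(float)
    n = len(u)
    for k in range(n):
        for i in range(n):
            u[i] = np.minimum(u[i], np.maximum(u[i, k], u[k]))
    return u

def gromov_tree_metric(D, p=0):
    # Gromov product w.r.t. base point p, then the dual max-min closure.
    D = D.astype(float)
    n = len(D)
    g = 0.5 * (D[:, p][:, None] + D[:, p][None, :] - D)
    gT = g.copy()
    for k in range(n):
        for i in range(n):
            gT[i] = np.maximum(gT[i], np.minimum(gT[i, k], gT[k]))
    return D[:, p][:, None] + D[:, p][None, :] - 2 * gT

# Four points on a line: the Gromov tree metric reproduces d exactly
# (a path metric has hyperbolicity 0), while SLHC flattens it.
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
print(np.max(np.abs(gromov_tree_metric(D) - D)))   # 0.0
print(slhc_ultrametric(D))                          # all off-diagonal entries 1.0

Swapping the inner np.minimum/np.maximum pair is the only change between the two closures, which is exactly the duality exploited in Theorems 3 and 4.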
1. What are the main contributions and novel aspects introduced by the paper in the field of embedding finite metric spaces? 2. What are the strengths of the paper regarding its writing quality and the interest in the subject matter? 3. Do you have any concerns or reservations about the significance and impact of the results presented in the paper? 4. Are there any minor issues or suggestions you have for improving the paper, such as adding missing definitions or reorganizing sections?
Review
Review The paper is about embedding finite metric spaces into tree metric spaces. More specifically the paper studies the additive distortions of the so-called Gromov's embedding and the single linkage hierarchical clustering. The paper starts from the observation that the distortion given by Gromov depends on the cardinality of the embedded metric space, which is of course not satisfactory in the case of large point clouds. The paper provides new distortion bounds based on covering numbers and a notion of hyperbolicity (for the Gromov's embedding). The main idea behind these results is to consider as an intermediate construction some metrics induced by Voronoi partitions. I have a mixed opinion about this paper. On one side, I find the subject very interesting. Moreover the paper is well written and pleasant to read. On the other side, I am not sure that the results of this paper are real advances for this field. Indeed the results are mostly based on known results and the stability results are not very surprising in my opinion. Moreover the authors are not able to discuss whether their final bounds are tight or not. Applying the results to specific classes of metric spaces would be interesting. Maybe such an important problem deserves additional work. Minor concerns : + Some definitions (2- or 3-approximations) are missing in the paper + Section 2 and some definitions given at the beginning of Section 3 could be given in the Introduction
NIPS
Title Improved Error Bounds for Tree Representations of Metric Spaces Abstract Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov’s method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown. 1 Introduction Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into “simpler" spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13]. The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖`∞(X×X), is minimal over all possible tree metrics on X . This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the `∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask: Question 1. 
Given a set X , a metric dX , and an optimal tree metric toptX (or an optimal ultrametric uoptX ), can one find a nontrivial upper bound on ‖dX − t opt X ‖`∞(X×X) (resp. ‖dX − u opt X ‖`∞(X×X)) depending on properties of the metric dX? The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice" properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds. We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially. Remark 1. The trivial upper bound is ‖dX − toptX ‖`∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX . An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with `∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness" of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3]. We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. 
This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space. Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X , we have tX(x, x ′) = maxc mini Ψ(xi, xi+1), where c = {xi}ki=0 is a chain of points joining x to x′ and Ψ is some function of dX . A dual notion is to flip the order of the max,min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC. We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community. The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7. The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees. 2 Related Literature MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14]. A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. 
One variant of MDS that preserves clusters is the tree preserving embedding [20], where the goal is to preserve the single linkage (SL) dendrogram from the original data. This is especially important for certain types of biological data, for the following reasons: (1) due to speciation, many biological datasets are inherently “treelike", and (2) the SL dendrogram is a 2-approximation to the optimal ultrametric tree embedding [16], so intuitively, preserving the SL dendrogram preserves the “treeness" of the data. Preserving the treeness of a metric space is related to the notion of finding an optimal embedding into a tree, which ties back to the numerical taxonomy problem. The SL dendrogram is an embedding of a metric space into an ultrametric tree, and can be used to find the optimal ultrametric tree [8]. The quality of an embedding is measured by computing its distortion, which has different definitions in different domain areas. Typically, a tree embedding is defined to be an injective map f : X → Y between metric spaces (X, dX) and (Y, tY ), where the target space is a tree. We have defined the additive distortion of a tree embedding in an `∞ setting above, but `p notions, for p ∈ [1,∞), can also be defined. Past efforts to embed a metric into a tree with low additive distortion are described in [19, Chapter 7]. One can also define a multiplicative distortion [4, 10], but this is studied in the domain of discrete algorithms and is not our focus in the current work. 3 Preliminaries on metric spaces, distances, and doubling dimension A finite metric space (X, dX) is a finite set X together with a function dX : X × X → R+ such that: (1) dX(x, x′) = 0 ⇐⇒ x = x′, (2) dX(x, x′) = dX(x′, x), and (3) dX(x, x′) ≤ dX(x, x ′′) + dX(x ′′, x′) for any x, x′, x′′ ∈ X . A pointed metric space is a triple (X, dX , p), where (X, dX) is a finite metric space and p ∈ X . All the spaces we consider are assumed to be finite. For a metric space (X, dX), the diameter is defined to be diam(X, dX) := maxx,x′∈X dX(x, x′). The hyperbolicity of (X, dX) was defined by Gromov [12] as follows: hyp(X, dX) := max x1,x2,x3,x4∈X ΨhypX (x1, x2, x3, x4), where ΨhypX (x1, x2, x3, x4) : = 1 2 ( dX(x1, x2) + dX(x3, x4) −max ( dX(x1, x3) + dX(x2, x4), dX(x1, x4) + dX(x2, x3) )) . A tree metric space (X, tX) is a finite metric space such that hyp(X, tX) = 0 [19]. In our work, we strengthen the preceding characterization of trees to the special class of ultrametric trees. Recall that an ultrametric space (X,uX) is a metric space satisfying the strong triangle inequality: uX(x, x ′) ≤ max(uX(x, x′′), uX(x′′, x′)), ∀x, x′, x′′ ∈ X. Definition 1. We define the ultrametricity of a metric space (X, dX) as: ult(X, dX) := max x1,x2,x3∈X ΨultX (x1, x2, x3), where ΨultX (x1, x2, x3) := dX(x1, x3)−max ( dX(x1, x2), dX(x2, x3) ) . We introduce ultrametricity to quantify the deviation of a metric space from being ultrametric. Notice that (X,uX) is an ultrametric space if and only if ult(X,uX) = 0. One can verify that an ultrametric space is a tree metric space. We will denote the cardinality of a set X by writing |X|. Given a set X and two metrics dX , d′X defined on X ×X , we denote the `∞ distance between dX and d′X as follows: ‖dX − d′X‖`∞(X×X) := max x,x′∈X |dX(x, x′)− d′X(x, x′)|. We use the shorthand ‖dX−d′X‖∞ to mean ‖dX−d′X‖`∞(X×X). We write≈ to mean “approximately equal to." Given two functions f, g : N→ R, we will write f g to mean asymptotic tightness, i.e. 
that there exist constants c1, c2 such that c1|f(n)| ≤ |g(n)| ≤ c2|f(n)| for sufficiently large n ∈ N. Induced metrics from Voronoi partitions. A key ingredient of our stability result involves a Voronoi partition construction. Given a metric space (X, dX) and a subset A ⊆ X , possibly with its own metric dA, we can define a new metric dAX on X ×X using a Voronoi partition. First write A = {x1, . . . , xn}. For each 1 ≤ i ≤ n, we define Ṽi := {x ∈ X : dX(x, xi) ≤ minj 6=i dX(x, xj)} . Then X = ⋃n i=1 Ṽi. Next we perform the following disjointification trick: V1 := Ṽ1, V2 := Ṽ2 \ Ṽ1, . . . , Vn := Ṽn \ ( n−1⋃ i=1 Ṽi ) . Then X = ⊔n i=1 Vi, a disjoint union of Voronoi cells Vi. Next define the nearest-neighbor map η : X → A by η(x) = xi for each x ∈ Vi. The map η simply sends each x ∈ X to its closest neighbor in A, up to a choice when there are multiple nearest neighbors. Then we can define a new (pseudo)metric dAX : X ×X → R+ as follows: dAX(x, x ′) := dA(η(x), η(x ′)). Observe that dAX(x, x ′) = 0 if and only if x, x′ ∈ Vi for some 1 ≤ i ≤ n. Symmetry also holds, as does the triangle inequality. A special case of this construction occurs when A is an ε-net of X endowed with a restriction of the metric dX . Given a finite metric space (X, dX), an ε-net is a subset Xε ⊂ X such that: (1) for any x ∈ X , there exists s ∈ Xε such that dX(x, s) < ε, and (2) for any s, s′ ∈ Xε, we have dX(s, s ′) ≥ ε [15]. For notational convenience, we write dεX to refer to dX ε X . In this case, we obtain: ‖dX − dεX‖`∞(X×X) = max x,x′∈X ∣∣dX(x, x′)− dεX(x, x′)∣∣ = max 1≤i,j≤n max x∈Vi,x′∈Vj ∣∣dX(x, x′)− dεX(x, x′)∣∣ = max 1≤i,j≤n max x∈Vi,x′∈Vj ∣∣dX(x, x′)− dX(xi, xj)∣∣ ≤ max 1≤i,j≤n max x∈Vi,x′∈Vj ( dX(x, xi) + dX(x ′, xj) ) ≤ 2ε. (1) Covering numbers and doubling dimension. For a finite metric space (X, dX), the open ball of radius ε centered at x ∈ X is denoted B(x, ε). The ε-covering number of (X, dX) is defined as: NX(ε) := min { n ∈ N : X ⊂ n⋃ i=1 B(xi, ε) for x1, . . . , xn ∈ X } . Notice that the ε-covering number of X is always bounded above by the cardinality of an ε-net. A related quantity is the doubling dimension ddim(X, dX) of a metric space (X, dX), which is defined to be the minimal value ρ such that any ε-ball in X can be covered by at most 2ρ ε/2-balls [15]. The covering number and doubling dimension of a metric space (X, dX) are related as follows: Lemma 2. Let (X, dX) be a finite metric space with doubling dimension bounded above by ρ > 0. Then for all ε ∈ (0,diam(X)], we have NX(ε) ≤ ( 8 diam(X)/ε )ρ . 4 Duality between Gromov’s embedding and SLHC Given a metric space (X, dX) and any two points x, x′ ∈ X , we define a chain from x to x′ over X as an ordered set of points in X starting at x and ending at x′: c = {x0, x1, x2, . . . , xn : x0 = x, xn = x′, xi ∈ X for all 0 ≤ i ≤ n} . The set of all chains from x to x′ over X will be denoted CX(x, x′). The cost of a chain c = {x0 . . . , xn} over X is defined to be costX(c) := max0≤i<n dX(xi, xi+1). For any metric space (X, dX) and any p ∈ X , the Gromov product of X with respect to p is a map gX,p : X ×X → R+ defined by: gX,p(x, x ′) := 12 ( dX(x, p) + dX(x ′, p)− dX(x, x′) ) . We can define a map gTX,p : X ×X → R+ as follows: gTX,p(x, x ′)p := max c∈CX(x,x′) min xi,xi+1∈c gX,p(xi, xi+1). This induces a new metric tX,p : X ×X → R+: tX,p(x, x ′) := dX(x, p) + dX(x ′, p)− 2gTX,p(x, x′). Gromov observed that the space (X, tX,p) is a tree metric space, and that tX,p(x, x′) ≤ dX(x, x′) for any x, x′ ∈ X [12]. 
This yields the trivial upper bound: ‖dX − tX‖∞ ≤ diam(X, dX). The Gromov embedding T is defined for any pointed metric space (X, dX , p) as T (X, dX , p) := (X, tX,p). Note that each choice of p ∈ X will yield a tree metric tX,p that depends, a priori, on p. Theorem 3 (Gromov’s embedding theorem [12]). Let (X, dX , p) be an n-point pointed metric space, and let (X, tX,p) = T (X, dX , p). Then, ‖tX,p − dX‖l∞(X×X) ≤ 2 log2(2n) hyp(X, dX). Gromov’s embedding is an MDS method where the target is a tree. We observe that its construction is dual—in the sense of swapping max and min operations—to the construction of the ultrametric space produced as an output of SLHC. Recall that the SLHC method H is defined for any metric space (X, dX) asH(X, dX) = (X,uX), where uX : X ×X → R+ is the ultrametric defined below: uX(x, x ′) := min c∈CX(x,x′) costX(c). As a consequence of this duality, we can bound the additive distortion of SLHC as below: Theorem 4. Let (X, dX) be an n-point metric space, and let (X,uX) = H(X, dX). Then we have: ‖dX − uX‖`∞(X×X) ≤ log2(2n) ult(X, dX). Moreover, this bound is asymptotically tight. The proof of Theorem 4 proceeds by invoking the definition of ultrametricity at various scales along chains of points; we provide details in Appendix B. We remark that the bounds in Theorems 3, 4 depend on both a local (ultrametricity/hyperbolicity) and a global property (cardinality); however, a natural improvement would be to exploit a global property that takes into account the metric structure of the underlying space. The first step in this improvement is to prove a set of stability theorems. 5 Stability of SLHC and Gromov’s embedding It is known that SLHC is robust to small perturbations of the input data with respect to the GromovHausdorff distance between metric spaces, whereas other HC methods, such as average linkage and complete linkage, do not enjoy this stability [6]. We prove a particular stability result for SLHC involving the `∞ distance, and then we exploit the duality observed in §4 to prove a similar stability result for Gromov’s embedding. Theorem 5. Let (X, dX) be a metric space, and let (A, dA) be any subspace with the restriction metric dA := dX |A×A. LetH denote the SLHC method. Write (X,uX) = H(X, dX) and (A, uA) = H(A, dA). Also write uAX(x, x′) := uA(η(x), η(x′)) for x, x′ ∈ X . Then we have: ‖H(X, dX)−H(A, dA)‖∞ := ‖uX − uAX‖∞ ≤ ‖dX − dAX‖∞. Theorem 6. Let (X, dX , p) be a pointed metric space, and let (A, dA, a) be any subspace with the restriction metric dA := dX |A×A such that η(p) = a. Let T denote the Gromov embedding. Write (X, tX,p) = T (X, dX , p) and (A, tA,a) = T (A, dA, a). Also write tAX,p(x, x′) := tA,a(η(x), η(x′)) for x, x′ ∈ X . Then we have: ‖T (X, dX , p)− T (A, dA, a)‖∞ := ‖tX,p − tAX,p‖∞ ≤ 5‖dX − dAX‖∞. The proofs for both of these results use similar techniques, and we present them in Appendix B. 6 Improvement via Doubling Dimension Our main theorems, providing additive distortion bounds for Gromov’s embedding and for SLHC, are stated below. The proofs for both theorems are similar, so we only present that of the former. Theorem 7. Let (X, dX) be a n-point metric space with doubling dimension ρ, hyperbolicity hyp(X, dX) = δ, and diameter D. Let p ∈ X , and write (X, tX) = T (X, dX , p). Then we obtain: Covering number bound: ‖dX − tX‖∞ ≤ min ε∈(0,D] ( 12ε+ 2δ log2(2NX(ε)) ) . (2) Also suppose D ≥ δρ6 ln 2 . Then, Doubling dimension bound: ‖dX − tX‖∞ ≤ 2δ + 2δρ ( 13 2 + log2 ( D δρ )) . (3) Theorem 8. 
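Our own illustrative implementation of the two dual constructions of Section 4 (not the authors' code). Chains over the complete graph on X are handled by a Floyd-Warshall-style recursion: SLHC is the min-max (smallest largest hop), while Gromov's tree metric comes from the max-min of Gromov products with respect to the root p. D is assumed to be a symmetric NumPy distance matrix with zero diagonal.

```python
import numpy as np

def slhc_ultrametric(D: np.ndarray) -> np.ndarray:
    """u_X(i, j) = min over chains from i to j of the maximal hop (single linkage)."""
    u = D.astype(float).copy()
    for k in range(len(D)):
        for i in range(len(D)):
            u[i] = np.minimum(u[i], np.maximum(u[i, k], u[k]))
    return u

def gromov_tree_metric(D: np.ndarray, p: int) -> np.ndarray:
    """t_{X,p}(i, j) = d(i, p) + d(j, p) - 2 * (max over chains of the min Gromov product)."""
    D = D.astype(float)
    g = 0.5 * (D[:, [p]] + D[[p], :] - D)          # Gromov products w.r.t. the root p
    gT = g.copy()
    for k in range(len(D)):
        for i in range(len(D)):
            gT[i] = np.maximum(gT[i], np.minimum(gT[i, k], gT[k]))
    return D[:, [p]] + D[[p], :] - 2.0 * gT

# Sanity checks suggested by the text (hyperbolicity/ultrametricity as in the first
# sketch): t_{X,p} <= d_X pointwise, and the Theorem 3 / Theorem 4 bounds
#   np.abs(D - slhc_ultrametric(D)).max()      <=     np.log2(2*len(D)) * ultrametricity(D)
#   np.abs(D - gromov_tree_metric(D, 0)).max() <= 2 * np.log2(2*len(D)) * hyperbolicity(D)
```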
Let (X, dX) be a n-point metric space with doubling dimension ρ, ultrametricity ult(X, dX) = ν, and diameter D. Write (X,uX) = H(X, dX). Then we obtain: Covering number bound: ‖dX − uX‖∞ ≤ min ε∈(0,D] ( 4ε+ ν log2(2NX(ε)) ) . (4) Also suppose D ≥ νρ4 ln 2 . Then, Doubling dimension bound: ‖dX − uX‖∞ ≤ ν + νρ ( 6 + log2 ( D νρ )) . (5) Remark 9 (A remark on the NTP). We are now able to return to Question 1 and provide some answers. Consider a metric space (X, dX). We can upper bound ‖dX − uoptX ‖∞ using the bounds in Theorem 8, and ‖dX − toptX ‖∞ using the bounds in Theorem 7. Remark 10 (A remark on parameters). Notice that as hyperbolicity δ approaches 0 (or ultrametricity approaches 0), the doubling dimension bounds (hence the covering number bounds) approach 0. Also note that as ε ↓ 0, we get NX(ε) ↑ |X|, so Theorems 7,8 reduce to Theorems 3,4. Experiments lead us to believe that the interesting range of ε values is typically a subinterval of (0, D]. Proof of Theorem 7. Fix ε ∈ (0, D] and let Xε = {x1, x2, ..., xk} be a collection of k = NX(ε) points that form an ε-net of X . Then we may define dεX and t ε X on X ×X as in §3. Subsequent application of Theorem 3 and Lemma 2 gives the bound ‖dεX − tεX‖`∞(X×X) ≤ ‖dXε − tXε‖`∞(Xε×Xε) ≤ 2δ log2(2k) ≤ 2δ log2(2Cε−ρ), where we define C := (8D)ρ. Then by the triangle inequality for the `∞ distance, the stability of T (Theorem 6), and using the result that ‖dX − dεX‖`∞(X×X) ≤ 2ε (Inequality 1), we get: ‖dX − tX‖∞ ≤ ‖dX − dεX‖∞ + ‖dεX − tεX‖∞ + ‖tεX − tX‖∞ ≤ 6‖dX − dεX‖∞ + ‖dεX − tεX‖∞ ≤ 12ε+ 2δ log2(2NX(ε)). Since ε ∈ (0, D] was arbitrary, this suffices to prove Inequality 2. Applying Lemma 2 yields: ‖dX − tX‖∞ ≤ 12ε+ 2δ log2(2Cε−ρ). Notice that Cε−ρ ≥ NX(ε) ≥ 1, so the term on the right of the inequality above is positive. Consider the function f(ε) = 12ε+ 2δ + 2δ log2 C − 2δρ log2 ε. The minimizer of this function is obtained by taking a derivative with respect to ε: f ′(ε) = 12− 2δρ ε ln 2 = 0 =⇒ ε = δρ 6 ln 2 . Since ε takes values in (0, D], and limε→0 f(ε) = +∞, the value of f(ε) is minimized at min(D, δρ6 ln 2 ). By assumption, D ≥ δρ 6 ln 2 . Since ‖dX − tX‖∞ ≤ f(ε) for all ε ∈ (0, D], it follows that: ‖dX−tX‖∞ ≤ f ( δρ 6 ln 2 ) = 2δρ ln 2 +2δ+2δρ log2 ( 48D ln 2 δρ ) ≤ 2δ+2δρ ( 13 2 + log2 ( D δρ )) . 7 Tightness of our bounds in Theorems 7 and 8 By the construction provided below, we show that our covering number bound for the distortion of SLHC is asymptotically tight. A similar construction can be used to show that our covering number bound for Gromov’s embedding is also asymptotically tight. Proposition 11. There exists a sequence (Xn, dXn)n∈N of finite metric spaces such that as n→∞, ‖dXn − uXn‖∞ min ε∈(0,Dn] ( 4ε+ νn log2(2NXn(ε)) ) → 0. Here we have written (Xn, uXn) = H(Xn, dXn), νn = ult(Xn, dXn), and Dn = diam(Xn, dXn). Proof of Proposition 11. After defining Xn for n ∈ N below, we will denote the error term, our covering number upper bound, and our Gromov-style upper bound as follows: En := ‖dXn − uXn‖∞, Bn := min ε∈(0,Dn] ρ(n, ε), Gn := log2(2|Xn|) ult(Xn, dXn), where ρ : N× [0,∞)→ R is defined by ρ(n, ε) = 4ε+ νn log2(2NXn(ε)). Here we write |S| to denote the cardinality of a set S. Recall that the separation of a finite metric space (X, dX) is the quantity sep(X, dX) := minx6=x′∈X dX(x, x′). Let (V, uV ) be the finite ultrametric space consisting of two equidistant points with common distance 1. 
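Remark 10 suggests that the interesting scales ε lie strictly inside (0, D], and the covering-number bound (2) is easy to evaluate numerically. The sketch below is our own illustration: it sweeps ε over the distinct positive pairwise distances and replaces the true covering number NX(ε) by the size of a greedy ε-net, which can only be larger, so the value returned is still a valid upper bound on the distortion. The Theorem 8 variant for SLHC only changes the constants to 4ε + ν log2(2NX(ε)).

```python
import numpy as np

def greedy_net_size(D: np.ndarray, eps: float) -> int:
    """Size of a greedy eps-net; upper-bounds the covering number N_X(eps)."""
    net = []
    for i in range(len(D)):
        if all(D[i, j] >= eps for j in net):
            net.append(i)
    return len(net)

def covering_bound_tree(D: np.ndarray, delta: float) -> float:
    """min over candidate eps in (0, diam] of 12*eps + 2*delta*log2(2*N_X(eps))."""
    best = np.inf
    for eps in np.unique(D[D > 0]):                 # distinct positive distances
        best = min(best, 12 * eps + 2 * delta * np.log2(2 * greedy_net_size(D, eps)))
    return float(best)

# Usage: covering_bound_tree(D, hyperbolicity(D)) can be compared with the trivial
# bound diam(X), Gromov's bound 2*log2(2*len(D))*hyperbolicity(D), and the observed
# distortion np.abs(D - gromov_tree_metric(D, 0)).max() from the earlier sketches.
```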
For each n ∈ N, let Ln denote the line metric space obtained by choosing (n+ 1) equally spaced points with separation 1n2 from the interval [0, 1n ], and endowing this set with the restriction of the Euclidean metric, denoted dLn . One can verify that ult(Ln, dLn) ≈ 12n . Finally, for each n ∈ N we define Xn := V × Ln, and endow Xn with the following metric: dXn ( (v, l), (v′, l′) ) := max ( dV (v, v ′), dLn(l, l ′) ) , v, v′ ∈ V, l, l′ ∈ Ln. Claim 1. ult(Xn, dXn) = ult(Ln, dLn) ≈ 12n . For a proof, see Appendix B. Claim 2. En diam(Ln, dLn) = 1n . To see this, let n ∈ N, and let x = (v, l), x ′ = (v′, l′) ∈ Xn be two points realizing En. Suppose first that v = v′. Then an optimal chain from (v, l), (v, l′) only has to incur the cost of moving along the Ln coordinate. As such, we obtain uXn(x, x ′) ≤ 1n2 , with equality if and only if l 6= l′. Then, En = max x,x′∈Xn |dXn(x, x′)− uXn(x, x′)| = max l,l′∈Ln |dLn(l, l′)− 1n2 | = 1 n − 1 n2 1 n . Note that the case v 6= v′ is not allowed, because then we would obtain dXn(x, x′) = dV (v, v′) = uXn(x, x ′), since sep(V, dV ) ≥ diam(Ln, dLn) and all the points in V are equidistant. Thus we would obtain |dXn(x, x′)− uXn(x, x′)| = 0, which is a contradiction because we assumed that x, x′ realize En. Claim 3. For each n ∈ N, ε ∈ (0, Dn], we have: NXn(ε) = NV (ε) : ε > sep(V, dV ), |V | : diam(Ln, dLn) < ε ≤ sep(V, dV ), |V |NLn(ε) : ε ≤ diam(Ln, dLn). To see this, note that in the first two cases, any ε-ball centered at a point (v, l) automatically contains all of {v} × Ln, so NXn(ε) = NV (ε). Specifically in the range diam(Ln, dLn) < ε ≤ sep(V, dV ), we need exactly one ε-ball for each v ∈ V to cover Xn. Finally in the third case, we need NLn(ε) ε-balls to cover {v} × Ln for each v ∈ V . This yields the stated estimate. By the preceding claims, we now have the following for each n ∈ N, ε ∈ (0, Dn]: ρ(n, ε) ≈ 4ε+ 12n log2(2NXn(ε)) = 4ε+ 12n log2(2NV (ε)) : ε > sep(V ), 4ε+ 12n log2(2|V |) : diam(Ln) < ε ≤ sep(V ), 4ε+ 12n log2(2|V |NLn(ε)) : ε ≤ diam(Ln). Notice that for sufficiently large n, infε>diam(Ln) ρ(n, ε) = ρ(n, 1 n ). Then we have: 1 n ≤ En ≤ Bn = minε∈(0,Dn] ρ(n, ε) ≤ ρ(n, 1n ) ≈ C n , for some constant C > 0. Here the first inequality follows from the proof of Claim 2, the second from Theorem 8, and the third from our observation above. It follows that En Bn 1n → 0. Remark 12. Given the setup of the preceding proof, note that the Gromov-style bound behaves as: Gn = ρ(n, 0) = 1 2n log2(2|V |(n+ 1)) ≈ C ′ log2(n+1) n , for some constant C ′ > 0. So Gn approaches 0 at a rate strictly slower than that of En and Bn. 8 Discussion We are motivated by a particular aspect of the numerical taxonomy problem, namely, the distortion incurred when passing from a metric to its optimal tree embedding. We describe and explore a duality between a tree embedding method proposed by Gromov and the well known SLHC method for embedding a metric space into an ultrametric tree. Motivated by this duality, we propose a novel metric space statistic that we call ultrametricity, and give a novel, tight bound on the distortion of the SLHC method depending on cardinality and ultrametricity. We improve this Gromov-style bound by replacing the dependence on cardinality by a dependence on doubling dimension, and produce a family of examples proving tightness of this dimension-based bound. By invoking duality again, we are able to improve Gromov’s original bound on the distortion of his tree embedding method. 
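The construction of Proposition 11 is easy to reproduce numerically. The sketch below (function name ours, not the paper's) builds the distance matrix of Xn = V × Ln with the max product metric, so that En, Bn, and the Gromov-style Gn of Remark 12 can be tabulated for growing n using the helpers from the earlier sketches.

```python
import numpy as np

def proposition11_space(n: int) -> np.ndarray:
    """Distance matrix of X_n: V = two points at distance 1, L_n = n+1 points in [0, 1/n]."""
    line = np.linspace(0.0, 1.0 / n, n + 1)         # spacing 1/n^2 on [0, 1/n]
    pts = [(v, l) for v in (0.0, 1.0) for l in line]
    return np.array([[max(abs(v - w), abs(l - m)) for (w, m) in pts]
                     for (v, l) in pts])

# With the helpers from the earlier sketches one can check, for growing n, that
#   E_n = np.abs(D - slhc_ultrametric(D)).max()        behaves like 1/n,
#   B_n = the Theorem 8 covering-number bound           behaves like 1/n,
#   G_n = np.log2(2 * len(D)) * ultrametricity(D)       behaves like log2(n)/n,
# reproducing the separation between E_n, B_n and G_n claimed in Remark 12.
```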
More specifically, we replace the dependence on cardinality—a set-theoretic notion—by a dependence on doubling dimension, which is truly a metric notion. Through Proposition 11, we are able to prove that our bound is not just asymptotically tight, but that it is strictly better than the corresponding Gromov-style bound. Indeed, Gromov’s bound can perform arbitrarily worse than our dimension-based bound. We construct an explicit example to verify this claim in Appendix A, Remark 14, where we also provide a practical demonstration of our methods.
1. What is the focus of the paper in data analysis?
2. What is the proposed approach to measuring the quality of a tree, and how does it differ from previous methods?
3. What are the additive distortion bounds provided by the authors for general trees and ultrametric trees?
4. How does the paper's approach relate to single linkage hierarchical clustering?
5. Are there any concerns or limitations regarding the presented method, particularly in specific situations or disciplines?
Review
Review Embedding high-dimensional spaces into two dimensional trees is an important problem in data analysis. To measure the quality of such a tree, the authors suggest an approach which is based on the metric structure of the space rather than just the cardinality, as has been done earlier. They provide an additive distortion bound for both general trees and ultrametric trees. The authors also show that, by duality, the bounds apply to single linkage hierarchical clustering. The paper is written well, with Section 1 describing the problem and proposed approach effectively. The main contributions and claims also carry sufficient mathematical detail. However, I found Section 7 to be very short. It seems to end abruptly, without explaining the results. Moreover, further detail should be provided for Remark 12, which says that in certain situations the additive bounds do not perform better than the trivial diameter bound. It would be useful to examine situations/disciplines where trees are expected to have large hyperbolicity.
NIPS
Title Improved Error Bounds for Tree Representations of Metric Spaces Abstract Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov’s method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown. 1 Introduction Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into “simpler" spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13]. The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖`∞(X×X), is minimal over all possible tree metrics on X . This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the `∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask: Question 1. 
Given a set X , a metric dX , and an optimal tree metric toptX (or an optimal ultrametric uoptX ), can one find a nontrivial upper bound on ‖dX − t opt X ‖`∞(X×X) (resp. ‖dX − u opt X ‖`∞(X×X)) depending on properties of the metric dX? The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice" properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds. We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially. Remark 1. The trivial upper bound is ‖dX − toptX ‖`∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX . An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with `∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness" of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3]. We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. 
This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space. Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X , we have tX(x, x ′) = maxc mini Ψ(xi, xi+1), where c = {xi}ki=0 is a chain of points joining x to x′ and Ψ is some function of dX . A dual notion is to flip the order of the max,min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC. We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community. The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7. The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees. 2 Related Literature MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14]. A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. 
1. What are the main contributions and improvements introduced by the paper in distortion bounds for embedding metric spaces?
2. How does the reviewer assess the significance and impact of the paper's results, particularly in comparison to prior works such as Gromov's upper bound?
3. What are the limitations and weaknesses of the paper's approach and results, according to the reviewer?
4. Are there any concerns regarding the tightness of the proposed distortion bounds, especially in light of the provided examples?
5. How could the authors improve their demonstrations or explanations to better motivate and support their work?
Review
Review This paper proves distortion bounds for embedding metric spaces into tree metrics, in terms of hyperbolicity and doubling dimension, instead of just cardinality. The authors also show a duality between their results and single linkage hierarchical clustering. Even though this bound is an improvement on Gromov's upper bound, it is still quite loose. In the authors' demonstration (Figure 1), their bound appears to be barely non-trivial. Gromov's upper bound is 1.23, the trivial bound is 1.0, this paper's bound is 0.87, and the true distortion is almost an order of magnitude lower at 0.1732. This is the best result the authors show; in the other two examples (supplementary material) their bound is trivial. Though the authors' analysis is quite thorough, these results do not demonstrate the significance of their work. Perhaps an alternative demonstration or explanation would better motivate this paper.
NIPS
Title Improved Error Bounds for Tree Representations of Metric Spaces Abstract Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov’s method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown. 1 Introduction Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into “simpler" spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13]. The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖`∞(X×X), is minimal over all possible tree metrics on X . This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the `∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask: Question 1. 
Given a set X , a metric dX , and an optimal tree metric toptX (or an optimal ultrametric uoptX ), can one find a nontrivial upper bound on ‖dX − t opt X ‖`∞(X×X) (resp. ‖dX − u opt X ‖`∞(X×X)) depending on properties of the metric dX? The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice" properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds. We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially. Remark 1. The trivial upper bound is ‖dX − toptX ‖`∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX . An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with `∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness" of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3]. We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. 
This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space. Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X , we have tX(x, x ′) = maxc mini Ψ(xi, xi+1), where c = {xi}ki=0 is a chain of points joining x to x′ and Ψ is some function of dX . A dual notion is to flip the order of the max,min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC. We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community. The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7. The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees. 2 Related Literature MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14]. A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. 
One variant of MDS that preserves clusters is the tree preserving embedding [20], where the goal is to preserve the single linkage (SL) dendrogram from the original data. This is especially important for certain types of biological data, for the following reasons: (1) due to speciation, many biological datasets are inherently “treelike", and (2) the SL dendrogram is a 2-approximation to the optimal ultrametric tree embedding [16], so intuitively, preserving the SL dendrogram preserves the “treeness" of the data. Preserving the treeness of a metric space is related to the notion of finding an optimal embedding into a tree, which ties back to the numerical taxonomy problem. The SL dendrogram is an embedding of a metric space into an ultrametric tree, and can be used to find the optimal ultrametric tree [8]. The quality of an embedding is measured by computing its distortion, which has different definitions in different domain areas. Typically, a tree embedding is defined to be an injective map f : X → Y between metric spaces (X, dX) and (Y, tY ), where the target space is a tree. We have defined the additive distortion of a tree embedding in an `∞ setting above, but `p notions, for p ∈ [1,∞), can also be defined. Past efforts to embed a metric into a tree with low additive distortion are described in [19, Chapter 7]. One can also define a multiplicative distortion [4, 10], but this is studied in the domain of discrete algorithms and is not our focus in the current work. 3 Preliminaries on metric spaces, distances, and doubling dimension A finite metric space (X, dX) is a finite set X together with a function dX : X × X → R+ such that: (1) dX(x, x′) = 0 ⇐⇒ x = x′, (2) dX(x, x′) = dX(x′, x), and (3) dX(x, x′) ≤ dX(x, x ′′) + dX(x ′′, x′) for any x, x′, x′′ ∈ X . A pointed metric space is a triple (X, dX , p), where (X, dX) is a finite metric space and p ∈ X . All the spaces we consider are assumed to be finite. For a metric space (X, dX), the diameter is defined to be diam(X, dX) := maxx,x′∈X dX(x, x′). The hyperbolicity of (X, dX) was defined by Gromov [12] as follows: hyp(X, dX) := max x1,x2,x3,x4∈X ΨhypX (x1, x2, x3, x4), where ΨhypX (x1, x2, x3, x4) : = 1 2 ( dX(x1, x2) + dX(x3, x4) −max ( dX(x1, x3) + dX(x2, x4), dX(x1, x4) + dX(x2, x3) )) . A tree metric space (X, tX) is a finite metric space such that hyp(X, tX) = 0 [19]. In our work, we strengthen the preceding characterization of trees to the special class of ultrametric trees. Recall that an ultrametric space (X,uX) is a metric space satisfying the strong triangle inequality: uX(x, x ′) ≤ max(uX(x, x′′), uX(x′′, x′)), ∀x, x′, x′′ ∈ X. Definition 1. We define the ultrametricity of a metric space (X, dX) as: ult(X, dX) := max x1,x2,x3∈X ΨultX (x1, x2, x3), where ΨultX (x1, x2, x3) := dX(x1, x3)−max ( dX(x1, x2), dX(x2, x3) ) . We introduce ultrametricity to quantify the deviation of a metric space from being ultrametric. Notice that (X,uX) is an ultrametric space if and only if ult(X,uX) = 0. One can verify that an ultrametric space is a tree metric space. We will denote the cardinality of a set X by writing |X|. Given a set X and two metrics dX , d′X defined on X ×X , we denote the `∞ distance between dX and d′X as follows: ‖dX − d′X‖`∞(X×X) := max x,x′∈X |dX(x, x′)− d′X(x, x′)|. We use the shorthand ‖dX−d′X‖∞ to mean ‖dX−d′X‖`∞(X×X). We write≈ to mean “approximately equal to." Given two functions f, g : N→ R, we will write f g to mean asymptotic tightness, i.e. 
that there exist constants c1, c2 such that c1|f(n)| ≤ |g(n)| ≤ c2|f(n)| for sufficiently large n ∈ N. Induced metrics from Voronoi partitions. A key ingredient of our stability result involves a Voronoi partition construction. Given a metric space (X, dX) and a subset A ⊆ X , possibly with its own metric dA, we can define a new metric dAX on X ×X using a Voronoi partition. First write A = {x1, . . . , xn}. For each 1 ≤ i ≤ n, we define Ṽi := {x ∈ X : dX(x, xi) ≤ minj 6=i dX(x, xj)} . Then X = ⋃n i=1 Ṽi. Next we perform the following disjointification trick: V1 := Ṽ1, V2 := Ṽ2 \ Ṽ1, . . . , Vn := Ṽn \ ( n−1⋃ i=1 Ṽi ) . Then X = ⊔n i=1 Vi, a disjoint union of Voronoi cells Vi. Next define the nearest-neighbor map η : X → A by η(x) = xi for each x ∈ Vi. The map η simply sends each x ∈ X to its closest neighbor in A, up to a choice when there are multiple nearest neighbors. Then we can define a new (pseudo)metric dAX : X ×X → R+ as follows: dAX(x, x ′) := dA(η(x), η(x ′)). Observe that dAX(x, x ′) = 0 if and only if x, x′ ∈ Vi for some 1 ≤ i ≤ n. Symmetry also holds, as does the triangle inequality. A special case of this construction occurs when A is an ε-net of X endowed with a restriction of the metric dX . Given a finite metric space (X, dX), an ε-net is a subset Xε ⊂ X such that: (1) for any x ∈ X , there exists s ∈ Xε such that dX(x, s) < ε, and (2) for any s, s′ ∈ Xε, we have dX(s, s ′) ≥ ε [15]. For notational convenience, we write dεX to refer to dX ε X . In this case, we obtain: ‖dX − dεX‖`∞(X×X) = max x,x′∈X ∣∣dX(x, x′)− dεX(x, x′)∣∣ = max 1≤i,j≤n max x∈Vi,x′∈Vj ∣∣dX(x, x′)− dεX(x, x′)∣∣ = max 1≤i,j≤n max x∈Vi,x′∈Vj ∣∣dX(x, x′)− dX(xi, xj)∣∣ ≤ max 1≤i,j≤n max x∈Vi,x′∈Vj ( dX(x, xi) + dX(x ′, xj) ) ≤ 2ε. (1) Covering numbers and doubling dimension. For a finite metric space (X, dX), the open ball of radius ε centered at x ∈ X is denoted B(x, ε). The ε-covering number of (X, dX) is defined as: NX(ε) := min { n ∈ N : X ⊂ n⋃ i=1 B(xi, ε) for x1, . . . , xn ∈ X } . Notice that the ε-covering number of X is always bounded above by the cardinality of an ε-net. A related quantity is the doubling dimension ddim(X, dX) of a metric space (X, dX), which is defined to be the minimal value ρ such that any ε-ball in X can be covered by at most 2ρ ε/2-balls [15]. The covering number and doubling dimension of a metric space (X, dX) are related as follows: Lemma 2. Let (X, dX) be a finite metric space with doubling dimension bounded above by ρ > 0. Then for all ε ∈ (0,diam(X)], we have NX(ε) ≤ ( 8 diam(X)/ε )ρ . 4 Duality between Gromov’s embedding and SLHC Given a metric space (X, dX) and any two points x, x′ ∈ X , we define a chain from x to x′ over X as an ordered set of points in X starting at x and ending at x′: c = {x0, x1, x2, . . . , xn : x0 = x, xn = x′, xi ∈ X for all 0 ≤ i ≤ n} . The set of all chains from x to x′ over X will be denoted CX(x, x′). The cost of a chain c = {x0 . . . , xn} over X is defined to be costX(c) := max0≤i<n dX(xi, xi+1). For any metric space (X, dX) and any p ∈ X , the Gromov product of X with respect to p is a map gX,p : X ×X → R+ defined by: gX,p(x, x ′) := 12 ( dX(x, p) + dX(x ′, p)− dX(x, x′) ) . We can define a map gTX,p : X ×X → R+ as follows: gTX,p(x, x ′)p := max c∈CX(x,x′) min xi,xi+1∈c gX,p(xi, xi+1). This induces a new metric tX,p : X ×X → R+: tX,p(x, x ′) := dX(x, p) + dX(x ′, p)− 2gTX,p(x, x′). Gromov observed that the space (X, tX,p) is a tree metric space, and that tX,p(x, x′) ≤ dX(x, x′) for any x, x′ ∈ X [12]. 
4 Duality between Gromov's embedding and SLHC

Given a metric space (X, dX) and any two points x, x′ ∈ X, we define a chain from x to x′ over X as an ordered set of points in X starting at x and ending at x′:

c = {x0, x1, x2, . . . , xn : x0 = x, xn = x′, xi ∈ X for all 0 ≤ i ≤ n}.

The set of all chains from x to x′ over X will be denoted CX(x, x′). The cost of a chain c = {x0, . . . , xn} over X is defined to be costX(c) := max_{0≤i<n} dX(xi, xi+1). For any metric space (X, dX) and any p ∈ X, the Gromov product of X with respect to p is a map gX,p : X × X → R+ defined by:

gX,p(x, x′) := (1/2)( dX(x, p) + dX(x′, p) − dX(x, x′) ).

We can define a map gTX,p : X × X → R+ as follows:

gTX,p(x, x′) := max_{c∈CX(x,x′)} min_{xi,xi+1∈c} gX,p(xi, xi+1).

This induces a new metric tX,p : X × X → R+:

tX,p(x, x′) := dX(x, p) + dX(x′, p) − 2 gTX,p(x, x′).

Gromov observed that the space (X, tX,p) is a tree metric space, and that tX,p(x, x′) ≤ dX(x, x′) for any x, x′ ∈ X [12]. This yields the trivial upper bound: ‖dX − tX‖∞ ≤ diam(X, dX). The Gromov embedding T is defined for any pointed metric space (X, dX, p) as T(X, dX, p) := (X, tX,p). Note that each choice of p ∈ X will yield a tree metric tX,p that depends, a priori, on p.

Theorem 3 (Gromov's embedding theorem [12]). Let (X, dX, p) be an n-point pointed metric space, and let (X, tX,p) = T(X, dX, p). Then,

‖tX,p − dX‖ℓ∞(X×X) ≤ 2 log2(2n) hyp(X, dX).

Gromov's embedding is an MDS method where the target is a tree. We observe that its construction is dual—in the sense of swapping max and min operations—to the construction of the ultrametric space produced as an output of SLHC. Recall that the SLHC method H is defined for any metric space (X, dX) as H(X, dX) = (X, uX), where uX : X × X → R+ is the ultrametric defined below:

uX(x, x′) := min_{c∈CX(x,x′)} costX(c).
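To make the duality concrete: uX is a min-max (minimax-path) quantity over chains, while gTX,p is the dual max-min quantity, and both can be computed on the full distance matrix with a Floyd-Warshall-style recursion. The following sketch is our own (not the authors' code), and the 4-point metric at the end is a made-up toy example.

def slhc_ultrametric(D):
    # uX(x, x′): minimize over chains the maximum hop length (minimax-path distance).
    n = len(D)
    U = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                U[i][j] = min(U[i][j], max(U[i][k], U[k][j]))
    return U

def gromov_tree_metric(D, p):
    # tX,p via the dual recursion: maximize over chains the minimum Gromov product gX,p.
    n = len(D)
    g = [[0.5 * (D[i][p] + D[j][p] - D[i][j]) for j in range(n)] for i in range(n)]
    G = [row[:] for row in g]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                G[i][j] = max(G[i][j], min(G[i][k], G[k][j]))
    return [[D[i][p] + D[j][p] - 2.0 * G[i][j] for j in range(n)] for i in range(n)]

D = [[0, 2, 4, 5], [2, 0, 3, 4], [4, 3, 0, 2], [5, 4, 2, 0]]   # a toy 4-point metric
U, T = slhc_ultrametric(D), gromov_tree_metric(D, 0)
print(max(abs(D[i][j] - U[i][j]) for i in range(4) for j in range(4)),   # ‖dX − uX‖∞
      max(abs(D[i][j] - T[i][j]) for i in range(4) for j in range(4)))   # ‖dX − tX,p‖∞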
As a consequence of this duality, we can bound the additive distortion of SLHC as below:

Theorem 4. Let (X, dX) be an n-point metric space, and let (X, uX) = H(X, dX). Then we have:

‖dX − uX‖ℓ∞(X×X) ≤ log2(2n) ult(X, dX).

Moreover, this bound is asymptotically tight. The proof of Theorem 4 proceeds by invoking the definition of ultrametricity at various scales along chains of points; we provide details in Appendix B. We remark that the bounds in Theorems 3, 4 depend on both a local (ultrametricity/hyperbolicity) and a global property (cardinality); however, a natural improvement would be to exploit a global property that takes into account the metric structure of the underlying space. The first step in this improvement is to prove a set of stability theorems.

5 Stability of SLHC and Gromov's embedding

It is known that SLHC is robust to small perturbations of the input data with respect to the Gromov-Hausdorff distance between metric spaces, whereas other HC methods, such as average linkage and complete linkage, do not enjoy this stability [6]. We prove a particular stability result for SLHC involving the ℓ∞ distance, and then we exploit the duality observed in §4 to prove a similar stability result for Gromov's embedding.

Theorem 5. Let (X, dX) be a metric space, and let (A, dA) be any subspace with the restriction metric dA := dX|A×A. Let H denote the SLHC method. Write (X, uX) = H(X, dX) and (A, uA) = H(A, dA). Also write uAX(x, x′) := uA(η(x), η(x′)) for x, x′ ∈ X. Then we have:

‖H(X, dX) − H(A, dA)‖∞ := ‖uX − uAX‖∞ ≤ ‖dX − dAX‖∞.

Theorem 6. Let (X, dX, p) be a pointed metric space, and let (A, dA, a) be any subspace with the restriction metric dA := dX|A×A such that η(p) = a. Let T denote the Gromov embedding. Write (X, tX,p) = T(X, dX, p) and (A, tA,a) = T(A, dA, a). Also write tAX,p(x, x′) := tA,a(η(x), η(x′)) for x, x′ ∈ X. Then we have:

‖T(X, dX, p) − T(A, dA, a)‖∞ := ‖tX,p − tAX,p‖∞ ≤ 5‖dX − dAX‖∞.

The proofs for both of these results use similar techniques, and we present them in Appendix B.

6 Improvement via Doubling Dimension

Our main theorems, providing additive distortion bounds for Gromov's embedding and for SLHC, are stated below. The proofs for both theorems are similar, so we only present that of the former.

Theorem 7. Let (X, dX) be an n-point metric space with doubling dimension ρ, hyperbolicity hyp(X, dX) = δ, and diameter D. Let p ∈ X, and write (X, tX) = T(X, dX, p). Then we obtain:

Covering number bound: ‖dX − tX‖∞ ≤ min_{ε∈(0,D]} ( 12ε + 2δ log2(2NX(ε)) ).   (2)

Also suppose D ≥ δρ/(6 ln 2). Then,

Doubling dimension bound: ‖dX − tX‖∞ ≤ 2δ + 2δρ( 13/2 + log2( D/(δρ) ) ).   (3)

Theorem 8. Let (X, dX) be an n-point metric space with doubling dimension ρ, ultrametricity ult(X, dX) = ν, and diameter D. Write (X, uX) = H(X, dX). Then we obtain:

Covering number bound: ‖dX − uX‖∞ ≤ min_{ε∈(0,D]} ( 4ε + ν log2(2NX(ε)) ).   (4)

Also suppose D ≥ νρ/(4 ln 2). Then,

Doubling dimension bound: ‖dX − uX‖∞ ≤ ν + νρ( 6 + log2( D/(νρ) ) ).   (5)

Remark 9 (A remark on the NTP). We are now able to return to Question 1 and provide some answers. Consider a metric space (X, dX). We can upper bound ‖dX − uoptX‖∞ using the bounds in Theorem 8, and ‖dX − toptX‖∞ using the bounds in Theorem 7.

Remark 10 (A remark on parameters). Notice that as hyperbolicity δ approaches 0 (or ultrametricity approaches 0), the doubling dimension bounds (hence the covering number bounds) approach 0. Also note that as ε ↓ 0, we get NX(ε) ↑ |X|, so Theorems 7, 8 reduce to Theorems 3, 4. Experiments lead us to believe that the interesting range of ε values is typically a subinterval of (0, D].

Proof of Theorem 7. Fix ε ∈ (0, D] and let Xε = {x1, x2, . . . , xk} be a collection of k = NX(ε) points that form an ε-net of X. Then we may define dεX and tεX on X × X as in §3. Subsequent application of Theorem 3 and Lemma 2 gives the bound

‖dεX − tεX‖ℓ∞(X×X) ≤ ‖dXε − tXε‖ℓ∞(Xε×Xε) ≤ 2δ log2(2k) ≤ 2δ log2(2Cε^(−ρ)),

where we define C := (8D)^ρ. Then by the triangle inequality for the ℓ∞ distance, the stability of T (Theorem 6), and using the result that ‖dX − dεX‖ℓ∞(X×X) ≤ 2ε (Inequality 1), we get:

‖dX − tX‖∞ ≤ ‖dX − dεX‖∞ + ‖dεX − tεX‖∞ + ‖tεX − tX‖∞
≤ 6‖dX − dεX‖∞ + ‖dεX − tεX‖∞
≤ 12ε + 2δ log2(2NX(ε)).

Since ε ∈ (0, D] was arbitrary, this suffices to prove Inequality 2. Applying Lemma 2 yields:

‖dX − tX‖∞ ≤ 12ε + 2δ log2(2Cε^(−ρ)).

Notice that Cε^(−ρ) ≥ NX(ε) ≥ 1, so the term on the right of the inequality above is positive. Consider the function

f(ε) = 12ε + 2δ + 2δ log2 C − 2δρ log2 ε.

The minimizer of this function is obtained by taking a derivative with respect to ε:

f′(ε) = 12 − 2δρ/(ε ln 2) = 0 ⟹ ε = δρ/(6 ln 2).

Since ε takes values in (0, D], and lim_{ε→0} f(ε) = +∞, the value of f(ε) is minimized at min(D, δρ/(6 ln 2)). By assumption, D ≥ δρ/(6 ln 2). Since ‖dX − tX‖∞ ≤ f(ε) for all ε ∈ (0, D], it follows that:

‖dX − tX‖∞ ≤ f( δρ/(6 ln 2) ) = 2δρ/ln 2 + 2δ + 2δρ log2( 48D ln 2/(δρ) ) ≤ 2δ + 2δρ( 13/2 + log2( D/(δρ) ) ).
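Because the covering number bounds (2) and (4) are minima over ε, they are straightforward to evaluate numerically. The sketch below is our own illustration (not the paper's experiments); it reuses slhc_ultrametric, ultrametricity and greedy_eps_net from the earlier sketches, samples a random point cloud of our own choosing, and estimates NX(ε) by the size of a greedy ε-net, which can only enlarge the bound, never invalidate it.

import math, random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(40)]
D = [[math.dist(p, q) for q in pts] for p in pts]
n = len(D)
diam = max(max(row) for row in D)

nu = ultrametricity(D)            # defined after Definition 1 above
U = slhc_ultrametric(D)           # defined in the duality sketch of §4
true_err = max(abs(D[i][j] - U[i][j]) for i in range(n) for j in range(n))

def covering_bound(D, nu, diam, grid=200):
    # min over a grid of ε of 4ε + ν log2(2 NX(ε)), with NX(ε) replaced by a greedy net size.
    return min(4 * eps + nu * math.log2(2 * len(greedy_eps_net(D, eps)))
               for eps in (diam * t / grid for t in range(1, grid + 1)))

print(true_err, covering_bound(D, nu, diam), nu * math.log2(2 * n))   # error, Thm 8 bound, Thm 4 bound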
7 Tightness of our bounds in Theorems 7 and 8

By the construction provided below, we show that our covering number bound for the distortion of SLHC is asymptotically tight. A similar construction can be used to show that our covering number bound for Gromov's embedding is also asymptotically tight.

Proposition 11. There exists a sequence (Xn, dXn)_{n∈N} of finite metric spaces such that as n → ∞,

‖dXn − uXn‖∞ ≍ min_{ε∈(0,Dn]} ( 4ε + νn log2(2NXn(ε)) ) → 0.

Here we have written (Xn, uXn) = H(Xn, dXn), νn = ult(Xn, dXn), and Dn = diam(Xn, dXn).

Proof of Proposition 11. After defining Xn for n ∈ N below, we will denote the error term, our covering number upper bound, and our Gromov-style upper bound as follows:

En := ‖dXn − uXn‖∞, Bn := min_{ε∈(0,Dn]} ρ(n, ε), Gn := log2(2|Xn|) ult(Xn, dXn),

where ρ : N × [0, ∞) → R is defined by ρ(n, ε) = 4ε + νn log2(2NXn(ε)). Here we write |S| to denote the cardinality of a set S. Recall that the separation of a finite metric space (X, dX) is the quantity sep(X, dX) := min_{x≠x′∈X} dX(x, x′). Let (V, uV) be the finite ultrametric space consisting of two equidistant points with common distance 1. For each n ∈ N, let Ln denote the line metric space obtained by choosing (n + 1) equally spaced points with separation 1/n² from the interval [0, 1/n], and endowing this set with the restriction of the Euclidean metric, denoted dLn. One can verify that ult(Ln, dLn) ≈ 1/(2n). Finally, for each n ∈ N we define Xn := V × Ln, and endow Xn with the following metric:

dXn( (v, l), (v′, l′) ) := max( dV(v, v′), dLn(l, l′) ), for v, v′ ∈ V and l, l′ ∈ Ln.

Claim 1. ult(Xn, dXn) = ult(Ln, dLn) ≈ 1/(2n). For a proof, see Appendix B.

Claim 2. En ≍ diam(Ln, dLn) = 1/n. To see this, let n ∈ N, and let x = (v, l), x′ = (v′, l′) ∈ Xn be two points realizing En. Suppose first that v = v′. Then an optimal chain from (v, l) to (v, l′) only has to incur the cost of moving along the Ln coordinate. As such, we obtain uXn(x, x′) ≤ 1/n², with equality if and only if l ≠ l′. Then,

En = max_{x,x′∈Xn} |dXn(x, x′) − uXn(x, x′)| = max_{l,l′∈Ln} |dLn(l, l′) − 1/n²| = 1/n − 1/n² ≍ 1/n.

Note that the case v ≠ v′ is not allowed, because then we would obtain dXn(x, x′) = dV(v, v′) = uXn(x, x′), since sep(V, dV) ≥ diam(Ln, dLn) and all the points in V are equidistant. Thus we would obtain |dXn(x, x′) − uXn(x, x′)| = 0, which is a contradiction because we assumed that x, x′ realize En.

Claim 3. For each n ∈ N, ε ∈ (0, Dn], we have:

NXn(ε) = NV(ε) if ε > sep(V, dV);
NXn(ε) = |V| if diam(Ln, dLn) < ε ≤ sep(V, dV);
NXn(ε) = |V| · NLn(ε) if ε ≤ diam(Ln, dLn).

To see this, note that in the first two cases, any ε-ball centered at a point (v, l) automatically contains all of {v} × Ln, so NXn(ε) = NV(ε). Specifically in the range diam(Ln, dLn) < ε ≤ sep(V, dV), we need exactly one ε-ball for each v ∈ V to cover Xn. Finally in the third case, we need NLn(ε) ε-balls to cover {v} × Ln for each v ∈ V. This yields the stated estimate.

By the preceding claims, we now have the following for each n ∈ N, ε ∈ (0, Dn]:

ρ(n, ε) ≈ 4ε + (1/(2n)) log2(2NXn(ε)) =
4ε + (1/(2n)) log2(2NV(ε)) if ε > sep(V);
4ε + (1/(2n)) log2(2|V|) if diam(Ln) < ε ≤ sep(V);
4ε + (1/(2n)) log2(2|V| NLn(ε)) if ε ≤ diam(Ln).

Notice that for sufficiently large n, inf_{ε>diam(Ln)} ρ(n, ε) = ρ(n, 1/n). Then we have:

1/n ≤ En ≤ Bn = min_{ε∈(0,Dn]} ρ(n, ε) ≤ ρ(n, 1/n) ≈ C/n,

for some constant C > 0. Here the first inequality follows from the proof of Claim 2, the second from Theorem 8, and the third from our observation above. It follows that En ≍ Bn ≍ 1/n → 0.

Remark 12. Given the setup of the preceding proof, note that the Gromov-style bound behaves as:

Gn = ρ(n, 0) = (1/(2n)) log2(2|V|(n + 1)) ≈ C′ log2(n + 1)/n,

for some constant C′ > 0. So Gn approaches 0 at a rate strictly slower than that of En and Bn.
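The construction above is small enough to check numerically. The following sketch is our own illustration (not part of the paper); it reuses slhc_ultrametric, ultrametricity and greedy_eps_net from the earlier sketches, estimates NXn(ε) with a greedy net over a grid of ε values, and prints En, Bn and Gn so that the 1/n versus log2(n + 1)/n behavior becomes visible as n grows.

import math

for n in (5, 10, 20, 40):
    L = [i / n**2 for i in range(n + 1)]                  # Ln: (n + 1) points in [0, 1/n]
    pts = [(v, l) for v in (0.0, 1.0) for l in L]         # Xn = V x Ln with |V| = 2
    m = len(pts)
    D = [[max(abs(v - w), abs(a - b)) for (w, b) in pts] for (v, a) in pts]
    U = slhc_ultrametric(D)
    E = max(abs(D[i][j] - U[i][j]) for i in range(m) for j in range(m))      # En
    nu = ultrametricity(D)
    diam = max(max(row) for row in D)
    B = min(4 * eps + nu * math.log2(2 * len(greedy_eps_net(D, eps)))        # Bn (grid estimate)
            for eps in (diam * t / 400 for t in range(1, 401)))
    G = nu * math.log2(2 * m)                                                # Gn
    print(n, round(E, 4), round(B, 4), round(G, 4))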
8 Discussion

We are motivated by a particular aspect of the numerical taxonomy problem, namely, the distortion incurred when passing from a metric to its optimal tree embedding. We describe and explore a duality between a tree embedding method proposed by Gromov and the well known SLHC method for embedding a metric space into an ultrametric tree. Motivated by this duality, we propose a novel metric space statistic that we call ultrametricity, and give a novel, tight bound on the distortion of the SLHC method depending on cardinality and ultrametricity. We improve this Gromov-style bound by replacing the dependence on cardinality by a dependence on doubling dimension, and produce a family of examples proving tightness of this dimension-based bound. By invoking duality again, we are able to improve Gromov's original bound on the distortion of his tree embedding method. More specifically, we replace the dependence on cardinality—a set-theoretic notion—by a dependence on doubling dimension, which is truly a metric notion. Through Proposition 11, we are able to prove that our bound is not just asymptotically tight, but that it is strictly better than the corresponding Gromov-style bound. Indeed, Gromov's bound can perform arbitrarily worse than our dimension-based bound. We construct an explicit example to verify this claim in Appendix A, Remark 14, where we also provide a practical demonstration of our methods.
1. What are the main contributions and novel aspects introduced by the paper regarding stability results and bounds on distortion for tree embeddings?
2. What are the strengths of the paper, particularly in terms of its theoretical analysis?
3. How could the paper be reorganized to improve readability and impact?
4. What are some examples of repetitions and unnecessary information in the current version of the paper?
5. How can the paper provide more intuitive illustrations and explanations of the main theorems and their importance?
6. How can the paper develop Section 7 further and provide more structure overall?
7. What is the reviewer's opinion on the current introduction and how could it be improved?
Review
Review
This paper provides new stability results and bounds on the distortion for the tree embedding of a metric space proposed by Gromov. It also exploits a duality between Gromov's embedding and single linkage hierarchical clustering. As a consequence, stability and bounds are given for SLHC as well. The bounds are illustrated on simulations. The paper is very interesting and provides good theoretical results. Such an analysis is welcome in the field. However, I think that it could be re-organized to improve its readability and impact. There are repetitions which may hinder a smooth reading (see examples below). In particular, I believe that the proofs could be put in the supplementary materials (although short sketch proofs can stay and give the main idea and steps). The definitions could be restricted only to what is absolutely needed. There are long math developments which may not be integrated enough into the global reasoning: the reader does not necessarily know why he needs to read this. This deletion/reorganization would give room for: (i) providing intuitive illustrations of metric spaces, trees, Voronoi cells, doubling dimension; (ii) giving the intuition behind Theorems 5, 6, 7, 8 (intuition behind the inequalities on D in Th. 7, 8?) and their importance; (iii) developing Section 7 to a page; (iv) providing more structure to the paper (subsections?); (v) writing a short conclusion. For now, the intuition is given in "An overview of our approach" in the introduction, but not step by step in the core of the paper.
Examples of repetitions:
- Additive distortion is defined twice, in Introduction and Section 2.
- The trivial bound is given in Introduction and Section 4. The introduction is maybe too technical?
- Gromov's embedding is a 3-approximation to the optimal tree representation in Introduction and Section 7.
Examples of unnecessary math background:
- Many details on multiplicative distortion even though not used. E.g.: what is the point of the first paragraph of Introduction and the last paragraph of Sec. 2? Could they be shortened?
- Pages 4 and 5 could be embedded in a global reasoning, to keep the reader interested?
NIPS
Title Improved Error Bounds for Tree Representations of Metric Spaces

Abstract Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov's method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown.

1 Introduction

Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into "simpler" spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13]. The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖ℓ∞(X×X), is minimal over all possible tree metrics on X. This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem, that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the ℓ∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask: Question 1.
Given a set X , a metric dX , and an optimal tree metric toptX (or an optimal ultrametric uoptX ), can one find a nontrivial upper bound on ‖dX − t opt X ‖`∞(X×X) (resp. ‖dX − u opt X ‖`∞(X×X)) depending on properties of the metric dX? The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice" properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds. We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially. Remark 1. The trivial upper bound is ‖dX − toptX ‖`∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX . An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with `∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness" of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3]. We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. 
This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space. Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X , we have tX(x, x ′) = maxc mini Ψ(xi, xi+1), where c = {xi}ki=0 is a chain of points joining x to x′ and Ψ is some function of dX . A dual notion is to flip the order of the max,min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC. We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community. The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7. The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees. 2 Related Literature MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14]. A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. 
1. What is the focus of the paper in terms of contributions and improvements?
2. What are the strengths of the proposed approach or methodology?
3. Are there any concerns or limitations regarding the presented ideas?
4. How does the reviewer assess the clarity and quality of the paper's content?
Review
Review
The paper obtained an improved distortion bound for the Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric. By leveraging the duality between Gromov's embedding and the SLHC embedding, the paper applied this distortion bound to SLHC, for which such additive bounds were previously unknown. The paper has clear explanations of the problems; I have no questions to ask.